The index is
where the spider-collected data are stored. When you perform a search on a
major search engine, you are not searching the live web, but a cache of the
web stored in that search engine’s index.
Search engines
organize their content in what is called a reverse
index (also known as an inverted index). A reverse index sorts web documents
by words: for each word, it lists the documents in which that word appears.
When you search Google and it displays results 1-10 of about 143,000, it
means that there are approximately 143,000 web pages that either have the
words from your search on them or have inbound links containing them. Also,
note that search engines do not store punctuation, just words.
The following
is an example of a reverse index and how a typical search engine might classify
content. While this is an oversimplified version of the real thing, it does
illustrate the point. Imagine each of the following sentences is the content of
a unique page:
The dog ate the cat.
The cat ate the mouse.
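To make this concrete, here is a minimal sketch in Python of how those two pages might be tokenized into a reverse index. The page IDs and the exact storage format are illustrative assumptions, not how any particular engine stores its data:

    import string
    from collections import defaultdict

    pages = {
        1: "The dog ate the cat.",
        2: "The cat ate the mouse.",
    }

    index = defaultdict(list)  # word -> list of (page_id, word_position)

    for page_id, text in pages.items():
        # Search engines store words, not punctuation, so strip it first.
        cleaned = text.translate(str.maketrans("", "", string.punctuation))
        for position, word in enumerate(cleaned.lower().split()):
            index[word].append((page_id, position))

    for word in sorted(index):
        print(word, index[word])
    # ate   [(1, 2), (2, 2)]
    # cat   [(1, 4), (2, 1)]
    # dog   [(1, 1)]
    # mouse [(2, 4)]
    # the   [(1, 0), (1, 3), (2, 0), (2, 3)]

Notice that the index is keyed by word rather than by page: to answer a query, the engine looks up each query word and intersects the page lists, instead of scanning every page one by one.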
Since search
engines read pages as linear source code, it is best to move JavaScript and
other extraneous code to external files so that the page copy appears higher
in the source code.
Some people
also use Cascading Style Sheets (CSS) or a blank table cell to place the page
content ahead of the navigation. Search engines evaluate which words come
first by the order in which they appear in the source code. I have
not done significant testing to determine if it is worth the effort to make
your unique
page code appear ahead of the navigation, but if it does not take
much additional effort, it is probably worth doing. Link analysis (discussed in
depth later) is far more important than page copy to most search algorithms,
but every little bit can help.
Google has also hired some people from Mozilla and is
likely working on helping its spider understand how browsers render pages.
Microsoft has published research on visually segmenting pages, which may help
it understand which page content is most important.
As well as storing the position of a word, search engines
can also store how the data are marked up. For example, is the term in the page
title? Is it a heading? What type of heading? Is it bold? Is it emphasized? Is
it in part of a list? Is it in link text?
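As a hedged sketch of that idea, a posting for each word occurrence might carry flags describing its markup context; the field names below are invented for illustration:

    from dataclasses import dataclass

    @dataclass
    class Posting:
        page_id: int
        position: int
        in_title: bool = False
        heading_level: int = 0   # 0 = not in a heading, 1 = <h1>, and so on
        bold: bool = False
        in_link_text: bool = False

    # The word "cat" appearing in an <h1> heading on page 1:
    print(Posting(page_id=1, position=4, heading_level=1))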
Words that are in a heading or are set apart from normal
text in other ways may be given additional weighting in many search algorithms.
However, keep in mind that it may be an unnatural pattern for your keyword
phrases to appear many times in bold and headings without occurring in any of
the regular textual body copy. Also, if a page looks like it is aligned too
perfectly with a topic (i.e., so overly focused that it has an abnormally high
keyword density), then that page may get a lower relevancy score than a page
with a lower keyword density and more natural page copy.
By storing where the terms occur, search engines can
understand how close one term is to another. Generally, the closer the terms
are together, the more likely the page with matching terms will satisfy your
query.
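A minimal proximity check, reusing the position lists from the earlier index sketch, might look like the following; the rule that a smaller distance scores better is an illustrative assumption:

    def min_distance(positions_a, positions_b):
        # Smallest gap between any occurrence of one term and any of the other.
        return min(abs(a - b) for a in positions_a for b in positions_b)

    # Positions of "dog" (1) and "cat" (4) on page 1 from the earlier example:
    print(min_distance([1], [4]))  # 3 words apart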
If you use an important group of words on the page
only once, try to make sure they are close together or right next to each other. If
words also occur naturally, sprinkled throughout the copy many times, you do
not need to try to rewrite the content to always have the words next to one
another. Natural-sounding content is best.
Words that are common do not help search engines
understand documents. Exceptionally common terms, such as the, are called stop words. While search engines index stop words,
they are not typically used or weighted heavily to determine relevancy in
search algorithms. If I search for "the Cat in the Hat," search engines may
insert wildcards for the words "the" and "in," so my search will effectively
look like "* cat * * hat."
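A small sketch of that query rewriting, with a tiny illustrative stop-word list (real lists are longer and engine-specific):

    STOP_WORDS = {"the", "in", "and", "or", "a", "of"}

    def rewrite_query(query):
        # Replace exceptionally common words with wildcards instead of
        # matching them directly.
        return " ".join(
            "*" if word in STOP_WORDS else word
            for word in query.lower().split()
        )

    print(rewrite_query("the Cat in the Hat"))  # * cat * * hat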
Each
page is normalized to a standard size. This prevents longer pages from
gaining an unfair advantage simply by repeating a term many times throughout
long page copy. It also prevents short pages from scoring arbitrarily high
because a few keyword phrases make up a high percentage of their copy. Thus,
there is no magical page copy length that is best for all search engines.
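The simplest form of this normalization divides a raw term count by the page length; real engines use more refined schemes (pivoted length normalization, for example), so treat this ratio purely as an illustration:

    def normalized_tf(term_count, page_length):
        # Long pages gain no automatic advantage: what matters is the
        # term count relative to the page's total word count.
        return term_count / page_length if page_length else 0.0

    # "cat" 3 times in a 100-word page vs. 12 times in a 1,000-word page:
    print(normalized_tf(3, 100))    # 0.03
    print(normalized_tf(12, 1000))  # 0.012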
The uniqueness of page content is far more important than
the length. Page copy has three purposes above all others:
• To be unique enough to get indexed and ranked in the search results
• To create content that people find interesting enough to want to link to
• To convert site visitors into subscribers, buyers, or people who click on ads
Not every page
is going to make sales or be compelling enough to link to, but if, in
aggregate, many of your pages are of high quality over time, it will help boost
the rankings of nearly every page on your site.
Term Frequency
(TF) is a measure of how often a term appears in a document. Terms
that occur frequently within a document are thought to be some of the more
important terms of that document.
If a word
appears in every (or almost every) document, then it tells you little about how
to discern value between documents. Words that appear frequently will have
little to no discrimination value, which is why many search engines ignore
common stop words (like the, and, and or).
Rare terms,
which appear in only a limited number of documents, have a much higher
signal-to-noise ratio. They are much more likely to tell you what a document is
about.
Inverse
Document Frequency (IDF) can be used to adjust the value of term frequency
to account for how common a term is across a corpus of documents.
Terms that are in a limited number of documents will likely tell you more about
those documents than terms that are scattered throughout many documents.
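Combining the two gives the classic TF-IDF score. The textbook formulation below (idf = log(N / df), where N is the corpus size and df is the number of documents containing the term) is only a sketch; production ranking functions are considerably more involved:

    import math

    def tf_idf(term_count, num_docs, docs_containing_term):
        idf = math.log(num_docs / docs_containing_term)
        return term_count * idf

    # In a 1,000-document corpus, a rare term found in 10 documents far
    # outscores a common one found in 900, even at the same raw count:
    print(tf_idf(5, 1000, 10))   # ~23.0
    print(tf_idf(5, 1000, 900))  # ~0.5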
When people
measure keyword density, they are generally missing some other important
factors in information retrieval such as IDF, index normalization, word
proximity, and how search engines account for the various element types. (Is
the term bolded, in a heading, or in a link?)
Search engines
may also use technologies like latent semantic indexing to mathematically model
the concepts of related pages. Google is scanning millions of books from
university libraries. As much as that process is about helping people find
information, it is also used to help Google understand linguistic patterns.
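The core idea behind latent semantic indexing can be sketched with a singular value decomposition of a term-document matrix; the tiny matrix and its labels below are invented purely for illustration:

    import numpy as np

    #                     doc1 doc2 doc3
    counts = np.array([[1,   0,   0],    # dog
                       [1,   1,   0],    # cat
                       [0,   1,   0],    # mouse
                       [0,   0,   1]])   # hat

    U, S, Vt = np.linalg.svd(counts, full_matrices=False)
    k = 2  # keep only the two strongest latent "concepts"
    doc_concepts = (np.diag(S[:k]) @ Vt[:k]).T

    # docs 1 and 2 both load on the strongest latent concept (they share
    # "cat"); doc 3 contributes nothing to it.
    print(doc_concepts)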
If you
artificially write a page stuffed with one keyword or keyword phrase without
adding many of the phrases that occur in similar natural documents, you may not show up for
many of the related searches, and some algorithms may see your document as
being less relevant. The key is to write naturally, using various related
terms, and to structure the page well.
Search engines
may use multiple reverse indexes for different content. Most current search
algorithms tend to give more weight to page title and link text than page copy.
For common
broad queries, search engines may be able to find enough quality matching
documents using link text and page title without needing to spend the
additional time searching through the larger index of page content. Anything
that saves computer cycles without sacrificing much relevancy is something you
can count on search engines doing.
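A sketch of that tiered lookup, with invented names and an arbitrary threshold, might look like this:

    ENOUGH_RESULTS = 1000  # illustrative cutoff

    def search(query, title_anchor_index, body_index):
        # Try the small, high-signal index of titles and link text first.
        results = title_anchor_index.get(query, [])
        if len(results) >= ENOUGH_RESULTS:
            return results  # a common broad query: the cheap index sufficed
        # Longer, rarer queries fall back to the larger page-copy index.
        return results + body_index.get(query, [])

The cheaper index is consulted first precisely because, for broad queries, it usually yields enough quality matches on its own.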
After the most
relevant documents are collected, they may be re-sorted based on
interconnectivity or other factors.
Around 50% of
search queries are unique, and with longer unique queries there is a greater
need for search engines to also use page copy to find enough relevant
matching documents (since there may be inadequate anchor text to surface
enough matches).