The Anatomy of a Large Scale Web Search Engine

Abstract

Global search engines are an integral part of the World Wide Web.
They have made the rapid growth of the web possible by allowing users to
find web pages relevant to their interests in a sea of information.

However, to engineer a search engine is a challenging task. Search
engines index tens to hundreds of millions of web pages involving a comparable
number of distinct terms. They answer tens of millions of queries every
day. Apart from the problems of scaling traditional search techniques to
data of this magnitude, there are new technical challenges posed by the
demands of uncontrolled hypertext collections, and these challenges are
only starting to be met.

Web search engines are very different from traditional search engines
in that they operate on hypertext. Therefore, they have to deal with crawling
and can make use of links for searching. Furthermore, due to rapid advances
in technology and web proliferation, creating a web search engine today
is very different from creating one three years ago.

In this paper, we present Google, a prototype of a scalable search
engine. Google is designed to crawl and index the Web efficiently, using
limited disk storage. To address scalability in search, Google makes use
of hypertextual information found in links to produce higher quality results.
The prototype, with a full text of 24 million pages, is available at http://google.stanford.edu/

1. Introduction

Since 1993, the World Wide Web has grown incredibly by almost any measure,
from 130 web servers in June 1993 to 650,000 web servers in January 1997
[?] serving 100 million web pages. By making all of this information easy
for users to locate, search engines have played a critical role in allowing
the Web to scale to its present size. In this paper, we address the scalability
of search engines in terms of both performance and search quality.

1.1 Web Search Engines: 1994 - 2000

Search engine technology has had to scale dramatically to keep up with
the growth of the web. One of the first web search engines, the World Wide
Web Worm (WWWW) [McBryan]
had an index of 110,000 web pages and web accessible documents in 1994.
As of November, 1997, the top search engines claim to index from 2 million
(WebCrawler) to 100 million web documents [www.searchenginewatch.com].
It is foreseeable that by the year 2000, a comprehensive index of the WWW
will contain over a billion documents. At the same time, the number of
queries search engines handle has grown incredibly too. In March and April
1994, the World Wide Web Worm received an average of about 1500 queries
per day. In November 1997, Altavista claimed it handled roughly 20 million
queries per day. With the increasing number of users on the web, and automated
systems which query search engines, it is likely that top search engines
will handle hundreds of millions of queries per day by the year 2000. In
this paper, we present the Google search engine developed at Stanford.
The goal of Google is to address many of the problems introduced by scaling
search engine technology to such extraordinary numbers.

1.2 Scaling with the Web

Creating a search engine which scales even to today's web presents many
challenges. First, there are the performance considerations. It is necessary
to have fast crawling technology to gather the web documents and keep them
up to date. Storage space must be used efficiently to store indices and,
optionally, the documents themselves. The indexing system must process
hundreds of gigabytes of data efficiently. Queries must be handled quickly,
at a rate of hundreds to thousands per second.

All of these tasks are becoming increasingly difficult as the Web grows.
However, hardware performance and cost have also been improving dramatically
to partially offset the difficulty. Disk cost has fallen to below $50 per
gigabyte with transfer rates close to 10MB per second. Memory prices are
below $5 per megabyte and 300MHz CPUs are available for little cost. There
are, however, several notable exceptions to this progress. Disk seek times
have remained fairly high at about 10 ms because of the physical limitations
of moving a disk head. To illustrate, it is now possible to read 100K in
roughly the same time as performing one disk seek. Another problem is that
operating system robustness still leaves much to be desired; operations
on gigabytes of data often corrupt small portions of the data.

In designing Google, we have considered both the rate of growth of the
WWW and technological changes. Google is designed to be scalable to extremely
large data sets. It makes efficient use of storage space to store the index.
The other data structures are optimized to allow for fast and efficient
access (see section XXX). Further, we expect that the cost to index and
store text or HTML is declining relative to the amount that will be available
(see section XXX). This will result in favorable scaling properties for
systems like Google.

1.3 Design Goals

Our primary design goal was to build an architecture that can support
novel research activities on web data. Another important design goal was
to build systems that reasonable numbers of people can actually use. Usage
was important to us because we think some of the most interesting research
will involve leveraging the vast amount of usage data that is available
from modern web systems. For example, there are many tens of millions of
searches performed every day. However, it is very difficult to get this
data, mainly because it is considered commercially valuable (see section
XXX).

To support novel research uses, Google stores all of the actual documents
it crawls in compressed form. One of our main goals in designing Google
was to set up an environment where other researchers can come in quickly
and process large chunks of the web and produce interesting results that
would have been very difficult otherwise. In the short time the system
has been up, there have already been several papers using databases generated
by Google, and about half a dozen others are underway. Indeed, our PageRank
algorithm, described in greater detail below, would have been very difficult
to build and evaluate without access to large chunks of the link structure
of the web. Other projects using Google include "Dynamic Data Mining"
[dynamic data mining reference], shiva, junghoo. We are interested in setting
up a Spacelab-like environment where researchers can propose and do interesting
experiments on our systems and data during a limited timeframe.

1.3.1 Quality of Search

Besides supporting varied research, we have a strong goal to improve
the quality of web search engines. In 1994, some people believed that a
complete search index would make it possible to find anything easily. According
to [XXX?], ``The best navigation service should make it easy to find almost
anything on the Web (once all the data is entered).'' However, the WWW
of 1997 is quite different. Anyone who has used a search engine recently
can readily testify that the completeness of the index is not the only
factor in the quality of search results. ``Junk results'' often wash out any
results that a user is interested in. In fact, as of November 1997, only
one of the top four commercial search engines finds itself (returns its
own search page in response to its name in the top ten results). One
of the main causes of this problem is that the number of documents in the
indexes has been increasing by many orders of magnitude, but the user's ability
to look at documents has not. People are still only willing to look at
the first few tens of results. Because of this, as the collection size
grows, we need tools that have high precision (number of relevant
documents returned, say in the top tens of results) even at the expense
of recall (the total number of relevant documents the system is
able to return). There is no end in sight to this problem, as the number
of documents on the web is still growing very rapidly. However, there is
optimism that the use of more hypertextual information will help return
better results. [two papers from www 97]. In particular, link structure
[XXX?] and link text provide a lot of information for making relevance
judgements and quality filtering. Google makes use of both link structure
and anchor text (see Sections ?? and ??).

1.3.2 Academic vs. Commercial Search Engines

Aside from tremendous growth, the WWW has also become increasingly commercial
over time. In 1993, 1.5% of web servers were on .com domains. This number
grew to over 60% in 1997. At the same time, search engines have migrated
from the academic domain to the commercial. Currently, most search engine
development has gone on at companies with little publication of technical
details. This causes search engine technology to remain largely a black
art and to be advertising oriented (see Section ?). With Google, we have
a strong goal to push more development and understanding into the academic
realm.

2. System Features

The Google search engine has several features that help it produce better
quality results. First, it makes use of the link structure of the Web to calculate
a quality ranking for each web page. This ranking is called PageRank [?].
Second, Google treats link text specially for search purposes.

2.1 PageRank

[XXXXX this needs to be expanded] The PageRank of a web page is
a rough measure of the page's importance. Roughly speaking, a page is important
if there are many pages that point to it or if the pages that point to
it are important. In order to calculate PageRank, the entire link structure
of the Web (or the portion that has been crawled) is used.
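
To make the recursive definition concrete, the following Python sketch
computes PageRank by simple iteration over a small link graph. The damping
factor d and the convergence tolerance are illustrative assumptions; the
precise formulation and its justification are given in [Page 98].

    # Illustrative sketch only: d = 0.85 and the tolerance are assumed
    # values, not necessarily the deployed parameters.
    def pagerank(links, d=0.85, tol=1e-6, max_iters=100):
        """links maps each docID to the list of docIDs it points to."""
        pages = set(links)
        for targets in links.values():
            pages.update(targets)
        n = len(pages)
        rank = {p: 1.0 / n for p in pages}
        for _ in range(max_iters):
            # Every page keeps a base share, plus a share of the rank of
            # each page that points to it.
            new_rank = {p: (1.0 - d) / n for p in pages}
            for page, targets in links.items():
                for t in targets:
                    new_rank[t] += d * rank[page] / len(targets)
            if sum(abs(new_rank[p] - rank[p]) for p in pages) < tol:
                return new_rank
            rank = new_rank
        return rank

    # Page C is pointed to by both A and B, so it ends up most important.
    print(pagerank({"A": ["C"], "B": ["C"], "C": ["A"]}))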

2.2 Anchor Text

The text of links is treated in a special way in our search engine.
Normally, the text of a link gets associated with the page that the link
is on. Instead, we associate it with both the page the link is on and the
page the link points to. This has several advantages. First, anchors often
provide more accurate descriptions of web pages than the pages themselves
(see Section ??). Second, anchors may exist for documents which cannot
be indexed by a text-based search engine, such as images, programs, and
databases. Third, this makes it possible to return web pages which have
not actually been crawled. Note that there are problems with this approach,
since pages are never checked for validity before being returned to the
user. Because of this, it is possible that the search engine will return
a page that never actually existed, but had hyperlinks pointing to it.
However, because of the way that results are sorted, this problem happens
rarely.
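
As a minimal sketch of this double association (the record shape and the
in-memory index below are assumptions for illustration, not our actual
data structures), anchor file entries can be replayed into an index as
follows:

    def index_anchors(anchor_records, forward_index):
        """anchor_records: iterable of (source docID, target docID, text)."""
        for source, target, text in anchor_records:
            for word in text.lower().split():
                # Credit the anchor words to the page the link points to,
                # which may not even have been crawled.
                forward_index.setdefault(target, []).append(word)
                # The words are also part of the page the link is on.
                forward_index.setdefault(source, []).append(word)

    index = {}
    index_anchors([(1, 2, "Stanford home page")], index)
    # docID 2 is now findable by "stanford" even if it was never fetched.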

This idea of propagating anchor text to the page it refers to was implemented
in the World Wide Web Worm [refXXXX] especially because it helps search
non-text information, and expands the search coverage with fewer downloaded
documents. We use it mostly for the first reason which is that anchor text
can help provide better quality results. Using anchor text efficiently
is technically difficult because of the large amounts of data which must
be processed. In our current crawl of 24 million pages, we had over
259 million anchors which we indexed. [XXX RANKDEX CITE?]

2.3 Other Features

Aside from PageRank and the use of anchor text, Google has several other
features. First, it has location information for all hits and so it makes
extensive use of proximity in search. Second, Google keeps track of some
visual presentation details such as font size of words. Words in a larger
or bolder font are weighted higher than other words. Third, full raw HTML
of pages is available in a repository.

3 Related Work

Search research on the web has a short and concise history. The WWWW
was one of the first web search engines. It was subsequently followed by
WebCrawler, Lycos, Altavista, Infoseek, Excite, HotBot (Inktomi), and others.
Of these more recent search engines, little has been published [Mauldin]
[Pinkerton]. Compared
to the growth of the Web and the importance of search engines there are
precious few documents about these search engines. According to Michael
Mauldin (chief scientist, Lycos Inc) [Mauldin],
"the various services (including Lycos) closely guard the details
of these databases". However, there has been a fair amount of work
on specific features of search engines. Especially well represented is work
which can get results by post-processing the results of existing commercial
search engines, or produce small scale "individualized" search
engines. Finally, there has been a lot of research on information retrieval
systems, especially on well controlled collections. In the next two sections,
we discuss some areas where this research needs to be extended to work
better on the web.

3.1 Information Retrieval

Work in information retrieval systems goes back many years and is well
developed. However, most of the research on information retrieval systems
is on small well controlled homogenous collections such as collections
of scientific papers or news stories on a related topic. Indeed, the primary
benchmark for information retrieval, the Text REtrieval Conference [TREC
96], uses a fairly small, well controlled collection for their benchmarks.
The "Very Large Corpus" benchmark is only 20GB compared to the
147GB from our crawl of 24 million web pages. Things that work well on
TREC often do not produce good results on the web. For example, the standard
vector space model tries to return the document that most closely approximates
the query, given that both query and document are vectors defined by their
word occurrence. On the web, this strategy often returns very short documents
that are the query plus a few words. For example, we have seen a major
search engine return a page containing only "Bill Clinton Sucks"
and a picture from a "Bill Clinton" query. Some argue that on the
web, users should specify more accurately what they want and add more words
to their query. We disagree vehemently with this position. If a user issues
a query like "Bill Clinton" they should get reasonable results
since there is an enormous amount of high quality information available
on this topic. Given examples like these, we believe that the standard information
retrieval work needs to be extended to deal effectively with the web.

3.2 Differences Between the Web and Well Controlled Collections

The web is a vast collection of completely uncontrolled heterogeneous
documents. Documents on the web have extreme variation internal to
the documents, and also in the external meta information that might
be available. For example, documents differ internally in their
language (both human and programming), vocabulary (email
addresses, links, zip codes, phone numbers, product numbers), type or
format (text, HTML, PDF, images, sounds), and may even be machine generated
(log files or output from a database). On the other hand, we define external
meta information as information that can be inferred about a document,
but is not contained within it. Examples of external meta information include
things like reputation of the source, update frequency, quality, popularity
or usage, and citations. Not only are the possible sources of external
meta information varied, but the things that are being measured vary many
orders of magnitude as well. For example, compare the usage information
from a major home page, like Yahoo's, which currently receives XX million
page views every day, with an obscure historical article which might receive
one view every ten years; the two differ by many orders of magnitude. Clearly,
these two items must be treated very differently by a search engine.

Another big difference between the web and traditional well controlled
collections is that there is virtually no control over what people can
put on the web. Couple this flexibility to publish anything with the enormous
influence of search engines to route traffic, and "fooling" or
spamming search engines becomes a serious problem. There is enormous economic
incentive to mislead search engines. If you can convince a search engine
that your page should be returned for a popular query, even if your page
has nothing to do with that query, you can reap enormous economic benefit
because a large number of people will see your page; it is like free advertising.
Because of this economic feedback system, people are willing to spend huge
amounts of time tailoring their pages for search engines so they come up
high in the results for important search terms. If the search engine keeps
its index up to date, the problem is worsened since people have more feedback
to adjust their pages. Finally, it is hard to keep track of offending parties,
since they can easily create a new identity on the net. This "spamming"
or misleading of search engines is a problem that has not been addressed
in traditional information retrieval systems. Also, it is interesting to
note that metadata efforts have largely failed with web search engines,
because any text on the page which is not directly represented to the user
is abused to "spam" search engines. There are even numerous companies
which specialize in manipulating search engines for profit.

4 System Overview

A web search engine must perform several major functions: crawling,
indexing, and searching. In this section we describe at a high level how
each of these processes happens in Google. More detailed descriptions are
in following sections.

Crawling is the most fragile task since it involves interacting with
hundreds of thousands of web servers and various name servers which are
all beyond the control of the system. Polite robot policies mandate that
no server should be visited unreasonably often and the robots
exclusion protocol should be heeded. Any errors in crawling invariably
lead to many angry emails from webmasters. In fact, even polite behavior
leads to some angry or confused emails from webmasters (see Section [?]).
One way we have mitigated some of these problems is by only crawling sites
that look like they are in the US (.com, .edu...). This reduces the number
of people our system is in contact with, and gives us a denser sample.

In Google, the crawling is done by several distributed crawlers. There
is a URLserver that sends lists of URLs to be fetched to the crawlers.
The web pages that are fetched are then sent to the storeserver. The storeserver
then compresses and stores the web pages into a repository. Every web page
has an associated ID number called a docID which is assigned whenever a
new URL is parsed out of a web page. The indexing function is performed
by the indexer and the sorter. The indexer performs a number of functions.
It reads the repository, uncompresses the documents, and parses them. Each
document is converted into a set of word occurrences called hits. The hits
record the word, position in document, an approximation of font size, and
capitalization. The indexer distributes these hits into a set of ``barrels''.

The indexer performs another important function. It parses out all the
links in every web page and stores important information about them in
an anchors file. This file contains enough information to determine where
each link points from and to and the text of the link.

The URLresolver reads the anchors file and converts relative URLs into
absolute URLs and in turn into docIDs. It puts the anchor text into the
forward index, associated with the docID that the anchor points to. It
also generates a database of links which are pairs of docIDs. The links
database is used to compute PageRanks for all the documents.

The sorter takes the forward index, which is sorted by docID (this is
a simplification, see Section ??), and resorts it by wordID to generate
the inverted index. This is done in place so that little temporary space
is needed for this operation. The sorter also produces a list of wordIDs
and offsets into the inverted index. A program called dumplexicon takes
this list together with the lexicon produced by the indexer and generates
a new lexicon to be used by the searcher. The searcher is run by a web
server and uses the lexicon built by dumplexicon together with the inverted
index and the PageRanks to answer queries.

Major Data Structures

Google is composed of a number of important data structures. These
data structures are optimized so that a large document collection can be
crawled, indexed, and searched with little cost. They are designed
to scale well with large amounts of data and make use of technological
changes that have happened in hardware. Although CPUs and bulk input/output
rates have improved dramatically over the years, a disk seek still requires
about 10 ms to complete. Therefore, it would take three days to perform
a seek for every document in our current 24 million page collection.
This is not completely unreasonable, and further speedup is possible through
parallelizing disk seeks. However, seeks are a major bottleneck and it seems
unlikely that seek performance will improve as rapidly as other parameters,
assuming traditional disk technology. Therefore, Google is designed
to avoid disk seeks whenever possible, and does not even perform a disk
seek for every document.

BigFiles

Many of Google's data structures are often larger than 4 gigabytes
(the maximum file size addressable by a 32 bit integer) and they must be
spread among drives. We considered using a database but chose to
implement our own bigfile data structure for portability, efficiency, and
fine level control. Currently BigFiles are virtual files spanning
multiple file systems and are addressable by 64 bit integers. The
allocation among multiple file systems is handled automatically.
BigFiles also support rudimentary compression options. We plan to extend
bigfiles to inherently support distributed processing by allowing multiple
readers and writers.
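
The following Python sketch illustrates the core idea of addressing one
virtual file through a series of ordinary files. The chunk naming, the
fixed chunk size, and the read-only interface are simplifying assumptions;
real BigFiles also handle allocation across file systems and compression.

    CHUNK_SIZE = 2**31  # keep each underlying file below the 32 bit limit

    class BigFile:
        def __init__(self, basename):
            self.basename = basename

        def _chunk_path(self, index):
            return "%s.%04d" % (self.basename, index)

        def read(self, offset, length):
            """Read length bytes starting at a 64 bit offset."""
            data = b""
            while length > 0:
                index, local = divmod(offset, CHUNK_SIZE)
                with open(self._chunk_path(index), "rb") as f:
                    f.seek(local)
                    piece = f.read(min(length, CHUNK_SIZE - local))
                if not piece:
                    break  # ran past the end of the data
                data += piece
                offset += len(piece)
                length -= len(piece)
            return data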

Repository

The repository contains the full HTML of every web page. Each
page is compressed using zlib (see RFC 1950).
The choice of compression technique is a tradeoff between speed and compression
ratio. We chose zlib's speed over a significant improvement in compression
offered by bzip. The compression
rate of bzip was ~4 to 1 on the repository compared to zlib's ~3 to 1 compression.
In the repository, the documents are stored one after the other and are
prefixed by docID, length, and URL. The repository requires no other data
structures to be used in order to access it. This helps with data consistency
and makes development much easier; we can rebuild all the other data structures
from only the repository and a file which lists crawler errors.
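
To illustrate the layout, here is a sketch of writing and scanning
repository entries in Python. The exact header field widths below are
assumptions for illustration, not the deployed format.

    import struct
    import zlib

    # Assumed header: 8 byte docID, 4 byte URL length, 4 byte body length.
    HEADER = struct.Struct("<QII")

    def write_entry(f, docid, url, html):
        body = zlib.compress(html)          # html is raw bytes
        url_bytes = url.encode("utf-8")
        f.write(HEADER.pack(docid, len(url_bytes), len(body)))
        f.write(url_bytes)
        f.write(body)

    def read_entries(f):
        """Scan the repository sequentially; no other index is needed."""
        header = f.read(HEADER.size)
        while len(header) == HEADER.size:
            docid, url_len, body_len = HEADER.unpack(header)
            url = f.read(url_len).decode("utf-8")
            html = zlib.decompress(f.read(body_len))
            yield docid, url, html
            header = f.read(HEADER.size)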

Document Index

The document index keeps information about each document. It is
a fixed width ISAM (Indexed Sequential Access Method) index, ordered by docID.
The information stored in each entry includes the current document status
(reference seen, sent to urlserver, crawled, ...), a pointer into the repository,
a document checksum, and various statistics (number of words, number of
question marks, last crawl date, ...). If the document has been crawled,
it also contains a pointer into a variable width file called docinfo which
contains its URL and title. Otherwise the pointer points into the
URLlist which contains just the URL. This design decision was driven by
the desire to have a reasonably compact data structure, and the ability
to fetch a record in one disk seek. The compactness is important since
our list of 76.5 million known URLs, generated by crawling 24 million URLs, takes
over 4GB of space even with no titles or empty space.

Additionally, there is a file which is used to convert URLs into docIDs.
It is a list of URL checksums with their corresponding docIDs and is sorted
by checksum. In order to find the docID of a particular URL, the
URL's checksum is computed and a binary search is performed on the checksums
file to find its docID. URLs may be converted into docIDs in batch
by doing a merge with this file. This is the technique the URLresolver
uses to turn URLs into docIDs. This batch mode of update is crucial
because otherwise we must perform one seek for every link, which would take
more than one month for our 24 million page, 322 million link dataset.
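
The single-URL lookup can be sketched as a binary search over fixed-width
(checksum, docID) records; the CRC32 checksum and the field widths below
are assumptions for illustration. The batch variant simply replaces the
per-URL seeks with one sequential merge of a sorted list of URL checksums
against this file.

    import struct
    import zlib  # crc32 stands in for whatever checksum is actually used

    RECORD = struct.Struct("<IQ")  # assumed: 4 byte checksum, 8 byte docID

    def url_to_docid(f, url, num_records):
        target = zlib.crc32(url.encode("utf-8")) & 0xFFFFFFFF
        lo, hi = 0, num_records - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            f.seek(mid * RECORD.size)
            checksum, docid = RECORD.unpack(f.read(RECORD.size))
            if checksum == target:
                return docid
            elif checksum < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return None  # URL not yet known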

Lexicon

The lexicon has several different forms. One important change
from earlier systems is that the lexicon can fit in memory for a reasonable
price. In the current implementation we can keep the lexicon in memory
on a machine with 256 MB of main memory. The current lexicon contains
14 million words (though some rare words were not added to the lexicon).
It is implemented in two parts -- a list of the words (concatenated together
but separated by nulls) and a hash table of pointers. For various
functions, the list of words has some auxiliary information which is beyond
the scope of this paper to explain fully.
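
A minimal sketch of the two-part layout in Python, ignoring the auxiliary
information:

    def build_lexicon(words):
        offsets = {}                      # hash table of pointers
        pieces = []
        position = 0
        for word in words:
            offsets[word] = position
            pieces.append(word)
            position += len(word) + 1     # account for the null separator
        wordlist = "\0".join(pieces) + "\0"
        return wordlist, offsets

    wordlist, offsets = build_lexicon(["anatomy", "engine", "search"])
    start = offsets["engine"]
    print(wordlist[start:wordlist.index("\0", start)])  # -> engine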

Hit Lists

A hit list corresponds to a list of occurrences of a particular word
in a particular document including position, font, and capitalization information.
Hit lists account for most of the space used in both the forward and the
inverted indices. Because of this, it is important to represent them as
efficiently as possible. We considered several alternatives for encoding
position, font, and capitalization: a simple encoding (a triple of integers),
a compact encoding (a hand optimized allocation of bits), and Huffman coding.
In the end we chose the hand optimized compact encoding since it required
far less space than the simple encoding and far less bit manipulation than
Huffman coding.

Our compact encoding uses two bytes for every hit. There are two types
of hits: fancy hits and plain hits. Fancy hits include hits occurring in
a URL, title, anchor text, or meta tag; plain hits include everything
else. A plain hit consists of a capitalization bit, font size (relative
to the rest of the document) in three bits (only 7 values are actually
used because 111 is the flag that signals a fancy hit), and position as
number of words in 12 bits (all positions higher than 4095 are labeled
4096). A fancy hit consists of a capitalization bit, the font size set to 7
to signal a fancy hit, 4 bits to encode the type of fancy hit, and 8 bits
of position. For anchor
hits, the 8 bits of position are split into 4 bits for position in anchor
and 4 bits for a hash of the docid the anchor occurs in. We use font size
relative to the rest of the document because when searching, you do not
want to rank otherwise identical documents differently just because one
of the documents is in a larger font.
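
The following sketch packs and unpacks such two-byte hits. The ordering
of the fields within the 16 bits is an assumption, since only the field
widths are fixed above.

    FANCY_FONT = 0b111  # font value 7 flags a fancy hit

    def pack_plain_hit(cap, font, position):
        """cap: 1 bit; font: 0-6 in 3 bits; position: 12 bits (clamped)."""
        assert font != FANCY_FONT
        return (cap << 15) | (font << 12) | min(position, 4095)

    def pack_fancy_hit(cap, hit_type, position):
        """cap: 1 bit; type of fancy hit: 4 bits; position: 8 bits."""
        return (cap << 15) | (FANCY_FONT << 12) | (hit_type << 8) \
               | (position & 0xFF)

    def unpack_hit(hit):
        cap = hit >> 15
        font = (hit >> 12) & 0b111
        if font == FANCY_FONT:
            return ("fancy", cap, (hit >> 8) & 0b1111, hit & 0xFF)
        return ("plain", cap, font, hit & 0xFFF)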

The way the length of a hit list is encoded varies between the forward
index and the inverted index. XXX Expand.

Forward Index

The forward index is actually already partially sorted. It is stored
in a number of barrels (we used 64). Each barrel holds a range of the lexicon.
If a document contains words that fall into a particular barrel, the docID
is recorded into the barrel, followed by a list of wordIDs with hit lists
which correspond to those words. This scheme requires slightly more storage
because of duplicated docIDs but the difference is very small for a reasonable
number of buckets and saves considerable time and coding complexity in
the final indexing phase done by the sorter.
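
As a sketch (with the barrel boundaries simplified to equal wordID ranges,
which is an assumption about how the lexicon range is split), dumping a
parsed document into the barrels looks like:

    NUM_BARRELS = 64
    MAX_WORDID = 14000000  # roughly the size of the current lexicon

    def barrel_for(wordid):
        return min(wordid * NUM_BARRELS // MAX_WORDID, NUM_BARRELS - 1)

    def dump_document(docid, hits_by_wordid, barrels):
        """hits_by_wordid: dict of wordID -> list of packed hits."""
        per_barrel = {}
        for wordid, hits in sorted(hits_by_wordid.items()):
            per_barrel.setdefault(barrel_for(wordid), []).append((wordid, hits))
        # The docID is duplicated once per barrel the document touches.
        for b, entries in per_barrel.items():
            barrels[b].append((docid, entries))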

Inverted Index

In order to generate the inverted index, the sorter takes each of the
forward barrels and sorts it by wordid to produce an inverted barrel. This
process happens one barrel at a time, thus requiring little temporary storage.
Also, we parallelize the sorting phase to use as many machines as we have
simply by running multiple sorters, which can process different buckets
at the same time. It took roughly a day of wall clock time to
sort our 26 million page index on three machines with 256MB of RAM each
(a significant amount of this time was spent doing slow file IO over NFS).
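
A sketch of inverting a single barrel follows; the in-place trick and the
on-disk record formats are omitted for clarity.

    def invert_barrel(forward_barrel):
        """forward_barrel: list of (docID, [(wordID, hits), ...]) entries."""
        inverted = {}
        for docid, entries in forward_barrel:
            for wordid, hits in entries:
                inverted.setdefault(wordid, []).append((docid, hits))
        # Emit wordIDs in order; the resulting offsets of each posting
        # list are what the sorter hands to dumplexicon.
        return sorted(inverted.items())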

Crawling the Web

Running a web crawler is a challenging task. There are tricky performance
and reliability issues and even more importantly, there are social issues.

Efficient Crawling

In order to scale to hundreds of millions of web pages, Google has a
fast distributed crawling system. A single URLserver serves lists of URLs
to a number of crawlers (we typically ran about 3). Both the URLserver
and the crawlers are implemented in Python. Each crawler keeps roughly
300 connections open at once. This is necessary to retrieve web pages at
a fast enough pace. At peak speeds, the system can crawl over 100 web pages
per second using four crawlers. This amounts to roughly 600K per second
of data. A major performance stress is DNS lookup. Each crawler maintains
its own DNS cache so it does not need to do a DNS lookup before crawling
each document.
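
The sketch below shows the shape of one crawler's loop in Python, with the
DNS cache made explicit. The urls_from_urlserver feed and the
send_to_storeserver sink are assumed stand-ins, a thread pool stands in
for the event-driven handling of the open connections, and robots
exclusion checks and politeness delays are omitted.

    import socket
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor
    from urllib.parse import urlparse

    dns_cache = {}

    def resolve(host):
        # Cache lookups: resolving before every fetch is a major cost.
        if host not in dns_cache:
            dns_cache[host] = socket.gethostbyname(host)
        return dns_cache[host]

    def fetch(url):
        try:
            resolve(urlparse(url).hostname)
            with urllib.request.urlopen(url, timeout=30) as response:
                return url, response.read()
        except OSError:
            return url, None  # communication errors are logged, not fatal

    def crawl(urls_from_urlserver, send_to_storeserver, connections=300):
        with ThreadPoolExecutor(max_workers=connections) as pool:
            for url, page in pool.map(fetch, urls_from_urlserver):
                if page is not None:
                    send_to_storeserver(url, page)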

XXX TALK ABOUT IO MANAGEMENT - DIFFERENT QUEUES

Crawler Reliability

The WWW is very heterogeneous, which is a delight to surfers who like
variety but quite a burden on a program which must handle anything. In
our crawls, we encountered infinite web pages, infinite URLs, many varied
kinds of communication errors, and anything else one might imagine. As
an amusing example, a number of hosts had their IP address resolve to 127.0.0.1
- the local host. As a result, during early runs, we were surprised how
many web pages matched terms from our own home page.

Social Issues

It turns out that running a crawler which connects to more than half
a million servers and generates tens of millions of log entries produces
a fair amount of email and phone calls. Because of the vast number of people
coming on line, there are always those users who do not know what a crawler
is, because this is the first one they have seen. Almost daily, we receive
an email something like, "Wow, you looked at a lot of pages from my
web site. How did you like it?" There are also some people who do
not know about the robots exclusion protocol, and think their page should
be protected from indexing by a statement like, "This page is copyrighted
and should not be indexed", which needless to say is difficult for
web crawlers to understand. Also, because of the huge amount of data involved,
unexpected things will happen. For example, our system was trying to crawl
an online game. This resulted in lots of garbage messages in the middle
of their game! It turns out this was an easy problem to fix. But this problem
had not come up until we had downloaded tens of millions of pages. Because
of the immense variation in web pages and servers, it is virtually impossible
to test a crawler without running it on a large part of the Internet. Invariably,
there are hundreds of obscure problems which may only occur on one page
on the whole web and cause the crawler to crash, or worse, unpredictable
or incorrect behavior.

Since such large numbers of people are looking at their web logs every
day, if only one out of ten thousand people contacts us, we will be drowning
in email. As a result, systems which access large parts of the Internet
need to be designed to be very robust and carefully tested. Since large
complex systems such as crawlers will invariably cause problems, there
needs to be significant resources devoted to reading the email and dealing
with these problems as they come up.

8 Indexing the Web

The Parser

Dumping Data into Barrels

Sorting

10 Searching

The goal of searching is to provide quality search results efficiently.

The Ranking Function

Google maintains much more information about web documents than typical
search engines. Every hitlist includes position, font, and capitalization
information. Additionally, we factor in hits from anchor text and the PageRank
of the document. Combining all of this information into a rank is difficult.
We designed our ranking function so that no one factor can have too much
influence. [expand]
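
As a purely illustrative sketch of that dampening idea (the hit-type
weights and the logarithmic damping below are assumptions, not our
deployed function):

    import math

    TYPE_WEIGHTS = {"anchor": 10, "title": 8, "url": 6,
                    "large_font": 4, "plain": 1}

    def ir_score(hits):
        """hits: list of hit types for the query word in one document."""
        score = 0.0
        for hit_type in set(hits):
            # Dampen counts so repeating one kind of hit saturates.
            score += TYPE_WEIGHTS[hit_type] * math.log1p(hits.count(hit_type))
        return score

    def rank(hits, pagerank, pr_weight=2.0):
        # Combine the content score with log-scaled PageRank so that
        # neither factor can dominate the other.
        return ir_score(hits) + pr_weight * math.log1p(pagerank)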

Results

[SHOW SOME SAMPLE RESULTS -- help me find some good ones;
if you want, http://z.stanford.edu:4200/ (search the web)]

13 Conclusions

Scalability

Scalability of Google

We have designed Google to be scalable in the near term to a goal of
100 million web pages. We have disk and machines on the way to handle roughly
that amount. All of the time consuming parts of the system are parallelizable
and roughly linear time. These include things like the crawlers, indexers,
and sorters. We also think that most of the data structures will deal gracefully
with the expansion. However, at 100 million web pages we will be very close
up against all sorts of operating system limits in the common operating
systems (currently we run on both Solaris and Linux). These include things
like addressable memory, number of open file descriptors, network sockets
and bandwidth, and many others. To expand to a lot more than 100 million
pages would likely greatly increase the complexity of our system.

Scalability of Centralized Indexing Architectures

As the capabilities of computers have increased, it is possible to index
an ever larger amount of text for a reasonable cost. Of course, other
bandwidth intensive media such as video is likely to become more pervasive.
But, because the cost of production of text is low compared to media like
video, text is likely to remain very pervasive. Also, it is likely that
soon we will have speech recognition that does a reasonable job converting
speech into text, expanding the amount of text available. All of this
provides amazing possibilities for centralized indexing. Here is an illustrative
example. We assume we want to index everything everyone in the US has written
for a year. We assume that there are 250 million people in the US and they
write an average of 10k per day. That works out to be about 850 terabytes.
Also assume that indexing a terabyte can be done now for a reasonable
cost. We also assume that the indexing methods used over the text are linear,
or nearly linear in their complexity. Given all these assumptions we can
compute how long it would take before we could index our 850 terabytes
for a reasonable cost assuming certain growth factors. Moore's Law was
defined in 1965 as a doubling every 18 months in processor power. It has
held remarkably true, not just for processors, but for other important
system parameters such as disk as well. If we assume that Moore's Law holds
for the future, we need only 10 more doublings, or 15 years, to reach our
goal of indexing everything everyone in the US has written for a year for
a price that a small company could afford. Of course, Moore's Law may not
continue to hold, but there are certainly a lot of interesting centralized
applications even if we only get partway to our hypothetical example.
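
The arithmetic above can be checked directly; 850 terabytes is the rounded
figure, and the raw product lands in the same ballpark.

    import math

    people = 250e6          # assumed US population
    bytes_per_day = 10e3    # 10k of text per person per day
    total = people * bytes_per_day * 365
    print(total / 1e12)     # ~912 decimal terabytes, roughly 850-900 TB

    # Moore's Law doublings needed to go from indexing 1 TB to the total:
    doublings = math.ceil(math.log2(total / 1e12))
    print(doublings, doublings * 1.5)  # 10 doublings -> 15 years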

Because humans can only type or speak a finite amount, and as computers
continue improving, text indexing will scale even better than it does now.
So we are optimistic that our centralized web search engine architecture
will improve in its ability to cover the pertinent text information over
time. Of course there will always be many problems where distributed systems
like GlOSS [Gravano 94] will be the best solution, but it often
seems difficult to convince the world to use them. Distributed systems
suffer in large part from high administration costs of setting up many
systems, which is a problem that may be solved in the future.

[Gravano 94] Luis Gravano, Hector Garcia-Molina, and A. Tomasic. The
Effectiveness of GlOSS for the Text-Database Discovery Problem. Proc.
of the 1994 ACM SIGMOD International Conference On Management Of Data,
1994.

[Page 98] Lawrence Page and Sergey Brin. PageRank, an Eigenvector
based Ranking Approach for Hypertext. Submitted to the 21st Annual
ACM/SIGIR International Conference on Research and Development in Information
Retrieval. Melbourne, Australia, August 24 - 28, 1998.

Appendix A: Advertising and Mixed Motives

Currently, the predominant business model for commercial search engines
is advertising. The goals of the advertising business model do not always
correspond to providing quality search to users. For example, in our prototype
search engine the top result for "cellular phone" is "The Effect of
Cellular Phone Use Upon Driver Attention", a study which explains
in great detail the distractions and risk associated with conversing on
a cell phone while driving. This search result came up first because of
its high importance as judged by the PageRank algorithm, an approximation
of citation importance on the web [Page 98]. It is clear that a search
engine which was taking money for showing cellular phone ads would have
difficulty justifying the page that our system returned to its paying advertisers.
For this type of reason and historical experience with other media [Bagdikian
83], we expect that advertising funded search engines will be inherently
biased towards the advertisers and away from the needs of the consumers.
Since it is very difficult even for experts to evaluate search engines,
search engine bias is particularly insidious. A good example was OpenText,
which was reported to be selling companies the right to be listed at the
top of the search results for particular queries. This type of bias is
much more insidious than advertising, because it is not clear who "deserves"
to be there, and who is willing to pay money to be listed. This business
model resulted in an uproar, and OpenText has ceased to be a viable search
engine. But less blatant biases are likely to be tolerated by the market.
For example, a search engine could add a small factor to search results
from "friendly" companies, and subtract a factor from results
from competitors. This type of bias is very difficult to detect but could
still have a significant effect on the market. Furthermore, advertising
income often provides an incentive to provide poor quality search results.
For example, we noticed a major search engine would not return a large
airline’s home page when the airline’s name was given as a query. It so
happened that the airline had placed an expensive ad, linked to the query
that was its name. A better search engine would not have required this
ad, possibly costing the search engine the revenue from the airline.
In general, it could be argued from the consumer point of view that the
better the search engine is, the fewer advertisements will be needed for
the consumer to find what they want. This of course erodes the advertising
supported business model of the existing search engines. However, there
will always be money from advertisers who want a customer to switch products,
or have something that is genuinely new. But we believe the issue of advertising
causes enough mixed incentives that it is crucial to have a competitive
search engine that is transparent and in the academic realm.