Google PageRank-like algorithm dates back to 1941

Since the 1940s, PageRank-like iterative algorithms have been used to rank industries, journals, and people.

(PhysOrg.com) -- When Sergey Brin and Larry Page developed their PageRank algorithm for ranking webpages in 1998, they certainly knew that the seeds of the algorithm had been sown long before that time, as is evident from their paper's references. But the Google founders may not have known just how far back PageRank's predecessors reach - nearly 70 years, according to Massimo Franceschet, who dug up a 1941 paper with a similar ranking method, as well as several other pre-Google papers with algorithms that show remarkable similarities to PageRank. Yet Brin and Page may have expected as much; after all, as Franceschet notes, the motto of Google Scholar is "Stand on the shoulders of giants."

In a recent study, Franceschet, a computer scientist at the University of Udine in Italy, has presented a brief history of iterative ranking methods that predate PageRank. He also explains how the circular PageRank concept of determining the importance of a webpage based on the number of links it receives from important webpages, rather than by subjective expert evaluation, has provided an alternative way to define the quality of an item.
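The circular definition above can be made concrete with a short power-iteration sketch. This is illustrative Python only; the example graph, damping factor, and function names are assumptions for the sketch, not Brin and Page's actual implementation:

```python
# Illustrative sketch: a page's score is the sum of the scores of the
# pages linking to it, each divided by that page's out-degree, iterated
# until the scores settle. The damping factor models a random surfer
# who occasionally jumps to a random page.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outgoing in links.items():
            if not outgoing:  # dangling page: spread its score evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                for target in outgoing:
                    new_rank[target] += damping * rank[page] / len(outgoing)
        rank = new_rank
    return rank

# A hypothetical three-page web: A links to B and C, B to C, C to A.
web = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
ranks = pagerank(web)
```

In this toy graph, C ends up ranked highest: it collects links from both A and B, and A in turn benefits from being C's only outgoing link.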

The 1941 predecessor of PageRank is a paper by the economist Wassily W. Leontief, who developed a method for ranking the values of a nation’s various industrial sectors. Each industrial sector relies on the others, both for building materials (inputs) to manufacture its own products, and by selling its finished products (outputs) to other industries so they can manufacture their own products. Leontief developed an iterative method of valuing each industry based on the importance of the industries with which it is connected through inputs and outputs (similar to web links in PageRank). In 1973, Leontief earned the Nobel Prize in economics for his work in this area.

Other more recent PageRank-like algorithms have been used for ranking items in areas such as sociology and bibliometrics. In 1965, 33 years before Page and Brin developed PageRank, the sociologist Charles Hubbell published a method for ranking individuals. His premise was that “a person is important if it is endorsed by important people.” Like PageRank and Leontief’s algorithm, Hubbell’s method is also iterative, with its outputs influencing its inputs, ad infinitum.

Later, in 1976, Gabriel Pinski and Francis Narin developed a journal ranking method in the field of bibliometrics. Here, the premise is that the importance of a journal is determined by the importance of the journals that cite it, which again uses the same circular reasoning as PageRank.

Most recently, the computer scientist Jon Kleinberg of Cornell University developed a ranking approach very similar to PageRank, published around the same time as Brin and Page’s paper (which cites Kleinberg’s work). Kleinberg’s method was also aimed at improving Web information retrieval. The algorithm, called Hypertext Induced Topic Search (HITS), classifies webpages as “hubs” and “authorities.” These definitions are mutually recursive: hub pages point to authority pages, and authority pages are pointed to by hub pages. Mathematically, HITS is strikingly similar to PageRank, even though the two were developed independently. Since their publication, both papers have received widespread recognition and thousands of citations.
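The mutual reinforcement that HITS describes can be sketched in a few lines. This is illustrative only; the normalization scheme, the example graph, and the function names are assumptions for the sketch, not Kleinberg's exact formulation:

```python
# Hub and authority scores reinforce each other iteratively:
# an authority is pointed to by good hubs, and a hub points to
# good authorities.

def hits(links, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = set(links) | {t for out in links.values() for t in out}
    hub = {p: 1.0 for p in pages}
    auth = {p: 1.0 for p in pages}
    for _ in range(iterations):
        # An authority score is the sum of the hub scores pointing at it.
        auth = {p: sum(hub[q] for q in links if p in links.get(q, []))
                for p in pages}
        # A hub score is the sum of the authority scores it points at.
        hub = {p: sum(auth[t] for t in links.get(p, [])) for p in pages}
        # Normalize so the scores stay bounded across iterations.
        a_norm = sum(v * v for v in auth.values()) ** 0.5
        h_norm = sum(v * v for v in hub.values()) ** 0.5
        auth = {p: v / a_norm for p, v in auth.items()}
        hub = {p: v / h_norm for p, v in hub.items()}
    return hub, auth

# Hypothetical graph: A and B both link to C, so C is the authority
# and A, B are equally good hubs.
web = {"A": ["C"], "B": ["C"], "C": []}
hub, auth = hits(web)
```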

While PageRank has made Google a very powerful search engine, it did so by radically reformulating the concept of quality. The algorithm must constantly reevaluate each page's score as the importance of other pages varies, making quality fleeting rather than permanent.

As Franceschet has shown, the new concept of value goes beyond webpages. Today, this “popularity contest” style of determining quality is stirring debate in academic circles in the area of research quality evaluation. Traditionally, evaluation of academic papers is done through expert peer review; the alternative is to use the PageRank-inspired Eigenfactor metric, which uses bibliometric indicators to evaluate research quality. Most likely, there will be other areas that see the use of PageRank-inspired methods redefining the concept of value.


User comments : 5

It's worth digging into old science, as it often contains solutions to specific problems of the time that carry a much broader scope than foreseen.

Take the famous fast Fourier transform: Fourier probably did not foresee the intensive reliance on FFT math in the present digital age. Without the FFT, the internet, mobile phone networks, and consumer electronics would not have taken such flight.

It might be worth teaching an old dog new tricks, but a new dog and some old tricks might work just as well.

Some lawyer food: are these old algorithms similar enough in their basic methodology to count as prior art? In other words, if one were to build a new search engine based on these old papers, and Google sued for suspected patent infringement, are some parts of Google's search patents weakened to the extent that a judge could conclude, "been there, done that: these search algorithms can no longer be regarded as exclusive Google inventions/implementations; they were done so long ago, covering such a broad range of areas, that the algorithm has become part of the public domain"?

Well, actually Leontief's theory just shows how to define a sustainable economy through the fixed point of a linear operator. I see no connection with PageRank, except that PageRank is a linear fixed point, too. But then Markov's work (1906) predates PageRank even more, as does any work about fixed points of linear operators. Rather, Seeley (1949) introduced reputation as a recursive concept using exactly PageRank's formulation, and is the oldest currently known instance of the idea. Katz (1953) and Wei (1952) did essentially the same using different starting points. They were psychologists and sociologists, but they can all be found in Wasserman and Faust's book about social networks (1994), which makes this news IMHO quite a (late) red herring. A rather detailed mathematical account can be found here: http://arxiv.org/abs/0912.0238

a) Leontief's closed system is essentially Pinski and Narin's bibliometric method, which is endorsed by Larry Page in the PageRank patent;

b) The stochastic reformulation of Leontief's closed system is a weighted, teleportation-free version of PageRank;

c) The solution to this reformulation is the leading eigenvector of a stochastic matrix, in analogy with the solution of the PageRank problem (the leading eigenvector of the Google matrix);

d) Individual solution scores correspond to the total revenues of industries. In particular, the revenue of an industry B depends on the revenues of the industries A that produce products for B, weighted by the proportion of product that A produces for B. Highly remunerated sectors are those that receive inputs from other highly remunerated industries with a low propensity to differentiate their outputs among the other industries. Sounds familiar? It's PageRank logic!
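Point (c) above can be sketched numerically: power iteration converges to the leading eigenvector of a column-stochastic matrix. The 3-sector input-output matrix below is made up for illustration; column j gives the fraction of sector j's output going to each sector.

```python
# Power iteration for the leading eigenvector of a column-stochastic
# matrix: repeatedly apply the matrix and renormalize; the iterate
# converges to the dominant eigenvector (eigenvalue 1).

def leading_eigenvector(M, iterations=100):
    n = len(M)
    v = [1.0 / n] * n
    for _ in range(iterations):
        v = [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
        s = sum(v)
        v = [x / s for x in v]  # normalize so scores sum to 1
    return v

# Hypothetical flows among three sectors (each column sums to 1):
# entry M[i][j] is the share of sector j's output that goes to sector i.
M = [[0.0, 0.5, 0.3],
     [0.7, 0.0, 0.7],
     [0.3, 0.5, 0.0]]
scores = leading_eigenvector(M)
```

Here sector 1 scores highest: it receives the largest input shares from the other two sectors, which is exactly the "important because fed by important sectors" logic described above.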

These connections are very vague, and mostly related to the mathematical tools involved, which are quite common. They are no stronger, say, than those with Markov's work (1906!), as observed also by Massimo Marchiori (of HyperSearch fame). Seeley's paper, on the other hand, has the specific purpose of estimating reputation recursively starting from directed endorsement.
