Better search algorithms

I’m doing a bit of research for a few side projects that I’m working on. It basically has to do with data mining the semantic web, which is becoming oh-so-prevalent on the modern-day internet. Anyways, I read an interesting article on search algorithms. Basically, there are many different approaches to retrieving relevant data. Some methods use sophisticated algorithms on a single dataset. Other approaches use simpler, more basic algorithms, but run them over several different types of data.

Guess which strategy works best?

Correct, the simpler one. The point is that more, independent data usually beats smarter algorithms. If you have different datasets from different sources, you can usually get a more accurate response than by trying to squeeze every last drop of performance out of your favorite algorithm. Google adopts a similar strategy in its search. Not only does it index web pages, but it also bases rankings on the links that users actually click on. Combined, the two datasets produce a more relevant answer than either signal would on its own. Well, that’s a simple example anyway. In many ways, it’s like finding the location of an object in two-dimensional space by triangulating with three radars at different locations.
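To make the idea concrete, here’s a toy sketch of blending two independent relevance signals into one ranking. Everything here is made up for illustration (the page names, the scores, and the 50/50 weighting are all assumptions, not anything from the article), but it shows how a page that’s mediocre on one signal can still win once a second, independent signal is factored in.

```python
def combine_rankings(signals, weights):
    """Blend several independent relevance signals into one ranking.

    signals: list of dicts mapping doc id -> score in [0, 1]
    weights: one weight per signal
    Returns doc ids sorted from most to least relevant.
    """
    docs = set().union(*(s.keys() for s in signals))
    combined = {}
    for doc in docs:
        # Weighted sum; a doc missing from a signal just scores 0 there.
        combined[doc] = sum(w * s.get(doc, 0.0) for s, w in zip(signals, weights))
    return sorted(combined, key=combined.get, reverse=True)

# Signal 1: text-match relevance (e.g. keyword overlap with the query).
text_score = {"page_a": 0.9, "page_b": 0.7, "page_c": 0.6}
# Signal 2: behavioral data (e.g. fraction of users who click the result).
click_score = {"page_a": 0.2, "page_b": 0.8, "page_c": 0.95}

ranking = combine_rankings([text_score, click_score], weights=[0.5, 0.5])
print(ranking)  # page_a tops the text signal but sinks once clicks count
```

Notice that `page_a` would rank first on text match alone, but the click data demotes it, which is exactly the “independent datasets correct each other” effect.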