
Hey, I know that modern web search engines such as Google rank results mostly based on links to those results from other sites. That's not what interests me. What I'd like to know is how companies like 'Jerry and David's Guide to the World Wide Web' (Yahoo) or Google managed to index all the thousands of websites around 1998. I mean, I get that you can write a program that scans web pages and their metadata, but how do you tell it where to scan if there isn't a service where you can look up all the various URLs? Can anyone clarify it a little bit? Thank you once more - this is the best Internet forum I've ever seen!
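
Just so it's clear what I mean by "scan web pages and their metadata", here's a rough Python sketch of that part (the example.com URL is only a placeholder, and a real indexer would obviously do far more than grab the title and meta tags):

from html.parser import HTMLParser
from urllib.request import urlopen


class MetaScanner(HTMLParser):
    """Collect the <title> text and <meta name=... content=...> tags of one page."""

    def __init__(self):
        super().__init__()
        self.in_title = False
        self.title = ""
        self.meta = {}

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self.in_title = True
        elif tag == "meta":
            attrs = dict(attrs)
            if "name" in attrs and "content" in attrs:
                self.meta[attrs["name"]] = attrs["content"]

    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False

    def handle_data(self, data):
        if self.in_title:
            self.title += data


# Fetch one page and pull out the bits a simple indexer might store.
html = urlopen("http://example.com/").read().decode("utf-8", errors="replace")
scanner = MetaScanner()
scanner.feed(html)
print(scanner.title, scanner.meta)

That much I understand - my question is purely about where the list of pages to feed into something like this came from.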

The first effort, other than library catalogs, to index the Internet came in 1989, when Peter Deutsch and Alan Emtage, students at McGill University in Montreal, wrote an archiver for FTP sites, which they named Archie. The software would periodically reach out to every known openly available FTP site, list its files, and build a searchable index of them. Archie was searched with Unix-style commands, and it took some knowledge of Unix to use it to its full capability.
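
If it helps to picture it, here is a very rough sketch of that idea in modern Python - poll a fixed list of known anonymous FTP sites, list what they hold, and keep a searchable index of the file names. The host names are made-up placeholders; the real Archie was not written in Python and walked whole directory trees rather than taking a single flat listing.

from ftplib import FTP, all_errors

KNOWN_SITES = ["ftp.example.org", "ftp.example.net"]  # hypothetical hosts

index = {}  # file name -> list of (site, path) locations

for site in KNOWN_SITES:
    try:
        with FTP(site, timeout=30) as ftp:
            ftp.login()                    # anonymous login
            for path in ftp.nlst():        # flat listing; Archie recursed into directories
                name = path.rsplit("/", 1)[-1]
                index.setdefault(name, []).append((site, path))
    except all_errors as err:
        print(f"skipping {site}: {err}")

# "Searching Archie" then amounts to looking a name up in the index.
query = "gzip"
for name, locations in index.items():
    if query in name:
        print(name, locations)

The point is that Archie only needed its list of known FTP sites to visit, and it rebuilt its index by revisiting them periodically.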

Whew - catching my breath, just in from some exercise. I probably shouldn't even post anything because I'm shot and can barely type, but I liked the question. There must be a book, or books, on this. Nice bit on the FTP part.

Oh, those are really nice - where'd you get them done? YOUR NAILS, SILLY! Banter and wit are a primary member requirement for any colossal project solution. Not to toot my own horn... ☯ ... but who else will!? teeehehehehehheeee!~~~ 8:P Additionally: "Do, or do not. There is no try." - Yoda