Warning heeded, but I saw this on a blog post at commoncrawl.org.

This bucket is marked with Amazon's Requester-Pays flag, which means all access to the bucket contents requires an HTTP request that is signed with your Amazon Customer ID. The bucket contents are accessible to everyone, but the Requester-Pays restriction ensures that if you access the contents of the bucket from outside the EC2 network, you are responsible for the resulting access charges. You don't pay any access charges if you access the bucket from EC2, for example via a map-reduce job, but you still have to sign your access request. Details of the Requester-Pays API can be found here: http://docs.amazonwebservices.com/AmazonS3/latest/dev/index.html?RequesterPaysBuckets.html
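As a rough sketch, the Requester-Pays part boils down to one extra header on an otherwise normally signed S3 request; everything below the signature is the same as any other S3 GET (the Authorization value here is just a placeholder, not a real signature):

```python
# Minimal sketch: the extra header S3 Requester-Pays buckets expect.
# The Authorization value would normally be a real AWS signature;
# it is a placeholder here.
def requester_pays_headers(signature: str) -> dict:
    """Headers for a GET against a Requester-Pays bucket."""
    return {
        "Authorization": signature,          # your signed credential
        "x-amz-request-payer": "requester",  # opt in to paying for the request
    }

headers = requester_pays_headers("AWS <placeholder-signature>")
```

Libraries like boto handle the signing for you; the point is only that the "pay for it yourself" opt-in is a single declared header, not a separate API.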

If I understood that right, at least getting started with the tutorial will not result in me coughing up $200. Correct me if I am mistaken.

You don't need to pay for accessing it, but you still need to pay for the processing power, storage, and RAM of your EC2 instances. Of course, you can start by only accessing a specific day, as in the video, so you don't need as much processing power and hence pay less. But then you also won't be able to process 99.9% of the crawl data.

MapReduce is an implementation of an algorithm first presented in a 1970s issue of the ACM. I would commend to startups the study and ownership of the patent-expired content composed therein. There's a lot of untapped potential in there yet - and much dross. If we are to stand on the shoulders of giants, though, it's good to know where the giants were and what they did. Brin was a good scholar here, and Page gave something new. It was the fusion of old ideas and new that made Google. If you want to be t
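For readers who haven't met the pattern, the core idea fits in a few lines of plain Python - a toy word count, not the distributed implementation:

```python
from collections import Counter
from functools import reduce

# Toy MapReduce: word count over a few in-memory "documents".
docs = ["the quick brown fox", "the lazy dog", "the fox"]

# Map phase: each document -> a Counter of its words.
mapped = [Counter(doc.split()) for doc in docs]

# Reduce phase: merge the per-document counts into one total.
totals = reduce(lambda a, b: a + b, mapped, Counter())
```

The distributed versions add partitioning, shuffling, and fault tolerance around exactly this map-then-merge shape.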

The problem I've got is that searches with Google and the like turn up a lot of junk I'm not looking for, while file search engines like FilesTube simply ignore the numeric years specified in my search queries.

What I want to do is find PDF files of specific issues (month and year combinations) of certain magazine titles. But when I try these searches, the results contain a lot of years I had not specified, and the year I did specify doesn't appear anywhere in the resulting pages.
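One workaround is to over-fetch results and post-filter them yourself, keeping only entries that actually mention both the month and the year. A minimal sketch - the snippets below are made-up stand-ins for whatever your search tool returns:

```python
def matches_issue(text: str, month: str, year: str) -> bool:
    """Keep only results that actually mention both the month and the year."""
    t = text.lower()
    return month.lower() in t and year in t

# Hypothetical result snippets from a search for a June 1987 issue.
snippets = [
    "Example Magazine June 1987 Vol 12 No 6 (PDF)",
    "Example Magazine complete collection 1975-1998",
    "Example Magazine June 1992 scanned issue",
]
hits = [s for s in snippets if matches_issue(s, "June", "1987")]
```

It's crude (a page listing many years still slips through if your year appears once), but it at least removes results where the specified year never occurs.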

I've recently created a crawler to collect certain information from a website, to help me gather data sets for a small machine learning project. While I've followed robots.txt and nofollow links, the site's TOU was against it. After checking with the admin, I was told that gathering the information is not allowed, as the site owns it (as written in the TOU).
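For context, honoring robots.txt is straightforward with the standard library; a minimal sketch, with made-up rules and URLs:

```python
from urllib import robotparser

# Hypothetical robots.txt rules for an example site.
rp = robotparser.RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /private/",
])

allowed = rp.can_fetch("*", "https://example.com/data/page1")
blocked = rp.can_fetch("*", "https://example.com/private/page2")
```

Note that robots.txt only expresses the site's crawling preferences for well-behaved bots; it says nothing about the TOU question, which is exactly the gap the poster ran into.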

The data, however, is publicly available, so you wouldn't actually have to agree to the TOU to collect it, and since it's data I wanted, I concluded I should at least get a small sample (less than 1% of the total, around 200MB) to see whether anything can even be done with it.

What are your thoughts? Should I have abandoned the attempt, did I do the right thing, or should I even disregard their plea and simply take as much as I please (over a long period of time, so as not to hammer their bandwidth)?
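On the "not hammering their bandwidth" point, the usual courtesy is a fixed minimum delay between requests; a minimal sketch (the one-second default is an arbitrary choice, and the clock/sleep parameters exist only so the logic is testable):

```python
import time

class PoliteFetcher:
    """Enforce a minimum delay between successive requests."""

    def __init__(self, min_delay: float = 1.0,
                 clock=time.monotonic, sleep=time.sleep):
        self.min_delay = min_delay
        self.clock = clock  # injectable for testing
        self.sleep = sleep
        self.last = None

    def wait(self) -> None:
        """Block until at least min_delay has passed since the last call."""
        now = self.clock()
        if self.last is not None:
            remaining = self.min_delay - (now - self.last)
            if remaining > 0:
                self.sleep(remaining)
        self.last = self.clock()
```

Calling `wait()` before each HTTP request caps you at roughly one request per `min_delay` seconds, which keeps the load on the site negligible regardless of how long the crawl runs.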