
mikkl666 writes "In their official blog, Google announces that they are experimenting with technologies to index the Deep Web, i.e. the sites hidden behind forms, in order to be 'the gateway to large volumes of data beyond the normal scope of search engines'. For that purpose, the engine tries to automatically get past the forms: 'For text boxes, our computers automatically choose words from the site that has the form; for select menus, check boxes, and radio buttons on the form, we choose from among the values of the HTML'. Nevertheless, directives like 'nofollow' and 'noindex' are still respected, so sites can still be excluded from this type of search."
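The form-filling heuristic the blog post describes can be sketched roughly like this; `guess_form_values`, the scanner class, and the value-picking strategy here are invented for illustration and are of course not Google's actual code:

```python
# Hypothetical sketch of the strategy quoted above: text inputs get words
# taken from the page itself, while select menus, checkboxes, and radio
# buttons get values that already appear in the HTML.
from html.parser import HTMLParser
import re

class FormScanner(HTMLParser):
    """Collects form controls and candidate fill-in values from one page."""
    def __init__(self):
        super().__init__()
        self.text_inputs = []   # names of <input type="text"> fields
        self.choices = {}       # control name -> values present in the HTML
        self.words = []         # words from page text, candidates for text boxes
        self._select = None     # name of the <select> currently being parsed

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "input":
            kind = a.get("type", "text")
            if kind == "text":
                self.text_inputs.append(a.get("name"))
            elif kind in ("checkbox", "radio") and "value" in a:
                self.choices.setdefault(a.get("name"), []).append(a["value"])
        elif tag == "select":
            self._select = a.get("name")
            self.choices.setdefault(self._select, [])
        elif tag == "option" and self._select is not None and "value" in a:
            self.choices[self._select].append(a["value"])

    def handle_endtag(self, tag):
        if tag == "select":
            self._select = None

    def handle_data(self, data):
        # Collect words from the page as candidate text-box values.
        self.words.extend(re.findall(r"[a-zA-Z]{4,}", data))

def guess_form_values(html):
    """Return a plausible set of form values, built only from the page itself."""
    s = FormScanner()
    s.feed(html)
    filled = {name: (s.words[0] if s.words else "") for name in s.text_inputs}
    for name, values in s.choices.items():
        if values:
            filled[name] = values[0]   # pick a value already in the HTML
    return filled
```

A real crawler would presumably try many word/option combinations and keep only the result pages with distinct content; this sketch just shows where the candidate values come from.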

Actually, we (the web) have had problems with this before. Web accelerators started following links on pages before you clicked them. If a link happened to trigger an action like deleting something, the deletion happened just by visiting the page that contained the delete link. Granted, you should never do anything destructive with a GET request, but now Google is starting to submit forms. I wonder how much stuff they will end up deleting with a program that automatically submits forms with whatever values it thinks should work.

I've seen a number of users come crying in the MythTV forum that somehow all of their recordings mysteriously disappeared. Seems having your MythWeb completely unsecured isn't such a good thing. For those people, this move by Google is great news. You see, the delete links were all simple GET requests, so the spiders were able to delete content. However, the scheduling is all done via POSTed forms, so nothing would ever get recorded. This move on Google's part is really just an attempt to combat this.

I had similar problems a few years ago. The database had a lot of data in a compact format, and I wrote some retrieval pages that would extract the data and run it through any of a list of formatters to give clients the output format they wanted. Very practical. Over time, the list of output formats slowly grew, as did the database. Then one day, the machine was totally bogged down with http requests. It turned out that a search site had figured out how to use my format-conversion form, and had requested all of our data in every format that my code delivered.

Google wasn't too bad, because at least they spread the requests out over time. But other search sites hit our poor server with requests as fast as the Internet would deliver them. I ended up writing code that spotted this pattern of requests, and put the offending searcher on a blacklist. From then on, they only got back pages saying that they were blacklisted, with an email address to write if this was an error. That address never got any mail, and the problem went away.

Since then, I've done periodic scans of the server logs for other bursts of requests that look like an attempt to extract everything in every format. I've had to add a few more gimmicks (kludges) to spot these automatically and blacklist the clients.

I wonder if Google's new code will get past my defenses. I've noticed that Googlebot addresses are in the "no CGI allowed" portion of my blacklist, though they are allowed to retrieve the basic data. I'll be on the lookout for symptoms of a breakthrough.
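For what it's worth, the burst-spotting gimmick the parent describes can be sketched like this; the window size, threshold, and class names are invented for illustration, not the parent's actual code:

```python
# Rough sketch of spotting scraper bursts and blacklisting the client.
# A client that issues more than MAX_REQUESTS within WINDOW_SECONDS is
# assumed to be bulk-extracting every format and gets blacklisted.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_REQUESTS = 100   # illustrative threshold, not a recommendation

class BurstBlacklist:
    def __init__(self):
        self.recent = defaultdict(deque)   # client IP -> recent request times
        self.blacklist = set()

    def record(self, ip, now=None):
        """Record one request; return True if the client is (now) blacklisted."""
        if ip in self.blacklist:
            return True
        now = time.time() if now is None else now
        q = self.recent[ip]
        q.append(now)
        # Drop timestamps that have fallen out of the sliding window.
        while q and now - q[0] > WINDOW_SECONDS:
            q.popleft()
        if len(q) > MAX_REQUESTS:
            self.blacklist.add(ip)
            return True
        return False
```

A front-end handler would then serve blacklisted clients a static "you are blacklisted, email us if this is an error" page instead of running any CGI, which matches what the parent describes.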

Yeah, maybe on your machine... That SQL error looks more like bad session handling on the server hosting your Drupal installation than Google attempting SQL injection... Actually, it looks nothing like SQL injection at all. MySQL is merely being asked to insert a duplicate value into a column declared unique (`sid`), which it refuses because the value is not unique. Don't expect an answer, since it's most likely not an error on Google's end.

I remember an article a while back where someone had cut and pasted some articles from one section of their site to another, and as a result had edit and delete links in the live content instead of on their internal web interface.

And a search engine (I think it was Google) crawled the site, hit the delete links, and deleted all the pages of the site. At the time it was stated that any link that performs an action, such as delete, should be a POST via a form, so that search engines wouldn't do that very thing.

And now they're gonna start submitting forms? The fallout is gonna be entertaining.

Several years ago, I tried a demo of Bright Planet's Deep Query Manager [brightplanet.com] that would essentially do these searches through a client on your machine in batch-like jobs. Oh, the bandwidth and resources you'll hog!

Their stats on how much of the web they hit that Google missed were always impressive (true or not), but perhaps their days are numbered with this new venture by Google.

Quite an interesting concept if you think about it. I always presupposed that companies would hate it, but I never got 'blocked' from doing it to sites.

Here, suck up my bandwidth without generating ad revenue! Sounds like a lose situation for the data provider in my mind...

You could build a really interesting "Deep Web" crawler by ignoring robots.txt. In fact, an index just of robots.txt files would be pretty cool in its own right. Call it "Sweet Sixteen" (10**100 in binary) or something.

One time when I was Deep Crawling a particular website I decided to take a peek at their robots.txt file. To my amazement they had listed all the folders that they didn't want anyone to find, yet had provided absolutely no security to prevent you accessing the content if you knew the location.

It's cases like that where doing a half-arsed job is worse than not trying at all.

What's it to Google (or a third party) if they mess up your pathetically-designed form? It's not like they're going to "accidentally purchase something" (like some people suggested) unless they have their robots equipped with billing information submission functions (somehow I doubt it).

What's it to Google (or a third party) if they mess up your pathetically-designed form?

That depends. If they effectively launch a denial-of-service attack and eat zilliabytes of people's limited bandwidth by attempting to submit with all possible combinations of form controls and large amounts of random data in text fields, would that be:

antisocial?

negligent?

the almost immediate end of their reign as most popular search engine as numerous webmasters blocked their robots?

Well, that does bring up a point. Should you have to include extra coding in your HTML to block Google, or should Google only be allowed to deep-search sites that have extra coding that invites them in?

Google is in a way saying that if you fail to properly secure your site, they have a right to data-mine it and generate profits from your data. Perhaps, mind you, just perhaps, that really, legally, is not appropriate, and a legal investigation is required to clarify this before everyone starts doing it.

Just what we need, some 'bot adding its insightful comments based on other words in the same document... then again, on most sites, would you be able to tell the difference between Google posting something and some 1337 kiddiez?!?!!1eleven?

The usual excuse for that is that they want a link — for aesthetic purposes, to put in an email, etc. If you're using a form anyway, those reasons disappear. I'm sure there are a few developers who screw this up, but it won't be anywhere near as common as the problems GWA uncovered.

This brings up a concern from the description. So Googlebot will come across a web page. It follows a link. The link leads to a page with a form. Googlebot fills out the form based on content already on the site. Googlebot clicks submit. Googlebot goes to the next page, and continues to follow links.

The problem comes when that form was a post form like the one I am typing in right now for a forum, or some other type of form that creates user-generated content. This makes it seem like Google will see the text box and submit its auto-generated content to it.

Google indexes more than any other search engine by expanding the web themselves. It was moving too slowly for them.

Really though, I don't think this will be a problem. People at Google are pretty smart, and I'm sure they've thought of this. Even if you believe Google is evil, there's no evil corporate benefit to spamming garbled text across the entire Internet.

I am tempted to copy and paste that and post it as my reply, but I think that would be insufferably clever. So, too, is referring to the fact that I could be insufferably clever but choose not to be. Etc...

What keeps Googlebot from becoming a nonsensical spambot? Yes, you can use nofollow, but there is such a huge quantity of web forms that don't have it now, because they've never needed it. Retrofitting all of them web-wide is not the most realistic of goals.

The captcha or other anti-bot mechanism. Any forum that can't stop a "good" bot is going to have spam all over it anyway from the "bad" ones...

Seems to me it would be easy enough to detect the Googlebot user agent and, if detected, automatically redirect it to the page on the other end (or even send it to a random 404 page or something), all without processing the form data at all.
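A minimal sketch of that idea, with invented function and handler names; note that a User-Agent substring check is trivially spoofed, and serving crawlers different content than humans is the kind of cloaking search engines frown on, so treat this as illustration only:

```python
# Crude sketch of the parent's suggestion: if a form submission appears to
# come from a crawler, skip processing and redirect it away instead.
BOT_SUBSTRINGS = ("googlebot", "bingbot", "slurp")

def is_crawler(user_agent):
    """Naive check: does the User-Agent contain a known crawler name?"""
    ua = (user_agent or "").lower()
    return any(bot in ua for bot in BOT_SUBSTRINGS)

def handle_form_post(user_agent, form_data, process):
    """Process the form for humans; bounce crawlers without touching the data."""
    if is_crawler(user_agent):
        return ("302 Found", {"Location": "/"})   # redirect, form data ignored
    process(form_data)                            # real submission handling
    return ("200 OK", {})
```

Real Googlebot verification is normally done by reverse-DNS lookup of the requesting IP rather than trusting the User-Agent header, but the shape of the check is the same.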

Well, first of all, it's about time they learned how to read advanced sites! If your site depends on input from the user to display content, you're basically invisible to Google. Now all they need is something to read text in Flash files and they've got something going. But on the other hand, this is almost auto-fuzzing, which could be considered hacking, and I bet they'll often get results they didn't intend and expose data that's supposed to be protected and private.

And should we not make any progress because we might step on a few toes while doing it? If Google can get into your uber-secret private database, so can a random user, or a random Russian cracker. Fix your damn site if you're worried about this particular attack.

Now all they need is something to read text in flash files and they've got something going.

They've indexed Flash for about four years now.

I bet they'll often get results they didn't intend to and expose data that's supposed to be protected and private.

No doubt. There are a lot of clueless developers out there who insist on ignoring security and specifications time and time again. I have no sympathy for people bitten by this; you'd think they'd have learned from GWA that GET is not off-limits to automated software.

If you haven't already noticed, AdSense now has features to tell Google how to log into your website so it can catalog your users-only pages. You know what that means. Porn sites are going to start using this so that Googlebot can confirm that its age is over 18. We'll be showered with a gigantic wave of pornographic information. We will soon have to press juvenile charges against a corporate entity because it lied about its age on web forms to gain access to pornography and forum discussions.

Nevertheless, directives like 'nofollow' and 'noindex' are still respected, so sites can still be excluded from this type of search.

Maybe they shouldn't be, at least not in all cases. Several years back I did many Google searches for some information that was very important to me, but never could find anything. Then a few months later (too late to be of use), pretty much by a fortunate combination of factors but with no help from Google, I came across the exact information on a .GOV website, in a publicly filed IPO document. As far as I can tell, our US government aggressively marks websites not to be indexed, even when they contain information that is posted there to be public record.

When these noindex directives are overused by mindless and unaccountable bureaucrats, perhaps someone needs to make the decision that these records should be public, and that that isn't best served by hiding them deep down a long list of links where they are hard to locate. In cases like this I would applaud any search engine that ignores the "suggestion" not to index public pages just because of an inappropriate tag in the HTML. In fact, if I knew of any search engine that was indexing in spite of this tag, I would switch to it as my first-choice search engine in an instant. For starters, I would suggest that any .GOV and any state TLD website should have this tag ignored unless there were a darn good reason to do otherwise.

But they don't want you to find out that the moon landing was faked and that Jimmie Hoffa shot Kennedy while driving a car that runs on water.
I agree with you. If you don't want people to know something, don't put it on the web. If you want people to know, put it on the web and let Google send the people to you. It's all bureaucracy inaction.

As far as I can tell, our US government aggressively marks websites not to be indexed, even when they contain information that is posted there to be public record.

I'd mod you up if I had some points. I'm sure there are ethical implications or something when it comes to respecting the website owner's wishes not to index, but it's all public information anyway. If it's on the web and I can look at it, then Google should be able to look at it and index it.

I had no idea that government sites don't allow themselves to be indexed. That is BULLSHIT. People often NEED information from .gov sites, and ALL of it should be made easy to find. Refusing to allow indexing defeats the whole point of posting it.

While I don't see Google doing it because of the backlash, I'm a bit surprised that no other search engine has touted ignoring "nofollow" and "noindex" as a "feature" in an attempt to look better than Google.

I never understood the point of robots.txt crap. Why put the site up if you don't want people to find it?

Well I'm glad you asked. The presence (and continued following) of the robots.txt standard is crucial for these reasons:

- Scripts with potentially infinite results. If you have a calendar script on your site that shows this month's calendar with links to "next month" and "previous month", then without robots.txt the search engine could index back into prehistoric times and past the death of the Sun, with a blank event calendar for each month. This is stupid. With your robots.txt file you tell the spider which URLs it's in BOTH of your best interests not to crawl. You save server resources and bandwidth; Google saves their time and resources.

- If you have a duplicate copy or copies of your site for development, or perhaps an experimental "beta" version of your site, you don't want it competing with the real site for search engine placement, or worse, causing SE spiders to think you're a filthy spammer with duplicate content all over the place. So you disallow the dupes with robots.txt. Now sure ideally that server could be inside your firewall instead of on the Web, but it gets more challenging when your dev team is on a different continent.

- Temporary crap that has no value to the outside world. Once again, it's a waste of both your time and the search engine's to index it.

The above are all reasons why you might want some or all of the content on a site not indexed.
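A robots.txt covering the cases above might look like this (the paths are invented for illustration; the format itself is just `User-agent` lines followed by `Disallow` rules, with `#` comments):

```
User-agent: *
# Crawler trap: the calendar pages back to "previous month" forever
Disallow: /calendar/
# Development/beta copy of the site; don't let it compete with the live one
Disallow: /beta/
# Temporary junk with no value to the outside world
Disallow: /tmp/
```

Each `Disallow` is a URL-path prefix, so `/calendar/` covers every month page the script can generate.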

How will this work for forms that perform translations, validations and similar kinds of operations on other websites? Try to pull the entire internet through each such site it finds?

And then, not every web development environment forces GET not to change data. In Ruby on Rails, adding "?method=post" to the end of a URL fakes a POST even though the request is actually a GET; I disabled that at the company I work for. Not everyone is going to do that.

I wouldn't be surprised if they did that; after all, they did a similar thing with GWA and URLs with query strings. But I can't help thinking it's a silly path to take. It makes an "unwritten rule" of HTTP that certain magic strings are off-limits, and of course no specification contains a list of these magic strings; you have to reverse-engineer other software to find them.

When I interned at Google, someone told me a funny anecdote about a guy who emailed their tech support insisting that the Google crawler had deleted his web site. At first, I think, he was told "Just because we download a copy of your site doesn't mean your local copy is gone" (à la the obligatory bash [bash.org] quote). But the guy insisted, and finally they double-checked, and his site was in fact gone. Turns out it was a home-brewed wiki-style site, and each page had a "delete" button. The only problem was, the "delete" button sent its query via GET, not POST, and so the Google spider happily followed those links one by one and deleted the poor guy's entire site. The Google guys were feeling charitable, so they sent him a backup of his site, but told him he wouldn't be so lucky the next time, and that he should change any forms that make changes to POSTs -- GETs are only for queries.

So, long story short, I wonder how Google will avoid more of this kind of problem if they're really going off the deep end and submitting random data on random forms on the web. Like the above guy, people may not design their site with such a spider in mind, and despite their lack of foresight this could kill a lot of goodwill if done improperly.

Sod worrying about zapping sites, what will happen when they crawl the nuclear launch site and enter random data into the authorisation field, and in a rare feat of sod's law end up getting the code just right....

(oh and what's the betting they'll put redmond in as a target string?)

That happened to me on a database demo site that I did. The 'edit,' 'details,' and, yes, 'delete' buttons were just plain old text links. I posted the URL of the page to a mailing list, Google came in through that, and methodically 'clicked' on each link, including the 'delete' ones. (There was even a confirmation page with 'Are you sure you want to delete this? _Yes_ or _No_' -- as links, of course.) I went to show it to someone one day and all the data was gone. It was just sample data, so no great loss. I figured it was a cheap lesson.

In a few months, there'll be a new blog post - Google will attempt to access and index all sites' password-protected pages by matching usernames found elsewhere on the site (e.g. from email addresses) with intelligent guesses at passwords, based on information it's gleaned regarding those individuals. Failing that, it'll run through entries found in various cracker dictionaries.

Google has announced that Google Phones (beta) will soon unveil the results of its having wardialed all 6,800,000,000 U.S. telephone numbers. Visitors to the Google Phones site will be able to search individual phone numbers to determine (without personally dialing the number) whether the number belongs to a landline telephone, cell phone, fax, or modem. On phone numbers where a VMS is detected, Google plans to dial "#0#" and other codes in order to determine how to reach a human.

Repeatedly querying to extract every permutation of their API could generate far more traffic than the underlying data itself (think of the combinatorics of just 5 query fields with only 5 values each, against only a couple of hundred values in the database, as at many sites), and far too much traffic for small sites (and probably for big sites, too, if the combinations of queries at all match their traffic). What could be even better would be if sites that don't want to get that huge load just to have their data searchable could offer the data for bulk download instead.

Do you realize the amount of wasted time the operators of some websites will spend, processing the trash data that doing this will create? I speak mainly of feedback forms, e-mail signups, and the like.

If your site uses GET for a non-idempotent action like sending a feedback form or signing up for an email newsletter, you're doing it Wrong.

Do you realize the amount of wasted time the operators of some websites will spend, processing the trash data that doing this will create?

If any forms which feed your DB are GET-style, aren't user-authenticated, and/or don't use a CAPTCHA, then you already have a huge trash-data problem. At least the Googlebot won't offer to enlarge your penis.

They are only submitting forms with a GET method. According to the HTTP specs, GET requests should always be idempotent. If you've got forms that use the GET method and aren't idempotent, you should *already* be taking extra precautions to avoid accidental use by bots and other automated tools.
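A minimal sketch of that precaution, with invented handler and data names; the point is just that the destructive branch is unreachable via GET, so a crawler following links can never trigger it:

```python
# Guarding a destructive action behind POST. A spider that only follows
# GET links (or submits GET forms) can never reach the delete branch.
recordings = {"1": "News at 9", "2": "Movie night"}

def handle_delete(method, rec_id):
    """Delete a recording only on POST; refuse all other methods."""
    if method != "POST":
        # HTTP treats GET as a "safe" method: it must not have side effects.
        return 405, "Method Not Allowed: use a POST form to delete"
    recordings.pop(rec_id, None)
    return 200, "deleted"
```

The same check is what frameworks express with route declarations like "this handler accepts POST only"; here it's spelled out by hand to show the rule itself.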

And I hate it when a search result goes to... another page of search results. "You searched for 'perpetual motion engine'. Here are links to pages of us doing that search on other sites as well." Not very useful.

It isn't easy to tell the difference programmatically, but it seems like this would make that happen much more often.

Does that mean I'll have to introduce methods that waste people's time in order to prevent google from registering on my site multiple times?

Yes, if you require all your human visitors to read your robots.txt [robotstxt.org], and then require them to check a checkbox to mean that they clearly read and understood the entire body of your robots.txt. Then yes, you'll have to introduce some sort of almost impossible-to-read translucent captcha written in classical Chinese.

Of course they could link to a site and make the browser perform a POST. That's trivial; a form and some JavaScript will do it no problem. They seem not to be doing that because GET forms should be non-destructive, whereas POST forms can be quite destructive.