Posted by timothy on Monday January 18, 2010 @08:48AM
from the stuck-in-a-rut dept.

at_slashdot writes "The Perl CPAN Testers have been suffering issues accessing their sites, databases and mirrors. According to a posting on the CPAN Testers' blog, the CPAN Testers' server has been aggressively scanned by '20-30 bots every few seconds' in what they call 'a dedicated denial of service attack'; these bots 'completely ignore the rules specified in robots.txt.'"
From the Heise story linked above: "The bots were identified by their IP addresses, including 65.55.207.x, 65.55.107.x and 65.55.106.x, as coming from Microsoft."


Not necessary. A Bing Product Manager has already commented on the CPAN Testers blog entry [perl.org] upon which the article is based:

Hi, I am a Program Manager on the Bing team at Microsoft. Thanks for bringing this issue to our attention. I have sent an email to barbie@cpan.org, as we need additional information to be able to track down the problem. If you have not received the email, please contact us through the Bing webmaster center at bwmc@microsoft.com.

As said below, never ascribe to malice that which can be adequately explained by stupidity. (Insert lame joke about MSFT being full of stupidity here).

"as we need additional information to be able to track down the problem."

IP addresses aren't enough? You're MS--if you can't fix the problem and IP addresses are given, damn, that's just sad. You're freaking massive multi-billion dollar tech companies, and this is the best you can do?

No wonder Chinese hackers own our asses.

Then again, it took Comcast 9 months to fix a security hole in customer accounts (the fix would have required nothing more than adding an 's' to 'http' so the pages used SSL), and the only reason it was "fixed" was that they did their annual website makeover and changed their entire system to something Flash-based. Then again, I had contacted a VP and the VP's security people, been referred to web security, talked to web security three times, and talked to a manager. The last three groups verified the problem. It was then referred to their web applications team, who sat on it.

IP addresses aren't enough? You're MS--if you can't fix the problem and IP addresses are given, damn, that's just sad. You're freaking massive multi-billion dollar tech companies, and this is the best you can do?

A quick guess? Identifying unique sites by domain name, rather than by IP address, and either the bot or server not respecting HTTP 301 redirects.

With Rosetta Code, I once had www.rosettacode.org serving up the same content as rosettacode.org. My server got pounded by two bots from Yahoo. I could set Crawl-Delay, but it was only partially effective; one bot had been assigned to www.rosettacode.org and another to rosettacode.org, and they were each keeping track of their request delay independently. I've since corrected things such that www.rosettacode.org returns an HTTP 301 redirect to rosettacode.org, and was eventually able to remove the Crawl-Delay entirely.

I've since worked towards only serving up content for any particular part of the site on a single domain name, and have subdomains such as "wiki.rosettacode.org" redirect to "rosettacode.org/wiki", and "blog.rosettacode.org" to "rosettacode.org/blog". It works rather nicely, though it does leave me a bit more open to cookie theft attacks.
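For anyone setting up the same thing, the canonical-host redirect can be sketched as a tiny WSGI app. This is purely an illustration, not the poster's actual setup; the host name is taken from the comment above.

```python
from wsgiref.simple_server import make_server

CANONICAL = "rosettacode.org"

def app(environ, start_response):
    """Answer any non-canonical Host (e.g. www.rosettacode.org) with a
    permanent 301 to the canonical domain, so crawlers only ever see
    one copy of each page."""
    host = environ.get("HTTP_HOST", "")
    path = environ.get("PATH_INFO", "/")
    if host != CANONICAL:
        start_response("301 Moved Permanently",
                       [("Location", f"http://{CANONICAL}{path}")])
        return [b""]
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"canonical content"]

if __name__ == "__main__":
    make_server("", 8080, app).serve_forever()
```

In practice you would do this in the web server's own config rather than in application code, but the behavior is the same: one 301, one canonical name, one bot per site.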

The REAL solution to your problem is for everyone to abandon the dumb-as-shite "www" prefix.

Why bother with www.example.com and example.com? Get rid of it. Anyone who still puts "www." on their business cards is a dufus.

REAL solutions to immediate problems don't depend on the rest of the world changing to suit my needs. Also, the fact remains that there are links out there that point to "http://www.rosettacode.org/w/index.php?something_or_other", not all of those links will (or can) change, and I would be an absolute fool to knowingly break them, if I want people to visit RCo via referral traffic.

As said below, never ascribe to malice that which can be adequately explained by stupidity. (Insert lame joke about MSFT being full of stupidity here).

Yeah, though this particular sort of stupidity has been going on for a long time, and not just at Microsoft (though they seem to be the worst culprit).

I run a couple of sites that, among other things, have links to return the "content" in a list of different formats (GIF, PNG, PS, PDF, ...). Periodically, the servers get bogged down by search sites hitting them many times per second, trying to get every file in every format. The worst cases seem to come from microsoft.com and msn.com, though it happens with other search sites, too. Actually, the first attempts I saw at "deep search" like this came from googlebots around 10 years ago, though they quickly backed off and haven't been a serious problem since then. MS-origin "attacks" of this sort have been happening every few months, for nearly a decade.

I've generally handled them with a couple of techniques. One is to check the logs for successive requests from the same address, and insert sleep() calls with progressively longer sleeps as more messages arrive. The code prefixes the "content" with a comment explaining what's happening, in case a human investigates.
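The progressive-sleep idea can be sketched roughly like this. This is a minimal illustration, not the poster's actual code; the window and per-request delay constants are made up.

```python
import time
from collections import defaultdict, deque

WINDOW = 60        # seconds of request history to consider (assumed)
BASE_DELAY = 0.5   # extra seconds of sleep per recent request (assumed)

recent = defaultdict(deque)  # ip -> timestamps of recent requests

def throttle(ip):
    """Sleep progressively longer as the same address sends more
    requests within the window; return the delay that was applied."""
    now = time.time()
    q = recent[ip]
    while q and now - q[0] > WINDOW:   # forget old requests
        q.popleft()
    delay = BASE_DELAY * len(q)        # grows with request rate
    q.append(now)
    if delay:
        time.sleep(delay)
    return delay
```

A well-behaved bot that spaces its requests out never accumulates a delay; a bot hammering the server sleeps longer on every hit.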

Another technique is to look for series of "give me this in all your output formats" requests, verify that it's a search bot, and add the address to a "banned" list of sites that simply get a message explaining why they aren't getting what they asked for, plus an email address if they want to get in contact. So far nobody at any search site has ever used that address. I did once get a response from a guy who was studying sites with such multi-format data, for a school project, to see how the various output formats compared in size and information content. I took his address off the banned list, and suggested that he add a couple-second delay between requests, and he finished his project a few days later.
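The ban-list technique might look something like this sketch. The format list and the contact address are placeholders, not the poster's real setup.

```python
BANNED = set()
CONTACT = "webmaster@example.com"       # placeholder contact address
ALL_FORMATS = {"gif", "png", "ps", "pdf"}

def serve(ip, requested_formats):
    """Ban an address once it has walked through every output format,
    and serve banned addresses an explanation instead of content."""
    if ALL_FORMATS <= set(requested_formats):
        BANNED.add(ip)
    if ip in BANNED:
        return (f"Banned for bulk-fetching every output format; "
                f"mail {CONTACT} to discuss.")
    return "content"
```

The real version would presumably also verify that the client is a search bot before banning, as the comment says, so a human browsing several formats by hand isn't caught.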

I suspect that the googlebot folks may have read my explanation of the delays and added code to spread their requests out over time, since that's what their bots seem to do now. But I never heard from them. They must have gotten complaints (and bans) from lots of web sites when they started doing this, so they probably realized quickly that they should add code to prevent such flooding of sites.

You know, it's easy to poke fun at the Microsofty, but is it possible that he was just trying to find out what was being hit so that he could figure out who in his organization he should contact? Maybe there is some uber technical way he could have figured this out, or maybe he should have RTFB, but his response sounded well intentioned and responsive. What would you prefer? The microsoft of old?

Same reason other folks can't: they are human. Look, I despise MS for a variety of reasons and am one of the rabid anti-MS folks. But honestly, they do enough that is legitimately worth griping about; no need to blow a mistake like this out of proportion. Considering all they do, it was inevitable this would happen at some point. Shit happens; anyone who codes has had a mega-whoops at one point or another, and if they haven't, they are cookie-cutter coding and not risking creativity. Hate them for needlessly locking the g

Instead we have Slashtroglodytes screaming about conspiracies by MSFT.

Just for the record, since you're commenting under a thread I started, I do not believe that there was a conspiracy to attack CPAN. I think there is a conspiracy to continue accidentally attacking CPAN. The information provided ought to be more than sufficient to figure out what is going on. Remember, any time two people work to screw a third out of something, it's a conspiracy by definition.

As said below, never ascribe to malice that which can be adequately explained by stupidity.

Must be really easy to just beat you in the face and say, “Oops, I’m sorry, I’m so st00pid! *drool*”
I call bullshit on that rule.

My rule: Don’t make judgements at all (either way), about things that you just don’t know.

How about: don't mistake organizational stupidity for individual stupidity. This isn't a case of a single bad coder making a mistake; this is an organization that has chosen how much effort to apply. How much testing and review? What failsafes, logging and active monitoring? Will options for feedback be accessible and responsive? Stupidity and malice aren't mutually exclusive for an individual, and certainly not for an organization.

As much spam as I get from ir@infousa.com , I wish that someone would DDOS that damned company. If I knew of a way to get extra spam to ir@infousa.com I would probably do it so that company could get a taste of its own medicine. ir@infousa.com sent me unsolicited spam and it drives me nuts. Thanks for nothing, ir@infousa.com . It makes me want to call the company at (402)593-4500 and complain, but I don't have time. I guess I'll email them at ir@infousa.com instead. maybe.

I'm always surprised by how people seem to think that any language has a monopoly of some sort on sloppy and/or lazy coders. Been doing IT a long time, and the one thing that never changes is the sloppy/lazy code issue. It even predates programming, you know - look at infrastructure around the world for examples of "just toss something out there, hope it works".

Until I read the summary I thought it was another article about Windows botnets, and was wondering why "Microsoft" was tacked on, since Windows is the default OS assumption. Of course, it would be interesting if these were new CPAN mirrors that MS was setting up.

I manage some networks in my home city in Italy, and in the past year I've often seen strange traffic coming from some of their IP addresses.
Guess they were exploited by someone a long time ago and didn't even notice it.

It's interesting to read this, as I've had some random and somewhat incomprehensible port scans coming from an IP address identified as one of theirs. If you're just an insignificant slob, you can't write to their abuse address, either; you'll get bounced. I simply blocked that particular IP address. Let them worry about who's gotten to them.

They admitted they were powerless to solve their own problems without help from their victims.

Heh. It's another "damned if you do; damned if you don't" scenario. Usually, people criticise Microsoft for developing software without bothering to consult or test with actual customers. Now we have a manager of a MS dev group that actually does communicate (though not exactly with "customers"), and acts on what they say, so he's criticised for needing help from his "victims".

Ya can't win that game.

But the fact is that if you're developing server-side web software, you need to test it against real-world sites, not just the toy sites you've set up in your lab. And we all know the "Sorcerer's Apprentice" sort of bug that produces a runaway test that tries to do something as many times as it can per second until it's killed. Good testers will be on the lookout for such events, but it's understandable that they might fail occasionally.

Among web developers, MS does have a bit of a reputation for hitting your new site with a flood of requests, trying to extract everything that you have (even the content of your "tmp" directory which your robots.txt file says to ignore). There are lots of small sites that block MS address ranges for just this reason.

It should be considered good news that there's at least one MS manager who understands all this, and is willing to talk to the "victims" and fix the problems. Now if only they could fix the next-level problem: that this sort of thing happens repeatedly, and their corporate culture seems to have no way to prevent it from happening again.

Hi, I am a Program Manager on the Bing team at Microsoft. Thanks for bringing this issue to our attention. I have sent an email to nospam@example.com, as we need additional information to be able to track down the problem. If you have not received the email, please contact us through the Bing webmaster center at nospam@example.com.

I mean, what additional information is needed wrt "respecting robots.txt" and "not letting loose more than one bot on a site at a time"?

The standard clearly specifies lower case. However, if you are correct, there's a simple way to send bingbots one way and all other bots another: create Robots.txt and robots.txt with different contents.
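For reference, a minimal robots.txt looks like the sketch below. The Disallow paths are examples only; Crawl-delay is a nonstandard extension, though MSN/Bing and Yahoo crawlers have historically honored it.

```
# robots.txt -- served as lower-case "robots.txt" per the original spec
# Crawl-delay is nonstandard, but some major crawlers respect it
User-agent: msnbot
Crawl-delay: 10

User-agent: *
Disallow: /tmp/
```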

The simple fact is that ignoring robots.txt is effectively evil, regardless of the intent. It's not like robots.txt is some new innovation...

Since when did Microsoft feel existing standards were something to honour? How many times have its browsers changed behaviour? How many entrenched URL standards has it redefined (you cannot specify a username/password in an Internet Explorer URL, even though this is a legal, standard form of URL)?

It stands to reason Microsoft would take no notice of anything your website has to say.

Unless.. of course.. Microsoft define a certificate type that can sign your Microsoft-specific format exception list after payment on an annual licens

What's amusing about the issue in the kb is that the problem that they're "solving" by breaking the username/password in a URL standard is NOT a problem with username/password URLs, but a problem with how IE displays the URLs. In other words, rather than fixing the behavior of IE's address and status bars to display such URLs correctly, they just stopped supporting them.

Incompetence at that level isn't just indistinguishable from malice, it IS malicious.

Nothing you listed under the "War on Drugs" has anything to do with the war on drugs.

The war on drugs has made America a police state where the government can seize any of your property and auction it for profit before your trial. Even if you are found innocent, or the charges are thrown out for insufficient grounds, you will not be compensated for your lost money or profit. It has made an America where more people are imprisoned than any other nation on earth. It has made a nation where the cheapest and mo

Wow, you must be new... to computers. I particularly liked your comment "A site could have quality links to non ignore sites." as justification for a bot to ignore robots.txt. Can I have your AOL email address so I can write you personally?

Occam's razor (or Ockham's razor[1]), entia non sunt multiplicanda praeter necessitatem, is the principle that "entities must not be multiplied beyond necessity" and the conclusion thereof, that the simplest explanation or strategy tends to be the best one.

Rough translation: "Never ascribe to malice that which can be adequately explained by stupidity."

The problem is, there is no evidence that "Never ascribe to stupidity that which can be adequately explained by malice" is invoking more entities. In fact, claiming that the commercially most successful software company got there through stupidity rather than malice sounds extremely implausible to me.

You think Microsoft was happy every time a user got the dreaded Blue Screen Of Death?

Yes, in a way. I never really thought about it until you asked, but it fits with their business model of forcing users into an expensive upgrade of their OS every few years. Look what has happened with XP. It doesn't blue screen [as] much, and they've met heavy resistance from folks not wanting to upgrade to Vista. (Never mind that Vista is crap.) So now they've re-packaged Vista as "Windows 7" and hope folks don't realize it looks the same and smells the same, because it basically is.

But MSFT is a corporation, which, thanks to our corporate-butt-kissing congress and courts, can just go "ooopsie," maybe cut a small check at most, and walk away scot-free.

And as for your philosopher? I saw an interview with Joss Whedon on writing evil characters that I thought really hit the nail on the head. He said, and I paraphrase "The villain never sees himself or herself as evil. To them there is a perfectly justifiable reason for their actions. I have known some truly evil people, those that have intentionally hurt their fellow man out of pure malice, and to them their actions were justified and noble. They simply didn't see what they did as wrong."

Which is how you get MSFT and Intel making backroom deals to crush competition, or Jack Tramiel and his "business is war" philosophy. To the ones making the decisions, "the other guy would do it to us if they could, so why shouldn't we do it to them?" I'm sure that if you talked to Gates or the head of Intel, you could never get them to believe that crushing your competition any way you can is wrong. To them that was/is business 101, and not evil. That is why I think Whedon was right: the villain always thinks they are noble.

I had a registration page -- static content, basically. The only dynamic thing about it was that it was referred to by many pages on the site with a variable in the query string. Bing decided that it needed to check on this one page *thousands* of times per day.

They ignored robots.txt. I sent a note to an address on the Bing site that requested feedback from people having issues with the Bing bots -- nothing.

The only thing they finally 'listened' to was placing a robots "noindex" meta tag in the page header.

This kind of sucked because it took the registration page out of the search engines' index, however it was much better than being DDOS'd. Plus, the page is easy to find on the site so not *that* big a deal.

Bing has been open for months now and if you search around there are tons of stories just like this. Maybe now that a site with some visibility has been 'attacked', the engineers will take a look at wtf is wrong.

Seems like a better solution would have been to set up a test for either the User-Agent or the IP blocks that Bing was attacking your site from, and drop those requests in /dev/null -- your site would still exist on 'real' search engines, and Bing wouldn't pound on your bandwidth anymore.

Replying to myself: if testing the UA or the IP in the httpd itself was too much load, you could also have just nullrouted the IP blocks the Bing spider was coming from, either in the kernel routing table or in your router.
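On Linux, the kernel-table version of that nullroute is one command per block. The prefixes below are the ranges named in the article; this must run as root, and you would adjust the list to what your own logs actually show.

```shell
# Blackhole-route the ranges the spider was reported to come from
ip route add blackhole 65.55.207.0/24
ip route add blackhole 65.55.107.0/24
ip route add blackhole 65.55.106.0/24
```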

I have noticed the Microsoft crawlers (msnbot) being fairly inefficient on many of my sites... In contrast to googlebot and spiders from other search engines, msnbot is far more aggressive, ignores robots.txt, and will frequently re-request the same files repeatedly, even if those files haven't changed... Looking at my monthly stats (awstats), which group traffic from bots, msnbot will frequently have consumed 10 times more bandwidth than googlebot, but is responsible for far less incoming traffic based on referrer headers (typically 1-2% of the traffic generated by Google on my sites).

Other small search engines don't bring much traffic either, but their bots don't hammer my site as hard as msnbot does.

Are we sure this traffic comes from Microsoft? Could it not consist of forged network packets? You don't need a reply if you are running a DDOS. On the other hand, why would anyone, including Microsoft, want to bring down CPAN?

Are we sure this traffic comes from Microsoft? Could it not consist of forged network packets?

It's a TCP connection, so they need to have completed the three-way handshake for it to work. That means they must either have received the SYN-ACK packet or be SYN flooding. If they are SYN flooding, that would show up in the firewall logs. If they've received the SYN-ACK packet, then they are either at that IP, or on a router between you and that IP where they can intercept and block the packets from that IP.

You don't need a reply if you are running a DDOS.

You do if it's via TCP. If they're just ping flooding, then that's one thing, but they're issuing HTTP requests. This involves establishing a TCP connection (send SYN, receive SYN-ACK with a random sequence number, reply ACK acknowledging that number), and then sending acknowledgements for each group of TCP packets that you receive.
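The point that an HTTP request presupposes a completed handshake is easy to see with a raw socket: connect() does not return until SYN, SYN-ACK and ACK have all been exchanged, so the request below could never be sent from a forged source address. (Python here purely as illustration, against a throwaway local server.)

```python
import http.server
import socket
import threading

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", "2")
        self.end_headers()
        self.wfile.write(b"ok")
    def log_message(self, *args):   # keep the demo quiet
        pass

# Serve exactly one request on an ephemeral local port.
server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.handle_request, daemon=True).start()

# create_connection() returns only after the three-way handshake
# completes, which requires actually receiving the server's SYN-ACK.
sock = socket.create_connection(server.server_address)
sock.sendall(b"GET / HTTP/1.0\r\nHost: localhost\r\n\r\n")

chunks = []
while True:                          # read until the server closes
    data = sock.recv(1024)
    if not data:
        break
    chunks.append(data)
sock.close()
reply = b"".join(chunks)
```

A spoofed-source SYN flood, by contrast, never gets past the connect step, which is why it shows up in firewall logs rather than in HTTP access logs.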

On the other hand, why would anyone, including Microsoft, want to bring down CPAN?

Who says that they want to? It's more likely that their web crawler has been written to the same standard as the rest of their code.

Is it an 'agreement' to not scan the site at all (by a search engine bot), or is it meant to just not -display- those results in the search engine?
I'd assume, since everything on a site is more or less public, that it would be the second. And if so, I can't see anything wrong with what Microsoft's bots did.

I can see how scanning a site's content (even if you're not going to list the results in your search engine) can have some value to a company.

It's basically a rough pattern filter telling the bot which parts of the site not to crawl. One reason it's used is that you can have dynamically generated pages that create an infinite loop that's impossible for the bot to detect.

It is a request not to scan part or all of a site. robots.txt [wikipedia.org]

And if so, I can't see anything wrong with what Microsoft's bots did.

Not every site has dozens of powerful servers and terabytes of bandwidth, nor is every site an ad-supported one that wants to maximize traffic. Common courtesy requires that a bot operator minimize his impact on any given site and honor requests not to index. Of course, "courtesy" and "honor" are concepts that baff

Linux IP Firewalling Chains, normally called ipchains, is free software to control the packet filter/firewall capabilities in the 2.2 series of Linux kernels. It superseded ipfwadm, but was replaced by iptables in the 2.4 series.

If it's a scan (an established TCP stream, taxing the SERVERS, not the NETWORK) that's the problem, as opposed to a SYN flood etc., and the IP addresses are in a very small range, why aren't they just using a hardware firewall at the router and blocking the IPs? There's not a whole lot of "distributed" when it's coming from a pair of class C's.

Not saying they should be DOING it, but this is not a Denial of Service, it's a Denial of Stupid.
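For the host-firewall variant of that, a sketch with iptables using the ranges reported in the article (a hardware router ACL would be the equivalent upstream; requires root, and the list should match your own logs):

```shell
# Drop all traffic from the reported crawler ranges
iptables -A INPUT -s 65.55.207.0/24 -j DROP
iptables -A INPUT -s 65.55.107.0/24 -j DROP
iptables -A INPUT -s 65.55.106.0/24 -j DROP
```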