
GSGKT writes "Google's Anti-Malware Team has made available some of their research data on malware distribution mechanisms while the research paper [PDF] is under peer review. Among their conclusions are that the majority of malware distribution sites are hosted in China, and that 1.3% of Google searches return at least one link to a malicious site. The lead author, Niels Provos, wrote, 'It has been over a year and a half since we started to identify web pages that infect vulnerable hosts via drive-by downloads, i.e. web pages that attempt to exploit their visitors by installing and running malware automatically. During that time we have investigated billions of URLs and found more than three million unique URLs on over 180,000 web sites automatically installing malware. During the course of our research, we have investigated not only the prevalence of drive-by downloads but also how users are being exposed to malware and how it is being distributed.'"

If I gathered this right, then Google can parse the content behind the links they serve, to the point of identifying the drive-bys? Okay, so why not block them at that point? And why not throw enough CPU power to parse the results before they're returned, so as to protect the users? Yeah, tag this "whatcouldpossiblygowrong". What, then, about a browser that can identify a drive-by, by pre-parsing the content behind the links it shows. Heuristics would do that Real Well, too; I can think of a zillion methods to do Just That off the top of my head. "If it ends up writing to disk, don't." How hard is THAT?

<snip>
What, then, about a browser that can identify a drive-by, by pre-parsing the content behind the links it shows. Heuristics would do that Real Well, too; I can think of a zillion methods to do Just That off the top of my head. "If it ends up writing to disk, don't." How hard is THAT?<snip>

Harder than you'd think. I'm sorry to have to point this out, but security is not easy, no matter how much we'd like to think it is.
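
For what it's worth, the grandparent's "just use heuristics" idea can be sketched in a few lines of Python (the patterns here are illustrative, not a real signature set), along with the trivial obfuscation that defeats it:

```python
import re

# Illustrative patterns that showed up in drive-by pages of the era;
# a real scanner would need far more than string matching.
SUSPICIOUS = [
    r"document\.write\s*\(\s*unescape",           # obfuscated payload loader
    r"eval\s*\(\s*unescape",
    r"<iframe[^>]+(width|height)\s*=\s*['\"]?0",  # invisible iframe
]

def looks_malicious(html):
    """Naive static check: flag any page matching a known-bad pattern."""
    return any(re.search(p, html, re.IGNORECASE) for p in SUSPICIOUS)

plain = '<script>document.write(unescape("%3Ciframe..."))</script>'
# Same behavior, call name split in two: no pattern matches anymore.
obfuscated = '<script>var d=document; d["wr"+"ite"](unescape("%3Ciframe..."))</script>'

print(looks_malicious(plain))       # True
print(looks_malicious(obfuscated))  # False
```

The second case is why "if it ends up writing to disk, don't" can't be decided by looking at page source: you'd have to actually run the content in an instrumented browser, which is exactly the expensive part.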

Security begins by looking at the application: get the requirements and translate them to code. The only network I/O a browser has to do is "send request, get reply". That problem is solved. Then, "render". Okay, parse the response and translate it to the screen. Solved. What security problems again?

During that time we have investigated billions of URLs and found more than three million unique URLs on over 180,000 web sites automatically installing malware
180,000 out of billions doesn't seem like a lot to me.

Also, it would likely be inaccurate to assume uniform randomness for the appearance of those pages in search results. They are likely optimized to turn up for very popular queries with every SEO [wikipedia.org] trick available. So it's still 3 million out of billions, but those 3 million likely get significantly more traffic than an average page.
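
A back-of-envelope check on the summary's numbers (Python; 1 billion is assumed here as a conservative lower bound for "billions"):

```python
# Figures from the summary: >3 million malicious URLs across >180,000
# sites, out of "billions" of URLs investigated. 1 billion is used as a
# lower bound, so the real fraction is smaller still.
malicious_urls = 3_000_000
malicious_sites = 180_000
urls_investigated = 1_000_000_000  # assumption: lower bound for "billions"

urls_per_site = malicious_urls / malicious_sites
fraction = malicious_urls / urls_investigated

print(f"~{urls_per_site:.1f} malicious URLs per compromised site")
print(f"at most {fraction:.1%} of investigated URLs were malicious")
```

So the raw URL fraction is tiny; the 1.3%-of-searches figure is disturbing precisely because, as noted above, those pages are SEO-optimized toward popular queries rather than spread uniformly.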

How does that insight fit into '1.3% of Google searches'? I find 1.3% a disturbing figure considering that a lot of people don't even know how to use the back, forward, or stop buttons on their browser. More and more people connect with little more knowledge than starting up a browser and surfing from Google or the (TRUE) history bar. So instead of responding to this kind of news from our own Geeky horizon we should try to keep an eye on the whole picture here. In this case the fact that crimin

I found it quite interesting that the methodology of the research doesn't even bother to check sites with Mac OS X or Linux operating systems. But on the server side, Apache websites running outdated versions of PHP were singled out for comment.

In all there were twice as many compromised IIS servers as Apache, but fully 50% of all compromised Apache servers were running some version of PHP.

It was also interesting to note that computer-related websites ranked second only to social networking sites as most likely to be compromised with redirections to malware sites. Seems we might want to tone down our holier-than-thou rhetoric. 8^)

Did you read the article? What Google and Dell are doing is irritating, but the article goes from 'the program has an obscure, confusing name' directly to 'it is hard to uninstall'. If there is an uninstall entry, it isn't hard to uninstall, and if it uninstalls properly, then it isn't misbehaving. It's certainly crapware, but I'm not really sure it is malware, and there is some sort of useful difference there (I guess, crapware is software that behaves reasonably and is installed with no consideration towards

I'd say it falls into the same category as WGA: borderline malware. The name "Browser Error Redirector" doesn't make its purpose clear to a non-technical user; it sends information to a third party without user confirmation; it is installed without user consent. The information it sends to a third party may be innocuous, and it may be possible to uninstall, but it's still far from respectable.

The name "Browser Error Redirector" doesn't make its purpose clear to a non-technical user

I would argue that there is no way to make its purpose clear to the non-technical user without using at least a full sentence, probably a paragraph. For those who are familiar with the concept of error page redirection in the first place, it's a very adequate description, very honest and the first thing I would suspect once I realized there was a problem. If it had been "Browser Helper" or "DNS Accelerator" or "Bonzai Buddy" then arguing that the name wasn't clear would be applicable; as it is, it's a specific name for a specific condition that doesn't hide what it is.

I read that article, and honestly, it comes off as someone trying to sound smart who really isn't. "Spyware" (used in the article) isn't the term for something that changes the behavior of the computer; it would be applicable if the software reports back to google about the browsing habits, but this isn't what's described in the article. It should be considered "malware" or "adware."

Further, the argument about the name seems frivolous. Expecting a non-technical user to even realize that their error pages

It occurred to me that if Google started delisting sites that tried to implant malware into visitors' computers, then webmasters would be much more diligent about keeping the crap off their sites, or at least it would keep a few more hapless victims out of harm's way.

Did you notice that it says that most offending sites are hosted in China? I think this is kind of interesting. Who hosts sites *in China* that are meant for viewers outside of China? I guess there might be some sites, but not many, I think.

Also, very few Chinese people use Google, so if Google started taking 'offending' sites out of its search results then very few people would be affected.

In fact, it seems to me that only good can come if they do so. Very few people use Google to find sites hosted in Ch

Eh? Their Chinese search web site already returns different results to their US one.

I guess my point is that they can still list infected sites in the results on their Chinese search engine, but remove them from everyone else's, and by doing so they'd not affect too many people in a negative way but have a more significant impact in a positive way.

This still supposes that:

1) the sites in question are hosted in China for a Chinese audience,
2) visitors from outside China are there by accident because the sites

China remains one of the largest providers of email spam and fraudulent miracle cures and scams worldwide. As such a contributing host, and with Chinese legal authorities poorly educated and frankly uninterested in punishing web and email fraud, it remains an email and webhost fraud hotspot. So you've left out this:

3) The sites in question are hosted in China, by Chinese crackers and fraudsters, to defraud anyone with money or computer resources tricked into visiting the site, no matter where it is hosted.

This works because Chinese law enforcement is even more behind the times dealing with computer fraud than the USA.

This shows you have no idea whatever.

Almost every single example of the products/services/scams being served sends the money via a US based credit card company to a US based criminal. By far the majority of products or services promoted by these methods are not even available to anyone outside America. In simple terms: both supply and demand are American. China and other countries are only invo

Where are you getting the claim that the fraudsters are mostly American? There's plenty of market for such frauds, and Americans are the logical victims of such fraud. I agree that the American prosecution efforts are pitiful, and that improving them would help reduce the problem massively. But have you ever tried to track a spammer or fraudster overseas to their hosting website and get anything done about it? The US ISPs are at least somewhat responsive to outright fraud accusations with proof provided.

That first one might not be true, since hosting servers in China is very cheap, so perhaps some entities host sites in China intended for non-Chinese audiences in order to cut costs. I remember years ago that hosts used to have a "no porn" clause in their service agreements, for fear that their IP block might get blacklisted. Now we often run into the same thing due to virtual hosting: blocking one IP address might knock 100 websites off the internet. Of course with China some of it may be the government trying t

Sites written in the Chinese language are, of course, mostly written for viewers who read Chinese, including in China and the widespread overseas Chinese populations.

But there's a huge business of websites in China that are used by spammers, phishers, and other parasites, because the Internet means that you can connect to anywhere in the world for the cost of a few hundred milliseconds, and China not only has a large technically skilled population, but also a lot of infrastructure, and an imbalance in bandwidth usage

er, ok. Nothing of *value* then. Certainly nothing that would stop me wanting to have their results filtered from my Google search results. I guess I was talking about sites that had legitimate content but which had been poisoned by various malware or whatever.

The problem with that is the number of sites that happen to host malware without meaning to. Too often the malware comes through advertising services or sneaks through in user generated content that would be fine if not for a browser vulnerability. Google does a lot as it is; outright blocking the sites goes too far (unless that's the only thing the site is made for, which is rare and would probably mean that the site is ranked low in the first place).

One site I work on got hit by a PHPBB SQL injection attack and had a tiny iframe inserted into the forum header that pointed to a well-known malware site, hightstats.net (and if you're curious the malicious script is in the strong/044 folder). Google picked up on the iframe's contents being a malicious script and added the malware warning to the search results pertaining to the forums section of our website.
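
The injected snippet described here, a near-invisible iframe pointing at a third-party host, is mechanical enough to scan for. A rough sketch using only the Python standard library (hightstats.net is the host named in the comment; the /x.js path and forums.example.com hostname are made up for illustration):

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class IframeAuditor(HTMLParser):
    """Flag iframes that are effectively invisible and point off-site,
    the shape of the injected forum-header snippet (a sketch, not a
    production scanner)."""
    def __init__(self, own_host):
        super().__init__()
        self.own_host = own_host
        self.suspicious = []

    def handle_starttag(self, tag, attrs):
        if tag != "iframe":
            return
        a = dict(attrs)
        # "Tiny" here means a 0x0 or 1x1 frame: visible to the browser
        # engine, invisible to the user.
        tiny = a.get("width", "") in ("0", "1") or \
               a.get("height", "") in ("0", "1")
        host = urlparse(a.get("src", "")).netloc
        if tiny and host and host != self.own_host:
            self.suspicious.append(a.get("src"))

# An injected header on a compromised forum might look like this:
html = ('<div id="header"><iframe src="http://hightstats.net/x.js" '
        'width="1" height="1"></iframe></div>')
auditor = IframeAuditor(own_host="forums.example.com")
auditor.feed(html)
print(auditor.suspicious)  # ['http://hightstats.net/x.js']
```

A periodic scan like this over your own rendered pages catches the injection before Google's crawler does, and before the warning label lands on your search results.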

I just wonder how it is that hightstats.net can still be in existence when it contains known malicious stuff that hackers are inserting into unwary websites?!

The problem is with the client software. I can understand the danger of sites that try to fool you into downloading and running an application, or infected media that harnesses an exploit in an application - but automatically infecting the machine just by visiting the site is beyond belief. There's a serious problem with what the "web" has become, forced upon us by reckless and naive developers. The WWW and HTML were never meant to be something that runs active code on the client. Period. Most of us realise there is no way this problem can ever be solved without revising exactly what a browser is supposed to be; as long as browsers run code instead of interpreting data, there will always be malicious sites set up to exploit this.

I have to observe a cast iron policy in my work. It means that quite a few sites on the internet are unavailable, but since they are mostly entertainment based it isn't a serious loss. No Javascript, no ActiveX, no Macromedia Flash. My activities are limited to viewing HTML and PDFs, even animated GIFs are blocked. In many years we have had no malware incidents (that I know of). Sometimes it's absolutely necessary to view a site containing potentially insecure content, so there is a "dirty machine" which is not allowed to connect to anything else and is wiped and reinstalled weekly.

The problem is that even serious academic and scientific sites (that should know better) are starting to add Flash plugins and heavy scripting, so it's getting hard for conscientious users to maintain security even where they want to. Insecure technology is being forced upon us by the site developers.

It would be nice if Google could display whether a site needs JavaScript, Flash or whatever and be able to search for HTML only content. The difficult way is to use Google Cache in text only mode of course.
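
The "does this site need JavaScript or Flash" flag wished for above is easy to approximate from markup alone. A rough sketch (Python standard library); note it only detects the *presence* of active content, not whether the page degrades gracefully without it:

```python
from html.parser import HTMLParser

class ActiveContentDetector(HTMLParser):
    """Rough check for the active-content tags of the era: <script>,
    and Flash via <embed>/<object>. A heuristic only."""
    def __init__(self):
        super().__init__()
        self.found = set()

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            self.found.add("javascript")
        elif tag in ("embed", "object"):
            a = dict(attrs)
            # Flash shows up as a MIME type or a .swf resource.
            blob = (a.get("type", "") + a.get("src", "") +
                    a.get("data", "")).lower()
            if "flash" in blob or blob.endswith(".swf"):
                self.found.add("flash")

det = ActiveContentDetector()
det.feed('<html><script src="app.js"></script>'
         '<embed src="movie.swf" type="application/x-shockwave-flash">'
         '</html>')
print(sorted(det.found))  # ['flash', 'javascript']
```

A search engine already parses every page it indexes, so surfacing a flag like this in results would cost almost nothing extra.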

It's got nothing to do with active code. It's to do with browsers being large, complex applications. Breaking large parts of the web by stopping scripting reduces the surface area for attack but does not eliminate it. There have been too many image decoder or URL exploits for anybody to believe that.

You'll start seeing people use H1 for everything. If you are lucky they'll override it with a style sheet so it doesn't look obnoxious.

I wonder if Google has ever considered a moderation system, allowing logged-in Google users to rank the results of their searches on a random and infrequent basis. It would be easy enough to have the "click here to open" link change to a "click here to open, and open survey in new tab/window" if the user said they were willing to moderate search results.

They have the vote for this on the toolbar, which to my knowledge works rather well if you are a heavy user and consistently vote on pages you search for. I do about 40 to 80 searches per day and I am sure that I vote on 90% of them. I have come back to the same topics to search and have seen changes which were major improvements (lag time about 4 to 6 weeks).

How is this a good idea? Sure, having headings suggests that the author may have gone to some trouble to structure the page, but it's no real indicator of quality. A script can easily crank out reasonable-looking headings. Same goes for HTML/XHTML compliance.

Punishing JavaScript will punish everyone using Ruby on Rails, Wordpress, or anything else that does AJAX stuff. Sure, JavaScript can be used to do bad things, but a lot of UI enhancement and "Web 2.0" stuff depends on it.

Punishing JavaScript will punish everyone using Ruby on Rails, Wordpress, or anything else that does AJAX stuff. Sure, JavaScript can be used to do bad things, but a lot of UI enhancement and "Web 2.0" stuff depends on it.

Any website which requires JavaScript should be punished. Sites which degrade gracefully should not be. This would be a difficult thing to determine, however.

The GoogleBot doesn't execute JavaScript. Google listing any content from a given site means it does, to a certain point, degrade gracefully.

Also, what's your problem with JavaScript? If you ever used the Google front page (instead of your browser's quick search function or /search?q=your+query), you probably didn't mind not having to click into that textbox, now did you? JavaScript can cause some problems, but implemented sensibly (by the browser devs) it is no security threat and used responsibly (by web devs) has great benefits.

The GoogleBot doesn't execute JavaScript. Google listing any content from a given site means it does, to a certain point, degrade gracefully.

I browse with Javascript off. I've noticed many pages (indexed from Google) which have Javascript-requirements for navigation. Usually, it's a menu bar which doesn't degrade (something I can't understand, as it's got to be easy to do.)

Also, what's your problem with JavaScript? If you ever used the Google front page (instead of your browser's quick search function or /search?q=your+query), you probably didn't mind not having to click into that textbox, now did you? JavaScript can cause some problems, but implemented sensibly (by the browser devs) it is no security threat and used responsibly (by web devs) has great benefits.

With Javascript, you can do a lot of neat things, sure (though I almost always use my browser's box to search Google, so I never see the home page.) It's mostly a security thing. Your assertion that sensibly-implemented Javascript is no security threat hasn't really been tested, as there hasn't been a sensible implementation yet.

Your assertion that sensibly-implemented Javascript is no security threat hasn't really been tested, as there hasn't been a sensible implementation yet.

As strange as this may sound, IE7's JavaScript implementation does not seem to have any known security flaws. Firefox, Safari and Opera seem to all be plagued by recent problems.
Anyway, JavaScript may not be the biggest of the web's security problems. Cross-site scripting can be accomplished almost as easily with pure HTML (e.g. instead of redirecting victi

This seems pretty silly - just because a website uses javascript doesn't mean it *requires* it. Well designed web sites work just fine without JS but if you have it then they give you an enhanced experience.

Well, it's in Google's best interest to fight this, as malware has the potential to affect their business. Really, as much as I am not an MS basher, malware is almost entirely Microsoft's fault. If they had paid attention back in the day to security, we wouldn't have the steaming swamp of malware we have now.

The only serious way to fight malware is to reduce the potential infection hosts.

Fighting this is just like fighting any sort of sickness or plague. If you have enough immunized hosts, then the issue won't

That's a nice theory, but you can't deny that Microsoft is the most popular vendor for a lot of software and is, therefore, the biggest target. While Microsoft seems to have a bigger security problem than other vendors, there's no way to tell whether other vendors and products would fail just as miserably given the same scrutiny.

In other news, Ford is blamed for the United States' foreign policy (if they hadn't built that stupid Model T, nobody would be driving cars today, burning all the oil, or making it necessary to raid middle eastern countries for their oil) while Xerox is accused of global warming as a whole (THEY started all that copying and printing. If it weren't for them, we wouldn't have had to chop down rainforests for paper, which in turn would have photosynthesized all that CO2 back to O2.)

It's a shame that Google chose to not identify the three AV vendors it tested. Their ability to protect against malware ranged from bad (~80%) to abysmal (~20%). To identify them would have been a public service for us and a motivation for them.

Having first been unable to use Google Translate, and now Google search, due to the error "Your request appears to be virus related; please scan your computer for malware," I do wonder how sound any Google analysis of malware is. If they have problems distinguishing between my computer, which is not malware infected, and the transparent port 80 proxy of my home cable ISP, which is shared by 100,000s of computers, some of which are obviously malware infected, then what hope is there for a useful analysis of the much more devious and murky world of drive-by installers?

Usually, good conferences and journals have an anonymous peer review process. I find it very odd that Google researchers chose to publicize their paper before the peer review process is done. That is at least a lack of decorum, IMO.

The underlying problem is that advertising space is often syndicated to other parties who are not known to the web site owner. Although non-syndicated advertising networks such as Google Adwords are not affected...

2/3 of all malware distribution sites & sites that link to them are hosted in China.
The next worst offender is the US with 1/6.
About 3.5M websites attempt to send you to exploits from 180K distribution sites.
63% of the 180K malicious sites are IIS, 33% are Apache, and a handful are other.
80% of malware delivered outside of ads (e.g. via injected iframes) was within 4 redirects of the malware distributor.
80% of malware delivered via ads was more than 4 redirects from the distributor.
3/4 of distribution sites and 1/2 of landing sites are in 2 blocks occupying 6.5% of IPv4 space.
Among drive-by downloads, 1/2 alter your startup, 1/3 attack your security, 1/4 corrupt your preferences, and 7% install BHOs.
87% of outbound connections the malware initiates are HTTP, 8.3% are IRC.
The three AV engines tested against malware retrieved by the study had detection rates of about 35, 50, and 70%.

The part I find scariest is the 3.5M malware fronts. I mean, there are only about 70M active hosts on the entire Internet - that's 5 percent! Since I think that trying to make programmers these days write secure code is a lost cause, we should focus on breaking up the software monoculture. This kind of shit really starts to lose its efficacy if only 1/4 or 1/5 of attempts even attack the right browser...

Recheck the paper. There were 3.5M bad URLs which, through a series of redirects, pointed to only 9,340 malware distribution sites (see Table 1, page 8) hosted on systems in only 500 autonomous systems. This is a solvable problem: 500 hosting companies (or their customers) are the source of it all.
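
The arithmetic behind "the source of it all", using the figures quoted from the paper:

```python
# Figures quoted from the paper: 3.5M bad URLs funnel through 9,340
# distribution sites hosted in just 500 autonomous systems.
bad_urls = 3_500_000
distribution_sites = 9_340
autonomous_systems = 500

per_as = distribution_sites / autonomous_systems
urls_per_dist = bad_urls / distribution_sites

print(f"~{per_as:.1f} distribution sites per AS")        # ~18.7
print(f"~{urls_per_dist:.0f} bad URLs per distribution site")  # ~375
```

Each AS operator would have roughly twenty hosts to clean up, while taking down one distribution site neutralizes hundreds of landing URLs at once.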

The paper points out that most of the attacks involve redirection of some portion of page content. That's a useful piece of information, because, other than for advertising purposes, redirection of IFRAME items and images is quite rare. A useful blocking strategy would be to block all redirects below the top level page. Many ads will disappear; no great loss.
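
The blocking strategy suggested here, allow top-level navigation redirects but refuse redirects originating inside embedded content, reduces to a small policy function. A sketch (the hook shown is hypothetical; no real browser exposes anything this simple):

```python
def allow_redirect(is_top_level, source_tag):
    """Policy sketch: permit redirects of the top-level document
    (ordinary navigation), block redirects that originate from embedded
    content, where drive-by chains typically hide."""
    if is_top_level:
        return True
    # Embedded contexts: refuse. Advertising iframes break, which the
    # comment above counts as no great loss.
    return source_tag not in ("iframe", "img", "embed", "object")

print(allow_redirect(True, "html"))     # True  - normal navigation
print(allow_redirect(False, "iframe"))  # False - embedded redirect blocked
print(allow_redirect(False, "img"))     # False
```

The trade-off is exactly as stated: syndicated ads stop working, but so do the redirect chains that 80% of non-ad malware rides on.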

Checking for hostile full web pages is already being done. McAfee SiteAdvisor was the first to do that, then Google copied them. Our "bottom feeder filter", SiteTruth [sitetruth.com], does some of that too, although it throws out far more sites than McAfee or Google do, just by insisting that some identifiable business stand behind any page that looks commercial.

Google's revenue model depends, to some extent, on those "bottom feeder" sites: all those anonymous "landing pages", "directory pages", "made for AdWords pages", and similar junk. Those things bring in substantial AdWords revenue, although they don't usually generate much in the way of sales for advertisers. Throwing them out of the "Google Content Network" would cut Google's ad income. This is where "don't be evil" collides with Google's profitability.

This looks like a solvable problem, but the solution will come from the security companies, not the search companies. The search companies can't afford to fix it.

In the 10 months of data the researchers used, Google found 9,340 distribution sites. The other 180,000 sites simply redirect you to the distribution site, which is where you download the malware.

It gets better - those 9340 distribution sites are under the aegis of only 500 autonomous systems. [wikipedia.org] Which means Google could send their list to those 500 AS's - and each would have (on average) around 20 malware sites to clean up. After this, Google could keep notifying AS's of the distribution sites found (le

A potential vector for redirects to malware sites lies in bogus registrations on bulletin boards. A board of which I'm a member has seen a large number of such registrations purporting to originate in England, with links to sites in eastern Europe. Redirection? Walks like a duck...

I think that since Google has the technology to discover and index malware-distributing sites, they should provide a new feature which puts a small red warning beside malicious results, like the McAfee SiteAdvisor service does. This would decrease the number of infected machines on the Internet, and it would be very easy for novice users to notice.
ExtremeSecurity Blog Admin
http://extremesecurity.blogspot.com/ [blogspot.com]