
Update on March 24 at 7:03 California time: The Cisco blog post has been updated to change a key finding Ars reported in the following post. Contrary to Cisco's earlier reporting, the update says not all the servers compromised in the attack were running Linux version 2.6. "We have not identified the initial exploit vector for the stage zero URIs," the update stated. "It was not our intention to conflate our anecdotal observations with the technical facts provided in the listed URIs or other demonstrable data, and the below strike through annotations reflect that. We also want to thank the community for the timely feedback."

Earlier this week, Ars reported on attacks exploiting an extremely critical vulnerability in the PHP scripting language almost two years after the bug came to light. By going 22 months without installing crucial patches, the responsible administrators were menacing the entire Internet, in much the same way as the owner of a blighted building might contribute to increased urban decay or neighborhood crime.


Now comes word of a new mass compromise that preys on even more neglected Web servers, some running versions of the Linux operating system kernel first released in 2007. According to a blog post published late Thursday by researchers from Cisco, the people behind the attack appear to have identified a vulnerability, since patched in later Linux releases, that allows them to dish out malicious content to unsuspecting people who visit the sites. The quick-spreading compromise took over 400 hosts per day on Monday and Tuesday, and so far, Cisco has counted more than 2,700 distinct URLs that are under the control of the attackers.

"This large-scale compromise of an aging operating system highlights the risks posed by leaving such systems in operation," Martin Lee, a threat intelligence technical lead in Cisco's Security Intelligence Operations group, wrote. "Systems that are unmaintained or unsupported are no longer patched with security updates. When attackers discover a vulnerability in the system, they can exploit it at their whim without fear of it being remedied."


The mass infection works against servers running version 2.6 of the Linux operating system kernel, some using releases from 2007 or earlier, Lee said. The attacks cause otherwise legitimate websites to serve fraudulent pages and pay-per-view ads to visitors. Lee said there's also anecdotal evidence that visitors are exposed to attacks that install malware on their computers. Underscoring the effect the infected servers are having on the Internet at large, one in 15 customers using a Cisco safe-browsing cloud service has had at least one user exposed to the attacks. The end-user attacks work in a two-stage process that mixes JavaScript code from multiple servers in a way that can ultimately become harmful.
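
Cisco's post doesn't include the injected code itself, but the two-stage structure suggests a quick check a site owner could run: fetch a page and flag script tags that pull JavaScript from another host. The following is a minimal sketch in Python with a placeholder URL; many sites legitimately load off-site scripts (CDNs, analytics), so real triage would compare hits against a threat-intelligence list rather than treating every third-party script as hostile.

    # Minimal sketch: list <script src=...> tags that point at a host other
    # than the page's own. The URL below is a placeholder; treat results as
    # leads, not verdicts.
    from html.parser import HTMLParser
    from urllib.parse import urlparse
    from urllib.request import urlopen

    class ScriptFinder(HTMLParser):
        def __init__(self):
            super().__init__()
            self.sources = []

        def handle_starttag(self, tag, attrs):
            if tag == "script":
                src = dict(attrs).get("src")
                if src:
                    self.sources.append(src)

    def external_scripts(page_url):
        page_host = urlparse(page_url).hostname
        finder = ScriptFinder()
        finder.feed(urlopen(page_url).read().decode("utf-8", errors="replace"))
        # Relative URLs have no hostname and are served by the site itself.
        return [s for s in finder.sources
                if urlparse(s).hostname not in (None, page_host)]

    for src in external_scripts("http://example.com/"):
        print("off-site script:", src)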

Some antivirus products are flagging the attacks as those used by the Blackhole exploit kit, but Lee said the detection is probably erroneous. It's more likely, he said, that the attack code being served to end users is related to a malware campaign uncovered in January by researchers from security firm Sucuri. Infected servers are found all over the world, with large concentrations in Germany and the US.

Cisco's post is a potent reminder that people running any unpatched operating system put the entire Internet at risk. Given the likelihood that a significant percentage of the infected sites are run by hobbyists or mom-and-pop operations with modest resources and little security expertise, there's no clear antidote to the rash of outdated machines. One possibility is for Web hosts to begin mandating a set of criteria for the servers permitted to operate on their networks. Such a solution is probably unworkable, since it would likely result in increased workloads and strained customer relations for the hosting companies that take on the challenge. Until a fix comes along, it's "end user beware."

"Large numbers of vulnerable unpatched systems on the Internet are tempting targets for attackers," Lee wrote. "Such systems can be used as disposable one-shot platforms for launching attacks. This makes it all the more important that aging systems are properly maintained and protected."

Promoted Comments

Another excellent example of how the concept of herd immunity applies approximately as well to internet servers as it does to human beings.

If 98% of machines were immune to these attacks, it would take too many resources for them to spread around, and they would get squashed faster than they could spread.

However, since there are piles of mostly abandoned servers running all over the place, plus ancient entrenched software that doesn't support newer OS's, it's impossible for us to get that percentage high enough.

We just moved our servers off of a 9-year-old Red Hat 2 install. Our last security update was... never. We bricked one of the boxes patching the kernel once, and honestly we just kind of forgot to update them. One of our boxes even had 2687 days (7.35 years) of uptime!

It's amazing we only got hacked once (that we know about)...

(Now we are on AWS and love the fact that we can use these newfangled things like sudo and package managers! And snapshots.)

Here's the problem when it comes to updating infrastructure systems like these for system administrators:

It's not a matter of security, it's a matter of "If it ain't broke, don't you even dare try to fix it."

If history as sysadmins has taught us anything, it's that the constant cycle of updates, especially on mission-critical machines, puts our job security on the line. Especially when a lot of these machines are running custom code with dependencies that end up being the very security liabilities that get patched.


Given the likelihood that a significant percentage of the infected sites are run by hobbyists or mom-and-pop operations with modest resources and little security expertise, there's no clear antidote to the rash of outdated machines.

That reminds me. How are those roll-your-own E-mail server articles coming along?

I think it is time for hosting companies to take a slightly more proactive approach to nastiness on their networks. If I drove down the road with my car belching smoke and oil, the cops would pull me over very quickly. Why don't we have something, even slightly, similar with unmaintained servers?

After this and the previous article, I keep wondering why White Hats haven't started writing viruses that take advantage of these vulnerabilities and shut down these servers, leaving a prominent message behind as to why it was shut down. If possible, maybe they could patch them as well, but leave it shut down to force the admin to restart it and see the message (if it's not abandoned).

Of course there could be unforeseen problems, like having 40,000 servers trying to access the same patch server at the same time creating a functional DDoS attack.

I think it is time for hosting companies to take a slightly more proactive approach to nastiness on their networks. If I drove down the road with my car belching smoke and oil, the cops would pull me over very quickly. Why don't we have something, even slightly, similar with unmaintained servers?

Not possible. In theory a hosting company provides you with equipment and a line. Then you pay for usage.

And this is why, no matter how suckass my shared webhost is, at least it's managed: they maintain and update the OS and core backend packages needed for me to run a website.

Given the likelihood that a significant percentage of the infected sites are run by hobbyists or mom-and-pop operations with modest resources and little security expertise, there's no clear antidote to the rash of outdated machines.

That reminds me. How are those roll-your-own E-mail server articles coming along?

As snarky as this is, it does have some truth. You cannot overemphasize the amount of responsibility it takes to run your own publicly accessible server of any kind. But even that said, people need to start somewhere.

I think it is time for hosting companies to take a slightly more proactive approach to nastiness on their networks. If I drove down the road with my car belching smoke and oil, the cops would pull me over very quickly. Why don't we have something, even slightly, similar with unmaintained servers?

The other side of this coin is just another level of control, more hoops, more regulation, more noses sniffing around other people's business "for the greater good". Pretty soon you'll have to get your server licensed, etc.

You'll probably downvote the sentiment, but I find it interesting that in some cases we can be so laissez-faire and in others so big-brother, and often it seems to me that the deciding factor is the tone of the article we're responding to.

After this and the previous article, I keep wondering why White Hats haven't started writing viruses that take advantage of these vulnerabilities and shut down these servers, leaving a prominent message behind as to why it was shut down. If possible, maybe they could patch them as well, but leave it shut down to force the admin to restart it and see the message (if it's not abandoned).

Of course there could be unforeseen problems, like having 40,000 servers trying to access the same patch server at the same time creating a functional DDoS attack.

Any thoughts?

Because once that code is out, an even lazier black hat just takes the same code and modifies it to deploy more crap.

After this and the previous article, I keep wondering why White Hats haven't started writing viruses that take advantage of these vulnerabilities and shut down these servers, leaving a prominent message behind as to why it was shut down. If possible, maybe they could patch them as well, but leave it shut down to force the admin to restart it and see the message (if it's not abandoned).

Of course there could be unforeseen problems, like having 40,000 servers trying to access the same patch server at the same time creating a functional DDoS attack.

Any thoughts?

Many years of jail time under the Computer Fraud and Abuse Act or similar legislation creates a strong disincentive.

Plus once you break into someone's house (server) and start breaking their stuff, it's hard to call yourself a white hat anymore.

After this and the previous article, I keep wondering why White Hats haven't started writing viruses that take advantage of these vulnerabilities and shut down these servers, leaving a prominent message behind as to why it was shut down. If possible, maybe they could patch them as well, but leave it shut down to force the admin to restart it and see the message (if it's not abandoned).

Of course there could be unforeseen problems, like having 40,000 servers trying to access the same patch server at the same time creating a functional DDoS attack.

Any thoughts?

I'm guessing the actions you suggest would be a flagrant violation of the Computer Fraud and Abuse Act or similar statutes in place in the US and around the world. On top of that, people who wrote or propagated malware that shut down other people's servers would be exposed to massive legal liability.

In short, this suggestion isn't viable because white hats don't want to risk going to prison or losing their home in a lawsuit.


After this and the previous article, I keep wondering why White Hats haven't started writing viruses that take advantage of these vulnerabilities and shut down these servers, leaving a prominent message behind as to why it was shut down. If possible, maybe they could patch them as well, but leave it shut down to force the admin to restart it and see the message (if it's not abandoned).

Of course there could be unforeseen problems, like having 40,000 servers trying to access the same patch server at the same time creating a functional DDoS attack.

Any thoughts?

I like that idea. A less draconian solution would be for ISPs/hosting companies to probe servers on their network, and send several warnings about how to fix the problem, before eventually disabling access to the server.

The downside of this is that no company wants to lose business, so you'd need to make it a best practice and get the biggest companies to support it.
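
A first-pass probe wouldn't take much, either. Here's a minimal sketch, assuming the host answers plain HTTP and that flagging on the Server banner is good enough to trigger a warning e-mail; the version strings in the watchlist are illustrative, not a vetted end-of-life database.

    # Minimal sketch of the probe half of that workflow: fetch a host's
    # HTTP Server banner and flag versions long out of support. Watchlist
    # entries are illustrative examples, not real EOL data.
    import urllib.request

    ANCIENT_BANNERS = ("Apache/1.", "Apache/2.0.")  # hypothetical watchlist

    def server_banner(host):
        req = urllib.request.Request("http://%s/" % host, method="HEAD")
        with urllib.request.urlopen(req, timeout=5) as resp:
            return resp.headers.get("Server", "")

    def needs_warning(host):
        banner = server_banner(host)
        return any(banner.startswith(old) for old in ANCIENT_BANNERS), banner

    flagged, banner = needs_warning("example.com")  # placeholder host
    if flagged:
        print("send warning e-mail:", banner)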


This reads like the problem is something in the kernel that's been fixed in a later release, but that's not necessarily the case. They just said that the common thread they found is that all the compromised machines were running 2.6.

The actual exploit is never disclosed (and they may not even know what it is) so it could be in php, apache, or some other service. Kernel-level stuff is rarely exposed to the outside world, though it could be used in a privilege-escalation attack when combined with some other exploit that lets you run code remotely.

Here's the problem when it comes to updating infrastructure systems like these for system administrators:

It's not a matter of security, it's a matter of "If it ain't broke, don't you even dare try to fix it."

If history as sysadmins has taught us anything, it's that the constant cycle of updates, especially on mission-critical machines, puts our job security on the line. Especially when a lot of these machines are running custom code with dependencies that end up being the very security liabilities that get patched.

Sadly, I think SunnyD speaks the truth when it comes to the incentives for and against server updates. As important as upgrading is for security, most organizations place a much higher priority on stability and availability. That is, until they face the kind of compromise that hit Target. I don't support the line of thinking SunnyD describes, and I doubt SunnyD does either. But I think this line of thinking is common and a major impediment to security.

Here's the problem when it comes to updating infrastructure systems like these for system administrators:

It's not a matter of security, it's a matter of "If it ain't broke, don't you even dare try to fix it."

If history as sysadmins has taught us anything, it's that the constant cycle of updates, especially on mission-critical machines, puts our job security on the line. Especially when a lot of these machines are running custom code with dependencies that end up being the very security liabilities that get patched.

Not updating systems is bad practice that too many admins still go by. When I came onboard with my current employer it took a great culture shift to get everybody to understand why security updates are so important. One year later and our update cycle is nearly perfected.

There is no excuse for this anymore. Virtualize your servers, snapshot VMs before making changes, update and revert if a problem occurs. Clone a VM and build a test environment to check before doing it in production. For every excuse there are established best practices and mitigation techniques to deal with them.
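
To make that concrete, here's a minimal sketch of the snapshot-update-revert cycle, assuming a libvirt/KVM host with the virsh CLI and SSH access into the guest; the domain name, SSH target, and health-check URL are placeholders, and the yum call assumes a Red Hat style guest.

    # Minimal sketch: checkpoint a VM, patch it, and revert if it breaks.
    # Names and URLs below are placeholders for your own environment.
    import subprocess
    import urllib.request

    DOMAIN, SNAP = "web01", "pre-update"

    def run(*cmd):
        subprocess.run(cmd, check=True)

    def healthy():
        try:
            with urllib.request.urlopen("http://web01.example/", timeout=10) as r:
                return r.status == 200
        except OSError:
            return False

    run("virsh", "snapshot-create-as", DOMAIN, SNAP)   # checkpoint first
    run("ssh", "root@web01.example", "yum -y update")  # patch inside the guest
    if healthy():
        run("virsh", "snapshot-delete", DOMAIN, SNAP)  # keep the updated state
    else:
        run("virsh", "snapshot-revert", DOMAIN, SNAP)  # roll back and investigate

In production you'd rehearse on a cloned test VM first, as described above, but the loop is the same.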

But, Linux made lots of headway as a cheap secure alternative to Microsoft. If I had a penny for every time someone said, "We'll be fine, it's a Linux box we're deploying on the internet and not a Microsoft server" ....

The thing is, like the Mac, Linux has been viewed as bulletproof. In 2007, I was working through the SANS 560 course and we utilized a publicly available kernel exploit for 2.6 to gain root. It was beautiful, just compile, run and BOOM, you were root. Linux was never bulletproof.

This is simply more (unnecessary) evidence that when we decide a platform is secure, we become complacent and end up in this situation. Anything with software should be treated as vulnerable as long as it has power and network connectivity.
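
In that spirit, even a trivial self-audit beats assuming the platform is safe. This sketch just flags hosts still on a 2.6-series (or older) kernel; the cutoff comes from this story, not from any authoritative support database, and a newer kernel says nothing about the rest of the stack.

    # Minimal sketch: flag a 2.6-series or older Linux kernel.
    import platform

    release = platform.release()  # e.g. "2.6.18-194.el5"
    major, minor = (int(x) for x in release.split(".")[:2])
    if (major, minor) <= (2, 6):
        print("kernel %s: 2.6 or older -- assume exploitable, plan the upgrade" % release)
    else:
        print("kernel %s: newer, but still check for pending security updates" % release)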

Here's the problem when it comes to updating infrastructure systems like these for system administrators:

It's not a matter of security, it's a matter of "If it ain't broke, don't you even dare try to fix it."

If history as sysadmins has taught us anything, it's that the constant cycle of updates, especially on mission-critical machines, puts our job security on the line. Especially when a lot of these machines are running custom code with dependencies that end up being the very security liabilities that get patched.

There is a concept for this; it's called "technical debt." I'm not saying it's any one person's fault, but it is a flawed system: keep pushing the problem off until you're painted into a corner.

My current VPS provider gave me a kernel that was built in 2011, and I'm in the market for a new one anyway. This article reminded me that I have a week off soon and I can actually do the transition then. Does anyone have suggestions for a good and cheap VPS provider that uses Debian?

There is no excuse for this anymore. Virtualize your servers, snapshot VMs before making changes, update and revert if a problem occurs. Clone a VM and build a test environment to check before doing it in production. For every excuse there are established best practices and mitigation techniques to deal with them.

Admins have to stop making excuses for not updating.

This is assuming you have the time or budget. Many admins do, many others don't: they might not have the disk or processor space, the person-hours and/or the wherewithal and support from upper management to get this done. And that all assumes you have an admin, or even a reputable services provider.

In theory this is easy enough to do. In practice, it ends up bottom of the list, especially in SMB IT.

Another excellent example of how the concept of herd immunity applies approximately as well to internet servers as it does to human beings.

If 98% of machines were immune to these attacks, it would take too many resources for them to spread around, and they would get squashed faster than they could spread.

However, since there are piles of mostly abandoned servers running all over the place, plus ancient entrenched software that doesn't support newer OS's, it's impossible for us to get that percentage high enough.

I wouldn't compare that to herd immunity in humans, actually. Humans have only limited ways of getting into contact with others and thereby spreading a disease, while any computer on the internet can literally come into contact with any other computer.

Herd immunity means that even unprotected individuals get shielded from a disease, because nearly everyone around them is immune and therefore the risk of contracting that disease gets extremely low, which naturally couldn't apply to computers where anyone can infect anyone regardless of where they are.
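
One way to see where the analogy holds and where it breaks is to model the attacker as a random scanner rather than a neighbor-to-neighbor infection. In the toy simulation below (every number is invented for illustration), there is no contact network at all, yet the immune fraction still sets the attacker's yield per scan; what it doesn't do is shield the remaining vulnerable hosts, which is the point above.

    # Toy model: an attacker scanning hosts uniformly at random. No contact
    # network exists, yet the immune fraction still determines the yield.
    import random

    def hits(hosts=100_000, immune_fraction=0.98, scans=10_000):
        vulnerable = {h for h in range(hosts) if random.random() > immune_fraction}
        return sum(1 for _ in range(scans) if random.randrange(hosts) in vulnerable)

    for frac in (0.90, 0.98, 0.999):
        print("%.1f%% immune -> roughly %d hits per 10,000 scans"
              % (frac * 100, hits(immune_fraction=frac)))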

Here's the problem when it comes to updating infrastructure systems like these for system administrators:

It's not a matter of security, it's a matter of "If it ain't broke, don't you even dare try to fix it."

If history as sysadmins has taught us anything, it's that the constant cycle of updates, especially on mission-critical machines, puts our job security on the line. Especially when a lot of these machines are running custom code with dependencies that end up being the very security liabilities that get patched.

Sadly, I think SunnyD speaks the truth when it comes to the incentives for and against server updates. As important as upgrading is for security, most organizations place a much higher priority on stability and availability. That is, until they face the kind of compromise that hit Target. I don't support the line of thinking SunnyD describes, and I doubt SunnyD does either. But I think this line of thinking is common and a major impediment to security.

And here's the scary rub...

I'm a developer.

But I also have a long, sordid history with system administration, and my current development role puts me front and center in the spotlight of ... IT and software security. My take is that considerations have to be made on both sides of the fence: by the developers of these custom software packages, so that these sorts of dependencies either aren't implemented in the first place or are easily overcome when security updates are warranted, and by the IT admins as well as management, who need to understand that downtime may be required, even on mission-critical systems, especially in today's interconnected world.

The biggest hurdle we face, in my opinion, isn't either party, though. It's time and money. Look at how quickly companies burn through software packages and vendors, and how quickly vendor support for those packages comes and goes. Companies get painted into corners where updating their systems ends up meaning a full overhaul of their entire infrastructure... and that ultimately means spending the big bucks, something shareholders usually don't look fondly on.

I'm just glad I've been able to work both sides of the coin; I'd like to think that it lets me be a little bit more understanding and compassionate in what I do for the next generation that comes along and has to deal with my version of this stuff.

edit: And most definitely for the record - for the love of god people PATCH those dang machines!

If by Jenny McCarthy you mean a generic Playboy bunny (and the tech expertise one assumes they have), then I think I get the "slight"... but if you mean that Jenny McCarthy does not have a brain, then I disagree. She's one of the two brain cells on "The View"... meaning she's one of the two intelligent people on "The View." LOL

Another excellent example of how the concept of herd immunity applies approximately as well to internet servers as it does to human beings.

If 98% of machines were immune to these attacks, it would take too many resources for them to spread around, and they would get squashed faster than they could spread.

However, since there are piles of mostly abandoned servers running all over the place, plus ancient entrenched software that doesn't support newer OS's, it's impossible for us to get that percentage high enough.

I wouldn't compare that to herd immunity in humans, actually. Humans have only limited ways of getting into contact with others and thereby spreading a disease, while any computer on the internet can literally come into contact with any other computer.

Herd immunity means that even unprotected individuals get shielded from a disease, because nearly everyone around them is immune and therefore the risk of contracting that disease gets extremely low, which naturally couldn't apply to computers where anyone can infect anyone regardless of where they are.

It's not a perfect analogy, but there are a lot of similarities that still make it very useful.

Another excellent example of how the concept of herd immunity applies approximately as well to internet servers as it does to human beings.

If 98% of machines were immune to these attacks, it would take too many resources for them to spread around, and they would get squashed faster than they could spread.

However, since there are piles of mostly abandoned servers running all over the place, plus ancient entrenched software that doesn't support newer OS's, it's impossible for us to get that percentage high enough.

I disagree. These sorts of exploits work perfectly well when even a tiny percentage of servers can be compromised. As long as there's a reliable supply of servers with some marginal amount of traffic, it is dirt simple for someone to scan for them automatically and infect them. My guess is that only a small percentage of servers out there right now are Linux 2.6 based, possibly 1% or less, but in absolute terms that's still enough that a decent amount of traffic hits them. The traffic doesn't even have to be THAT much; it just has to represent hits from a lucrative number of total users (and you can always use various techniques to drive users to your enslaved machines).

Frankly I think we need a new paradigm in terms of cheap low-end web servers. Why should people be burdened to run an entire OS? We need to develop a simple "web container" that has very little attack surface, can be automatically updated with little fuss, and thus which can simply be run virtually unmaintained and remain secure for years on end. Then the ISPs can construct an infrastructure for hosting them and updating them. Mom and Pop can rest easy and all are happy.

But, Linux made lots of headway as a cheap secure alternative to Microsoft. If I had a penny for every time someone said, "We'll be fine, it's a Linux box we're deploying on the internet and not a Microsoft server" ....

The thing is, like the Mac, Linux has been viewed as bulletproof. In 2007, I was working through the SANS 560 course and we utilized a publicly available kernel exploit for 2.6 to gain root. It was beautiful, just compile, run and BOOM, you were root. Linux was never bulletproof.

This is simply more (unnecessary) evidence that when we decide a platform is secure, we become complacent and end up in this situation. Anything with software should be treated as vulnerable as long as it has power and network connectivity.

My argument for using Linux as a web server is performance and setup simplicity. I do feel that for documented security vulnerabilities (simple things like file system permissions, etc.), the configuration of many Linux distributions is easier, which means that, on average, a system set up by someone who knows what they are doing may generally be more secure. However, I would never argue that the operating system itself is globally "more secure"; generalization about security almost always leads to false statements. I do think certain specific components of the Linux operating system are more secure, for example the way that user login passwords are hashed, but I would never extrapolate that into "Linux is more secure." However, to be specific, I do feel safer leaving a really important password as my user login on a system connected to the internet.

It's an interesting mix of the classic aging infrastructure dilemma combined with shared responsibility. Not likely worth trying to solve perfectly, but sites that are identified should be required to fix their issue in a timely manner or face some sort of gradually increasing penalties. I assume it's not legal to spread malware, even unknowingly and unwittingly.

My current VPS provider gave me a kernel that was built in 2011, and I'm in the market for a new one anyway. This article reminded me that I have a week off soon and I can actually do the transition then. Does anyone have suggestions for a good and cheap VPS provider that uses Debian?

I use Hetzner.de. They provide KVM-based VPS and you can choose Debian, Ubuntu, CentOS, Fedora, OpenSUSE or FreeBSD. My interaction with them is limited to paying the bill, so I'm happy.

Hardly anyone forgets they still have a server online, particularly if they are paying each month for the electricity to run it.

The problem comes from hosting companies selling very cheap servers and VPSes. These are so cheap that the people who hire them are such newbies that once they receive the servers they never, ever patch them, because they don't know how. The hosting company is so bad it does not enforce patches: its earnings per server/customer are so low that it won't even bother wasting time informing customers, let alone actually enforcing updates. A proper company WOULD enforce security updates on its customers' machines, or at least unplug unpatched ones from the network it is responsible for, but that means tracking them down, informing the customer, and maybe even giving support, which is the extra mile they don't want to go. This costs time and money, and they hardly make a profit on these customers, so they honestly don't care.

The other problem is all those prepaid hosting packages running as long as three years. Most people stop caring about their server or website within a year, but since they paid for three years, it stays online, and the company has no idea the server or hosting account was abandoned long ago. Anyone working at a company providing services can probably confirm that there are a lot of abandoned services their users neither use nor care about.

If we track down these servers, it's no wonder they almost always come from the same networks and companies, and this is exactly why: crappy hosting providers offering terrible cheap services. Most of the spam, DoS attacks, and other malicious traffic comes from these budget providers.

Amazingly, a lot of these massive budget providers are in Europe, and I know some system administrators who have blocked those providers' entire networks, since most traffic coming from them was garbage. The US has them as well, but they don't seem popular enough to have a real impact in terms of customers/servers; otherwise we would see the same trend.

Selling cheap is not the problem; the problem is giving a Linux server to someone who does not know what a server or an operating system is and then leaving it plugged in online 24/7.

I think it is time for hosting companies to take a slightly more proactive approach to nastiness on their networks. If I drove down the road with my car belching smoke and oil, the cops would pull me over very quickly. Why don't we have something, even slightly, similar with unmaintained servers?

Not possible. In theory a hosting company provides you with equipment and a line. Then you pay for usage.

And this is why, no matter how suckass my shared webhost is, at least it's managed: they maintain and update the OS and core backend packages needed for me to run a website.

Technically all ISPs monitor their network and will pull your server's network access if it becomes a problem for them, so if you are spamming out millions of emails an hour, you can expect them to take your server down. But they do not act unless they notice it affecting their network; generally, if your server doesn't flood the network, they don't care.

What I do think they need to do is take a little more of an active role in policing what comes out of their network. Remember the DDoS attacks made by reflecting packets off NTP servers? All that was needed to stop those (and many other similar attacks) was for ISPs to put a firewall rule on their edge routers that simply drops packets with a source IP address they do not own, i.e. spoofed IP packets. Simple, their routers can handle it, and yet... only a few do this.
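
The check itself is trivial; here's a minimal sketch of that spoofed-source filter (the idea standardized as BCP 38), using documentation address ranges as stand-ins for the prefixes an ISP would actually own. A real deployment would be an ACL on the edge router, not Python.

    # Minimal sketch of source-address validation (BCP 38): forward an
    # outbound packet only if its source falls in a prefix we own. The
    # prefixes are documentation ranges standing in for real allocations.
    from ipaddress import ip_address, ip_network

    OWNED_PREFIXES = [ip_network("203.0.113.0/24"),
                      ip_network("198.51.100.0/24")]

    def should_forward(src_ip):
        src = ip_address(src_ip)
        return any(src in net for net in OWNED_PREFIXES)

    assert should_forward("203.0.113.7")   # legitimate customer source
    assert not should_forward("8.8.8.8")   # spoofed source: drop at the edge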

I do hope that Platform as a Service, rather than OS hosting, becomes more common, because then the platform will be patched regularly and upgraded. Then we only have to worry about compromised applications, but it's a start. This still won't really be fixed until the ISPs take some responsibility (or policing action) for what comes out of their networks.