“Internet of Things” is the new Windows XP—malware’s favorite target

Bitcoin miners and DDoS apps are targeting a new crop of low-hanging fruit.

The past few days have revealed new data suggesting that the recent upsurge in malware targeting routers—as Ars has chronicled here, here, and here—is not only continuing but also spreading to digital video recorders (DVRs).

Exhibit A came Monday from researchers at the SANS Institute, a security training organization, who unearthed a Bitcoin-mining trojan infecting DVRs. The researchers found the infection while investigating the source of an automated script they had observed scanning the Internet for data-storage devices made by Synology. They eventually determined that the bot ran on a DVR with an ARM processor, but at first they didn't know much else. They later established that it was part of a Bitcoin miner that took control of DVRs used to record video from security cameras, most likely by exploiting an exposed telnet port and a default root password of "12345." Samples of the malware are here. The password to access the binaries is "infected."

"To our surprise, at least in one case it turned out that a binary by the same name, 'cmd.so', was running on the NAT router itself," Sans CTO Johannes Ullrich wrote. "In addition, a second process was running that looked just like the Bitcoin miner we saw running in the infected DVRs. Sadly, we were not able to retrieve the binaries, but the processlist looks similar enough to make us believe that this is the same basic binary just compiled for MIPS in this case (the router in question uses a MIPS CPU)."
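The infection path described above (an exposed telnet port plus a factory-default root password) is also the easiest thing for a defender to audit. The sketch below is a hypothetical inventory check, not the malware's actual code; the credential list and the device-record format are illustrative assumptions:

```python
# Hedged sketch: flag devices in an asset inventory that still use a
# factory-default login, the weakness the DVR Bitcoin miner exploited.
# The credential list below is illustrative, not exhaustive.

KNOWN_DEFAULTS = {
    ("root", "12345"),      # the DVR default reported by SANS
    ("admin", "password"),
    ("admin", "admin"),
}

def flag_default_credentials(devices):
    """Return hostnames of devices still using a known-default login.

    `devices` is an iterable of (hostname, username, password) tuples,
    a made-up format standing in for your own asset inventory.
    """
    return [
        host
        for host, user, password in devices
        if (user, password) in KNOWN_DEFAULTS
    ]

inventory = [
    ("dvr-lobby", "root", "12345"),
    ("nas-backup", "svc_backup", "correct horse battery staple"),
]
print(flag_default_credentials(inventory))  # → ['dvr-lobby']
```

Anything a check like this flags should have its password changed and, ideally, its telnet service disabled in favor of SSH.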

Exhibits B and C

Exhibit B in the case that infections of routers and similar devices are growing comes from researchers at antivirus provider Eset. In a blog post published Wednesday, they reported that Win32/Sality, an 11-year-old piece of computer malware used for sending spam and performing denial-of-service attacks, was recently updated to change the domain name system (DNS) settings of home broadband gateway routers. The new component was first used in late October. The Eset researchers wrote:

This feature adds a new dimension to the Win32/Sality operation. The first component, detected by ESET as Win32/RBrute.A, scans the Internet for router administration pages in order to change the entry for their primary DNS server. The rogue DNS server redirects users to a fake Google Chrome installation page whenever they are trying to resolve domains containing the words “google” or “facebook.” The binary distributed through this installation page is in fact Win32/Sality itself, providing a way for the Sality botnet’s operators to increase its size further by infecting other users behind this router.

The Eset researchers went on to conclude that RBrute is likely a way to ensure the continuing survival of the Win32/Sality ecosystem as improving PC security makes traditional infections harder. They wrote:

The usual infection vectors of Win32/Sality might not be sufficient enough to keep the botnet alive; hence the botnet controllers are deploying new component to grow the botnet. DNS hijacking on routers can be quite effective if done correctly. It can reach a lot of users behind a single router, especially on public access points. As routers are not commonly protected by security solutions, it provides an unrestricted environment to attackers allowing them to try several techniques to steal users’ information. An existing technology that could fix the problem is DNSSEC, since the result of a DNS request is cryptographically signed and hence not prone to tampering. A good security practice that would reduce the scope of the problem is to change the default password on router’s web interface.
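A hijack like the one RBrute performs is visible from inside the network: the router starts handing out a resolver address nobody chose. A minimal detection sketch, assuming you maintain a trusted-resolver list (the addresses below are placeholders, not a recommendation):

```python
import ipaddress

# Resolvers you expect the router to advertise. Illustrative values:
# in practice, your ISP's servers or whichever public resolvers you use.
TRUSTED_RESOLVERS = {"8.8.8.8", "8.8.4.4", "1.1.1.1"}

def rogue_resolvers(advertised):
    """Return advertised DNS servers that are not on the trusted list."""
    rogue = []
    for server in advertised:
        ipaddress.ip_address(server)  # raises ValueError on garbage input
        if server not in TRUSTED_RESOLVERS:
            rogue.append(server)
    return rogue

# A hijacked router typically keeps one legitimate entry as a fallback so
# victims notice nothing; only the primary entry points at the rogue server.
print(rogue_resolvers(["203.0.113.7", "8.8.4.4"]))  # → ['203.0.113.7']
```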

The takeaway is that it's no longer realistic to think that routers, DVRs, or other Internet-connected home appliances aren't worth an attacker's time. The ability of malware to infect a growing number of platforms, combined with the increasing difficulty of compromising more traditional targets, makes poorly designed "Internet of Things" devices the 2014 equivalent of Windows XP, especially in the early years, when the Microsoft OS was notoriously easy to hack.

People who don't rely on the networked services provided by their smart TVs, DVRs, and other Internet-capable appliances should consider disconnecting them altogether. Those who need or want those devices to be available on the Internet should spend a few minutes making sure default passwords have been changed and that remote access and other features are disabled unless absolutely needed.

Promoted Comments

Just yesterday I saw a hacked VoIP phone switch. Some genius opened the H.323 ports to the world (ummmm, bad; please tunnel your remote phones over IPsec) and (drumroll please) opened port 80 to the world, which is where the admin interface listens. The admin login credentials were 'admin' / 'password'. The phone tech took the manufacturer's list of "ports used by this device" and had those ports opened from the Internet. The worst of it is: they still don't understand what exactly they did wrong.

Here's the basic security problem regarding the Internet of things. It's twofold.

First, the firmware is insecure. It's often written in-house by the hardware maker itself, and they don't know how to churn out secure code. That's not a problem if the device is behind a firewall, which brings me to the second part.

Second, these things are not being deployed by network engineers.

They are being deployed by trades folks: the electricians, the repairmen, the plumbers, the mechanics, the telephone techs (like the one behind that VoIP switch). Those folks have a very, very thin understanding of IP and absolutely no understanding of the security practices required. They may be very good at what they do in their chosen field, but IP networking requires a level of abstract thinking they can't quite grasp.

Why on earth do we accept that software is something magical and that the authors are not culpable?

Because computer systems are vastly complicated: engineering projects far beyond the complexity of anything built physically. And with that complexity comes an enormous swathe of potential vulnerabilities that aren't necessarily predictable.

Imagine a bug that occurs only in the Netflix app, on a Sony TV, running a specific build of Google Android; that arises only with ARM CPUs from a specific batch, and only in conjunction with a specific batch of Ethernet cards running a specific firmware version; and only when the primary UI language is Welsh, Daylight Saving Time is in effect, and the device is behind a handful of less common routers. Who is liable then? Could anyone realistically have been expected to anticipate that precise series of events?

It's not possible to provide perfect security, and therefore manufacturers can't be held responsible for not achieving it. But it is possible to meet basic standards and expectations for security.

This is the same as any other field. If a magnitude 8.6 earthquake levels your house, you may not be able to hold the construction crew responsible. But if a magnitude 3 earthquake levels your house because of substandard construction and failure to meet code requirements, you can. The same thing should be true for security. Negligence should be punishable, imperfection should not.

This does create some gray area which courts would have to adjudicate, just as in the earthquake case. It's not perfect but it can be done, and it would be better than the security void we have now.

Construction also is governed by a building code. There could be established standards. Personal data being transmitted over the internet must be encrypted to X minimum standard. If your app/device transmits them unencrypted, you should be held responsible. Certificates must actually be checked to see if they are valid. If your app accepts any certificate without checking, you should be held responsible. Etc. These are basic, fundamental requirements that are flagrantly ignored, because there are no consequences. But they would be easy to apply.
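The certificate-checking requirement the comment describes is, in most modern toolchains, a matter of not turning the defaults off. In Python's standard library, for instance, `ssl.create_default_context()` already enforces chain and hostname verification; the snippet below just confirms those defaults rather than defining any new standard:

```python
import ssl

# ssl.create_default_context() is the "basic, fundamental requirement"
# path: it verifies the server's certificate chain and its hostname.
ctx = ssl.create_default_context()

assert ctx.verify_mode == ssl.CERT_REQUIRED  # reject unverifiable certs
assert ctx.check_hostname is True            # reject mismatched hostnames

# The negligent pattern the comment condemns is the opposite:
#   ctx.check_hostname = False
#   ctx.verify_mode = ssl.CERT_NONE
# which accepts any certificate without checking it.
print("default context verifies certificates")
```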

Easy fix -- make the software developers and their employers personally liable for any and all losses caused by security failures in their software and/or hardware. Tack on statutory damages in line with copyright law.

Unless there's a revenue stream connected to the device and it comes from a major name-brand producer, don't count on updates ever. Security through obscurity might actually work well if the device wasn't terribly popular.

Easy fix -- make the software developers and their employers personally liable for any and all losses caused by security failures in their software and/or hardware. Tack on statutory damages in line with copyright law.

no more hiding behind an EULA

problem solved.

edit spilling

Except that just leads to an Internet of no-things, because the risk/reward of Internet enabling stuff skews too heavily against manufacturers. There needs to be some sort of middle ground where we can get devices with better security, whilst not dissuading manufacturers from adding genuinely useful features to products for fear of litigation.

Just yesterday I saw a hacked VoIP phone switch. Some genius opened the H.323 ports to the world (ummmm, bad; please tunnel your remote phones over IPsec) and (drumroll please) opened port 80 to the world, which is where the admin interface listens. The admin login credentials were 'admin' / 'password'. The phone tech took the manufacturer's list of "ports used by this device" and had those ports opened from the Internet. The worst of it is: they still don't understand what exactly they did wrong.

Here's the basic security problem regarding the Internet of things. It's twofold.

First, the firmware is insecure. It's often written in-house by the hardware maker itself, and they don't know how to churn out secure code. That's not a problem if the device is behind a firewall, which brings me to the second part.

Second, these things are not being deployed by network engineers.

They are being deployed by trades folks: the electricians, the repairmen, the plumbers, the mechanics, the telephone techs (like the one behind that VoIP switch). Those folks have a very, very thin understanding of IP and absolutely no understanding of the security practices required. They may be very good at what they do in their chosen field, but IP networking requires a level of abstract thinking they can't quite grasp.

Except that just leads to an Internet of no-things, because the risk/reward of Internet enabling stuff skews too heavily against manufacturers. There needs to be some sort of middle ground where we can get devices with better security, whilst not dissuading manufacturers from adding genuinely useful features to products for fear of litigation.

You're presuming that somewhere there's a benefit to an Internet-enabled light bulb.

To date, all the IoT looks like is a solution searching for a problem... and no, I don't consider money in my pocket to be a manufacturer's problem.

More seriously, any middle ground is going to have to push a vastly more proportionate level of liability onto manufacturers, and frankly, developers should take personal responsibility (maybe a code of professional ethics, for Buddha's sake). Crap software shouldn't be acceptable, and it's not necessary. It never has been.

Easy fix -- make the software developers and their employers personally liable for any and all losses caused by security failures in their software and/or hardware. Tack on statutory damages in line with copyright law.

no more hiding behind an EULA

problem solved.

edit spilling

Oh sure, easy fix. As long as you don't mind paying $100,000 for Windows.

Here's the basic security problem regarding the internet of things. It's two fold.

I'd say it's even more challenging than that. For something to be 100% secure, everyone involved has to do their job perfectly: everyone from the people designing the chips to everyone developing the software to everyone actually deploying it. Because that is literally impossible, security requires constant work by every one of those people to find and fix issues as they are identified in their sphere of control. It gets even harder when the person deploying the thing is a home user with barely enough knowledge to get it connected to a network, let alone do any actual security work.

That does make it sound hopeless, but we still need to try to make improvements wherever we can.

Developers need a better understanding of security and need to provide better support for their devices after they are sold.

Manufacturers need to make their devices easier to understand and more secure in their default configuration, and to provide better support after sale.

Users need to take some responsibility for configuring their devices and realize that misconfiguration impacts not only them but the Internet as a whole. If they are unwilling or unable to do that, they need to find, or pay, someone to do it for them.

Easy fix -- make the software developers and their employers personally liable for any and all losses caused by security failures in their software and/or hardware. Tack on statutory damages in line with copyright law.

no more hiding behind an EULA

problem solved.

edit spilling

Except that just leads to an Internet of no-things, because the risk/reward of Internet enabling stuff skews too heavily against manufacturers. There needs to be some sort of middle ground where we can get devices with better security, whilst not dissuading manufacturers from adding genuinely useful features to products for fear of litigation.

I'm not sure I agree.

It'll drive up the cost of doing business, sure. Secure software is considerably more expensive than typical software.

This means it'll only happen when it actually adds sufficient value. There won't be as much "gee whiz, let's throw IoT stuff in for fun" going on; it'll have to deliver value that justifies the high cost it adds. So, fewer IP-connected light bulbs and thermometers, and maybe fewer budget no-name DVRs with Internet connectivity, but stuff like Nest and TiVo ought to manage, I'd think.

When I was fresh out of college, my first job was IT/Network support, in addition to the odd programming task.

While wearing my network hat, we noticed a lot of traffic scanning our network from an internal box: one of our security-camera DVR machines was scanning every network-connected device and hammering a few of our SSH services.

We didn't have direct access to the box since it was provided by a third party, so we couldn't install any software on it. I did know that it was running XP, so I assumed that it wasn't caught up on patches and a worm had managed to get onto it.

I called the vendor and explained the situation to them, asking for the login credentials so I could get on and install our corporate virus scanning software on the machine. Here's how the conversation went down:

Me: Hey, it's DSF from ABC. We need administrator rights to one of our DVRs to install some antivirus software; it appears to be infected by something and is constantly scanning our network and trying to brute-force connections to various services on our network.
Them: That's not possible, the DVR is running embedded software. It can't be infected.
Me: Umm, I took a look at it, and it looks like it is running XP.
Them: Yeah, XP Embedded. That can't be infected, because it's embedded. It's different from normal XP.
Me: ... Embedded XP is just regular XP with some media functionality stripped out... it is still the same core OS underneath.

<snip some further back and forth>

This went on for a while before I just went to the director and explained the situation. We ended up dumping their hardware and going with someone else who wasn't as dense, and we made sure we didn't sign a "3rd party administered" contract that time, either.
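The behavior that gave the DVR away (one internal host sweeping many others on a single port) is mechanical enough to flag automatically. Here is a toy sketch over connection records; the flow-tuple format is invented for illustration:

```python
from collections import defaultdict

def find_scanners(flows, port=22, threshold=10):
    """Flag source hosts contacting many distinct destinations on `port`.

    `flows` is an iterable of (src_ip, dst_ip, dst_port) tuples, a
    hypothetical format standing in for firewall or NetFlow exports.
    """
    targets = defaultdict(set)
    for src, dst, dst_port in flows:
        if dst_port == port:
            targets[src].add(dst)
    return sorted(src for src, dsts in targets.items() if len(dsts) >= threshold)

# Simulate a compromised DVR sweeping SSH across a /24, plus one
# unremarkable HTTPS connection from another host.
flows = [("10.0.0.50", f"10.0.0.{i}", 22) for i in range(1, 30)]
flows.append(("10.0.0.9", "10.0.0.1", 443))
print(find_scanners(flows))  # → ['10.0.0.50']
```

A real deployment would work from IDS or NetFlow data and add time windows, but the threshold-on-distinct-destinations idea is the same.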

Oh sure, easy fix. As long as you don't mind paying $100,000 for Windows.

Source? Or are you just pulling numbers out of your hat?

Why should developers run around with a get-out-of-jail-free card? If I build a sidewalk in front of my house and carelessly include a trip hazard or leave rebar sticking up out of the concrete, I'm liable for damages caused by that error/omission. Why on earth do we accept that software is something magical and that the authors are not culpable?

Oh sure, easy fix. As long as you don't mind paying $100,000 for Windows.

Source? Or are you just pulling numbers out of your hat?

Why should developers run around with a get-out-of-jail-free card? If I build a sidewalk in front of my house and carelessly include a trip hazard or leave rebar sticking up out of the concrete, I'm liable for damages caused by that error/omission. Why on earth do we accept that software is something magical and that the authors are not culpable?

Not a very good analogy, is it? If you were negligent in building your sidewalk, then sure, maybe someone could sue you. But the manufacturer of the rebar would not be liable because you misused the product.

There's no "source" for that number. It's rhetorical. If software developers suddenly assumed personal liability for the code they write, then they would need to carry "malpractice" insurance for when you sue them.

Oh sure, easy fix. As long as you don't mind paying $100,000 for Windows.

Source? Or are you just pulling numbers out of your hat?

Why should developers run around with a get-out-of-jail-free card? If I build a sidewalk in front of my house and carelessly include a trip hazard or leave rebar sticking up out of the concrete, I'm liable for damages caused by that error/omission. Why on earth do we accept that software is something magical and that the authors are not culpable?

I don't think anyone is suggesting that software is magical or that developers deserve a free pass.

A builder carries insurance, uses a lawyer, or some other means to deal with externalities such as you've mentioned above. There are costs associated with that, which in one form or other are passed on to the customer.

I think what Globe199 is saying is that when it comes to software, end users are not prepared to deal with the costs that would be necessary for developers to deal with such externalities.

Yeah, I'm going to need a damn good reason before I connect just any appliance into the network. And I'm going to have to trust the manufacturer to have a reputation for customer support and the faintest clue about software development, too.

Why on earth do we accept that software is something magical and that the authors are not culpable?

Because computer systems are vastly complicated: engineering projects far beyond the complexity of anything built physically. And with that complexity comes an enormous swathe of potential vulnerabilities that aren't necessarily predictable.

Imagine a bug that occurs only in the Netflix app, on a Sony TV, running a specific build of Google Android; that arises only with ARM CPUs from a specific batch, and only in conjunction with a specific batch of Ethernet cards running a specific firmware version; and only when the primary UI language is Welsh, Daylight Saving Time is in effect, and the device is behind a handful of less common routers. Who is liable then? Could anyone realistically have been expected to anticipate that precise series of events?

Easy fix -- make the software developers and their employers personally liable for any and all losses caused by security failures in their software and/or hardware. Tack on statutory damages in line with copyright law.

no more hiding behind an EULA

problem solved.

edit spilling

Except that just leads to an Internet of no-things, because the risk/reward of Internet enabling stuff skews too heavily against manufacturers. There needs to be some sort of middle ground where we can get devices with better security, whilst not dissuading manufacturers from adding genuinely useful features to products for fear of litigation.

My suggestion for this would be to require that the manufacturers of these embedded devices release their firmware as open source and allow the user to upload custom firmware. The real problem with these devices is that the manufacturer has no incentive to support the software on them, so it quickly becomes abandonware. If the firmware were open source, outside projects could prepare the updates themselves.

Why on earth do we accept that software is something magical and that the authors are not culpable?

Because computer systems are vastly complicated: engineering projects far beyond the complexity of anything built physically. And with that complexity comes an enormous swathe of potential vulnerabilities that aren't necessarily predictable.

Imagine a bug that occurs only in the Netflix app, on a Sony TV, running a specific build of Google Android; that arises only with ARM CPUs from a specific batch, and only in conjunction with a specific batch of Ethernet cards running a specific firmware version; and only when the primary UI language is Welsh, Daylight Saving Time is in effect, and the device is behind a handful of less common routers. Who is liable then? Could anyone realistically have been expected to anticipate that precise series of events?

It's not possible to provide perfect security, and therefore manufacturers can't be held responsible for not achieving it. But it is possible to meet basic standards and expectations for security.

This is the same as any other field. If a magnitude 8.6 earthquake levels your house, you may not be able to hold the construction crew responsible. But if a magnitude 3 earthquake levels your house because of substandard construction and failure to meet code requirements, you can. The same thing should be true for security. Negligence should be punishable, imperfection should not.

This does create some gray area which courts would have to adjudicate, just as in the earthquake case. It's not perfect but it can be done, and it would be better than the security void we have now.

Construction also is governed by a building code. There could be established standards. Personal data being transmitted over the internet must be encrypted to X minimum standard. If your app/device transmits them unencrypted, you should be held responsible. Certificates must actually be checked to see if they are valid. If your app accepts any certificate without checking, you should be held responsible. Etc. These are basic, fundamental requirements that are flagrantly ignored, because there are no consequences. But they would be easy to apply.

The simple truth is that you can't fix stupid, which describes MOST end users.

The IoT isn't a bad idea (though I seriously wonder about its actual demand versus its hyped promise). But no matter how much a manufacturer implores its customers to "enable security measures," most customers think that means plugging the thing in, turning it on, and seeing if it works. They EXPECT security without having to do anything ABOUT it beyond a password or two.

Developers (and, yes, I do that, too) often forget that the end user is functionally helpless when it comes to security. They need to be led by hand through whatever measures are needed to secure the thing. This means prompts to set passwords, reminders to WRITE THEM DOWN (a step that's usually skipped, much to tech support's eventual dismay), and a way to test the security of an activated device and warn customers when it's not secure.

Will that add to the cost of a device? Oh, hell yes. But in the long run, tech support isn't going to be overrun by hundreds of thousands of aggrieved individuals who don't know the difference between an ISP and the IRS, furious that their new toy has been hacked because they weren't tech-savvy enough to enable any best-practice security measures.

Even then, you're not going to STOP the hacking. You're only going to make it harder - and that really is the point here.

Construction also is governed by a building code. There could be established standards. Personal data being transmitted over the internet must be encrypted to X minimum standard. If your app/device transmits them unencrypted, you should be held responsible. Certificates must actually be checked to see if they are valid. If your app accepts any certificate without checking, you should be held responsible. Etc. These are basic, fundamental requirements that are flagrantly ignored, because there are no consequences. But they would be easy to apply.

That's quite a big step down from 'software developers should be culpable for the code they write' though, which is what the OP was suggesting.

I agree that there needs to be legislation in place globally laying down stronger security requirements. Doing that worldwide, in a mutually compatible way, without taking so long that the resulting standards are too weak, is a difficult challenge, though. I think we will see it as our industry matures, but in the meantime what we really need is for OS and dev-tool makers to make it easy to do these kinds of things right and deliberately hard to get wrong, because then far more applications will fall into the "secure by default" bucket, regardless of who writes them.

The simple truth is that you can't fix stupid, which describes MOST end users.

Sadly, it also describes quite a lot of developers, too, although "naïve" is probably a better word in both cases. Fixing developers so they really understand the security implications of the code they write is a much, much harder problem than making a genuinely secure product user-friendly.