That's because many of these systems run on older versions of Windows—such as Windows 2000. Medical equipment manufacturers often won't support security patches or operating system upgrades for their systems, largely out of concern about whether such changes would require them to resubmit their systems to the Food and Drug Administration for certification.

The scope of the problem was the topic of a panel discussion (PDF) at a National Institute of Standards and Technology (NIST) Information Security and Privacy Advisory Board meeting on October 11. Mark Olson, the chief information security officer at Boston's Beth Israel Deaconess Medical Center, told attendees that malware had infected fetal monitors in his hospital's high-risk pregnancy ward, to the point where they were so slow they couldn't properly record data.

"Fortunately, we have a fallback model," Olson said. "They are in an (intensive care) unit—there's someone physically there to watch. But if they are stepping away to another patient, there is a window of time for things to go in the wrong direction." The systems have since been replaced with new ones—based on Microsoft's Windows XP.

58 Reader Comments

Gods, medical devices are awful for keeping up to date. We just got rid of an X-ray workstation that was only certified for WinXP SP2 and IE6, though it could be made to work with Vista.

Something that's just as bad is scientific instruments. Another department still has some old stuff that requires Win2K, NT4, Win9x, and one that runs OS/2 3.0. It's prohibitively expensive to replace that stuff or buy a license for an update that works with a modern OS.

Why are these critical systems connected to the internet? They should be connected only to a local network that bars them from reaching anything other than the hospital's own network. I highly doubt that doctors need to monitor patients when they are not on hospital grounds, which is about the only good reason I can see for these systems having an internet connection.

Heck, hospitals should wire their internal Ethernet ports the same way they do their power outlets: white for full internet access, red for critical systems that don't need it.

One good thing about these embedded computers running outdated operating systems is that many of them are old enough not to be network capable, thus reducing the potential for infection from this vector.

Having worked with these types of embedded industrial computers for many years, I can attest that the risk of virus attack is a business concern second only to failure of one of the underlying hardware components of these frequently nonstandard form factor computers.

Why are these critical systems connected to the internet? They should be connected only to a local network that bars them from reaching anything other than the hospital's own network. I highly doubt that doctors need to monitor patients when they are not on hospital grounds, which is about the only good reason I can see for these systems having an internet connection.

Heck, hospitals should wire their internal Ethernet ports the same way they do their power outlets: white for full internet access, red for critical systems that don't need it.

Takes TWO seconds to go into the router settings and set these machines for LAN only. If they need to run reports that HAVE to go out to the internet, another computer on the LAN can connect to the DB and pull the reports.

Older PBX systems frequently have this problem, too. And they won't let you run certain Windows updates because doing so will actually cause parts of the system to stop working. Uxorious is right that none of this should be connected to the internet, but I'm sure people get data to/from these systems regularly with flash drives or other storage. Inevitably something bad may get on there.

Why are these critical systems connected to the internet? They should be connected only to a local network that bars them from reaching anything other than the hospital's own network. I highly doubt that doctors need to monitor patients when they are not on hospital grounds, which is about the only good reason I can see for these systems having an internet connection.

Heck, hospitals should wire their internal Ethernet ports the same way they do their power outlets: white for full internet access, red for critical systems that don't need it.

Internet access for remote monitoring is actually an increasingly important need. There is a lot of interest in telemedicine in general, and the ability to send and receive information from non-connected places (other hospitals/clinics, other data systems within the hospital's own data environment, etc.) is a real need. Just think about medical imaging systems. If a patient receives an MRI at one facility and physicians at a different facility need to access those images, that will require some type of external data connection. Should that be a connection directly to the imaging system? Maybe not, but the connection has to exist somewhere.

This report sounds a lot like reports issued on other industries - Ars published a story a few days ago about vulnerabilities in control systems used in solar energy production. Weaknesses like this exist in every industry and, thankfully, there aren't enough smart bad guys to exploit them all. It's a big game of whack-a-mole, and when a big, bad thing happens in one place it gets fixed in response. It would be great if industries like healthcare and energy production could proactively self-correct, but I don't see that happening.

Why are these critical systems connected to the internet? They should be connected only to a local network that bars them from reaching anything other than the hospital's own network. I highly doubt that doctors need to monitor patients when they are not on hospital grounds, which is about the only good reason I can see for these systems having an internet connection.

Heck, hospitals should wire their internal Ethernet ports the same way they do their power outlets: white for full internet access, red for critical systems that don't need it.

Internet access for remote monitoring is actually an increasingly important need. There is a lot of interest in telemedicine in general, and the ability to send and receive information from non-connected places (other hospitals/clinics, other data systems within the hospital's own data environment, etc.) is a real need. Just think about medical imaging systems. If a patient receives an MRI at one facility and physicians at a different facility need to access those images, that will require some type of external data connection. Should that be a connection directly to the imaging system? Maybe not, but the connection has to exist somewhere.

This report sounds a lot like reports issued on other industries - Ars published a story a few days ago about vulnerabilities in control systems used in solar energy production. Weaknesses like this exist in every industry and, thankfully, there aren't enough smart bad guys to exploit them all. It's a big game of whack-a-mole, and when a big, bad thing happens in one place it gets fixed in response. It would be great if industries like healthcare and energy production could proactively self-correct, but I don't see that happening.

Internet access for remote monitoring is an important need for sure, but why can't the critical machine write to a common SQL database on the LAN without itself having access to the WAN? It surely can be monitored in almost real time. I cannot believe that the actual machine would have to be exposed to the internet to be monitored. If something were to go wrong that would require attention, it should be monitored onsite.
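The LAN-database pattern described above can be sketched in miniature. This is a hedged illustration, not any vendor's actual interface: sqlite3 stands in for a LAN SQL server, and the table, column, and device names are invented.

```python
# Sketch of "the device only writes to a LAN database": the instrument
# pushes readings into a shared database and never opens an outbound
# internet connection; a separate reporting host reads from the same DB.
import sqlite3
import time


def push_reading(db_path, device_id, value):
    """Insert one monitoring sample; the device side only ever writes."""
    conn = sqlite3.connect(db_path)
    with conn:
        conn.execute(
            "CREATE TABLE IF NOT EXISTS readings "
            "(device_id TEXT, ts REAL, value REAL)"
        )
        conn.execute(
            "INSERT INTO readings VALUES (?, ?, ?)",
            (device_id, time.time(), value),
        )
    conn.close()


def latest_reading(db_path, device_id):
    """A separate, internet-facing reporting host pulls the latest sample."""
    conn = sqlite3.connect(db_path)
    row = conn.execute(
        "SELECT ts, value FROM readings WHERE device_id = ? "
        "ORDER BY ts DESC, rowid DESC LIMIT 1",
        (device_id,),
    ).fetchone()
    conn.close()
    return row
```

Near-real-time monitoring then amounts to the reporting host polling `latest_reading` on an interval; the instrument itself never touches the WAN.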

A private network will solve all the problems? So no one will accidentally get a random machine on the protected side infected, and that will never propagate to every other computer on the protected network?

As Ramos.45 says, so long as you need to move data on and off the system, it's going to be vulnerable. Let's not forget that floppy disks used to be the major virus vector.

That said, whether these systems should be connected to the internet or not misses the point. Hospitals shouldn't be investing in expensive medical equipment that entrusts people's lives and care to a consumer operating system. This is the sort of application where hardened, embedded OSes are appropriate. This isn't even about operating system partisanship -- had the device makers simply shipped Windows CE/Windows Embedded, there'd be a lot less risk.

The systems we're talking about often cost millions of dollars, and surely have six-figure annual support contracts. How is it that maintaining a secure system isn't part of the manufacturer's responsibility?

Medical equipment manufacturers often won't support security patches or operating system upgrades for their systems, largely out of concern about whether such changes would require them to resubmit their systems to the Food and Drug Administration for certification.

IMHO, this is precisely the problem. The government decides (rightfully so) that computer security in medical systems is important, so it sets a minimum requirement to be 'certified' for medical use. Instead of making things safer, this means re-'certifying' a patched system as safe-by-government-standards is harder than the update/patch itself, so systems go unpatched.

The end result is medical equipment that is neither up to date nor safe, because convincing someone who knows nothing about computer/network security that a problem is fixed is harder than actually fixing it.

A private network will solve all the problems? So no one will accidentally get a random machine on the protected side infected, and that will never propagate to every other computer on the protected network?

Mistakes can happen and infections can propagate through USB drives or non-internet means, but an air gap solves a large majority of issues. It means that the computer on the other side of the gap can be up-to-date and heavily locked down (setting USB to read only would help), reducing the chance that a USB drive used to transmit data would get infected in the first place. Or better yet, have the medical device burn the data to DVD and then transfer that to another computer for processing so that data can leave but not enter the medical device.
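One way to harden the sneakernet transfer described above is to hash everything on the outgoing medium, so the receiving side can detect files that were added or altered in transit. A minimal sketch, assuming a simple directory-as-medium layout; the function names and manifest format are invented for illustration.

```python
# Build a manifest (relative path -> SHA-256) of everything on the
# outgoing medium, then verify it on the receiving side. Any file
# added, removed, or modified in transit makes verification fail.
import hashlib
import os


def build_manifest(directory):
    """Map relative path -> SHA-256 hex digest for every file under `directory`."""
    manifest = {}
    for root, _dirs, files in os.walk(directory):
        for name in files:
            path = os.path.join(root, name)
            rel = os.path.relpath(path, directory)
            h = hashlib.sha256()
            with open(path, "rb") as f:
                # Hash in chunks so large scan files don't load into memory.
                for chunk in iter(lambda: f.read(65536), b""):
                    h.update(chunk)
            manifest[rel] = h.hexdigest()
    return manifest


def verify(directory, manifest):
    """True iff the directory's contents match the manifest exactly."""
    return build_manifest(directory) == manifest
```

This doesn't replace AV scanning or read-only media, it just gives the locked-down side of the gap a cheap integrity check before it accepts anything.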

Why are these critical systems connected to the internet? They should be connected only to a local network that bars them from reaching anything other than the hospital's own network. I highly doubt that doctors need to monitor patients when they are not on hospital grounds, which is about the only good reason I can see for these systems having an internet connection.

Heck, hospitals should wire their internal Ethernet ports the same way they do their power outlets: white for full internet access, red for critical systems that don't need it.

Takes TWO seconds to go into the router settings and set these machines for LAN only. If they need to run reports that HAVE to go out to the internet, another computer on the LAN can connect to the DB and pull the reports.

This is all about the LAZY

Have you even used real routers? It takes a lot more than just two seconds to "set" these machines for LAN only. Also, how does this secure anything if somebody gets into the network in the first place? Are you seriously saying you should have internet-connected PCs and intranet-only PCs on the same IP scope? This is a joke, right? This is as bad as the people being lazy and not setting up any security at all.

This is not a simple 2 second fix to make it so everything is fine. This is a major project that requires quite a bit of planning, implementation, AND education to the users. And of course it requires hiring IT people, which nobody wants to do.

Just looking at it and thinking about having proper security on it makes me go wow, there are a lot of factors to think about. Internet access is NOT the only issue here. Having ANY device on that network at all that isn't locked down (outside of these insecure devices) is the issue. But you'd be willing to leave it setup that way. Good job, you have fixed none of the real issues at hand.

Here's one method of fixing it: port security with a MAC-address policy on the port, the same on the DHCP server, with the DHCP server serving ONLY the secure networks. Firewalls at every layer-3 entry/exit point, limiting traffic into and out of the scope to the DHCP server and a limited set of hardened devices, plus another firewall limiting access to those hardened devices as well. It would require educating users that they have to go through the hardened devices to pull any data off the other devices, but it would secure the network. The problem is, you have to make it easy enough that the user can't even tell what's going on between them and the device, with IT barely ever involved after initially setting it up.
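The MAC-allowlist piece of the layered design above can be sketched roughly. This is a minimal illustration under stated assumptions: the allowlist entries and helper name are invented, and a real deployment would enforce the policy at the switch and DHCP server, not in a script.

```python
# Sketch of a MAC-address allowlist check: the secure DHCP scope only
# hands out leases to explicitly enumerated, well-formed MAC addresses.
import re

# Example allowlist entries; real deployments would load these from the
# switch/DHCP configuration, not hard-code them.
ALLOWED_MACS = {
    "00:1a:2b:3c:4d:5e",  # hypothetical fetal monitor, ward 3
    "00:1a:2b:3c:4d:5f",  # hypothetical PACS workstation
}

MAC_RE = re.compile(r"^([0-9a-f]{2}:){5}[0-9a-f]{2}$")


def may_lease(mac):
    """Return True only for well-formed MACs present on the allowlist."""
    mac = mac.strip().lower().replace("-", ":")
    return bool(MAC_RE.match(mac)) and mac in ALLOWED_MACS
```

Note that MAC filtering alone is spoofable; it's one layer of the scheme described above, alongside port security and the layer-3 firewalls, not a substitute for them.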

Internet access for remote monitoring is an important need for sure, but why can't the critical machine write to a common SQL database on the LAN without itself having access to the WAN? It surely can be monitored in almost real time. I cannot believe that the actual machine would have to be exposed to the internet to be monitored. If something were to go wrong that would require attention, it should be monitored onsite.

A big part of the problem is that medical hardware/software companies know that hospitals have few choices in suppliers. And switching is insanely expensive, especially since other systems may need to be replaced to interface with the new one.

Plus, they love to charge for every little feature, and almost refuse to use standard protocols. Getting two systems to talk can take months of negotiations, followed by months going back and forth between the two companies to actually make it work. ("Oh, it must be a problem with the other product, call them.")

This is definitely a problem in the healthcare environment that I manage. We're not allowed to touch most vendor PCs, and certainly can't install an antivirus on them, without voiding the warranty and support agreement. Contractual obligations require in many cases that the device vendor is the only one allowed to make any changes. Which becomes even more awesome when their "expert" technicians decide you need an onsite visit to the tune of $10,000.

Internet access for remote monitoring is actually an increasingly important need. There is a lot of interest in telemedicine in general, and the ability to send and receive information from non-connected places (other hospitals/clinics, other data systems within the hospital's own data environment, etc.) is a real need. Just think about medical imaging systems. If a patient receives an MRI at one facility and physicians at a different facility need to access those images, that will require some type of external data connection. Should that be a connection directly to the imaging system? Maybe not, but the connection has to exist somewhere.

This report sounds a lot like reports issued on other industries - Ars published a story a few days ago about vulnerabilities in control systems used in solar energy production. Weaknesses like this exist in every industry and, thankfully, there aren't enough smart bad guys to exploit them all. It's a big game of whack-a-mole, and when a big, bad thing happens in one place it gets fixed in response. It would be great if industries like healthcare and energy production could proactively self-correct, but I don't see that happening.

Emphasis mine.

I would go much stronger and say 'absolutely not'. A clearinghouse is the way to go in the scenario where that data needs to be transported outside.

Medical equipment manufacturers often won't support security patches or operating system upgrades for their systems, largely out of concern about whether such changes would require them to resubmit their systems to the Food and Drug Administration for certification.

IMHO, this is precisely the problem. The government decides (rightfully so) that computer security in medical systems is important, so it sets a minimum requirement to be 'certified' for medical use. Instead of making things safer, this means re-'certifying' a patched system as safe-by-government-standards is harder than the update/patch itself, so systems go unpatched.

The end result is medical equipment that is neither up to date nor safe, because convincing someone who knows nothing about computer/network security that a problem is fixed is harder than actually fixing it.

From my experience working in the medical device industry, the FDA doesn't really care too much about device security (other than data security - think HIPAA)

The FDA does strongly care about patient safety. Depending on the classification of the device, the FDA may require you to resubmit your devices through their 510(k) process, even for a simple change such as an OS upgrade. Regardless, any change to the system will likely require a full verification cycle, which costs lots of money. (My former company had to go through those hoops when changing from a lead-based processor to a lead-free one, when upgrading to a newer version of a C++/STL library, etc.)

largely out of concern about whether such changes would require them to resubmit their systems to the Food and Drug Administration for certification.

It doesn't quite work like that. As long as the intended medical use is not changed, you don't have to resubmit for certification. However, any change to the system has to be thoroughly verified and validated by the manufacturer. This is a very expensive process, especially for equipment that is no longer in active development. That's why manufacturers prefer to issue "instructions for use", like "use VLANs" or "add an additional firewall that does virtual patching".

The dentist I go to (a national chain) just recently upgraded their x-ray machine to digitize the pics, and software in the exam rooms lets them bring the images up. I was watching the hygienist bring up my x-ray, and the interface looked like a beginner to VB created it. Pretty hackish. But it hooks to the db and brings up the pic, so I guess it gets the job done. She said the company is very tight with money and put this off for 10 years. Shareholders want their profits.

Internet access for remote monitoring is an important need for sure, but why can't the critical machine write to a common SQL database on the LAN without itself having access to the WAN? It surely can be monitored in almost real time. I cannot believe that the actual machine would have to be exposed to the internet to be monitored. If something were to go wrong that would require attention, it should be monitored onsite.

This would work, but would require another server that remote reporting would be dependent on. Unfortunately it doesn't fix all of the security concerns that come with running ancient versions of operating systems. The CrackLing has the right of it. These systems, by and large, probably aren't being infected because someone is logging in to them so they can search for pr0n. They're being infected because they're on the same network as other computers that people are using to search for pr0n. The reporting database, while a workable solution, does not solve this particular problem.

Also there would be the hurdle of trying to sell a nearly-realtime system to non-technical people who are used to being sold realtime systems.

Why are these critical systems connected to the internet? They should be connected only to a local network that bars them from reaching anything other than the hospital's own network. I highly doubt that doctors need to monitor patients when they are not on hospital grounds, which is about the only good reason I can see for these systems having an internet connection.

Heck, hospitals should wire their internal Ethernet ports the same way they do their power outlets: white for full internet access, red for critical systems that don't need it.

These machines aren't necessarily getting infected via the internet. The hospital network I work for had a widespread infection a few years ago on our own intranet and it reached at least a PET scanner and a CT scanner that I know of, if not more. The vendor has since installed AV software on the units that run on Windows XP, but we have at least one unit that I know of running Windows 2K that isn't protected in such a manner. But you can't take these things off the intranet. Not in a modern setting where PACS systems have replaced film. Heck, even when we were still printing on film, our printers were networked on the intranet.

Part of the problem here was that they were very lax in taking measures to screen for or prevent malware infections. Many of the PCs had no AV software, and those that did often had it disabled because it would completely bog that PC down because they were hopelessly low on RAM (many had no more than 512 MB of RAM installed, I believe some had only 256). They also over-relied on policies prohibiting employees from using the internet for personal use, when the reality is that people were highly prone to do it anyway. We even had people plugging in their personal laptops to use on breaks. All the policies in place don't mean a hill of beans when people violate them so habitually.

A private network will solve all the problems? So no one will accidentally get a random machine on the protected side infected, and that will never propagate to every other computer on the protected network?

Mistakes can happen and infections can propagate through USB drives or non-internet means, but an air gap solves a large majority of issues. It means that the computer on the other side of the gap can be up-to-date and heavily locked down (setting USB to read only would help), reducing the chance that a USB drive used to transmit data would get infected in the first place. Or better yet, have the medical device burn the data to DVD and then transfer that to another computer for processing so that data can leave but not enter the medical device.

This was what we used to do in my last workplace. It was a research network that used equipment suitable for both health care and clinical research, but it was not classified as a medical facility. We used USB sticks since they were faster and easier to use for the nurses that actually did the transferring of data, but DVDs would have certainly been even more secure.

I was also loosely affiliated with the local University Hospital at the time, and they also installed Windows XP on all of their systems at the time (2010). They upgraded from NT 4.0 when its support ended in 2006, so for them XP was still a relatively new system. There was some talk about upgrading to Windows 7 soon, but since I no longer work in that research facility, I don't know if they did that at the hospital. The research software of course was not certified for anything newer than Windows XP, and the new software version did not support the older hardware...

Medical equipment manufacturers often won't support security patches or operating system upgrades for their systems, largely out of concern about whether such changes would require them to resubmit their systems to the Food and Drug Administration for certification.

IMHO, this is precisely the problem. The government decides (rightfully so) that computer security in medical systems is important, so it sets a minimum requirement to be 'certified' for medical use. Instead of making things safer, this means re-'certifying' a patched system as safe-by-government-standards is harder than the update/patch itself, so systems go unpatched.

The end result is medical equipment that is neither up to date nor safe, because convincing someone who knows nothing about computer/network security that a problem is fixed is harder than actually fixing it.

Did you see that?! The issue, to them, was that the computer was infected by malware which made it slow and almost unusable. It did not occur to them that patient data might be at risk, or that a keylogger might be present, capturing every password, or that someone might be in control of that computer in an attempt to breach more systems in the hospital.

In my experience, that's typical behavior. No one but nerds and techies worries about malware until the malware makes the computer unusable. Then they seek help. It's frightening.

Malware =

- Stop using the computer and unplug the network cable immediately.
- Find out how the malware got on.
- Back up user files, format, reinstall the OS, reinstall apps, and restore user files.
- Take steps to prevent that vector of attack.

Malware =

- Stop using the computer and unplug the network cable immediately.
- Find out how the malware got on.
- Back up user files, format, reinstall the OS, reinstall apps, and restore user files.
- Take steps to prevent that vector of attack.

That's a bit extreme. In some cases, reformat + reinstall is unavoidable, but certainly not in all (or even most?) cases. Your first and last bullets, though, yes, always.

It's not extreme. Malware can be for all practical purposes undetectable. Add to that many layers of malware on a typical ill-maintained system (malware installing other malware) and there's no way to tell what's been installed or tampered with, and there's no way to tell if all of it has been cleaned out or not.

I have personally seen systems with a seemingly simple malware infection, which the helpdesk managed to remove, and the systems continued to maintain strange HTTPS connections to IP addresses in China. There were no loaded DLLs besides Microsoft's, there were no running processes besides the built-in OS processes. There was nothing, and yet the computer was owned. Also, the user was not able to use Google, even though DNS and browser proxy settings and HOSTS file were not tampered with. It's diabolical.

Malware = nuke and pave. Then take steps to prevent it from happening again.

Granted, in a hospital setting it'd be hard to blow away an x-ray machine control PC. Maybe they could keep a second computer offline and ready to go in case something happened to the primary? That would be a good idea in any event.

Hospitals shouldn't be investing in expensive medical equipment that entrusts people's lives and care to a consumer operating system. This is the sort of application where hardened, embedded OSes are appropriate. This isn't even about operating system partisanship -- had the device makers simply shipped Windows CE/Windows Embedded, there'd be a lot less risk.

The systems we're talking about often cost millions of dollars, and surely have six-figure annual support contracts. How is it that maintaining a secure system isn't part of the manufacturer's responsibility?

While I completely agree in principle, the reality does not match.

I'm a hydrographic surveyor, and our industry has tons of exotic equipment, typically based on XP with custom hardware drivers. It's a nightmare as the years grind on and the systems begin to fall apart. And don't even think about updates beyond the most critical initial bug fixes; it seldom happens, and for all we know the programmers who could do it are long gone with the tide.

Sadly, we have little opportunity to argue in real life. If we want to work, we need to buy a product, and that product inevitably comes with one of these crappy MS operating systems bundled as a fixed component. If we don't like it, we can find another career, because there's nothing else on the market, and the manufacturers are largely deaf, because computers are a secondary issue for them.

The manufacturers got suckered by Microsoft into accepting their offerings as robust. But in reality, so did the entire computing industry. All the support was on NT/XP. If you were designing a product, you could get driver APIs that your average programmer could use on a Win32 system, and good luck with anything else. Linux? Nice idea, but the ecosystem was not perceived as being there, and largely still isn't.

Personally, I can't endorse Microsoft products for these kinds of applications. They are not reliable enough, and Microsoft is primarily concerned with commodity computing, not truly robust embedded applications. Maybe QNX would be better, if only these industries paid attention.

Working in a multihospital system in the midwest, I can absolutely see this.

We have non-IS people, who are not technical, in charge of "clinical" devices. Next week I have to give them an overview of what Active Directory is, because they don't know.

As for patching, we are TRYING to get to Windows 7, but vendors hold us back because they aren't compatible. We have some webapps that require IE6 on some PCs.

And these PCs/equipment need to be on the network because they talk back to systems that talk to desktops and laptops all over the place to alert nurses when something is amiss. If they didn't have a connection we would have some big problems.

Thankfully we do have an AV nut that goes above and beyond with his job of keeping everything he can clean, and constantly submitting new viruses that he finds.

We are attacked constantly from outside, and from the inside through our Guest wireless network (which is separated from internal), we have a fully staffed network team, but these things are just not simple.

Malware =

- Stop using the computer and unplug the network cable immediately.
- Find out how the malware got on.
- Back up user files, format, reinstall the OS, reinstall apps, and restore user files.
- Take steps to prevent that vector of attack.

That's a bit extreme. In some cases, reformat + reinstall is unavoidable, but certainly not in all (or even most?) cases. Your first and last bullets, though, yes, always.

Much faster, more practical solution: fire up a replacement, drop the image on, install the base applications, and put the computer in place. No important files should be on the desktop; that should all be on the network where it gets backed up.

If you want to get really into it (as we are trying) no installed apps. Go Citrix, AppV, ThinApp, XenApp or a variation and virtualize all the apps. Now you drop an image on the PC, put it in place, and it self configures and works. done.

If a business has time to find the root cause of everything, they are spending too much. Big important things, yes, but chase everything that comes through and you end up killing the business. (I once worked at a place obsessed with it... spend 4-8 hours looking for the root cause of something that doesn't happen often, or 30 minutes to replace the PC and move on to the next fire...)

In addition to our imaging equipment using a proprietary Linux OS, we disable all but LAN traffic on these devices. Those systems reporting to outside entities connect either through a secure fiber-optic WAN or via SSL. Regular office machines are on a different LAN, using a different firewall, even in a different closet.

While we had the opportunity to build our secure network with our facility, a lot of hospitals and outpatient imaging centers merely add medical devices to existing networks rather than build a subnet to protect critical devices.

There's a significant amount of security regulation of medical devices required by the HIPAA act of '96 (e-security regs completed in 2003 and amended almost annually), but again, other than putting a lock on the server room door, many medical facilities haven't bothered.

I'm a hydrographic surveyor, and our industry has tons of exotic equipment, typically based on XP with custom hardware drivers. It's a nightmare as the years grind on and the systems begin to fall apart. And don't even think about updates beyond the most critical initial bug fixes; it seldom happens, and for all we know the programmers who could do it are long gone with the tide.

Kind of like our roads and bridges. Anyway, isn't "scratch an itch" open source filling the niche?

wes_517 wrote:

Much faster, more practical solution: fire up a replacement, drop the image on, install the base applications, and put the computer in place. No important files should be on the desktop; that should all be on the network where it gets backed up.

If you want to get really into it (as we are trying) no installed apps. Go Citrix, AppV, ThinApp, XenApp or a variation and virtualize all the apps. Now you drop an image on the PC, put it in place, and it self configures and works. done.

Maybe the cycle will swing the other way and mainframe with thin clients will be back in vogue.

The systems we're talking about often cost millions of dollars, and surely have six-figure annual support contracts. How is it that maintaining a secure system isn't part of the manufacturer's responsibility?

The issue isn't just getting the stuff secured; it's that changes would then have to be approved by the FDA. That process would take a very long time and carry a not-insignificant cost.

Malware =

- Stop using the computer and unplug the network cable immediately.
- Find out how the malware got on.
- Back up user files, format, reinstall the OS, reinstall apps, and restore user files.
- Take steps to prevent that vector of attack.

That's a bit extreme. In some cases, reformat + reinstall is unavoidable, but certainly not in all (or even most?) cases. Your first and last bullets, though, yes, always.

Much faster, more practical solution: fire up a replacement, drop the image on, install the base applications, and put the computer in place. No important files should be on the desktop; that should all be on the network where it gets backed up.

This is a variation of the steps I usually take in regards to a suspected/confirmed malware infection:

- User backs up data to a share that is scanned 24/7 for malware/viruses and segregated from other data

- User delivers system to IT/Ops and is handed a loaner system that is secured so that they can't install any apps but has Office/Firefox/messenger app installed which are the basic tools used for most tasks.

- System is low-level formatted using Darik's Boot and Nuke (still have yet to find a better program, even though this one has issues with some Dell laptops due to the SATA controller being used)

- Restore from a Ghost image, which is typically 30 days or so behind the Windows Update curve

- Install a/v, apps, and double-check all updates are applied and deploy system

Typically I can turn around a machine in a short period of time, squeaky clean, along with a lecture on the dangers of browsing and a note that repeated infections will be reported to their manager. Solves 90% of the problems with this behavior and keeps me sane, not having to spend hours trying to de-crapify a PC. ^_^

You have to admire those manufacturers for taking the cheapest way out while finding ways to make the systems as proprietary as possible in order to maximize their profit. Personally... XP? Wouldn't Linux be more secure?

Never mind, I'll just settle for shaking my head after a facepalm.

Sean Gallagher / Sean is Ars Technica's IT Editor. A former Navy officer, systems administrator, and network systems integrator with 20 years of IT journalism experience, he lives and works in Baltimore, Maryland.