September 5, 2012

What if you could shut down several emergency rooms simultaneously without leaving your own home? How about hacking a pacemaker and reprogramming it to cause a heart attack?

Although these could be scenes from an espionage film, they are also some of the plausible scenarios that Kansas State University cybersecurity experts are working to prevent.

Eugene Vasserman, assistant professor; John Hatcliff, university distinguished professor; Dan Andresen, associate professor -- all in computing and information sciences -- and their partners in industrial and regulatory agencies are developing the theory and software needed for safe operations of next-generation electronic medical devices that will be secure, interoperable and networked. The Kansas State University team is one of the first developing such infrastructure.

The team was recently awarded more than $482,000 by the National Science Foundation in a grant titled "Security, Privacy and Trust for Systems of Coordinating Medical Devices." Researchers are using the funds to evaluate how to develop a flexible but standardized and secure communication network for medical devices -- such as pulse oximeters, pacemakers and CT scanners -- that would work anywhere from a small doctor's office to a multi-campus hospital.

According to Vasserman, medical devices on the network could talk to each other to monitor and reason about a patient's health. This will help doctors provide more comprehensive care to patients. For example, a pulse oximeter could automatically take a reading once an infusion pump starts delivering medication, and sound a medication overdose alarm locally and remotely in case the infusion pump malfunctions or the medication adversely affects the patient. The system could also be used to suppress false alarms that may distract caregivers from critical tasks.
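The coordination behavior described above can be sketched in a few lines. This is a hypothetical illustration only: the device names, the SpO2 threshold, and the alarm interface are assumptions for the example, not part of the team's actual software.

```python
# Illustrative sketch: once an infusion starts, poll a pulse oximeter
# and raise an alarm if blood-oxygen saturation falls too low.
# All names and thresholds here are invented for the example.

class PulseOximeter:
    def __init__(self, readings):
        self._readings = iter(readings)

    def read_spo2(self):
        # Return the next blood-oxygen saturation reading (percent).
        return next(self._readings)

def monitor_infusion(oximeter, spo2_floor=90, checks=3):
    """Poll the oximeter during an infusion; collect alarms that would
    be sounded locally and relayed to remote caregivers."""
    alarms = []
    for _ in range(checks):
        spo2 = oximeter.read_spo2()
        if spo2 < spo2_floor:
            alarms.append(f"ALARM: SpO2 {spo2}% below {spo2_floor}%")
    return alarms

# Example: the third reading dips below the threshold.
ox = PulseOximeter([97, 95, 86])
print(monitor_infusion(ox))
```

In a real deployment the same readings would also feed the false-alarm suppression logic mentioned above, so a single transient dip need not page a caregiver.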

As medical devices and their interactions with each other become more complex, security problems become more difficult.

"You can think of security in terms of individual devices, but that's not enough," Vasserman said. "The patients' health and safety are the primary concern and faulty devices can harm patients. But as soon as devices are integrated for automated control, the security of an individual's device becomes less important than the security of the overall networked medical system that now includes other devices."

One of the solutions the team is developing is specialized safety- and security-critical software, or middleware, that acts as a digital bouncer between medical devices. It allows devices to communicate but ensures they do so safely. When devices attempt to communicate, either wirelessly or through a cabled network, the middleware would grant access only after verifying that the devices are certified by their manufacturers and that they support the required safety and security protocols. If a device fails any of these checks, it either cannot interact or is allowed only limited operation.
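The "digital bouncer" admission logic might look something like the following sketch. The certificate registry, protocol names, and three-way access decision are assumptions made for illustration; the middleware's real interfaces are not described in the article.

```python
# Minimal sketch of a middleware admission check for a clinical network.
# The registry and protocol set below are invented for illustration.

CERTIFIED_DEVICES = {"pump-123": "AcmeMed"}       # manufacturer-certified IDs
REQUIRED_PROTOCOLS = {"auth", "encrypted-transport"}

def admit(device_id, supported_protocols):
    """Return 'full', 'limited', or 'denied' access for a device
    attempting to join the network, wired or wireless."""
    if device_id not in CERTIFIED_DEVICES:
        return "denied"      # no manufacturer certification on file
    if not REQUIRED_PROTOCOLS.issubset(supported_protocols):
        return "limited"     # certified, but missing a required protocol
    return "full"

print(admit("pump-123", {"auth", "encrypted-transport"}))   # full
print(admit("pump-123", {"auth"}))                          # limited
print(admit("rogue-999", {"auth", "encrypted-transport"}))  # denied
```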

Similarly, researchers are looking at how to help devices reason about what to do if a problem is detected during communication or while monitoring a patient's health -- including if the problem is in the reasoning framework itself.

Because a device's components could fail and produce false readings, or a device's security could be compromised and the device remotely hijacked, hardware and software security overrides will also need to be developed. Security overrides are especially important for what Vasserman called a "break glass" scenario.

"Those are medical situations analogous to breaking a glass container to get to a fire extinguisher. It's an emergency. So there has to be a mechanism that tells the device, 'Stop. You can't handle this; a human clinician has to take over,'" Vasserman said. "You want to be sure that if anything and everything fails, patients are still safe. That's really the primary concern."

Researchers are also creating a suite of developer tools to help medical device manufacturers make safe and secure devices. Manufacturers could use this toolset to produce software for their devices without having to know much about cybersecurity. The development environment -- similar to software development kits used for smartphones and tablets -- would automatically generate relevant security-critical functions for the device to allow it to run within the established middleware environment and network.

"As tongue-and-cheek as it sounds, I'm picturing something like an animated assistant, like Clippy, popping up in the software development environment and saying, 'It looks like you're trying to create a medical device. Would you like me to add confidentially and authenticity to the device for you?'" Vasserman said. "The final product will be more refined than a giant talking paperclip, but the idea is the same: make it easy and make it secure by default."

The cybersecurity of medical devices entered public consciousness in 2008, largely through an academic paper on the vulnerability of pacemakers. Although the schematics and communication protocols of pacemakers are not public information, the paper's authors used actual devices to experimentally demonstrate security vulnerabilities that could be covertly exploited.

The Kansas State University team's project to evaluate and develop security protocols and a safety and security framework for medical devices stems from Hatcliff's Medical Device Coordination Framework, or MDCF, software project. The project, which is largely funded by the National Science Foundation with additional support from the National Institutes of Health, is working to build a standards-compliant open source software platform for interoperable medical devices -- similar to Google's standardization requirements for Android tablets. More information about the Medical Device Coordination Framework is available at http://mdcf.santos.cis.ksu.edu/.