Cybersecurity in medical devices - Part 4: Impact on the Software Development Process

We continue this series of posts on cybersecurity with some comments on the impact of cybersecurity on software development documentation.

IEC 62304 class A software vs cybersecurity

IEC 62304 defines a rather light set of constraints for class A software. Many connected objects or back-office servers processing medical data are low-risk devices (when they qualify as medical devices). Setting aside devices used for closed-loop control, remote monitoring, or the like, most standalone software running in the back office (services) or used remotely (standalone web apps or mobile apps) will present failures with negligible consequences for the patient. As such, most standalone software will be in class A (a conclusion that I have verified empirically).

Cybersecurity documentation requirements

For class A software, it's usual to do the bare minimum to be compliant: software requirements, functional tests, a bit of risk assessment showing acceptable risk prior to mitigation actions, a bit of SOUP management, and voilà! That won't be enough to prove that standalone medical device software is secure.
Thus, cybersecurity requires additional documentation: a security risk assessment, mitigation actions, and evidence of their effective implementation. Where are we going to find this evidence?
In software requirements, of course, but also in the architectural design, possibly in the detailed design, and in unit/integration/verification tests. This sounds more like class B than class A.
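As a sketch of what such implementation evidence can look like at the unit-test level, here is a hypothetical example: the requirement ID (SEC-01) and function names are invented for illustration, not taken from any standard template. The test is the artifact that a traceability matrix would reference as verification evidence.

```python
import secrets

def new_session_token() -> str:
    # SEC-01 mitigation (hypothetical requirement): session tokens
    # must be unpredictable. secrets.token_urlsafe draws from the
    # OS cryptographically secure random number generator.
    return secrets.token_urlsafe(32)

def test_sec01_token_strength():
    # Verification evidence for SEC-01, referenced from the
    # security risk assessment's traceability matrix.
    tokens = {new_session_token() for _ in range(1000)}
    assert len(tokens) == 1000               # no collisions observed
    assert all(len(t) >= 43 for t in tokens) # 32 random bytes -> 43+ chars
```

A test like this lives in the ordinary unit-test suite, but it is cited from the security documentation, which is exactly the extra traceability effort that class A documentation does not normally require.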

Special tests for cybersecurity

Moreover, we can mimic section 5.5.4 of IEC 62304, applicable to class C only, by including additional cybersecurity test requirements, such as applying the OWASP Top 10 to verify good coding practices. For this kind of test, tools like static analyzers or methods like peer code reviews are usually implemented. These tools and methods are more frequent in class C software development than in class B or class A.
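To illustrate the kind of flaw the OWASP Top 10 targets, here is a minimal, hypothetical example (the `patients` table and function names are invented): the first function builds SQL by string concatenation, a classic injection pattern that a static analyzer or a peer code review should flag; the second uses a parameterized query as mitigation.

```python
import sqlite3

def find_patient_unsafe(conn, name):
    # Vulnerable: user input concatenated into the SQL statement
    # (OWASP Top 10 injection category). A payload like
    # "' OR '1'='1" makes the WHERE clause always true.
    return conn.execute(
        "SELECT id, name FROM patients WHERE name = '" + name + "'"
    ).fetchall()

def find_patient_safe(conn, name):
    # Mitigated: parameterized query; the driver treats the input
    # as data, never as SQL code.
    return conn.execute(
        "SELECT id, name FROM patients WHERE name = ?", (name,)
    ).fetchall()
```

With the payload `' OR '1'='1`, the unsafe version returns every row of the table, while the safe version returns nothing. A cybersecurity test requirement can demand exactly this kind of negative test as evidence of good coding practices.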

Summary

To sum up, we have the following cases:

| Type | Usual SW safety class | Connected? | Cybersecurity overhead |
|---|---|---|---|
| SW embedded in MD (contributes to essential performance) | C or B | Usually no, but BTLE is appealing! | Null if not connected (beware of USB). Very high if connected, to prove that a security breach won't result in the degradation of essential performance. |
| Standalone SW (diagnosis / treatment intended use) | C or B | Usually yes, on PC or mobile connected to HCP or public network | |
| SW embedded in MD (doesn't contribute to essential performance) | A | Yes, with BTLE or Wifi connected to HCP or public network | Null if not connected. Important if connected (data loss only, but class A documentation is not detailed enough). |
| Standalone SW (no diagnosis / treatment intended use) | A | Usually yes, on PC or mobile connected to HCP or public network | Important if connected (data loss only, but class A documentation is not detailed enough). |

This comparison of the different cases reveals a paradox:

High safety-risk devices usually require limited cybersecurity overhead: when essential performance relies on software, device connectivity tends to be limited, so the cybersecurity documentation represents a small amount of work compared to the device safety documentation.

This comparison does not hold for high-risk devices designed for connectivity, such as a hypothetical closed-loop system whose loop runs through a remote server. This is the worst case, reserved for manufacturers with enough resources to support the design and maintenance of such a device.

Conclusion

Connecting a medical device to a network is not trivial, especially if that network is not a controlled network, such as a public or home network. The cybersecurity requirements established by the FDA guidances in the US and by the new Medical Device Regulation in the EU have significant consequences on the cost of the documentation needed to provide evidence of compliance.
That cost is significant even for IEC 62304 class A software, which ends up requiring documentation closer to class B than class A.