Life-critical systems should be small, built on a fully open stack, fully audited, and mathematically proven correct.

Non-critical systems, secondary information reporting, and possibly even remote-control interfaces for those systems should follow industry best practices and stay patched and up to date.

Most likely, many modern pieces of medical technology were not designed with this isolation between the core critical components that actually do the job and the commodity junk around them that provides convenience for humans.

Maybe you really think it's just as simple as mandating exactly what you wrote here. But I'd imagine you'd agree that even doing this would have real and significant costs, which means tradeoffs are going to have to be made, e.g. some people won't receive medical care they otherwise would.

Pretty sure you can build X-ray/MRI control software in Rust on top of seL4, and do lightweight verification (or, even better: hardware breakers of some sort) around issues like "will output lethal doses of radiation". That is a general-purpose enough kernel and a general-purpose enough programming language, without having to drag in tens of millions of lines of code intended for personal GUI systems... Then for malware issues you simply don't plug the device directly into the internet, nor allow it to run any new code (e.g. your only +X-mounted filesystem is a ROM and memory is strictly W^X).
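A software interlock of the kind described above can be tiny enough to audit by hand. Here is a minimal sketch (all names and the dose limit are made up for illustration; a real device would mirror this with a hardware breaker, as suggested):

```rust
// Illustrative software interlock: refuse any beam command above a hard
// dose ceiling. The limit and type names are hypothetical.

const MAX_DOSE_MGY: u32 = 50; // hard ceiling, mirrored by a hardware breaker

#[derive(Debug, PartialEq)]
enum BeamCommand {
    Fire { dose_mgy: u32 },
}

#[derive(Debug, PartialEq)]
enum InterlockError {
    DoseExceedsLimit,
}

// The only path to the beam hardware goes through this check.
fn validate(cmd: BeamCommand) -> Result<BeamCommand, InterlockError> {
    match cmd {
        BeamCommand::Fire { dose_mgy } if dose_mgy <= MAX_DOSE_MGY => Ok(cmd),
        BeamCommand::Fire { .. } => Err(InterlockError::DoseExceedsLimit),
    }
}

fn main() {
    assert!(validate(BeamCommand::Fire { dose_mgy: 10 }).is_ok());
    assert!(validate(BeamCommand::Fire { dose_mgy: 500 }).is_err());
    println!("interlock checks passed");
}
```

The point is that the property "will not output lethal doses" lives in a few dozen lines you can exhaustively verify, not scattered across a commodity OS.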

Yeah, I am aware. The problem is that using, say, CompCert might result in less security in practice: although the compiler transformations are verified, code written in C is usually more prone to security issues. It also puts the burden of proving memory safety on the developer, and memory safety is a prerequisite for proving nearly anything else. I don't know Rust well enough to know if this applies for sure, but I think it is a lot less to ask of the manufacturer that they produce a proof of the form "assuming this language's memory model holds, we have properties X, Y and Z" and then just hope the compiler is sane, versus requiring a heavier-weight end-to-end proof. Also, eventually there might be a mode for certified compilation in Rust/Go, at which point you get the best of both worlds.
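To make the "assuming the language's memory model holds" idea concrete, here is a sketch of how Rust's type system lets you localize an invariant: if the only constructor enforces the bound, then (assuming the language's guarantees hold) no out-of-bounds value can exist anywhere in the program. Names and the bound are hypothetical:

```rust
// Illustrative newtype: the invariant "inner value <= 100" holds for every
// instance, because `new` is the only way to construct one.

#[derive(Debug, Clone, Copy)]
struct BoundedDose(u32); // invariant: 0 <= inner <= 100

impl BoundedDose {
    fn new(raw: u32) -> Option<BoundedDose> {
        if raw <= 100 {
            Some(BoundedDose(raw))
        } else {
            None
        }
    }

    fn get(self) -> u32 {
        self.0
    }
}

fn main() {
    assert!(BoundedDose::new(42).is_some());
    assert!(BoundedDose::new(200).is_none());
}
```

This is exactly the shape of proof obligation that is cheap in a memory-safe language and expensive to establish from scratch in C.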

Does this distinction between critical and non-critical systems make sense for medical equipment? Displaying the information to humans (doctors and nurses) is probably life-critical. If the display is broken, the device isn't doing its job.

It's not like medical devices have an entertainment system like cars and airplanes.

The display and the business end of the equipment are critical and should not be network-connected (or even have USB ports, for that matter). The part that uploads to whatever big server should have updates all the time. The critical bit should either be connected to the non-critical bit by a genuinely one-way link (e.g. unidirectional fiber) or should use a very small, very carefully audited stack for communication.
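In software terms, the one-way link shows up as code on the critical side that can only ever send, never receive. A minimal sketch (the address and payload are made up; a real data diode enforces the direction in hardware, e.g. with unidirectional fiber):

```rust
// Sketch of the critical side of a one-way link: telemetry goes out over
// UDP and this code has no receive path at all. Address is illustrative.

use std::net::UdpSocket;

fn send_telemetry(payload: &[u8]) -> std::io::Result<usize> {
    let sock = UdpSocket::bind("0.0.0.0:0")?; // ephemeral local port
    // Fire-and-forget datagram to the non-critical side; we never call
    // recv_from, so no inbound data can influence the critical system.
    sock.send_to(payload, "127.0.0.1:9000")
}

fn main() -> std::io::Result<()> {
    let sent = send_telemetry(b"heartbeat,ok")?;
    println!("sent {sent} bytes");
    Ok(())
}
```

The software shape (send-only, no reads) mirrors what the hardware guarantees, so even a compromised upload box can't talk back to the critical bit.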

This is all doable, but it adds a bit of BOM cost and changes the development model.

An alternative would be to expose these subsystems on a network and have strict APIs, encryption, and authentication between them. This would allow you to audit/update components individually rather than the whole device. So your display would act as a networked display and expose only a very limited set of functions.
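The "very limited set of functions" could literally be a closed enum of commands, so the display's entire attack and audit surface is one small type. A sketch, with all names and fields invented for illustration (the encryption/authentication layer would wrap the transport and is omitted here):

```rust
// Hypothetical networked-display API: the display accepts only this closed
// set of commands, nothing else.

#[derive(Debug, PartialEq)]
enum DisplayCommand {
    ShowVitals { heart_rate_bpm: u16, spo2_pct: u8 },
    ShowAlarm { code: u16 },
    Clear,
}

// Rendering is a total function over the enum: every legal input is
// handled, and there is no "run arbitrary code" variant to abuse.
fn render(cmd: &DisplayCommand) -> String {
    match cmd {
        DisplayCommand::ShowVitals { heart_rate_bpm, spo2_pct } => {
            format!("HR {heart_rate_bpm} bpm | SpO2 {spo2_pct}%")
        }
        DisplayCommand::ShowAlarm { code } => format!("ALARM {code}"),
        DisplayCommand::Clear => String::new(),
    }
}

fn main() {
    let line = render(&DisplayCommand::ShowVitals { heart_rate_bpm: 72, spo2_pct: 98 });
    assert_eq!(line, "HR 72 bpm | SpO2 98%");
}
```

Auditing that component then means auditing one enum, one render function, and the channel's auth layer, independently of the rest of the device.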