We now have some concrete examples in a related field: forensic analysis
software. Some researchers have found bugs in forensic software packages.
The article quotes one of the researchers: "Basically we can make it
impossible to open up a hard drive and look at it." Can the bugs be used
to take over the analyst's machine? The researchers aren't saying yet.
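To make the class of bug concrete: forensic tools parse attacker-controlled data by definition, so a crafted input that trips a parsing bug can abort the analysis. The sketch below uses a hypothetical length-prefixed metadata format (not the researchers' actual attack or any real tool's format) to show how a field claiming more data than exists can halt a naive parser.

```python
import struct

def parse_entry(data: bytes) -> bytes:
    """Parse one entry from a hypothetical disk-image metadata record:
    a 4-byte big-endian length, followed by that many payload bytes."""
    (length,) = struct.unpack(">I", data[:4])
    payload = data[4 : 4 + length]
    if len(payload) != length:
        # A crafted image can claim a huge length that never arrives.
        # A naive tool might crash or hang here, ending the analysis.
        raise ValueError(f"truncated entry: claimed {length}, got {len(payload)}")
    return payload

# A well-formed record parses normally...
assert parse_entry(b"\x00\x00\x00\x03abc") == b"abc"

# ...but a crafted length field (0xFFFFFFFF) aborts the parse.
try:
    parse_entry(b"\xff\xff\xff\xffabc")
except ValueError as e:
    print(e)
```

The point is not this particular check; it's that any such failure, triggered on demand by data on the drive being examined, is enough to make the evidence unreadable by that tool.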

The societal implications are obvious: defense attorneys are going to have a
field day. They're going to quiz analysts — who are not experts on the
internals of the packages they're using — about why they think the
software is trustworthy. "How do you know your buggy software didn't miss
exculpatory evidence?" They're also going to
subpoena vendors and vendor data, including source code, change logs, test
plans, bug report databases, etc. If the vendor can't or won't produce, they'll
try to get any forensic evidence excluded.

The obvious counter for law enforcement is to use some form of "certified"
software. That is, some outside party would
evaluate the software and certify its correctness. Note that this is a very
difficult (and expensive) process. Furthermore, for fairness in criminal cases,
it probably needs to be an adversarial process. Imagine how this will play out
in a high-stakes, well-funded white-collar criminal case.

Ultimately, of course, this is a question of how to produce and verify
high-assurance software. Several decades of work tell us that this is a hard
problem.