From these three sources, it sure looks like things will get worse before they get better. Let’s start with Shamir’s views on cybersecurity:

Cybersecurity is terrible, and it will get worse.

The Internet of Things will be a security disaster.

Cyber warfare will be the norm rather than the exception in conflicts.

All of this sounds very real today, so Shamir is simply implying that it’s not going to get better any time soon. Schneier, on the other hand, is not giving up: he is calling for a new government regulatory agency, and also for a body of public-interest technologists who would provide expertise in the public debate about technology.

That brings us to the third piece of news. An AI managed to win against really good poker players over several days. This requires strategy, bluffing, and much more. By extension, an AI can now pose as a human, or beat humans, in any narrowly defined situation, even when complex strategic thinking is required. OK, but even as I write it, I have trouble grasping the consequences of that sentence. I know a thing or two about computing in general, and even about AI. Yet, it is very hard to see where that leads us, and how it applies to other fields, like cybersecurity.

So yes, Schneier is probably right in asking for regulatory agencies and public-interest technologists, because of the complexity of today’s technological issues. Shamir is probably right too, because Schneier is not going to get what he asks for, at least not in Trump’s USA (and I wouldn’t bet that we will get it in today’s Europe, either). So, what consequences does this have for “us”?

First, we have to stand strong ourselves. What we are doing at Prove & Run is right. Making devices stronger and more resilient/resistant to attacks is essential in the fight against attackers. And even if a majority of IoT actors don’t care, there remain enough responsible vendors to make a huge market for ProvenCore and applications.

Then, we need to remain humble. Stronger devices are not sufficient by themselves. Ten years ago, as an evaluator, I used static analysis on Java Card programs to detect vulnerabilities that were very hard to find “manually”, with amazing success. So, if a small research team can program an AI that beats professional poker players, how long will it be before some team of hackers programs an AI that designs attacks on IoT systems? And if that happens, how much will our formal proofs matter? Even if our software is not broken, how easy will it be to bypass it?
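To give a flavor of what static analysis does, here is a deliberately tiny sketch in Python. It is not the tooling mentioned above, and the function name, the regex patterns, and the code snippet it scans are all invented for illustration: it flags writes to a buffer through an index variable that is never compared against any bound, the kind of check that is tedious to do manually across a large code base.

```python
import re

def find_unchecked_array_writes(source: str) -> list[int]:
    """Toy static check, for illustration only: flag lines that write
    to a buffer through an index variable that never appears in a
    bounds comparison. A real analyzer tracks data flow precisely;
    this one only matches simple textual patterns."""
    lines = source.splitlines()

    # Pass 1: collect index variables that appear in some comparison.
    checked = set()
    for line in lines:
        m = re.search(r'if\s*\(\s*(\w+)\s*[<>=!]', line)
        if m:
            checked.add(m.group(1))

    # Pass 2: flag array writes whose index was never checked.
    findings = []
    for n, line in enumerate(lines, start=1):
        m = re.search(r'\w+\s*\[\s*(\w+)\s*\]\s*=', line)
        if m and not m.group(1).isdigit() and m.group(1) not in checked:
            findings.append(n)
    return findings

# Hypothetical Java Card-like snippet to scan.
snippet = """
short off = apdu.getOffset();
buf[off] = value;
if (len < MAX) {
    buf[len] = other;
}
"""
print(find_unchecked_array_writes(snippet))  # [3]: buf[off] has no bounds check
```

A real tool reasons about control and data flow rather than text patterns, which is precisely why it can surface bugs that pattern-matching humans miss.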

French students learn in school about the great ligne Maginot, a very strong line of defense against Germany built in the early 1930s. The Germans did not break through it; they circumvented it.

I believe more strongly than ever that high-assurance security components and formally proven software are essential components of future secure systems. But we have to face a difficult challenge: making sure that no human or AI is able to bypass our highly resistant technology, effectively turning it into a 21st-century ligne Maginot.

We can and will succeed. Our “We are the most secure” arguments are needed to attract our customers’ attention, but we must be careful to move to more complex “We are the foundations of the most secure systems” arguments as their understanding of the issues at stake improves and we get closer to implementation.