Kaspersky Lab Developing Its Own Operating System? We Confirm the Rumors, and End the Speculation!

Hi all!

Today I’d like to talk about the future. About a not-so-glamorous future of mass cyber-attacks on things like nuclear power stations, energy supply and transportation control facilities, financial and telecommunications systems, and all the other installations deemed “critically important”. Or you could think back to Die Hard 4 – where an attack on infrastructure plunged pretty much the whole country into chaos.

Alas, John McClane isn’t around to solve the problem of vulnerable industrial systems, and even if he were – his usual methods of choice wouldn’t work. So it comes down to KL to save the world, naturally! We’re developing a secure operating system for protecting key information systems (industrial control systems (ICS)) used in industry/infrastructure. Quite a few rumors about this project have appeared already on the Internet, so I guess it’s time to lift the curtain (a little) on our secret project and let you know (a bit) about what’s really going on.

But first – a little bit of background about vulnerable industrial systems, and why the world really needs this new and completely different approach of ours.

The Defenselessness of Industrial Systems

Though industrial IT systems and, say, typical office computer networks might seem similar in many ways, they are actually completely different beasts – mostly in terms of how they prioritize security against usability. In your average company, one of the most important things is confidentiality of data, and to that end (among others) IT administrators are encouraged to isolate infected systems from the rest. If, for example, a Trojan is detected on the corporate file server, the simplest thing to do is disconnect the infected system from the network and only then start tackling the problem.

In industrial systems that can’t be done: here the highest priority is maintaining constant operation, come hell or high water. Uninterrupted continuity of production is of paramount importance at any industrial site in the world; security is relegated to second place.

Another challenge to securing an “always on” environment arises due to software at an industrial/infrastructural installation only being updated after a thorough check for fault-tolerance – so as to make sure not to interrupt the working processes. And because such a check requires loads of effort (yet still doesn’t provide a guarantee of non-failure) many companies often simply don’t bother to update ICS at all – leaving it unchanged for decades. Updating software might even be expressly forbidden by an industrial/infrastructural organization’s safety policy. Just recently I read a nice piece about this, which listed 11 ICS security rules; rule #1 is “Do not touch. Ever.” What more of an illustration do you need?!

Still, even if the possibility to update software and patch up “holes” does exist, this doesn’t always help much. Manufacturers of specialized software aren’t interested in constant source code analysis and patching holes. As experience has shown, corners (costs) are normally cut on this kind of activity, and patches are released only if a certain exploit has been found and put on the Internet. In fairness, this is true for common, garden variety software, not just specialized software; still, today we’re talking about specifically industrial software.

The problem boils down to this: because control software, programmable controllers, and industrial communication networks are vulnerable, operators of industrial/infrastructural systems have no way of receiving reliable information about how those systems are actually running! Theoretically, a situation is possible where, let’s say, an electricity distribution system is attacked, and as a result a breakdown occurs at a distant installation on the other side of the country. But the control center knows nothing about it: the attackers have fed its computers false data.

Examples

You don’t need to look far to find examples of this actually happening in real life. One of the earliest – and an example of cyber-sabotage at its potentially most dangerous – was a direct attack on SCADA systems as far back as 2000 in Australia. An employee of a third-party contractor who had worked on the control systems of Maroochy Shire Council carried out 46 (!) attacks on its control system, causing the pumps to stop working or to malfunction. No one could understand what was happening, since the communication channels inside the system had been breached and the information traveling along them distorted. It took months for the company and the authorities to work out what had happened. It turned out the worker had really wanted a job at the sewage firm, was rejected, and so decided to flood a huge area of Queensland with sewage!

There are plenty of other such examples; they’re just not reported in the media. After all, victim companies are generally not too keen on letting the whole world know their systems have been compromised. (Public interest issues abound, but I’ll save those for another day and another post…) And in many incidents even the victims themselves don’t know they’ve been attacked. Not long ago a hole was found in RuggedCom industrial routers that permitted any average user to simply raise his/her access rights to administrator level and gain control over the device. By whom, when, how, and where else the hole could have been exploited can only be guessed at – as can the number of similar holes that exist and are possibly being exploited in secret right now.

For a bit of personal development I recommend reading about attacks on ICS that succeeded in fulfilling their missions – here, here and here.

So who else – apart from blackmailers, disgruntled job applicants, etc. – might get access to the source code of ICS software, controllers, operating systems and the like? Of course there are the respective government and industry authorities – namely those with a department that certifies software for critically important systems. But in recent years departments have also been created for developing cyber-weapons for attacking opponents’ systems – whoever those opponents may be: perhaps commercial competitors, but more likely other countries.

I mean things like Stuxnet and the subsequent Duqu, Flame and Gauss – malware so vastly complex that it’s clear it was developed with the support of nation states. And it doesn’t really matter who’s being targeted at present; what matters is that such cyber-weapons are being developed and deployed at all. And once Pandora’s Box is open, there’s no way of getting it closed again. The building up of armaments for attacks on the industrial systems and infrastructure of enemies sooner or later will affect us all. So it turns out that the biggest threat to the planet today comes not from the regular cyber-riff-raff, and not even from organized cyber-criminals, but from nation state-backed creators of cyber-weapons.

Protection Today: Alas, Not Effective

At the same time as arming themselves, both infrastructure companies and various government authorities aren’t forgetting about protection. Indeed, they started protecting themselves long ago. But how do they actually go about it?

There are really just two methods. The first – isolating critically important objects: disconnecting them from the Internet, or physical isolation from the outside world in some other way. However, as experience has shown, if a technician during the night shift wants to watch films from an infected USB stick on the control computer – nothing’s going to stop him (we have working methods for blocking such activity, but I won’t go into that here).

Second – keeping secrets: collective and large-scale attempts to keep secret everything and anything. Developers of ICS keep the source code secret, owners of factories and infrastructure place a “SECRET” stamp on the schematics of their information and control systems, the types of software used are kept secret, and so on. However, at the same time, information about vulnerabilities in, for example, the majority of popular SCADA systems is freely available on the Internet. And if we dig deeper we find that the SHODAN search engine has been up and running for several years already – designed for, among other things, seeking out vulnerable industrial systems (including SCADA) whose owners decided to connect them to – or forgot to disconnect them from – the Internet.

In parallel, specialists at industrial/infrastructure organizations also apply traditional methods of protecting vulnerable software and operating systems: controlling program behavior and also the actions of users. But a 100% guarantee of protection can’t be provided, again because of the vulnerability-by-default of the software doing the controlling. And for critical infrastructure, a guarantee is what is needed most of all.

Protection as It Should Be

Ideally, all ICS software would need to be rewritten, incorporating all the security technologies available and taking into account the new realities of cyber-attacks. Alas, such a colossal effort coupled with the huge investments that would be required in testing and fine-tuning would still not guarantee sufficiently stable operation of systems.

But there is a fully realizable alternative: a secure operating system onto which ICS can be installed, and which could be built into the existing infrastructure – controlling “healthy” existing systems and guaranteeing the receipt of reliable data on the systems’ operation.

First I’ll answer the most obvious question: how will it be possible for KL to create a secure OS if no one at Microsoft, Apple, or the open source community has been able to fully secure their respective operating systems? It’s all quite simple really.

First: our system is highly tailored, developed for solving a specific narrow task, and not intended for playing Half-Life on, editing your vacation videos, or blathering on social media. Second: we’re working on methods of writing software which by design won’t be able to carry out any behind-the-scenes, undeclared activity. This is the important bit: the impossibility of executing third-party code, or of breaking into the system or running unauthorized applications on our OS; and this is both provable and testable.
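To make the default-deny idea concrete – purely as an illustration, not KL's actual design – one can sketch a loader that refuses any code module not declared in advance. All names here (`APPROVED`, `load_module`) are invented for this example:

```python
# Hypothetical sketch of a default-deny code loader: only modules whose
# hashes were declared at build time are allowed to run at all.
import hashlib

# Allowlist of SHA-256 digests of the only modules declared in advance.
APPROVED = {
    hashlib.sha256(b"def control_pump(): return 'pump ok'").hexdigest(),
}

def load_module(source: bytes):
    """Execute `source` only if its digest matches a declared module."""
    digest = hashlib.sha256(source).hexdigest()
    if digest not in APPROVED:
        raise PermissionError("undeclared code refused")
    namespace = {}
    exec(source, namespace)  # in a real OS this check is kernel-enforced
    return namespace

mod = load_module(b"def control_pump(): return 'pump ok'")
print(mod["control_pump"]())        # prints: pump ok

try:
    load_module(b"import os; os.listdir('/')")  # undeclared code
except PermissionError as e:
    print(e)                        # prints: undeclared code refused
```

The point of the sketch is the inversion of the usual model: instead of trying to recognize bad code (the antivirus approach), nothing runs unless it was explicitly declared – which is what makes the "impossibility of executing third-party code" claim testable in principle.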

You can read more details about the system, its requirements, and the background to its development here.

In closing, in anticipation of the multitude of questions from colleagues, partners, media and simply curious folks, a few basics: what we’re developing is a truly secure environment. It’s a sophisticated project, and almost impracticable without active interaction with ICS operators and vendors. We can’t reveal many details of the project now because of the confidentiality of such cooperation. Some things we won’t talk about so competitors don’t jump on our ideas and nick the know-how. And some details will forever remain for certain customers’ eyes only, to ward off cyber-terrorist abuses. But as soon as any possibilities do appear, we’ll tell you all we can about the project in more detail.

Till next time!


agtrier

Well played, the panic-inducing reference to disaster movies at the beginning. Politicians like that. Still, you fail to answer the obvious question: why start from scratch (oh so many have tried, and most of them failed!) rather than throwing your force behind hardening some existing OS instead? So excuse me, but I’m rather unimpressed.

JD

There are multiple companies such as Greenhills and Wind River that have been doing this for more than ten years. They have products that can be used as a secure OS and/or a secure hypervisor. The products exist today, have military (DO-178B) and security (NSA) certifications. A new company starting from scratch, learning all the hard lessons again, Yawn….

Arch Hughes

Perhaps they have a lot of money to burn?


SP

you forget to mention LynxSecure from LynuxWorks!


Pawel K

Well, I think the answer to this is simple… The rest of the OSes can be hacked, and this one is supposed to be unhackable. Using something which is already built and contains a gazillion patches and fixes just isn’t good enough… my humble opinion.

Namit

Love it! But from a business point of view, wouldn’t this force companies to “TRUST” the K-OS (M$ has already done that, and everyone knows the results)? I don’t deny the credibility of K-Labs. But thinking on this, businesses would be “forced” to pursue the K-services. The freedom of choice exists, but it might cost them a fortune to migrate (in case the companies choose to). Moreover, the idea of Flame/Stux-like highly sophisticated weapons residing inside the OS might discourage its popularity. Nonetheless, any new OS is always exciting. Waiting anxiously for more details!

Aleks

Interesting idea, but it looks like an embedded OS with minimal compatibility. At the same time, a lot of software needs dependencies and frameworks to run, and those usually have the “holes”. But good luck on that one!

Pia F. Bichsel

Do you also consider techniques employed in KeyKOS, its successor EROS-OS, and its successor KapROS? All limit the transitivity of trust by means of their capability-based design.

I question, however, two of the statements quoted from the document you linked to:
“* The operating system can’t be based on existing computer code; therefore, it must be written from scratch.
* To achieve a guarantee of security it must contain no mistakes or vulnerabilities whatsoever in the kernel, which controls the rest of the modules of the system. As a result, the core must be 100% verified as not permitting vulnerabilities or dual-purpose code.”

The first point might work, given that you have that many resources – but still, getting it to work reliably and securely will cost a fortune, even if it’s only a microkernel we are talking about.

More fundamentally, I consider the second point almost impossible. There are only a few ways out:
1. Using a limited language that allows for mathematical proofs of the code.
2. Building in fences that try to contain the faults built into the code from spreading – so that faulty code does not even become visible by means of failures. This is a principle described, with reliability in mind, in Robert S. Hanmer’s book.
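The capability idea mentioned above can be illustrated with a toy sketch (of the general KeyKOS/EROS principle, not their actual implementation; the `Capability` class and its methods are invented for this example): authority attaches to an unforgeable token, and delegation can only narrow it, never widen it.

```python
# Hypothetical sketch of capability-based access control: a holder can only
# do what its capability explicitly grants, and delegation can only shrink
# that authority, limiting the transitivity of trust.
class Capability:
    def __init__(self, resource, rights):
        self._resource = resource
        self._rights = frozenset(rights)

    def read(self):
        if "read" not in self._rights:
            raise PermissionError("no read right")
        return self._resource["value"]

    def write(self, value):
        if "write" not in self._rights:
            raise PermissionError("no write right")
        self._resource["value"] = value

    def delegate(self, rights):
        """Hand out a new capability with a subset of our rights."""
        if not set(rights) <= self._rights:
            raise PermissionError("cannot amplify rights")
        return Capability(self._resource, rights)

sensor = {"value": 42}
owner = Capability(sensor, {"read", "write"})
viewer = owner.delegate({"read"})   # read-only view for an untrusted module

assert viewer.read() == 42
try:
    viewer.delegate({"read", "write"})  # amplification attempt is refused
except PermissionError:
    pass
```

Because an untrusted module holding `viewer` can never mint itself a write right, a compromise of that module cannot spread to the resource – which is exactly the "limited transitivity of trust" property.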

Someone

Indeed starting from scratch needs a very good excuse. Ever considered building upon a secure and formally-verified-to-be-bug-free OS kernel like that of seL4? At least take a look and learn a bunch from what people have already been up to: http://www.sigops.org/sosp/sosp09/papers/klein-sosp09.pdf

ramonespinosa

We all live in Cyber war EK has the plan to protect U 6. ; to every one reading this will be in steps to protect your wallet U identify issues Kaspersky institute runs to clean the threats & people it never ends so the War on hackers & rogue Governments. Protection
Is 24 /7 issue folks.

Larry Constantine (Lior Samson)

Noble effort, perhaps, but where does this flawless secure OS reside? Does it replace the proprietary software managing the scan cycle on the PLCs, or Unix on data historians, or Windows on all the engineering workstations, PC-based HMIs, monitoring and management systems, or on support workstations? What about control servers and routers and RTUs on the SCADA network? And…

There is neither a single hardware platform nor a common OS in the complex, custom-tailored world of ICS software. I’m skeptical that this approach can work even if the OS can be built. And this is to say nothing of the enormous diversity of the marketing target, where every plant and factory is a one-off, custom-designed, hand-configured installation. This is not the desktop world–not even close.

What is right about this approach is the recognition that the vulnerabilities in ICS’s are built into the very structure of how the systems are designed and built, from the PLC hardware modules to the ladder logic in the control application, all the way to the HMI. The problems will not be made to go away by a patchwork of partial solutions, by fresh layers of signature-based and behavioral barricades or ad hoc plugs filling up holes that should not have been there in the first place.

As I said at EST 2012 in Lisbon, these problems are architectural and will require architectural solutions. Whether a new OS will help much remains to be seen, but I certainly wish my colleagues at KL good luck in their pursuit.

Pia F. Bichsel

Dear Larry,

> The problems will not be made to go away by a patchwork
> of partial solutions, by fresh layers of signature-based
> and behavioral barricades or ad hoc plugs filling up holes
> that should not have been there in the first place.

I agree with you. What I meant by “fences” was limited to a certain context: if a fault has already become visible in terms of a failure, in my opinion you have to change the code to fix the fault and change the process (e.g. improve education) to fix the error and help prevent the same fault from happening again. Building in fences instead only adds to the complexity, and the fences or barriers probably add their own faults to the overall system. The situation I meant, however, was the case where you do not know the faults yet, but must assume that there are faults simply because human beings are limited. Too much self-confidence or testosterone is dangerous – therefore one should contain the faults one does not yet know about. Or – as others and I have said – use mathematically provable programming languages.
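A minimal sketch of such a fence, in the general spirit of the fault-containment patterns discussed above (the `fenced` helper is hypothetical): a faulty component's crash is absorbed at the boundary rather than propagating as a system-wide failure.

```python
# Hypothetical sketch of a fault "fence": the fault in one unit is contained
# at the boundary and converted into a safe fallback value.
def fenced(component, fallback):
    """Wrap `component` so any fault yields `fallback` instead of spreading."""
    def safe(*args):
        try:
            return component(*args)
        except Exception:
            # Fault contained here; a real system would also raise an alarm.
            return fallback
    return safe

def flaky_sensor(reading):
    return 100 / reading          # faults on a zero reading

read = fenced(flaky_sensor, fallback=None)
assert read(4) == 25.0
assert read(0) is None            # the divide-by-zero fault stays contained
```

As the comment notes, the fence itself adds complexity and can harbor its own faults – it is a hedge against the unknown, not a substitute for fixing known bugs.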

Larry Constantine (Lior Samson)

I think we are in agreement about the need for proactive rather than merely reactive approaches to industrial security. In this theater, the conventional route of responding after vulnerabilities are discovered as zero-day exploits could mean first waiting for a major portion of some power grid to be taken down, or a chemical plant to be reduced to rubble, and only then figuring out how it could/should have been prevented.

I was talking with ICS security expert Ralph Langner yesterday. We agreed that the biggest barriers to enhancing industrial cyber-security are not so much technical – formidable though those may be – as financial. In the absence of government mandates there are no economic incentives for operators to improve ICS security. The large investment has no near-term payoff; it is costly and it complicates already complex systems. Until the industrial equivalent of the Twin Towers, we are not likely to see great strides forward in terms of protecting critical infrastructure from cyber-attacks. Even then, it would not be too surprising if most of the effort went into initiatives analogous to airport security – showplace charades more about public reassurance through the illusion of security than about the reality.

–Prof. Larry Constantine


Christophe

A secure OS written in C is doomed to fail. You need to enrich the language and the runtime to provide automatic bounds checking and also prevent the programmer from allocating/deallocating memory manually. And this is just the start.

(Of course, the memory manager still needs unbounded access to memory, but it should be the only part of the system with this much privilege.)

Using this approach will allow you to run code safely on architectures without an MMU, because you move the problem into the compiler: if you can prove your compiler creates safe code (i.e. there is no escaping bounds checking), then you win, and you don’t have to prove the whole code base. However, I’m not touching the reliability issue here, i.e. what to do when code crashes.
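As a toy illustration of the bounds-checking point (the `CheckedBuffer` class is invented for this sketch): when every memory access goes through a checked runtime, an out-of-bounds access becomes a trapped error rather than silent corruption, with no MMU required.

```python
# Hypothetical sketch of runtime bounds checking: the "allocator" owns the
# raw storage, and every load/store is validated before it touches memory.
class CheckedBuffer:
    def __init__(self, size):
        self._cells = [0] * size    # only the allocator touches raw storage

    def _check(self, index):
        if not 0 <= index < len(self._cells):
            raise IndexError("out-of-bounds access trapped by the runtime")

    def load(self, index):
        self._check(index)
        return self._cells[index]

    def store(self, index, value):
        self._check(index)
        self._cells[index] = value

buf = CheckedBuffer(4)
buf.store(2, 7)
assert buf.load(2) == 7
try:
    buf.load(9)                     # would be silent corruption in raw C
except IndexError:
    pass
```

If the compiler (or here, the runtime) provably inserts such checks on every access, the safety argument shrinks to that one mechanism instead of the whole code base, which is the comment's central point.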
