We all know about the security challenges of personal computers: they get hacked, viruses hijack machines, and keystroke loggers steal passwords that then end up on public websites. Yet the amount of security-critical transactions conducted on such systems is steadily increasing.

Now think about smartphones and so-called mobile Internet devices, which are increasingly being used for critical transactions and which may replace PCs as the access device of choice for most Web services. These devices are, in many respects, starting to resemble PCs - albeit with a much more convenient form factor. Does this mean that they will be as prone to security exploits, and that we are heading towards full-scale disaster as far as the security of our Web services is concerned?

We might be, if we don't do something about it. The good news is that the technology to protect us exists.

Mobile virtualisation

The first part is old technology in a new guise - virtualisation. Originating in the mainframe world of the 1960s, it has recently caused a revolution in the data centre. By partitioning a physical server into several virtual machines, each appearing to software like a real machine clearly separated from all others, a virtual machine monitor - known as a hypervisor - enables users to consolidate multiple logical servers onto one computer. This helps cut hardware costs and power bills, without compromising the isolation of one logical server from another within the computer.

This data-centre scenario does not have much in common with the way we use a mobile device, so what use is virtualisation here?

The key to the answer is a fundamental property of virtualisation: isolation. Because virtualisation partitions a single computing platform into multiple isolated logical machines, which appear to other systems as if they were separate physical entities, it implicitly provides firewalls that can be deployed for security purposes. These firewalls are provided by the hypervisor, which, to the software running in each virtual machine - the so-called guest software - appears to be part of the hardware.

Take this a step further. Imagine a PC that is used to access online services as we do today, except that whenever the time comes to enter a PIN or password, that PIN is not entered using the PC's keyboard, but with a physically separate device. As long as the remote server can ascertain that the PIN entry comes directly from the separate device - for example, through a secure network link - and that the device is itself tamper-proof, the transaction can be secured even if the PC is infected with every virus under the sun.

I'm simplifying things a bit here - there still needs to be some secure visual feedback on what you're actually authorising - but the point remains.

It all sounds fine, if a bit cumbersome - and it's certainly not suitable for a mobile device. After all, no one wants to carry a separate PIN-entry device around with them.

This is where virtualisation comes in: we can simulate this detached PIN-entry device by putting it into a separate virtual machine. The normal phone software runs in its own virtual machine - or, more likely, several. The hypervisor ensures that the security-critical PIN-entry functionality is completely protected from any potentially compromised software running in other virtual machines. Again, I'm simplifying: the hypervisor also needs to ensure that access to the keyboard and any visual feedback are secure.

So, what about the hypervisor itself? Why won't it become compromised?

Hypervisor security

This is a good question, particularly if you think of enterprise-style hypervisors. These are big systems which, together with their security-critical management components, can run to millions of lines of code. Experience tells us that even well-engineered software averages several bugs per thousand lines of code (kLoC), so these hypervisors can be expected to contain thousands of bugs. Some fraction of those will be relevant to security, creating vulnerabilities that hackers can exploit.

So, the key to security is size: small is beautiful. This is one reason why a well-designed hypervisor for mobile use is very small. By adapting it to the specific requirements of mobile systems (and the similar requirements of other consumer-electronics devices), and by careful design and implementation that focuses on providing the minimum set of primitives, it's possible to limit the code size to around 10 kLoC, as in the OKL4 hypervisor from Open Kernel Labs. Due to its minimalist design, and design principles inherited from operating-system microkernels, such a small hypervisor is called a microvisor.

Even with traditional software-engineering methods, the number of bugs in such a small code base can be reduced to a few handfuls. And, with luck, none of them will constitute security vulnerabilities. This is helped by stability: once you have a well-designed hypervisor that supports a small set of primitives well matched to its application domain, there is little need to change it. And certainly there is no need to change it once it has been deployed on the device, as the user-visible device functionality does not depend on it.

You might be forgiven for thinking that all of this is questionable: clearly, the argument that the hypervisor cannot be compromised is more than a little hand-wavy so far.

Formal verification

In fact, this is no longer the case, according to researchers at NICTA, Australia's National Centre of Excellence for ICT Research. They recently announced the successful formal verification of their seL4 microkernel, a kernel that shares its ancestry with OKL4.

What is formal verification? It means establishing the properties of systems using mathematical methods and rigour. But even that needs further explanation, as different people use the term in different ways.

Static-analysis tools are frequently used to detect bugs in programs. The more sophisticated ones use formal methods to determine the presence (or absence) of certain properties in the code. The formal methods behind such approaches fall into a category called 'model checking'. While model-checking approaches can be useful for eliminating bugs in programs, including very large ones, their scope is limited to checking relatively simple and well-defined properties. As such, they can only identify bugs from a narrow class of problems.

The NICTA researchers are using a kind of formal verification called a refinement proof. It shows, in a very strict sense, a correspondence between a high-level and a low-level representation of a system. More precisely, it proves (with mathematical rigour) that all the possible behaviours of the low-level representation are completely captured in the high-level representation. In this case, think of the high-level representation as the functional description (that is, the application programming interface) of the kernel, formulated in a mathematical language. Think of the low-level representation as the actual kernel implementation.

Refinement proofs have been performed on some very small operating-system kernels before, but two things make the verification of seL4 stand out.

The first is that the low-level representation of the kernel is the actual source code of seL4, written in the C language. Past verification projects stopped at a low-level model of the kernel, rather than at its actual code, requiring an informal, and therefore non-rigorous and inexact, step of arguing the correspondence between the model and the actual code.

The second is that seL4 is a general-purpose platform, which enables developers to build arbitrary systems on top. For example, seL4 can serve as a hypervisor, running Linux in a virtual machine at essentially the same performance as the OKL4 hypervisor. So, seL4 is not a toy or an academic exercise, but is suitable for use in the real world.

Together these attributes make seL4 what is believed to be the world's first general-purpose operating-system kernel, or hypervisor, that has been formally verified down to the actual source code.

Implications

We now have a rock-solid foundation on which to build secure systems. A formally verified hypervisor can be truly considered an extension of the hardware - it is as reliable and secure as the underlying hardware. It can therefore isolate virtual machines as securely as if they were running on separate hardware - exactly what is needed for security.

This means we have the building blocks to make mobile devices secure, and to avoid repeating the mistakes of the past. Mobile devices will evolve to take on more of the functionality we know from PCs, but we can make it happen without some of the distasteful side-effects.