Trusted computing and systems

We can use hardware to make systems more secure, but doing so can also make those systems inflexible.

Overview

Since security is an evolving problem, with attackers competing to find flaws and vulnerabilities in defensive mechanisms, inflexibility can be a real issue. We are working to design security mechanisms that provide some flexibility and can still be implemented cheaply in hardware.

ASTRID (AddreSsing ThReats for virtualIseD services)

Cloud-based services often follow the same logical structure as private networks, but the lack of physical boundaries and the dependence on a third party’s infrastructural security mechanisms often undermine confidence in the overall security level of virtualised applications.

Prompted by this growing trend, the ASTRID project aims to build situational awareness for virtualised services, facilitating the detection of sophisticated cyber-attacks and prompting an automated response. This would effectively shift the responsibility for security, privacy and trustworthiness from developers or end users to service providers, and would foster the transition to novel microservices architectures supporting unified access and encryption management, correlation of events and information among different services/applications, and legal interception and forensics investigation.

In this project, the focus is on detecting vulnerabilities and threats in individual applications as well as across the entire service graph, and also establishing trusted microservices. The novelty lies in decoupling detection algorithms from monitoring and inspection tasks, seeking better integration with virtualisation frameworks.

The growing adoption of cloud technologies and the trend to virtualise applications are inexorably re-shaping the traditional security paradigms, due to the increasing usage of infrastructures outside of the enterprise perimeter and shared with other users.

The need for more agility in software development and maintenance has also fostered the transition to microservices architectures, and the wide adoption of this paradigm has led service developers to protect their applications by including virtualised instances of security appliances in their design. Unfortunately, this often results in security being managed by people without sufficient skills or specific expertise. This approach may also fail to cope with threats coming from the virtualisation layer itself (e.g., hypervisor bugs), exposes security appliances to the same threats as the other application components, and complicates legal interception and investigation when some applications or services are suspected of illegal activity.

To overcome the above limitations, the ASTRID project aims at shifting the detection and analysis logic outside of the service graph, by leveraging descriptive context models and their usage in ever smarter orchestration logic, hence shifting the responsibility for security, privacy, and trustworthiness from developers or end users to service providers. This approach brings new opportunities for situational awareness in the growing domain of virtualised services: unified access and encryption management, correlation of events and information among different services/applications, support for legal interception and forensics investigation.

ASTRID will develop a common approach easily portable to different virtualisation scenarios. In this respect, the technology developed by the project will be validated in two relevant domains, i.e., plain cloud applications and Network Function Virtualisation, which typically exploit rather different chaining and orchestration models.

FutureTPM

Under the technical lead of the University of Surrey, a consortium of 14 academic and industry partners from across Europe is researching a Quantum-Resistant (QR) Trusted Platform Module (TPM) – a hardware chip used as a ‘root of trust’ for a computing system.

The aim is to develop QR crypto algorithms that can be used in a new generation of TPM-based solutions to enable security when quantum computers become a reality – which could be as little as 15 years away.

Three use cases are being developed to test the algorithms in sectors where privacy and security are crucial: online banking, activity tracking in healthcare, and device management.

The goal of FutureTPM is to design a Quantum-Resistant (QR) Trusted Platform Module (TPM) by developing QR algorithms suitable for inclusion in a TPM. The algorithm design will be accompanied by implementation and performance evaluation, as well as formal security analysis, in the full range of TPM environments: i.e. hardware, software and virtualisation environments. Use cases in online banking, activity tracking and device management will provide environments and applications to validate the FutureTPM framework.

Security, privacy and trust in a computing system are usually achieved using tamper-resistant devices to provide core cryptographic and security functions. The TPM is one such device and provides the system with a root-of-trust and a cryptographic engine. However, to sustain this enhanced system security it is crucial that the crypto functions in the TPM are not merely secure for today but will also remain secure in the long-term against quantum attacks.
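The root-of-trust role can be illustrated by the TPM's Platform Configuration Registers (PCRs): each boot-stage measurement is folded into a register by hashing, so the final value summarises the entire boot chain and cannot be rewound. Below is a simplified sketch of the extend operation in Python, assuming SHA-256 and illustrative stage names; a real TPM performs this in hardware across multiple PCR banks.

```python
import hashlib

def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM-style extend: new PCR = H(old PCR || H(measurement))."""
    digest = hashlib.sha256(measurement).digest()
    return hashlib.sha256(pcr + digest).digest()

# Start from the all-zero reset value and fold in each boot stage in order.
pcr = bytes(32)
for stage in [b"firmware", b"bootloader", b"kernel"]:
    pcr = pcr_extend(pcr, stage)

print(pcr.hex())
```

Because the extend operation is one-way, changing, omitting or reordering any stage yields a different final PCR value, which is what lets a remote verifier detect tampering with the boot chain.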

FutureTPM will address this challenge by providing robust and provably-secure QR algorithms for a new generation of TPMs. Research on quantum computers has drawn enormous attention from governments and industry; if, as predicted, a large-scale quantum computer becomes a reality within the next 15 years, existing public-key algorithms will be open to attack. Any significant change to a TPM takes time and requires theoretical and practical research before adoption. Therefore, to ensure a smooth transition to QR cryptography we should start now. A key strategic objective of FutureTPM is to contribute to standardization efforts at EU level within TCG, ISO/IEC and ETSI. The consortium consists of high calibre industrial and academic partners from across Europe, combining QR crypto researchers with TPM developers. Because the TPM shares many functions in common with other widely-used devices, such as HSMs and TEEs, the FutureTPM solution is expected to benefit them as well.
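Hash-based signatures are one family of quantum-resistant primitives, since they rely only on the one-wayness of a hash function rather than on factoring or discrete logarithms. As an illustration of the flavour of QR cryptography (not one of FutureTPM's actual candidate algorithms), here is a minimal Lamport one-time signature sketch in Python:

```python
import hashlib
import secrets

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def keygen():
    # Two random secrets per digest bit; the public key is their hashes.
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(256)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def bits(msg: bytes):
    d = H(msg)
    return [(d[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(sk, msg: bytes):
    # Reveal one secret per bit of the digest; the key must be used only once.
    return [pair[bit] for pair, bit in zip(sk, bits(msg))]

def verify(pk, msg: bytes, sig) -> bool:
    return all(H(s) == pair[bit] for s, pair, bit in zip(sig, pk, bits(msg)))

sk, pk = keygen()
sig = sign(sk, b"attest")
```

The scheme is impractical on its own (large keys, one signature per key pair), but it shows why such constructions resist quantum attack: forging a signature requires inverting the hash function, a problem a quantum computer does not solve efficiently.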

Verifiably correct transactional memory

Multi-core computing architectures have become ubiquitous over the last decade. This has been driven by the demand for continual performance improvements to cope with the ever-increasing sophistication of applications, combined with physical limitations on chip designs, whereby speedup via higher clock speeds has become infeasible. The inherent parallelism that multi-core architectures entail offers great technical opportunities, however, exploiting these opportunities presents a number of technical challenges.

To ensure correctness, concurrent programs must be properly synchronised, but synchronisation invariably introduces sequential bottlenecks, causing performance to suffer. Fully exploiting the potential for concurrency requires optimisations to consider executions at low levels of abstraction, e.g., the underlying memory model, compiler optimisations, cache-coherency protocols etc. The complexity of such considerations means that checking correctness with a high degree of confidence is extremely difficult. Concurrency bugs have been specifically attributed to disasters such as a power blackout in the north-eastern USA, Nasdaq's botched IPO of Facebook shares, and the near failure of NASA's Mars Pathfinder mission. Other safety-critical errors have arisen from low-level optimisations, e.g., the double-checked locking bug and the Java Parker bug.

This project improves the programmability of concurrent programs through the use of transactional memory (TM), a concurrency mechanism that makes low-level optimisations available to general application programmers. TM is an adaptation of transactions from databases. TM operations are highly concurrent (which improves efficiency), yet manage synchronisation on behalf of the programmer to provide an illusion of atomicity. Thus, by using TM, the programmer's focus switches to what should be made atomic, rather than how atomicity should be guaranteed. This means concurrent systems can be developed in a layered manner (enabling a separation of concerns).
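The "what, not how" contrast can be made concrete with a toy software TM. In this hypothetical sketch (a minimal optimistic design, not the project's implementation), the programmer writes a transfer as a transaction; the runtime records reads and buffered writes, validates versions at commit, and transparently retries if another transaction committed first.

```python
import threading

class TVar:
    """A transactional variable with a version counter."""
    def __init__(self, value):
        self.value = value
        self.version = 0

_commit_lock = threading.Lock()

def atomically(txn):
    """Run txn(read, write) with an illusion of atomicity.

    Reads are optimistic; commit re-validates the versions seen and
    retries the whole transaction if any TVar changed underneath it."""
    while True:
        reads, writes = {}, {}
        def read(tv):
            if tv in writes:
                return writes[tv]          # read-your-own-writes
            reads.setdefault(tv, tv.version)
            return tv.value
        def write(tv, value):
            writes[tv] = value             # buffered until commit
        result = txn(read, write)
        with _commit_lock:
            if all(tv.version == ver for tv, ver in reads.items()):
                for tv, value in writes.items():
                    tv.value = value
                    tv.version += 1
                return result
        # Validation failed: another transaction committed first; retry.

# The programmer states *what* must be atomic (the transfer),
# not *how* to lock the two accounts.
a, b = TVar(100), TVar(0)
def transfer(read, write):
    write(a, read(a) - 30)
    write(b, read(b) + 30)
atomically(transfer)
```

Note how `transfer` contains no locks at all: deadlock avoidance, lock ordering and rollback are the runtime's concern, which is exactly the separation of concerns the paragraph describes.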

E-voting

One of the major highlights of our Group's past successes in applying cyber security research into the real world is the EPSRC-funded Trustworthy Voting Systems project.

Past projects

While various ‘e-voting’ systems have been piloted around the world, researchers in Surrey’s Department of Computing led by Professor Steve Schneider, with Dr Chris Culnane as lead system architect, have developed the world’s first end-to-end verifiable electronic voting system. This was successfully deployed in the State of Victoria election in Australia in November 2014.

The Victorian election constituted a number of world firsts:

First time an end-to-end verifiable electronic voting system was deployed in a state-wide statutory political election worldwide

First time blind voters have been able to cast a fully secret vote in a verifiable way

First time a verifiable voting system has been used to collect remote votes in a political election

Based on open source code, the verifiable voting system is a secure and trustworthy electronic voting system which protects against fraud, and fosters greater trust in the electoral process, by allowing voters to check that their votes have been accurately recorded. The system also encrypts receipts so that votes remain completely secret.

The system features a printed ballot form with the candidates listed in a randomised order (i.e. different on each ballot form). The voter makes their selection and then destroys the list of candidates, retaining and casting their marked preference list for verifiable tallying.
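The mechanics can be sketched as follows: because each ballot carries its own random candidate order, the cast half records only marked positions, which reveal nothing without that order. This is a deliberately simplified Python illustration with made-up candidate names; the real Prêt à Voter design commits to each ballot's ordering cryptographically rather than via a seed, and decrypts through a verifiable mix.

```python
import random

CANDIDATES = ["Ada", "Ben", "Cat", "Dan"]  # illustrative names only

def make_ballot(seed: int) -> list:
    """Each printed ballot lists the candidates in its own random order."""
    order = CANDIDATES[:]
    random.Random(seed).shuffle(order)
    return order

def cast(order: list, ranking: list) -> list:
    """The voter ranks candidates, destroys the candidate list, and casts
    only the positions of their preferences -- meaningless on their own."""
    return [order.index(c) for c in ranking]

def tally_decode(positions: list, seed: int) -> list:
    """The election authority, which alone can reconstruct the ballot's
    ordering, recovers the voter's ranking for tallying."""
    order = make_ballot(seed)
    return [order[p] for p in positions]

order = make_ballot(seed=42)
receipt = cast(order, ["Cat", "Ada", "Ben", "Dan"])
```

The receipt (a list of positions) is what the voter can later check on a public bulletin board: it proves their vote was recorded without revealing which candidates it names.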

An early prototype based on the ‘Prêt à Voter’ design was developed in 2007 at Surrey, winning ‘best system design’ at VoComp 2007. In 2011 the team was approached by the State of Victoria and spent three years refining the system to meet the needs of the Victorian election in 2014.

The State of Victoria has a proud history of innovation in voting systems, having introduced the secret ballot and deployed the world’s first secret ballot election in 1856. It had already run electronic voting systems at its 2006 and 2010 elections but was seeking a verifiable system to give better assurances of the integrity of the ballot.

With voting compulsory in Australia, the election authorities are obliged to make every effort to enable people to vote, so better accessibility for blind, partially sighted and motor impaired voters was a key requirement. Elections also need to cater for the broad range of languages spoken by Victoria’s citizens, as well as expatriate Australians living in other countries around the world. In addition, since Victorian elections are based on the single transferable vote, the ballot is very complex, with voters required to rank a list of around 40 candidates in their preferred order.

Surrey’s verifiable voting system was able to meet these needs and provide a chain of links all the way from the initial casting of the vote right through to the tallying, reassuring voters that their vote has been cast as they intended. By incorporating an audio interface, the system enabled blind and partially-sighted voters to cast a fully secret vote in a verifiable way.

The verifiable voting system was deployed for the last two weeks of November 2014 for ‘early voting’ at 24 voting centres in Victoria, where it was offered to particular target groups of voters (the blind, partially-sighted and motor impaired). It was also made available to all voters at the Australia Centre in London.

In this controlled deployment, the verifiable voting system ran perfectly with no need for rebooting throughout the two-week period. A total of 1,121 votes were cast, with a very low level of spoilt ballots (1.9 per cent, compared with 4.3 per cent for paper voting). A survey of voters in London found that 75 per cent preferred the electronic system to paper voting.

In separate tests, the system proved capable of handling a million votes, responding to individual voters within 10 seconds and accepting 800 votes in a 10-second period.

Following the success of the verifiable voting system at the Victorian election, Professor Schneider and his team are looking at opportunities for commercialisation and roll-out of the system.