Introduction To Trusted Computing

Trusted Computing was defined by the Trusted Computing Group (TCG, formerly known as Trusted Computing Platform Alliance or TCPA) as a set of industry standards revolving around the specification of a Trusted Platform (TP). The TCG was founded in 2003 and is, in its own words (see https://www.trustedcomputinggroup.org/about/), "a not-for-profit organization formed to develop, define, and promote open standards for hardware-enabled trusted computing and security technologies, including hardware building blocks and software interfaces, across multiple platforms, peripherals, and devices. TCG specifications will enable more secure computing environments without compromising functional integrity, privacy, or individual rights. The primary goal is to help users protect their information assets (data, passwords, keys, etc.) from compromise due to external software attack and physical theft." The TCG now has 170 members from a variety of industries.

In the security field, the traditional definition of trust was first given in "Trusted Computer System Evaluation Criteria (TCSEC)," also known as the Orange Book, published by the U.S. Department of Defense in 1983, where "a trusted system or component is defined as one whose failure can break the security policy; and a trustworthy system or component is defined as one that will not fail." The definition chosen by the TCG is different from, but not inconsistent with, this definition and encompasses the results of years of experience in the security field in a simple yet effective formulation: "A trusted system or component is one that behaves in the expected manner for a particular purpose." Though this definition does not take into account the many facets of the human notion of trust, it does suit the concept's purpose in the context of the technological elements that the TCG aims to specify. Fundamentally, an element of the computing platform can be trusted if 1) it can be identified without ambiguity; 2) it operates unhindered; and 3) its user has first-hand experience of good behavior or trusts someone who has provided a recommendation for good behavior. The various components of Trusted Computing contribute to achieving these three aspects of the trust property in a variety of computing contexts.

*This work has been partially funded by the European Commission (EC) as part of the OpenTC project (ref. no. 027635). It is the work of the author alone and may not reflect the opinion of the whole project.

The work of the TCG, at the time when it was still the TCPA, was heavily criticized because of the historical security blunders of some of its founders, such as Microsoft and Intel, at a time when exposure to security threats was at a maximum. One of the main concerns of the "anti-TCPA" groups, in particular the Electronic Frontier Foundation (EFF), was that privacy would not matter to the TCG because it was trying to lock computers down to proprietary computing solutions. This movement was exemplified by Richard Stallman's article, "Can You Trust Your Computer?" (available at http://www.gnu.org/philosophy/can-you-trust.html) and by the claim that Trusted Computing was created solely to implement Digital Rights Management (DRM) systems, a technology created partly to prevent the copying and illegal distribution of copyrighted content (e.g., multimedia files). But now that the technology has matured and is embraced by a large part of the industry, including its free/open-source members and communities, it can be seen that many of the underlying issues have been addressed: privacy, via the introduction of new anonymity mechanisms, and the proprietary aspect of the technology, as most components and tools are now freely available for Linux.

Trusted Computing specifications are broken into various groups of standards: Infrastructure, Mobile, PC Client, Server, Storage, Trusted Network Connect (TNC), Trusted Platform Module (TPM), and TCG Software Stack (TSS). Each set of specifications tackles particular problems or provides solutions tailored to particular environments (e.g., mobile and server). The TPM specifications define the platform's core elements, whereas all networked components are described in the TNC specifications. The interested reader can find more information on the TCG website, where all specifications are publicly and freely available: https://www.trustedcomputinggroup.org.

Trusted Computing relies on three fundamental, core elements: measurements, roots of trust, and the chain of trust. Measurements, also called integrity measurements, are the means to reliably identify a piece of software; they are obtained by applying a hash function, or integrity metric (currently SHA-1), to a program binary, yielding a unique 160-bit (20-byte) identifier for that program. These measurements do not intrinsically correspond to particular degrees of (un)trustworthiness, as this judgment is left to the entities requesting the measurements from a platform. The set of all measurements available on a given platform defines the state of that platform, which identifies exactly what software is in control of execution and how it was started. A root of trust is an element that needs to be trusted for the particular purpose it was designed for. It generally designates a program small enough that its properties can be well defined and analyzed, granting the program a high level of trustworthiness. The TCG defines three basic roots of trust: the Root of Trust for Measurement (RTM), which is used to obtain reliable measurements of programs (the Core RTM, or CRTM, is the part of the RTM that measures the program executed after a platform reset, i.e., the first program in the boot process); the Root of Trust for Storage (RTS), which is used to store data on the Trusted Platform in a trustworthy manner; and the Root of Trust for Reporting (RTR), which is used to report integrity measurements to entities requesting them. The chain of trust designates the sequence of programs starting at the CRTM in which each program is measured by the one that precedes it, thus ensuring that every program in the sequence is measured before it is executed. The archetypal example of a chain of trust is the boot sequence modified by the use of Trusted Computing, called authenticated boot.
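As a concrete illustration, in the TPM 1.2 family an integrity measurement is simply the SHA-1 digest of the program image. The sketch below (the sample "program image" bytes are, of course, hypothetical) shows how such a 160-bit identifier is computed:

```python
import hashlib

def measure(program_image: bytes) -> bytes:
    """Return the 160-bit (20-byte) SHA-1 integrity measurement of a binary."""
    return hashlib.sha1(program_image).digest()

# A stand-in for the bytes of a program binary (e.g., an ELF executable).
digest = measure(b"\x7fELF...example program image...")
print(digest.hex())   # 40 hex characters = 160 bits
print(len(digest))    # 20 bytes
```

Any change to even a single byte of the binary produces a completely different digest, which is what makes the measurement a reliable identifier of the software.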

The general architecture of Trusted Computing (see Figure 12-1) revolves around a central component called the Trusted Platform Module (TPM). The TPM is usually implemented as a separate secure chip integrated with the motherboard, but this integration is not mandatory: the TPM can also take alternative forms, such as a subcomponent of the chipset, a secure chip on a daughterboard, a software emulation, or a virtualized TPM. The TPM is a tamper-evident element that contains the RTS and the RTR, in addition to volatile and nonvolatile memory (in particular, a minimum of 16 Platform Configuration Registers (PCRs) used to store integrity measurements), cryptographic capabilities (secure hashing, HMAC, RSA key generation and storage, RSA encryption and signing, and true random number generation), and opt-in commands that enable the use of the TPM.

Figure 12-1 General structure of a Trusted Platform (TP)

The TPM is a passive chip: it does not execute commands by itself, but only when requested by running programs. It has to be explicitly enabled and activated, which is usually done via a TCG-compliant BIOS; the platform owner then initializes it using Trusted Computing tools in an operation called taking ownership. The TPM contains a unique 2048-bit RSA key pair called the Endorsement Key (EK). The private part of the EK is used internally (it is never exposed outside the TPM) to perform the TPM's operations securely, while the public part can be exported to anyone outside the TPM and is associated with an Endorsement Certificate (usually signed by the TPM manufacturer) attesting that the key is unique and satisfies the properties defined by the TCG. The keys generated by the TPM, as well as any other secret that the user requests be protected by the TPM, are stored in a key hierarchy protected by a Storage Root Key (SRK).

The TPM can generate cryptographic keys on request; the keys generally used for identity purposes are Attestation Identity Keys (AIKs). AIKs are associated with certificates obtained from Privacy Certification Authorities (Privacy-CAs, or P-CAs) using a protocol that verifies the validity of the Endorsement Certificate and attests that the AIK belongs to a genuine TPM. This key certification process preserves privacy, as only the P-CA can correlate AIKs with the EK of the TPM that created them. Another identity certification process, called Direct Anonymous Attestation (DAA), involves more complex cryptography and can be used to significantly improve the privacy properties of identity certification, because keys can only be traced back to a group of TPMs, not to an individual TPM. It is important to note that all commands involving the manipulation of cryptographic keys are executed inside the TPM, in such a way that the private part of a key pair is never visible in the clear outside the TPM.

Many TPM commands are associated with 20-byte authorization data that has to be supplied at particular times depending on the command (e.g., when the TPM is activated or when a piece of data is stored securely using the TPM) and is used during challenge/response protocols. The two authorization protocols defined by the TCG (OIAP and OSAP) ensure that entities are authorized to request the execution of commands and enable the creation of sessions in which several TPM commands run in sequence. Some TPM commands require physical presence, which ensures that the requester of a command is physically present in front of the platform; this typically corresponds to pressing a specific key, for example, during the TPM activation process at boot.
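The authorization protocols are built on HMAC-SHA1 keyed with the 20-byte authorization secret. The sketch below is a deliberately simplified model, not the exact TPM 1.2 wire format (the command name and the way fields are concatenated here are illustrative); it shows the essential idea of proving knowledge of the secret without ever sending it:

```python
import hashlib
import hmac
import os

AUTH_LEN = 20  # authorization data is always 20 bytes (a SHA-1 digest)

# The owner chooses a passphrase; its SHA-1 digest is the shared authdata.
authdata = hashlib.sha1(b"owner passphrase").digest()
assert len(authdata) == AUTH_LEN

# Challenge/response: the TPM and the caller each contribute a fresh nonce
# (the "rolling nonces" of OIAP sessions), and the caller authorizes a
# command by HMACing the command parameters together with both nonces
# under the shared authdata.
nonce_even = os.urandom(20)   # generated by the TPM
nonce_odd = os.urandom(20)    # generated by the caller
command_params = b"TPM_Example_Command"  # hypothetical command

proof = hmac.new(authdata, command_params + nonce_even + nonce_odd,
                 hashlib.sha1).digest()

# The TPM, which knows the same authdata and nonces, recomputes the HMAC
# and compares it in constant time before executing the command.
expected = hmac.new(authdata, command_params + nonce_even + nonce_odd,
                    hashlib.sha1).digest()
print(hmac.compare_digest(proof, expected))  # True
```

Because fresh nonces enter every HMAC, a captured authorization value cannot be replayed against a later command.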

Integrity measurement corresponds to the application of the integrity metric to a program; the calculated digest is then usually stored in special registers of the TPM called Platform Configuration Registers (PCRs). PCRs are said to be extended: when an entity requests that a new value be added to a PCR, the new value is concatenated with the current PCR value, the result of the concatenation is hashed, and the resulting digest is stored in the PCR. Through this extension mechanism, a PCR in fact stores a chained hash of all the values ever input into it. Each time a PCR is extended, the command and its argument are logged in a Stored Measurement Log (SML) that can be used for auditing the system. Platform-specific TCG specifications define which PCR should contain the measurement of which program; for example, the PC-specific specification reserves PCR[0] (the first PCR, numbered zero) for measuring the different parts of the BIOS (including the CRTM) and host platform extensions. Integrity reporting is the attestation operation that delivers the recorded integrity measurements to an entity requesting them and is managed by the RTR.

Figure 12-2 Architecture of a Trusted Platform Module (TPM) and the roots of trust
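The extend operation can be sketched as PCR_new = SHA1(PCR_old || measurement); the register width and hash function below are those of the TPM 1.2 family, while the measured component names are purely illustrative:

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM 1.2-style PCR extend: new value = SHA1(old value || measurement)."""
    return hashlib.sha1(pcr + measurement).digest()

pcr = b"\x00" * 20                       # PCRs start zeroed at platform reset
log = []                                 # model of the stored measurement log
for component in (b"BIOS", b"boot loader", b"kernel"):
    m = hashlib.sha1(component).digest() # measure the component first...
    pcr = extend(pcr, m)                 # ...then record it in the PCR
    log.append((component, m.hex()))     # and append it to the log for auditing

# The final PCR value is a chained hash of every value extended into it;
# a verifier can replay the log and check that it reproduces the PCR.
replay = b"\x00" * 20
for _, m_hex in log:
    replay = extend(replay, bytes.fromhex(m_hex))
print(replay == pcr)  # True
```

Note that extension is order-sensitive and irreversible: no sequence of further extends can return a PCR to a previously seen value, which is what makes the register a trustworthy record of the boot sequence.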

Binding is the action of encrypting particular content using the public key of a TPM so the content can only be decrypted by that particular TPM, provided the TPM key is non-migratable (i.e., the TPM prevents its migration to other TPMs). Sealing is an extended form of binding in which the content can only be decrypted if the platform exhibits a particular set of platform metrics (i.e., one or more PCR values), called a platform configuration or state. This set of PCRs is specified when the data is sealed. If, at the time of unsealing, the PCR values do not match the configuration specified at sealing time, the content cannot be decrypted. The sealing operation thus ensures that the content is only available in a particular execution environment, designated by the hash values of the desired programs stored in the PCRs.
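Sealing can be modeled as deriving the protection key from both a TPM-internal secret and the expected PCR state. The toy sketch below uses XOR in place of the TPM's real storage-key encryption, and all secrets and state values are hypothetical; it only aims to show why unsealing fails when the platform state changes:

```python
import hashlib

# Stand-in for a secret held inside the TPM (e.g., under the SRK hierarchy).
SRK_SECRET = hashlib.sha1(b"tpm-internal storage root secret").digest()

def derive_key(pcr_composite: bytes) -> bytes:
    # The key depends on both the TPM-held secret and the platform state.
    return hashlib.sha1(SRK_SECRET + pcr_composite).digest()

def xor(data: bytes, key: bytes) -> bytes:
    # Toy cipher standing in for the TPM's real encryption.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def seal(data: bytes, pcr_composite: bytes) -> bytes:
    return xor(data, derive_key(pcr_composite))

def unseal(blob: bytes, current_pcr_composite: bytes) -> bytes:
    # Decrypts correctly only if the current state matches the sealed state.
    return xor(blob, derive_key(current_pcr_composite))

trusted = hashlib.sha1(b"PCR composite of the trusted configuration").digest()
blob = seal(b"disk encryption key", trusted)

print(unseal(blob, trusted) == b"disk encryption key")   # True: state matches
tampered = hashlib.sha1(b"tampered configuration").digest()
print(unseal(blob, tampered) == b"disk encryption key")  # False: wrong state
```

A real TPM additionally refuses to release the plaintext at all when the PCRs mismatch, rather than returning garbage as this toy model does.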

In the context of Trusted Computing, attestation is the vouching of the Trusted Platform's trust properties to an external entity (e.g., a remote program) that requests proofs of these trust properties. The attestation mechanism corresponds to several different situations:

• Attestation by the TPM provides proof of data known to the TPM. For this situation, data internal to the TPM are digitally signed by an Attestation Identity Key (AIK), which is the platform identity that is used during this attestation exchange.

• Attestation to the platform provides the proof that a platform can be trusted to report integrity measurements. This corresponds to the use and validation of the set of credentials related to the platform, such as a Platform Certificate.

• Attestation of the platform provides proof of a set of the platform's integrity measurements. An AIK is used to digitally sign a set of PCRs to show the platform configuration in a trustworthy manner.

• Authentication of the platform provides evidence of the trustworthiness of a given platform identity. Similar to the situation of attesting to the platform, this operation involves the use and validation of identity certificates.
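Attestation of the platform, the third situation above, can be sketched as a signed "quote" over selected PCR values and a verifier-supplied nonce. In the sketch below, an HMAC stands in for the AIK's RSA signature (a real TPM signs with the AIK private key, and the composite structure has a defined format); the component names are illustrative:

```python
import hashlib
import hmac
import os

aik_secret = os.urandom(20)  # stand-in for the AIK private key

def quote(pcr_values, nonce: bytes) -> bytes:
    # Hash the selected PCRs into a composite, then sign composite || nonce.
    composite = hashlib.sha1(b"".join(pcr_values)).digest()
    return hmac.new(aik_secret, composite + nonce, hashlib.sha1).digest()

def verify(pcr_values, nonce: bytes, signature: bytes) -> bool:
    composite = hashlib.sha1(b"".join(pcr_values)).digest()
    expected = hmac.new(aik_secret, composite + nonce, hashlib.sha1).digest()
    return hmac.compare_digest(expected, signature)

pcrs = [hashlib.sha1(c).digest() for c in (b"BIOS", b"boot loader", b"kernel")]
nonce = os.urandom(20)          # verifier's challenge: guarantees freshness
sig = quote(pcrs, nonce)

print(verify(pcrs, nonce, sig))             # True: state and nonce check out
print(verify(pcrs, os.urandom(20), sig))    # False: stale or replayed quote
```

The nonce is what lets the verifier distinguish a fresh report of the current platform state from a replay of an old one.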

The TPM specification is actually only valid for PC and server platforms. For the mobile platform (e.g., mobile phones, PDAs, embedded devices), the Mobile Phone Working Group of the TCG specified a specialized version of the TPM called the Mobile Trusted Module (MTM). The MTM has many similarities to the TPM but can take two forms, depending on which stakeholder (e.g., the device manufacturer, the network service provider, the enterprise, the content provider, or the user/owner) it is bound to. The Mobile Local Trusted Module (MLTM) is a close variant of the TPM, with restrictions that allow it to be implemented on mobile platforms with very constrained hardware resources, such as limited processing power and memory. The Mobile Remote Trusted Module (MRTM) is a version of the MLTM that enables remote entities (such as the phone manufacturer or the cellular network provider) to preset some parts of the phone to preestablished values.

Other elements of the Trusted Computing infrastructure include Trusted Network Connect (TNC), which allows Trusted Platform functionalities to be leveraged at the network level to enforce network security policies based on endpoint configuration data: for example, computers can be granted access to certain networks only when they run particular flavors of the Linux kernel, or denied access to certain services if they run particular execution environments such as Java. The TCG Infrastructure workgroup also specified various gluing elements, such as XML APIs, used for capturing and reporting integrity information, and the Integrity Measurement Architecture (IMA), which extends the chain of trust from boot components to more complex software, such as operating system kernels and system services.

You should note that Trusted Computing does not stop, per se, at the TCG specifications. The traditional notion of the Trusted Computing Base (TCB), which designates the set of platform components that must be trusted in order to trust the platform, encompasses the TCG elements (e.g., the RTM and the TPM) and the software directly related to them (e.g., the TPM device driver). The TCB also includes the chain-of-trust programs (BIOS, boot loader, and operating system loader) and possibly parts of the operating system kernel (e.g., the device and memory managers). On the other hand, the TSS, the Application Programming Interface (API) used by general software to interact with the TPM, is not part of the TCB, because it is a large and complex piece of middleware that cannot easily be analyzed.

One recent development in Trusted Computing is the introduction of hypervisors, also called Virtual Machine Monitors (VMMs) or virtualization layers. This technology introduces an additional software layer between the hardware and the software in order to provide compartments in which an operating system can run isolated from other compartments. Hypervisors were originally used on server platforms for executing and managing multiple environments in parallel, but it turns out that their isolation property satisfies the unhindered-operation requirement essential for trust. VMware was among the first to implement this technology, and more and more open-source hypervisors have since been developed, such as Xen, L4, and the more recent KVM Linux kernel module. Executing the hypervisor requires a higher privilege level than the traditional "ring 0" granted to the operating system kernel; this is achieved either by pushing the operating systems that run on the hypervisor into ring 1 or by providing CPU instructions that create a new, more privileged execution mode for the hypervisor.

The next sections provide an overview of the broad spectrum of security attacks that can be prevented using Trusted Computing and examples of the Linux support tools and applications currently available. Some of the concepts of Trusted Computing are explored in more depth in the next sections.
