The Headaches Of Being A SoC Security Architect

A modern System-on-Chip (SoC) has a wide array of strict and difficult-to-verify security properties. Locking critical configuration or key registers, correctly implementing interconnect access control rules, and securely configuring the system during boot are issues that pain just about every SoC Security Architect. Architects spend hours reviewing documentation from verification teams and discussing these issues with separate product security teams, a process that consumes an enormous amount of time and yields inadequate results. The solution is a Design-for-Security methodology built on verification platforms that augment the existing verification tool chain, giving SoC Security Architects the ability to work with their verification teams and achieve higher-quality results at dramatically reduced cost. In this blog, I will discuss some specific SoC access control vulnerabilities and how they can be addressed with security platforms that leverage the existing verification tool chain.

Example: Register Locking

Many subsystems have registers that hold critical assets, configure secure memory regions, or restrict whether the core can be reset by another subsystem in the SoC. Most SoC Security Architects recognize these types of vulnerabilities and express them in their Security Threat Models. However, because of the disconnect between functional verification and security verification, these vulnerabilities are often missed by standard functional verification methodologies, and Security Architects resort to manual review.

Protecting a critical register, a common SoC security requirement

To be a bit more specific, assume that, during SoC boot, a key is loaded across the bus fabric from non-volatile storage into a key register in a cryptographic subsystem. Subsequently, once the key is in place, the register is “locked” so that the key can no longer be accessed or overwritten. This all sounds good, right? This is a great design for protecting a critical security asset. However, there is no obvious way to analyze this using conventional verification platforms. The best one can do is assert that the lock was set during boot. But how does the Security Architect know that the locking mechanism was implemented to properly maintain the integrity and confidentiality of the register contents (in this case, a key)? Here are a few specific issues that can and have happened even when verification ensured the lock bit is set:

Although the locking mechanism was properly configured, the contents can still be read out by issuing a read request

The lock register works properly, but a debug line was left in that allows the locking mechanism to be subverted

The subsystem can be reset, which unsets the lock without clearing the contents of the key
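The three failure modes above can be made concrete with a toy software model of a lockable key register. This is a minimal sketch for illustration only; the class and signal names (KeyRegister, debug_override, and so on) are hypothetical and are not drawn from any real RTL or from Tortuga Logic's tools.

```python
# Toy model of a lockable key register, deliberately containing all three
# bugs described above. Names are hypothetical, for illustration only.

class KeyRegister:
    def __init__(self):
        self.value = 0
        self.locked = False
        self.debug_override = False  # leftover debug hook (failure mode 2)

    def write(self, data):
        # Writes are gated by the lock, but the debug hook bypasses it.
        if not self.locked or self.debug_override:
            self.value = data

    def read(self):
        # Failure mode 1: reads are never gated by the lock, so a
        # "locked" key can still be read out over the bus.
        return self.value

    def lock(self):
        self.locked = True

    def reset(self):
        # Failure mode 3: reset clears the lock but not the key contents.
        self.locked = False


reg = KeyRegister()
reg.write(0xDEADBEEF)  # boot ROM loads the key across the fabric
reg.lock()             # ...and sets the lock bit

leaked = reg.read()    # failure mode 1: lock does not stop reads
reg.reset()            # failure mode 3: reset re-opens the register
reg.write(0x42)        # overwrite succeeds even though key was "locked"
```

Note that an assertion checking only that `reg.locked` is set after boot would pass on this model, which is exactly why asserting the lock bit alone gives false confidence.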

At first glance, one might say that these types of issues could be mapped into a standard verification environment. Unfortunately, Security Architects often have very limited visibility into the verification process and, even worse, verification teams currently have no automated methods in place to address these potential vulnerabilities. Security Architects and verification teams instead scratch their heads while considering all potential cases where confidentiality or integrity could be violated, a process that ultimately yields low confidence after a tremendous amount of time.

The solution is the ability to express and then verify the confidentiality and/or integrity of the register contents and to provide Security Architects and verification teams with debug information that makes sense with respect to these properties. Tortuga Logic provides such a platform, which augments the standard verification tool chain for formal verification, simulation, or emulation, depending on the scale of the security verification problem and the assumptions that need to be made. Tortuga Logic allows for the simple expression of security properties with its Sentinel language. In this particular example, confidentiality and integrity can easily be expressed in Sentinel as:

Confidentiality:

Critical_reg when is_locked =/=> AXI;

Integrity:

AXI =/=> Critical_reg || !is_locked;

The confidentiality property can be read as “when Critical_reg is_locked, it should not flow to AXI,” and the integrity property as “AXI should not flow to Critical_reg if is_locked.” These two statements succinctly encapsulate the confidentiality and integrity requirements of the critical register. The statements, along with the RTL, can then be fed seamlessly into standard verification tools.
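The “should not flow” (=/=>) semantics above can be sketched as a simple taint-propagation reachability check over a design's fan-in graph. The following is an illustration of information-flow tracking in general, not Tortuga Logic's actual implementation; the netlist and signal names are invented for the example.

```python
# Minimal taint-propagation sketch of a "no-flow" (=/=>) check over a toy
# netlist, where each signal maps to the list of signals in its fan-in.
# Illustrative only; not Tortuga Logic's implementation.

def flows(netlist, source, sink):
    """Return True if information from `source` can reach `sink`."""
    tainted, frontier = {source}, [source]
    while frontier:
        sig = frontier.pop()
        # Taint every signal whose fan-in includes a tainted signal.
        for dst, fanin in netlist.items():
            if sig in fanin and dst not in tainted:
                tainted.add(dst)
                frontier.append(dst)
    return sink in tainted

# Hypothetical fan-in graph in which the key register feeds the AXI
# read-data channel, i.e. the confidentiality property is violated.
netlist = {
    "axi_rdata": ["critical_reg", "status_reg"],
    "critical_reg": ["axi_wdata"],
}

# Critical_reg =/=> AXI fails on this design: a flow exists.
leak_found = flows(netlist, "critical_reg", "axi_rdata")
```

A real flow-tracking tool additionally handles the conditional parts of the properties (the `when is_locked` and `|| !is_locked` qualifiers) by tracking flows only under the stated condition, which is what lets the legitimate boot-time key load pass while post-lock flows are flagged.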

Tortuga Logic software technology overview

Takeaways

Mapping security properties into a conventional verification environment does not give Security Architects and verification teams sufficient ability to confidently express or debug vulnerabilities. Tortuga Logic’s Design-for-Security methodology enables existing verification environments to perform security verification and provides the expressiveness and debug results necessary to discover and understand critical security vulnerabilities. The result is a rigorous methodology for taking a Security Threat Model, expressing properties with respect to that threat model, and then verifying the absence of vulnerabilities.

About Jason Oberg
Dr. Oberg is a co-founder and the Chief Executive Officer of Tortuga Logic, where he oversees the company's technology and strategic positioning. As the founding technologist, he brought years of intellectual property into the company, which he successfully transferred out of the University of California. He has led the company to meet all of its milestones, including raising capital, tripling the team size, forming strategic partnerships, and generating revenue on all products and services. Dr. Oberg holds a B.S. in Computer Engineering from the University of California, Santa Barbara, and M.S. and Ph.D. degrees in Computer Science from the University of California, San Diego.