Traditional hardware verification is a deterministic process that checks a design's adherence to a deterministic performance specification. However, adaptive techniques such as voltage and frequency scaling, process variations due to shrinking chip geometries, and input variations arising from "better-than-worst-case" design all contribute significantly to the stochastic behavior of contemporary chips. It is therefore desirable to incorporate probabilistic reasoning into the verification paradigm. Moreover, if such probabilistic verification can be applied at the register transfer level (RTL), it facilitates better design choices early in the hardware design cycle.
In this thesis, we introduce SHARPE (Statistical High-level Analysis and Rigorous Performance Estimation), a rigorous, systematic methodology for verifying design correctness at the RTL in the presence of variations. We construct macromodels to lift low-level hardware information, such as timing, up to the RTL. We treat the RTL source code as a program and use static program analysis techniques to propagate signal probabilities. Combining the information obtained from static analysis with the macromodels, we model the RTL design as a statistical entity. We then apply formal probabilistic analysis to this probabilistic RTL model to compute the required probabilistic metric. The use of formal analysis gives our methodology higher confidence than simulation-based methods.
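To make the idea of signal probability propagation concrete, the following is a minimal sketch, not the thesis tool itself: it propagates signal probabilities through elementary Boolean operators under the simplifying assumption that all signals are statistically independent. The example expression and probabilities are invented for illustration.

```python
# Minimal illustration of signal probability propagation through
# Boolean operators, assuming independent signals (a hypothetical
# example; the actual methodology works on RTL source via static analysis).

def prob_not(p):
    # P(NOT a) = 1 - P(a)
    return 1.0 - p

def prob_and(p_a, p_b):
    # P(a AND b) = P(a) * P(b) under independence
    return p_a * p_b

def prob_or(p_a, p_b):
    # P(a OR b) = P(a) + P(b) - P(a)P(b) under independence
    return p_a + p_b - p_a * p_b

# Example: y = (a AND b) OR (NOT c), with P(a)=0.5, P(b)=0.8, P(c)=0.3
p_ab = prob_and(0.5, 0.8)           # 0.4
p_y = prob_or(p_ab, prob_not(0.3))  # 0.4 + 0.7 - 0.28 = 0.82
print(p_y)
```

A real RTL analysis must additionally account for signal correlations introduced by reconvergent fanout, which is why the thesis relies on static program analysis rather than naive gate-by-gate propagation.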
The SHARPE methodology applies to both combinational and sequential designs. For combinational designs, formal probabilistic analysis is performed by exhaustively enumerating all possible input vectors. For sequential designs, we represent the probabilistic RTL models as discrete-time Markov chains (DTMCs), which are then checked formally for probabilistic invariants using PRISM, a probabilistic model checker. Given a query, PRISM performs formal probabilistic analysis by exploring all possible transitions of the DTMC model.
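To illustrate the kind of query such a DTMC supports, here is a minimal sketch in Python (not PRISM's input language) of transient analysis on a hypothetical three-state chain; the states and transition probabilities are invented for illustration.

```python
# Transient analysis of a small, hypothetical DTMC (a sketch of the
# computation a probabilistic model checker performs, not PRISM itself).
# State 0 = OK, state 1 = degraded, state 2 = error (absorbing).

P = [
    [0.90, 0.09, 0.01],
    [0.00, 0.80, 0.20],
    [0.00, 0.00, 1.00],
]

def step(dist, P):
    """One step of the chain: new_dist[j] = sum_i dist[i] * P[i][j]."""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

dist = [1.0, 0.0, 0.0]  # start in the OK state
for _ in range(10):
    dist = step(dist, P)

# Probability of reaching the error state within 10 steps,
# analogous to a PRISM query such as P=? [ F<=10 "error" ].
p_error_within_10 = dist[2]
print(p_error_within_10)
```

PRISM answers such queries exactly by exhaustive numerical exploration of the state space, rather than by sampling.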
Formal probabilistic analysis is known to be infeasible for large RTL designs. We improve the scalability of SHARPE through sound symmetry reductions, automatic compositional reasoning, and value-based interval abstractions in the design source code. For very large RTL designs, we employ statistical model checking, a scalable simulation-based alternative to probabilistic model checking. Because simulation-based analysis is not exhaustive, verification by statistical model checking is inexact; however, the statistical model checker guarantees that the verification results lie within specified error bounds.
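The error bounds in statistical model checking come from standard concentration inequalities. The following sketch, assuming the additive Chernoff-Hoeffding bound commonly used in statistical model checkers, computes how many independent simulation runs suffice to estimate a satisfaction probability to within a chosen error and confidence; the toy "design" here is just a Bernoulli trial with a made-up probability.

```python
import math
import random

def required_samples(eps, delta):
    """Chernoff-Hoeffding bound: number of i.i.d. runs so the empirical
    probability is within +/- eps of the true one with prob. >= 1 - delta."""
    return math.ceil(math.log(2.0 / delta) / (2.0 * eps ** 2))

def smc_estimate(run_satisfies, eps=0.01, delta=0.05, seed=0):
    """Estimate P(property) by simulation with an (eps, delta) guarantee."""
    random.seed(seed)
    n = required_samples(eps, delta)
    hits = sum(run_satisfies(random.random) for _ in range(n))
    return hits / n

# Toy stand-in for a simulated design run: satisfies the property w.p. 0.7.
est = smc_estimate(lambda rng: rng() < 0.7)
print(est)  # within 0.01 of 0.7 with probability at least 0.95
```

For eps = 0.01 and delta = 0.05 this requires 18,445 runs; the bound is independent of the size of the underlying model, which is what makes the approach scale to designs where exhaustive model checking fails.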
In this thesis, we demonstrate several applications of the SHARPE methodology. We first show how it can provide statistical timing estimates at the RTL in the presence of either input variations or process variations. We then adapt the macromodels used by the methodology to obtain RTL estimates of aging caused by negative bias temperature instability (NBTI) effects. We describe how the methodology can be extended to compute bit error rates (BERs), a commonly used performance metric for RTL designs of wireless communication systems. For the RTL of large multicore designs, we employ statistical model checking to provide guarantees on the performance of dynamic power management schemes. We perform our experiments on both benchmark and real-world RTL designs.
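As a point of reference for the BER application, the following is a textbook Monte Carlo BER estimate for BPSK over an AWGN channel, written as a minimal sketch; it stands in for, and is much simpler than, the RTL-level BER analysis described above (the modulation, channel, and parameters are illustrative assumptions, not from the thesis).

```python
import math
import random

def ber_bpsk_monte_carlo(ebn0_db, n_bits=200_000, seed=1):
    """Monte Carlo BER estimate for BPSK over AWGN: transmit random
    +/-1 symbols, add Gaussian noise, count decision errors."""
    random.seed(seed)
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    sigma = math.sqrt(1.0 / (2.0 * ebn0))  # noise std dev for unit-energy bits
    errors = 0
    for _ in range(n_bits):
        bit = random.choice((-1.0, 1.0))
        rx = bit + random.gauss(0.0, sigma)
        errors += (rx > 0) != (bit > 0)
    return errors / n_bits

# At Eb/N0 = 4 dB the theoretical BPSK BER is Q(sqrt(2*Eb/N0)) ~ 1.25e-2.
print(ber_bpsk_monte_carlo(4.0))
```

Such sampling-based BER estimates carry the same exactness caveat as statistical model checking: accuracy improves only with the number of simulated bits, whereas the formal analysis in SHARPE computes the metric exhaustively from the probabilistic RTL model.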