Verification Panel, Part 3: Stretching Formal

The discussion turns to how far formal methods can go in addressing the challenges facing SoC design.

In Part 1 of this panel discussion on verification, the experts talked about the state of verification and the progress that has been made.

In Part 2 the subject broadened to the increasing areas in which verification has to be performed, including power and software. In this third part the discussion turns to how far formal methods can go in addressing the challenges facing SoC design.

Taking part in this discussion are: Paul Martin, senior manager of debug, trace, and performance modeling at ARM; Rajeev Ranjan, CTO, and Oz Levia, vice president of marketing and business development at Jasper Design Automation; Harry Foster, chief verification scientist at Mentor Graphics; and Viresh Paruthi, senior technical staff member at IBM.

EE Times: How do we tackle things like performance and power?

Rajeev Ranjan: If you talk about power, I think there are numerous problems at the hardware level. We talk about coverage closure being an issue today, even without power-aware design. If you have power-aware design, you're talking about a simulation coverage space covering a multitude of power domains. Last year a customer presented at our users group. He had a design with 20 clock-gating domains. They couldn't even think about using brute force methods to exhaust all of the corner cases. Using the formal approach, you can make the design power-aware and do exhaustive formal power-aware verification of the functionality. That's the only way I see the solution.
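To put rough numbers on the coverage-space argument Ranjan makes, here is a hypothetical back-of-the-envelope sketch in Python. The 20 clock-gating domains come from his example; the functional coverage-bin count and per-test runtime are invented purely for illustration and were not part of the panel.

```python
# Illustrative back-of-the-envelope estimate (not from the panel): why simulating
# every power-domain configuration is infeasible for a design with 20 clock-gating
# domains. Coverage-bin count and per-test runtime below are assumptions.

NUM_CLOCK_GATING_DOMAINS = 20          # from the panel example
FUNCTIONAL_COVERAGE_BINS = 5_000       # assumed size of the base coverage model
SECONDS_PER_DIRECTED_TEST = 10         # assumed average simulation runtime

# Each domain can be gated or active, so the power-state space alone is 2^20.
power_states = 2 ** NUM_CLOCK_GATING_DOMAINS            # 1,048,576 configurations

# Power-aware coverage closure crosses the functional space with the power space.
total_cross_coverage = FUNCTIONAL_COVERAGE_BINS * power_states

sim_years = total_cross_coverage * SECONDS_PER_DIRECTED_TEST / (3600 * 24 * 365)
print(f"Power-state configurations: {power_states:,}")
print(f"Cross-coverage points:      {total_cross_coverage:,}")
print(f"Naive simulation estimate:  {sim_years:,.0f} years")

# A formal power-aware proof, by contrast, reasons about all 2^20 configurations
# symbolically in one analysis rather than enumerating them one test at a time.
```

Even with generous assumptions, the enumeration runs to centuries of simulation, which is the sense in which brute force "couldn't even be thought about."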

Harry Foster: I agree. When you can do formal you should do it, but going back to the OS aspect and the software aspect...

Paul Martin: There are two distinct issues. There is a system validation problem and an integration problem. At a lower level, those are basic integration tests, and you need to verify that the power logic is correct.

Ranjan: I agree. Not just that the power logic is correct but that in the process of introducing low-power implementations you have not corrupted the original functionality. So, functionality keeps coming back to you. You could do power optimizations and performance optimizations; you could make your fabric coherent because of performance; but you still need to verify the functionality.

Martin: That is still two distinct domains. There's an integration-level problem and a system-level problem, which is: How do we validate that we delivered on the system-level specification?

Oz Levia: Every problem like this in the past was solved by introducing two things: One, limiting the degrees of freedom; and the other, creating methodologies and techniques around the degrees of freedom that were left.

Foster: Or raising abstraction.

Levia: Right, if you look at it, a standard cell is not the most efficient way to design, but it reduces degrees of freedom and enables a methodology that supports three or five billion transistors on a chip. People are going to have to agree to give up some degrees of freedom and agree on methodologies -- be it formal verification, virtual platforms, executable specifications, the standardization to some degree on fabrics, or the standardization on programming techniques -- such that the people who write applications, who are making the communications fabrics, and who are verifying these things, are speaking the same language so that it can be verified.

Martin: Also ensuring that all of those pieces are reliable before you bring them into the system. It's about prevention rather than cure. A lot of faults are found in final system validation, where debugging them is very labor intensive. It's about making sure all the pieces are fault-free before they go into final assembly.

Foster: Yes, it's too costly. At a system level it's painful. If you can find something with formal at the low level...

Viresh Paruthi: If you look at data from our projects, we have driven innovations and methodologies so that the lower-level blocks are very clean, and we are seeing more of our problems at the integration level. We need to continue those practices, start at the bottom, and make our way up. Formal is a great aid. Pretty much everyone uses it extensively these days. But we need to keep brainstorming on the next level of challenges, which is the system level, where we need to think about increasing the capacity of the virtual prototyping backbone, and we have to stress it in different ways.

Ranjan: Absolutely. We are extending the capacity of formal tools and how they are applied at the system level, and to reason about functionality. Take, for example, security. Formal technology is getting to the point where you can apply it to ensure that no rogue agent can access or corrupt data.

Levia: It's not unimaginable that in the very near future, or the medium term, it will be possible to apply formal to software as it applies to SoCs. Software is just a bunch of steps; it has state space; it is more general, more complex. Not to be too self-serving, but Jasper has taken the static formal methodology up from the block level, into the subsystem level, and into the system level. We now do full chip connectivity and functional verification.

* * *

Readers: How much of the verification task do you think can be attacked with static formal methods?

At DVCon 2013, the Wednesday morning panel, "Where Does Design End and Verification Begin?", included panelists from ARM, Mentor Graphics, Intel, Gary Smith EDA, and Real Intent. Several panelists mentioned that an application-oriented verification approach resulted in static techniques having a much more significant impact than in the past. They said it is becoming possible for designers to provide meaningful closure (sign-off) on a number of concerns before the design is handed over to the verification team. The biggest takeaway I heard from the audience was an appreciation of the amount of verification possible prior to simulation.

The idea of application-specific verification gives the EDA community the opportunity to develop tools that provide complete solutions to well-defined verification concerns, rather than just tools that provide raw technologies that the user must then figure out how to apply. These complete solutions are, in fact, a synthesis of technologies that have been around for many years; what makes the synthesis possible is the narrow scope of the verification concern being addressed. For example, structural analysis and formal analysis work very well together in addressing the clock-domain crossing (CDC) verification problem.

Smart structural analysis provides a baseline analysis of the design in the context of the narrow circuit idioms used to implement clock-domain crossings, and it creates a formal specification for the formal-analysis tools to work on. The narrow scope of each verification concern, combined with the bringing together of multiple technologies in one solution, allows many of these verification tasks to be completed pre-simulation (i.e., statically). This is an important, but not widely acknowledged, enhancement to existing design and verification processes.
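To make that division of labor concrete, here is a minimal sketch in Python. The netlist model, the two-flop synchronizer idiom, and the emitted property text are all invented for illustration; commercial CDC tools recognize a far richer set of idioms, but the shape of the flow (structural pattern matching that leaves a residue of obligations for formal analysis) is the point.

```python
# Hypothetical sketch of structural analysis feeding formal CDC checking: walk a
# toy netlist, find registers whose data input comes from a different clock
# domain, and emit a proof obligation for each unsynchronized crossing.

from dataclasses import dataclass

@dataclass
class Reg:
    name: str
    clock: str          # clock domain driving this register
    data_source: str    # name of the register feeding its D input

# Toy netlist, invented for illustration.
netlist = [
    Reg("tx_src",      "clk_a", data_source="tx_src"),
    Reg("tx_data",     "clk_a", data_source="tx_src"),
    Reg("sync_ff1",    "clk_b", data_source="tx_data"),   # first stage of a 2-flop sync
    Reg("sync_ff2",    "clk_b", data_source="sync_ff1"),
    Reg("bad_capture", "clk_b", data_source="tx_src"),     # raw crossing, no synchronizer
]

by_name = {r.name: r for r in netlist}
fanout = {}
for r in netlist:
    fanout.setdefault(r.data_source, []).append(r.name)

obligations = []
for reg in netlist:
    src = by_name.get(reg.data_source)
    if src is None or src.clock == reg.clock:
        continue  # same-domain path, not a CDC
    # Structural idiom check: treat the crossing as synchronized if the capturing
    # flop feeds exactly one flop in its own domain (the second synchronizer stage).
    downstream = [by_name[n] for n in fanout.get(reg.name, [])]
    two_flop = len(downstream) == 1 and downstream[0].clock == reg.clock
    if not two_flop:
        # Hand the residue to formal analysis as an assertion-style proof obligation.
        obligations.append(
            f"assert: crossing {src.name} ({src.clock}) -> {reg.name} ({reg.clock}) "
            f"is gray-coded or quasi-static"
        )

print("\n".join(obligations) or "no unsynchronized crossings found")
```

Running the sketch flags only the raw `bad_capture` crossing; the properly synchronized path through `sync_ff1`/`sync_ff2` is dispatched structurally, which is exactly the pre-simulation triage the narrow scope of the CDC concern makes possible.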

The difficulty in extending the chip-design variant of formal methods to software is simple, yet it tends to get overlooked in discussions like this: Software changes at rates that are orders of magnitude faster than chip designs. This causes one of two things to happen rather quickly: Either the verification falls to pieces, or the verified set of software becomes pretty much useless due to its inability to compete. Because chip design generally addresses only a very narrow sliver of the full formal verification problem, I would respectfully suggest that more useful lessons might be learned by chip designers looking more closely at how formal methods are used (and not used) in nuclear reactors and commercial autopilot systems than vice-versa. Verification in systems subject to rapid change is a very, very hard topic, yet it is also one that more formalists need to recognize as the real problem they should be tackling.