As part of this week’s DVCon event being held in Silicon Valley, the EDAC Emerging Companies Committee sponsored a really intense and well-attended evening panel on hardware/software codesign. The effervescent Paul McLellan moderated the panel. (If you’ve not read his book, EDA Graffiti, go get a copy and read it.) If you’ve attended many panels and presentations, you are more than familiar with the phrase “the elephant in the room,” which refers to a large problem that everyone seems to be ignoring. Hardware/software codesign is more like a herd of elephants in the room for SoC designers because the inclusion of one, two, or many processors on chips means that the chip doesn’t work if the software (all of the software) isn’t right—or isn’t there at all. We’ve long observed that it takes roughly ten software engineers writing software and firmware to support every hardware engineer, yet only now is the EDA industry waking to the importance of hardware/software codesign.

McLellan’s evening panel included:

Atul Kwatra, a principal engineer from Intel

Michael James, from Lockheed-Martin Space Systems

Bill Neifert, founder and CTO of Carbon Design Systems

McLellan asked the panel to describe the challenges of hardware/software codesign and Kwatra came out swinging. The challenge, said Kwatra, is to deliver complex hardware/software systems while meeting goals for:

Low power

Fast development time

High performance

Smaller [chip] area

Lower development costs

Lower product cost

This list includes many conflicting goals, which ultimately require careful optimization. Designers need to remember that they are developing systems, not just chips.

In this environment, forcing software development to follow hardware development will no longer work. Hardware and software must both be under development starting at Day One.

Kwatra continued: “The emergence of end-application experience now drives design.” In other words, system project goals are linked to what the end customer experiences when using the product.

The typical specs of MHz (or GHz) and Mbytes (or Gbytes) are no longer sufficient or competitive.

How do you start hardware and software development on Day One? Kwatra said that you do it through the use of Virtual Prototypes, which give the software-development team a platform to work with long before even an alpha version of a hardware platform is ready to use. “Virtual Platforms shift all aspects of product development to the left,” he said. “All aspects” in this context refers to architecture, design, software, validation, and even marketing.
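To make the Virtual Prototype idea concrete, here is a minimal sketch of the concept in Python: a software-visible register model standing in for a not-yet-built peripheral, so driver-style code can run with no hardware at all. The `UartModel` class, its register map, and `firmware_print` are hypothetical illustrations, not any vendor's actual modeling API.

```python
class UartModel:
    """Toy simulated memory-mapped UART (hypothetical register map)."""
    STATUS = 0x00   # bit 0: transmitter ready
    TXDATA = 0x04   # write a byte here to "transmit" it

    def __init__(self):
        self.transmitted = []   # bytes the firmware has "sent"

    def read(self, offset):
        if offset == self.STATUS:
            return 0x1          # always ready in this simple model
        return 0

    def write(self, offset, value):
        if offset == self.TXDATA:
            self.transmitted.append(value & 0xFF)


def firmware_print(uart, message):
    """Firmware-style transmit loop: poll status, then write each byte."""
    for ch in message:
        while not (uart.read(UartModel.STATUS) & 0x1):
            pass                # spin until the transmitter is ready
        uart.write(UartModel.TXDATA, ord(ch))


uart = UartModel()
firmware_print(uart, "hello")
# The "firmware" ran and produced output with no silicon in sight.
print(bytes(uart.transmitted).decode())
```

Real virtual platforms (SystemC/TLM models, QEMU-style emulators, commercial tools) are far richer—timing, interrupts, full instruction-set simulation—but the shift-left payoff is the same: software teams get a target on Day One.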

James reaffirmed what Kwatra had just said. Then he refined the descriptions from his company’s perspective. “We build expensive, unique space systems. Generally, we only have one of them.” That means one satellite or one space station. Consequently, software development has usually been done on unique and expensive hardware, which is rarely available early in the development cycle. Even when hardware does become available, it’s a scarce resource and there’s a lot of contention among software-development teams for that resource. Further, the scarcity and high value of that hardware makes exhaustive testing difficult—particularly testing to destruction, which is what you want to do if you truly want to wring out the bugs.

Simulation solves many of these problems, said James. “Everyone gets their own simulation. Early availability of simulated development platforms moves the risk forward in the design cycle, where it’s more readily addressed. Simulation makes it easy to inject faults into testing to get at those hidden bugs. Simulated platforms have exceptional visibility and control.”
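James's fault-injection point can be illustrated with a toy model: in a simulation you can force a sensor to misbehave on demand and confirm that the software catches it—something you would never risk on one-of-a-kind flight hardware. The `SensorModel` class and the plausibility check are invented for illustration only.

```python
class SensorModel:
    """Toy simulated sensor with an injectable stuck-at fault (hypothetical)."""
    def __init__(self):
        self.fault = None                 # e.g. ("stuck", bogus_value)

    def inject_fault(self, kind, value=0):
        self.fault = (kind, value)

    def read_temperature(self):
        if self.fault and self.fault[0] == "stuck":
            return self.fault[1]          # return the faulty reading
        return 25                         # nominal reading, degrees C


def read_with_plausibility_check(sensor):
    """Software under test: reject physically implausible values."""
    value = sensor.read_temperature()
    if not (-40 <= value <= 125):
        raise ValueError("sensor reading out of range")
    return value


sensor = SensorModel()
print(read_with_plausibility_check(sensor))   # nominal path works

sensor.inject_fault("stuck", 999)             # trivial in simulation,
try:                                          # destructive on real hardware
    read_with_plausibility_check(sensor)
except ValueError as err:
    print("fault caught:", err)
```

Because the fault lives in the model rather than the hardware, every engineer can run this destructive test on their own copy of the platform, as often as they like.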

However, simulated platforms do not come without problems, said James. There are real-world connections to support if you really want to exercise command and control functions, interoperability, and time synchronization with other systems.

During the Q&A session, Kwatra came up with another advantage of hardware/software codesign: it helps teams avoid overoptimization—overbuilding the hardware. By optimizing or debugging both hardware and software throughout a system-level project, you can optimize the entire system, make a variety of hardware/software tradeoffs, and thus minimize overall development and product cost.

Note: For an excellent example of a Virtual Platform, see this description of the recently announced Zynq-7000 Extensible Virtual Processing Platform jointly developed by Xilinx and Cadence.


About the author:

Steve Leibson has appeared on television with Leonard Nimoy (Star Trek's Mr. Spock); however, he's not a TV star (although he's always open to offers). He is the Cadence EDA360 Evangelist and a Marketing Director at Cadence Design Systems, the leading EDA vendor for system and chip-level design tools, design IP and IP design platforms, and verification IP. Steve has written some of the key books about IP-based SOC design, including “Designing SOCs with Configured Cores,” published in 2006, and “Engineering the Complex SOC,” co-authored with Dr. Chris Rowen and published in 2004. An experienced design engineer, Steve has been evangelizing advanced, IP-centric SOC design since 2001.