Avoiding the Sin of Respin: Q&A with Johannes Stahl of Synopsys

No one wants to learn about software performance issues when it’s too late.

Johannes Stahl acknowledges that the company’s recent announcement of its latest FPGA-based prototyping system, HAPS-80, is the “trigger point” for a conversation with EECatalog. However, Stahl, who is director of product marketing for prototyping and FPGA, emphasizes that the larger conversation needs to be about the move to integrated physical prototyping.

EECatalog: Let’s begin with the rise of this idea that the hardware and software sides of prototyping need to be more closely integrated.


Johannes Stahl, Synopsys: Maybe a decade or so ago, design teams that developed chips decided they could not take a chip into production if it had not run on an emulator for some period of time to make sure there were no hardware bugs.

What’s more, over the last 10 years, chips have become more complex. A huge part of the complexity is that chips now run a lot of software. And semiconductor companies also have to develop that software.

So, today, semiconductor companies have to put much more emphasis and money on the software side, which means that pre-silicon software development is becoming critical. These firms have to develop software before they tape out the chip, because if the software were not to operate on the silicon, they might need a respin, at which point it’s too expensive and too late!

This need to verify not only the hardware but also that the software runs is really driving the need for prototyping that can run software prior to silicon at a high enough speed (between 30 MHz and 300 MHz, depending on what is being checked) in a real-world environment, so that you can see that the software performs as it should.

EECatalog: How has this need been met?

Stahl: With a “build your own solution for prototyping” approach. Customers had some tools: synthesis for FPGA, sometimes partitioning tools, debug tools, and hardware.

This hardware could be a board that they built themselves for a specific project, or a more general-purpose board obtained from a board vendor.

In any case, in this “build-your-own solution” environment, the task of connecting these tool flows with the underlying prototyping hardware is left to the prototyping team, and that chore has become harder and harder as prototypes have become more complex. At the same time, this approach has problems, which include:

• it’s a non-integrated solution
• it requires a lot of effort to build
• it’s not always possible to predict when the prototype will be ready, which poses schedule risks

It’s also tough to know when the prototype will work and when it won’t, as often it has not been proven in a generic sense. Most important, what we are seeing on the customer side is that once there is a problem, and the software is on a critical part of the schedule, it is very hard for customers using internally built solutions to stay on track. They would rather pick up the phone, call an external commercial vendor, and have the vendor come in and solve their problem.

We’re very familiar with the shift from the “build-your-own” approach to integrated commercial solutions, because for many years, we have been in the market with separate tools and separate prototyping hardware. We were selling the generic tools that would work for our hardware and also for third-party hardware. But our customers began telling us, “That’s not good enough. We need more.”

Another way we learn what customers in this market require is by surveying the market on a regular basis. The vehicle we use for that is a book we published about four years ago, “The FPGA-Based Prototyping Methodology Manual.” To download a free copy of the book, the requester needs to complete our survey. Figure 1 shows the results from that survey, conducted over the last four years, to which more than 8,000 designers who build prototypes responded.

On all five of these key requirements, the integrated prototype wins, because if you don’t do it in an integrated fashion, you have a hard time meeting these goals.

EECatalog: What’s involved in getting prototyping time down to under two weeks?

Stahl: The clock starts to tick at the point that the prototyping team gets the RTL code. The time that elapses between the team starting to bring up the design on the prototype and the software team being able to use it is where the blows to productivity can occur.

In the past, it would take about three months to bring up a prototype. What’s made it possible to bring that down to under two weeks is that the tools intimately understand the prototyping hardware onto which the design gets mapped: the hardware architecture, the connectivity, the components, and the exact timing. And so the tool can partition a very complex design across the different FPGA components in the prototype and help to bring it up.

And because timing information is used throughout the bring-up phase, the performance of the prototype will be good enough to deploy after two weeks of bring-up time.
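The multi-FPGA partitioning problem Stahl describes can be sketched in miniature. The following is a generic greedy heuristic for intuition only, not Synopsys’s ProtoCompiler algorithm; the block names, gate counts, and capacities are invented for the example:

```python
# Toy illustration: assign design blocks to FPGAs so each FPGA's gate
# capacity is respected, preferring placements that keep connected
# blocks together (fewer signals crossing FPGA boundaries).
# All names and numbers here are invented for illustration.

def partition(blocks, nets, num_fpgas, capacity):
    """blocks: {name: gate_count}; nets: [(a, b), ...] connections."""
    assignment = {}
    used = [0] * num_fpgas
    # Place the largest blocks first (a common bin-packing heuristic).
    for name in sorted(blocks, key=blocks.get, reverse=True):
        best, best_cut = None, None
        for fpga in range(num_fpgas):
            if used[fpga] + blocks[name] > capacity:
                continue
            # Count nets that would cross an FPGA boundary.
            cut = sum(1 for a, b in nets
                      if (a == name and assignment.get(b, fpga) != fpga)
                      or (b == name and assignment.get(a, fpga) != fpga))
            if best_cut is None or cut < best_cut:
                best, best_cut = fpga, cut
        if best is None:
            raise ValueError(f"{name} does not fit on any FPGA")
        assignment[name] = best
        used[best] += blocks[name]
    return assignment

blocks = {"cpu": 60, "gpu": 50, "dsp": 30, "usb": 10}
nets = [("cpu", "gpu"), ("cpu", "usb"), ("gpu", "dsp")]
print(partition(blocks, nets, num_fpgas=2, capacity=100))
```

A production tool also weighs hardware-specific details this sketch ignores, such as pin counts between boards and the exact timing of each trace, which is why the integrated tool flow matters.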

So, going through all of these steps and having a prototype that works at a good enough speed after two weeks is a tremendous benefit. Today the overall project schedule is typically down to six months for derivative designs. If you take three months for prototype development, you are already beyond the useful window for giving it to software developers. You have to cut it down to just a few weeks, leaving enough of a time window to use the prototype for software development.

EECatalog: Please compare, for instance, debugging with a do-it-yourself solution and debugging with an integrated solution.

Stahl: You cannot achieve something that is running in two weeks if you cannot identify a problem quickly. So the tool allows you to configure any signal you want to see, relies on debug hardware built into the system to capture all of those signals, and then relies on a proven way of looking at the data, which we have with the Verdi tools at Synopsys, to find problems in the prototype. This combination of configure, capture, and analyze, in an integrated way, is critical to finding bugs quickly.
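The configure/capture/analyze loop can be illustrated with a toy sketch. This is a generic illustration of the idea, not the Verdi tools or HAPS debug hardware; the signal names and the bug condition are invented:

```python
# Toy illustration of the configure / capture / analyze debug loop:
# pick signals to watch, record their values each cycle into a trace
# buffer, then scan the trace for a suspicious condition.
# Signal names and the "bug" condition are invented for illustration.

class TraceDebugger:
    def __init__(self, watch):
        self.watch = watch          # configure: which signals to capture
        self.trace = []             # captured samples, one dict per cycle

    def capture(self, cycle, signals):
        """Record the watched subset of this cycle's signal values."""
        sample = {name: signals[name] for name in self.watch}
        sample["cycle"] = cycle
        self.trace.append(sample)

    def analyze(self, predicate):
        """Return the cycles where the predicate flags a problem."""
        return [s["cycle"] for s in self.trace if predicate(s)]

dbg = TraceDebugger(watch=["valid", "ready"])
for cycle, sig in enumerate([
    {"valid": 1, "ready": 1, "data": 7},
    {"valid": 1, "ready": 0, "data": 9},   # handshake stall
    {"valid": 0, "ready": 1, "data": 0},
]):
    dbg.capture(cycle, sig)

# Flag cycles where valid is asserted but ready is not (a stall).
print(dbg.analyze(lambda s: s["valid"] and not s["ready"]))  # [1]
```

The point of the integrated approach is that the capture hardware is already in the prototype, so "configure" is a tool setting rather than a design change.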

Teams that have done this without integrated solutions typically treat debug as an afterthought. Once they found an issue, they would add debug logic to the prototype, which changes the prototype and itself causes additional debug problems. By making debug built in, part of the process, we have it always available.

EECatalog: What are the factors affecting performance?

Stahl: Performance for prototypes is interesting. You might think, “Well, if I use very good prototyping hardware, I will get all the performance I need.”

But actually the performance heavily depends on the tool that drives the prototyping hardware, because with a very complicated chip you have a lot of communication between the different FPGAs, and only a tool can automate finding the best possible way to handle it.

While possible, doing this optimization manually could take you many, many months. Nobody has that much time. So we do all of this optimization automatically and achieve very high performance. We always increase it from generation to generation, and here, with HAPS-80, we have increased it by 2x compared to our previous generation.

We can also scale a prototype to a very high complexity—1.6 billion ASIC gates.

This is important for two reasons. One, a highly complex prototype allows you to put your entire SoC into a prototype, one which could conceivably have 10 CPU cores and maybe 100 different IP blocks, with each of those blocks requiring a piece of software. But only if you can put all of these blocks on your prototype can you actually develop and test all of the software. That’s the benefit of having a very large prototype.

The second reason is that customers want enterprise solutions. The prototype used to fit on the developer’s desktop; now it moves into the data center as a very large-scale, built-out prototype that can be accessed remotely, via an IP address, so design teams can program the prototype to their needs, use as much capacity as they need, and utilize their investment in the best possible way.

That is a major shift, and we believe, again, we are pioneering this because our system is built from the ground up to be scalable.

EECatalog: How are you addressing backwards compatibility?

Stahl: Customers want to be able to connect to the previous generation, HAPS®-70, and that is what we are enabling here. You can electrically connect the HAPS-70 and HAPS-80, and ProtoCompiler will understand that two systems are connected. It will take the design, put one piece on HAPS-80 and one piece on HAPS-70, and the user will not even see through the flow that there are different HAPS systems underneath.

On the hardware side, we can do this because the components we use are compatible on an electrical level, and of course, we have designed our cabling to be compatible.

And on the software side, we can make the tool understand both systems, but also understand the connectivity, and view it all as one large system. Customers like that, because if they have to do a large prototype, the first thing they want to do is put a piece of the prototype on a software platform, so they can experience some of the new benefits, such as debug, incrementally before they get ready to move the entire prototype over to the next-generation HAPS-80 system.

It’s also important for our customers to preserve other investments. We haven’t talked about one specific use model of prototypes, which is validation, where you actually connect the physical prototype to the external physical world, maybe an HDMI display or a USB device, so you can see if the software correctly interacts with the external world. We do this through daughter boards. All the HAPS-70 daughter boards also work in HAPS-80; it’s the same interface.

EECatalog: How are changes you are noticing affecting the alliances Synopsys has with FPGA vendors?

Stahl: Over the years, prototyping has gone from something you do sometimes, when you can afford to, and it doesn’t always work, to something that is absolutely mission-critical, and that shift is causing the market for prototyping solutions to grow. So I believe that from an FPGA supplier perspective, this market is becoming more interesting, which means that as a prototyping supplier we get to provide more input on FPGA vendors’ roadmaps for the future.

The second important thing, in the specific case of Synopsys, is that we not only use FPGAs for the physical prototype, but we also use them for our emulation technology. So as long as we can align some of the requirements between the two use cases, we have the opportunity to work more closely with the FPGA companies to come up with good solutions. Cooperation will continue to increase as the size of the market increases.

EECatalog: Do you see the embedded software developer’s role changing?

Stahl: At the end of the day, if I’m in embedded software development, working on products at a semiconductor company, I would say that a new standard is being established in the market, and I can only encourage everybody to expect MORE of a physical prototype. Don’t just expect that it is some board, provided by some team in the company, that (fingers crossed) will work. Rather, software developers should expect that the board can be turned around very quickly if a design bug is identified and fixed. The prototype can be available very, very early, so software developers have more influence on the hardware side of SoC development if they help find a problem in the hardware.

I think that having a higher expectation of what the prototype should actually be, and how it should be delivered, would be a good thing for anybody who does embedded software development at a semiconductor company to ask for.

EECatalog: And if you take the point of view that you are a member of the prototyping team…

Stahl: From the perspective of the prototyping team, the methodology is not changing; it just gets better because it’s integrated. What is changing is that management needs to weigh the cost to the company of doing things internally versus adopting a commercial solution that has much higher value from a user perspective, from a bug-finding perspective, and from a performance perspective. It’s about looking at total cost of ownership. We have been seeing this focus on TCO happening consistently over the last two years with the big companies, and we fully expect that it will also happen for the next tier. The discussion about TCO becomes much easier if there is an identified pain that can be resolved. They all have enough pain.
