He's founded and sold companies, pioneered some EDA technologies, and survived cancer. So Forte Design Systems founder and CTO John Sanguinetti has seen a thing or two, and today he sees an EDA tool chain that's too complex.

In a recent interview, Sanguinetti -- whose early career included time at Digital Equipment Corp. and Ardent Computer -- summoned some history in order to put the present and future in perspective.

As electronics design has gotten more complex, the EDA industry naturally has had to move up the abstraction ladder to keep design engineers productive.

"Kluged" EDA tool chain?
Sanguinetti disagrees with some characterizations that to keep up that pace, EDA has patched together tools and methodologies with duct tape and PVC glue. Still:

There's too much complexity in the tool chain. So it takes a long time for a tool to mature, especially one that occupies a big place in the transformation flow. If you're doing a checker for something, that doesn't take a great deal of time and effort. But for a transformation like high-level synthesis, the time to maturity for those tools is really much longer than it used to be.

This has an effect on startup companies, which take far longer to grow and evolve than in previous decades. A startup creating a tool in the middle of the chain has to consider what's above and below it in the chain, and those interfaces are more complicated than ever, Sanguinetti said.

He added:

Verilog was a relatively straightforward language. All we had to do was get it to parse and implement the semantics. Now SystemVerilog is a much more complex thing. SystemC is a lot more complex than that. And doing synthesis from SystemC, where you have to take in a complex language and the RTL has to conform to all these constraints imposed by the downstream tools, that level of complexity has gone up in the past 20 years.

One step at a time
But complexity or not, the industry moves forward.

Sanguinetti said:

You have a good-sized chunk of the market saying, "We think doing high-level design with a high-level language is right." Others say, "Doing design at a higher level of abstraction has to be done with IP." Neither one is the full solution.

Younger engineers are embracing the high-level abstraction approach, which, to Sanguinetti, is a good sign.

The other big challenge he sees is verification. Verifying designs at higher levels of abstraction helps, of course, but the amount of effort going into verification seems only to increase, year after year after year.

"When I started doing verification 25 years ago, we had two design-verification guys for six hardware designers," Sanguinetti said. "I've been told that it's now three verification guys for every one designer. That's mind-boggling to me."

Next startup?
Sanguinetti's apparently not going anywhere anytime soon, enjoying life at Forte, where he's been for the past 13 years. But if he did dive back into the startup life, Sanguinetti would invest in IP and formal equivalence checking technologies.

Formal checking from high level to RTL "is an unsolved problem" long believed to be "intractable." But, "from what little I know about academic work, there is technology that can be applied to the problem," Sanguinetti said. "That'll be a big help in the verification problem."

As for IP, SoC designers aren't "going to design their own USB or even LTE blocks," Sanguinetti said. "It doesn't provide additional value. There's a lot of opportunity to do IP at the large block level. Those are the two areas that I'd be focusing on."

Mr. Sanguinetti's comments about tool chain complexity are spot-on if one accepts the premise that EDA tools must move to a higher level of abstraction for design and also for verification. While this premise may be true for large SoCs, let's not forget that not all ICs are large SoCs. Many designs being taped out today don't use high-level synthesis, for example, and not every design requires a verification flow based on SystemVerilog.

Non-SoC designs seem to get neglected amid all the SoC HLS hype.

Another consideration is that many designers want vendor/device independence, so they avoid using IP at all and instead infer everything in HDL.

If true dual-port memory blocks were available across the board, then using one RAM for sequencing/control and another for data variables would allow a C-like syntax for the source code, and the function could be modeled by a C#/C++ OOP model using the same hierarchy and structure as the hardware.

The memory access and cycle times are fast enough to match designs using regs and muxes in many cases.
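The commenter's idea of a software model that mirrors the hardware hierarchy can be sketched in plain C++. This is a minimal, illustrative sketch, not anything from Sanguinetti or Forte: the class and member names (`DualPortRam`, `Controller`, `readA`/`writeB`, the control/data split) are all hypothetical, chosen to show one RAM used for sequencing/control and another for data variables, each exposing two independent ports like a true dual-port block.

```cpp
#include <array>
#include <cstdint>
#include <cstddef>
#include <cassert>

// Hypothetical C++ model of a true dual-port RAM: two independent
// ports (A and B), each able to read or write any location. The
// object maps one-to-one onto the hardware memory block.
template <std::size_t Depth, typename Word = std::uint32_t>
class DualPortRam {
public:
    Word readA(std::size_t addr) const { return mem_.at(addr); }
    void writeA(std::size_t addr, Word data) { mem_.at(addr) = data; }
    Word readB(std::size_t addr) const { return mem_.at(addr); }
    void writeB(std::size_t addr, Word data) { mem_.at(addr) = data; }

private:
    std::array<Word, Depth> mem_{};  // zero-initialized storage
};

// The OOP model keeps the same structure as the hardware: a controller
// owns one RAM for sequencing/control words and another for data
// variables, as the comment above suggests. Sizes are arbitrary.
struct Controller {
    DualPortRam<256>  control;  // sequencing / control store
    DualPortRam<1024> data;     // data variables
};
```

Because the software object and the hardware block share the same interface and hierarchy, the same access patterns can be exercised in both, e.g. writing through port A and reading the value back through port B.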

Yet another programming language is not necessarily the right answer. More ways to use block memory should be explored instead.