Processors Everywhere, but Tools Lacking

A research study shows the majority of IC designs are now for multiprocessor chips. But are the EDA and design verification sectors keeping up with the SoC multiprocessing revolution?

The formation of a focus on SoCs (systems on chips) here on EE Times should come as no real surprise to anyone, and yet, as an industry, we have been slow to react to the implications of the SoC revolution.

A Wilson Research study conducted during 2012 for EDA company Mentor Graphics, and referenced at Mentor's website, shows that 79 percent of all non-FPGA designs now contain one or more processors, and even 56 percent of FPGA designs have a processor in them. What’s more, over 50 percent of the non-FPGA designs are multiprocessor chips having two or more software-programmable processors in them. The study showed a significant rise in the number of multiprocessor designs.

As an example, consider that in 2007 almost 40 percent of designs contained just one processor, and about 12 percent contained two. In the latest study, 28 percent contained two processors, and only 22 percent were uniprocessor designs.

As an interesting aside, three appears always to have been an unpopular number of processors. It should also be taken into account that the survey is skewed towards the more advanced companies, with the average process geometry adopted being 45 nm. If we were to consider all designs, the average would probably be a node or two behind this, and processor adoption might be closer to the 2007 numbers.

Tools for processors
So, with most advanced designs containing multiple processors, you would think that we would find lots of design tools that focus on the processor, or on the interaction of the processor with the other aspects of the system. But this is far from the truth. Consider for a moment the principal verification strategy in use today. It is based on RTL simulation, fed with stimulus from a constrained random pattern generator. This is fueled by the SystemVerilog language and encapsulated in methodologies, the latest of which is UVM. But where is the support for the processor in there? There is none; in fact, all processors have to be removed from the design before this strategy will work. Mentor is trying to work around this to some extent with the company's inFact tool, but many of the problems remain.
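To make the constrained-random idea concrete for readers outside the verification world, here is a minimal, hypothetical sketch in Python rather than SystemVerilog. The transaction fields and constraints (word-aligned addresses, legal burst lengths, a 1 KB boundary rule) are invented for illustration; a real flow would express them as SystemVerilog constraint blocks driven by UVM sequences, with a constraint solver in place of the simple rejection sampling used here.

```python
import random

def random_bus_transaction(rng):
    """Generate one bus transaction subject to simple, illustrative constraints.

    Instead of writing directed tests one by one, we declare what a *legal*
    transaction looks like and let randomization explore the space.
    """
    while True:
        txn = {
            "addr": rng.randrange(0, 2**16, 4),     # constraint: word-aligned
            "burst_len": rng.choice([1, 2, 4, 8]),  # constraint: legal burst lengths
            "write": rng.random() < 0.5,            # unconstrained read/write mix
        }
        # Cross-field constraint: a burst must not cross a 1 KB boundary.
        # Rejection sampling stands in for a real constraint solver.
        if txn["addr"] % 1024 + 4 * txn["burst_len"] <= 1024:
            return txn

# Seeded for reproducibility, as regression runs typically are.
rng = random.Random(42)
stimulus = [random_bus_transaction(rng) for _ in range(1000)]
```

Note what is missing from this picture, which is the article's point: nothing here models a processor executing code; the stimulus drives pure hardware interfaces.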

Alternatively, could this be an acknowledgement that the processor is a block of IP that can be trusted and does not need to be re-verified in the context of the rest of the system? If this were true, it might be encouraging, in that we would have started to adopt a layered approach to verification in which things do not need to be verified multiple times. The fact that I have not seen this happen for other blocks leads me to believe it is not the case.

Mentor was probably the most visionary of the EDA companies in this respect, and started to buy its way into the embedded processor and software markets many years ago. But to this day, most of its tools remain highly segregated. If the EDA companies are not developing tools to address this, does it imply that such tools are not required? We have certainly seen system-level tools that attempt to measure the performance or power of a potential architecture, and the whole area of prototyping to support early software development is growing -- hence another area of focus supported in the makeover of EE Times.

So, is the processor anything more than a fancy piece of control logic that happens to run software? This would seem to be the natural implication of the lack of processor-centric tools. Are we seeing the emergence of a layered approach to verification? Do we even need tools that take the processors into account?

It may be that the focus of attention will be, and should be, on software scheduling.

That would then make decisions at run time, based on available resources, as to where tasks/threads should reside.

Extensive workload simulation on virtual prototypes then tells you which resources are best to put in the SoC, although each time you strip out a resource in the interest of saving area/power, you would need to resimulate to look for unintended consequences on tasks with demanding peak-performance requirements.

I don't think so - somebody's gonna make a lot of money if they crack this nut.

I sure would love to run a multi-core RTOS on my bipedal robot - one moment it's mild-mannered Clark Kent, the next minute it's Stooperman - locked up and helpless in the presence of a new and improved bug. But it's still multi-core and very, very cool.

You are right, Peter, in that there are tools for the software portion of a single processor, and some work related to multiprocessors, but almost nothing on the hardware side. We are beginning to see things such as specific cell libraries optimized for processors, but we haven't yet got to the point where synthesis, and place and route, can be optimized based on knowing that a block is a processor and thus the general structures likely to be seen. Also, there is nothing that would help with things such as knowing which processor to use.

One of the issues is that while there is some tool support around specific processor architectures (compilers, debuggers, etc.), there is not much unified support for heterogeneous multiprocessing chips.

Still, a couple of UK firms are trying to help out. I am thinking of Imperas and UltraSoC.