Myths, hype and the building blocks of SoCs

How much of a typical chip is based on IP reuse? As a percentage, is it going up or down? Here are some figures that may surprise you.

The SoC market can be roughly separated into three categories of devices by complexity:

Advanced Performance Multicore SoCs

Value Multicore SoCs

Basic SoCs

Each type has differing characteristics, but perhaps the most telling is the use of embedded memory. Advanced Performance SoCs have the most on-chip memory resources, supporting the multiple CPU, DSP and graphics cores these parts embody. Value Multicore SoCs and Basic SoCs require fewer of these on-chip resources because the applications they target are less complex and demand less performance. In general, however, the use of embedded memory is increasing across all of these classes of parts, although at different rates.

An important reason for the increase in computing and memory resources in these parts is the trend toward greater connectivity in all types of devices. From relatively simple parts driving the Smart Grid, to devices expected to connect to the Internet of Things, to smartphone applications where users expect high-performance access to multiple data and audio/video streams, the computing and memory resources required to support these functions are increasing at a fast rate.

Another interesting trend surfaces in the amount of re-used logic found in these designs. Many of the functional blocks that fall under re-used logic have been carried over from previous designs: they implement the same function, but are usually ‘tweaked’ to one degree or another when moved to the next design, even if it stays at the same process geometry. This may be done to gain extra margin in the new design, or in response to a change in a specific market requirement that affects the device’s functionality or feature set.

The problem then becomes: do we count one of these ‘tweaked’ blocks as re-use, or does it fall into the new-logic category? This can be a sticky definitional issue, and the answer can vary depending on whom you speak to. In the above chart, Semico is trying to be as even-handed as possible without favoring one side or the other.

In our view, re-use is increasing, but the share of die area given over to these blocks is declining in favor of more area dedicated to increasing on-chip memory densities. Even though memory is very die-area efficient, the increases in density are pushing the share of the silicon consumed by memory to higher levels.

In the above chart, Semico is trying to give a ‘snapshot’ of the delineation of these three design efforts. Since there are so many different design requirements to fit differing market requirements and competitive situations, such a chart is at best a ‘guidepost’ to what is happening in the market today. Given how many different approaches there are to defining and designing device architectures today, literally every design will exhibit differences compared to our ‘snapshot’ chart. It is certain that everyone’s mileage is going to vary quite a bit.

It is certain that the number of IP blocks being instantiated into SoC silicon is increasing quickly. Looked at from this perspective, one might count the memory as only one type of IP, even though there may be hundreds or thousands of memory blocks on the chip taking up a substantial amount of die area. Measured by area, the memory blocks will far outweigh all the other IP blocks on the part, even though those other discrete IP blocks far outnumber the memory IP blocks.

Within this context it is accurate to say that IP re-use could consume 90% of a design. However, I think you would need to count the memory IP as part of that 90%. Again, this goes back to how you define re-use vs. new logic.
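
To make the arithmetic concrete, here is a rough back-of-the-envelope sketch. The area percentages are hypothetical illustrations, not Semico’s figures; they simply show how counting memory as re-used IP pushes the re-use number toward 90%, while a logic-only count gives a noticeably lower figure.

```python
# Hypothetical die-area breakdown for a large SoC.
# These percentages are illustrative only, not taken from the Semico data.
area = {
    "embedded_memory": 0.55,  # hundreds/thousands of replicated memory instances
    "reused_logic":    0.35,  # carried-over ('tweaked') functional blocks
    "new_logic":       0.10,  # genuinely new logic for this design
}

# Counting memory as re-used IP: re-use accounts for ~90% of the die.
reuse_incl_memory = area["embedded_memory"] + area["reused_logic"]

# Counting only logic: re-use is judged over the logic area alone.
logic_area = area["reused_logic"] + area["new_logic"]
reuse_logic_only = area["reused_logic"] / logic_area

print(f"Re-use incl. memory: {reuse_incl_memory:.0%}")  # 90%
print(f"Re-use, logic only:  {reuse_logic_only:.0%}")   # ~78%
```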

There is another side to this discussion. If 90% of a design is being re-used from a previous design, then why are design costs rising? The answer is that most of those blocks are being tweaked relative to the previous design (more work, more cost), and the integration of those ‘new’ blocks into the new design is difficult, and becoming more difficult as complexity levels increase.

As you can see, this is a very complex issue with different answers depending on what effort is being undertaken and where the design’s original starting point is.

I honestly don't see the point in measuring the distribution of _die area_ between new/reused design and memory.
Die area may influence cost models, yields, etc., but it has little bearing on the design-related aspects of a chip.
One can fill 90% of a chip with replicated memory banks using almost no design effort. Memory doesn't need to be functionally or formally verified, doesn't need to be timing-closed, and doesn't need gate-level simulations. Integration of memory into a design is normally straightforward.
Memory should simply be left out of any discussion regarding "innovation in hardware being constrained".
Memory aside, we are left with new vs. reused blocks. Even here, die area is of little value in the discussion: if I have 12 hardened CPU cores replicated in my design, they may dominate the die area and still be a negligible part of the project when measured in design effort (= schedule, = investment, ~ innovation).
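A toy calculation (block counts, areas and effort figures all invented for illustration) makes the replication point concrete:

```python
# Toy comparison of die-area share vs. design-effort share for a design with
# 12 instances of one hardened CPU core plus a few unique blocks.
# All numbers are invented for illustration.
blocks = [
    # (name, instances, area per instance [mm^2], effort per *unique* block [person-months])
    ("cpu_core",     12, 4.0,  6),   # hardened, replicated: effort paid once
    ("new_accel",     1, 3.0, 30),   # new logic: small area, large effort
    ("io_subsystem",  1, 2.0, 10),
    ("interconnect",  1, 1.5, 12),
]

total_area   = sum(n * a for _, n, a, _ in blocks)
total_effort = sum(e for _, _, _, e in blocks)

for name, n, a, e in blocks:
    area_share   = n * a / total_area
    effort_share = e / total_effort
    print(f"{name:12s}  area {area_share:5.0%}   effort {effort_share:5.0%}")

# The replicated CPU cores dominate the area (~88%) yet account for only
# about 10% of the design effort; the new accelerator is the reverse.
```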
Just removing memories from the graphs in the article will show that while there is a clear increase in the die-area share of reused designs, it is not a sharp exponential trend: the ratio simply transitioned from ~40:60 to ~60:40 over two decades. This is by no means a _fundamental_ change in the industry.
Keeping in mind that, due to replication, die area is at best an "inaccurate" indicator of design effort / innovation, I don't think any serious conclusion can be drawn about the subject purely from die-area data.