Say Hi To Hybrid

It has been proposed for some time that virtual platforms could be linked to emulation hardware in order to co-verify the software and hardware components of an SoC. That proposal has now evolved into hybrid emulation, a practical solution that allows pre-silicon verification and validation of today’s complex SoC designs.

First-rate work by the standards body Accellera and the Open SystemC Initiative (OSCI) has given us all Transaction-Level Modeling, or TLM. TLM has enabled us to create a virtual platform of a CPU sub-system, trading off accuracy for speed in order to provide an early target on which to test software. In the early days, a common obstacle to realizing such virtual platforms was the limited availability of SystemC models for various components, for example, a new CPU. If none was available then we would lose time generating a trustworthy model, eroding the benefit of early software test.

These days, those gaps have been filled by the availability of SystemC model libraries for commonly-used functions and IP, such as ARM’s Fast Models (more about them later), but that still leaves the other blocks; you know, those new and often crucially differentiating functions unique to our SoC. One proven solution is to implement such functions in an FPGA-based emulation platform, such as Aldec’s HES, and then link that into the virtual model via transaction-level interfaces.

Let’s explore seven different use modes for such platforms. We shall see how hybrid emulation combines a virtual platform with FPGA hardware, offering a best-of-both-worlds approach. However, we will also discuss certain practical constraints, and how success is measured in time: the timing accuracy required, the time taken to set up, and the project time saved by doing so.

No software, no SoC
What differentiates an SoC-based product? If all we do is hook up the same kind of IP blocks around the same kind of bus as everybody else, then how do we differentiate our product? How can we add value to our SoC if we all have pretty much the same platform? The answer may be in extra blocks of bespoke hardware, but often software is the major difference, and of course, the SoC isn’t much use without it. However, software is also the major headache in SoC development and verification. Software delivery, validation and sign-off can sit squarely in the project critical path, so shouldn’t project schedules and tool chains be more focused on the success of the software team?

Consider standalone FPGA-based prototyping for a moment. It is clear that the major beneficiaries are the software team, but there are also returns for the hardware folks. The prototype, of which there are often multiple copies, is a physical target upon which the software and its integration with the hardware can be tested. However, during those tests the software team is also stressing the hardware in new and sometimes unexpected ways, and so uncovering hardware bugs not previously found by verification alone. Everybody wins, but given their importance to the SoC project, doesn’t the software team deserve even more? Of course they do, which is why the EDA industry developed SystemC and virtual platforms, which software teams can use much earlier in the project rather than wait for the hardware.

Let’s look closer at these virtual platforms, and establish the kinds of software to which they are most useful.

There’s software . . . and then there’s software
Let’s consider what we mean when we say “software”. Designers and commentators may loosely use the term “software” without specifically defining what they mean by it (mea culpa). After all, there are many different types of software operating and inter-operating at different levels of a software stack, as shown in Figure 1. The user space, including applications, appears at the top of the stack and rests upon layers that are increasingly concerned with the OS kernel and the hardware the deeper we go.

Figure 1: The relative importance of speed and performance at different levels of a software stack.

If we are developing software at any given level in the stack then we will need an environment representing the levels below upon which to run it. Ideally that might be the real lower-level software running in real system hardware, but when that is not available then we must rely on some kind of model instead. The higher we are in the stack the more we are trying to model, so the greater the need for speed, but thankfully, the less we need complete accuracy. In fact, the model needs only enough accuracy to fool the software into thinking it is running the real system. Any more accuracy than that will unnecessarily slow down the model’s operation.

Only software at the lowest levels of the stack, shown in Figure 1 in green, is dependent on the SoC hardware, and part of its job is to mask any hardware dependencies from the higher-level software. At Aldec, the software at these lower levels is called “hardware-dependent software”. A model upon which we might test aspects of hardware-dependent software will need high accuracy and may include a cycle-accurate model of the relevant hardware itself, such as RTL simulation, emulation, or an FPGA-based prototype, depending on how fast we need to run.

Software at the highest levels of the stack, on the other hand, such as apps and other user-space programs, needs the least accuracy, and hence can run at the highest speeds. Welcome to the world of virtual platforms.

Fooling all of the apps, all of the time
Have you ever tried to develop an application for Android? If so, then you may be aware of the emulator which is available as part of the Android SDK. This is a software tool that runs on a host workstation and employs its graphics, memory and interfaces in order to represent the target Android environment. As you can see from Figure 2, you get to see a representation of the Android device on your workstation screen and can click its buttons with your mouse and see realistic responses and data.

Figure 2: The Android Emulator. Source: Google

The aim of the emulator is to enable developers to install and run their apps and other user-space programs, in effect fooling the apps into thinking they are running on a physical Android device. It needs no more accuracy than that, which is why it does not support USB connections, headphones, battery features or Bluetooth, and of course, it can’t make phone calls.

Google’s Android emulator is a very specific example of a virtual platform. To recap, the task of a virtual platform is to have just enough accuracy to support the level of software being run upon it. This is largely achieved by modeling behavior and inter-block communications at the transaction level, which makes virtual platforms inherently much faster than equivalent cycle-accurate representations.

If we have the necessary libraries of models then we can create a virtual platform for any SoC. Such libraries are usually written in SystemC and often available open source, while others are commercially licensed, such as the System-Level Library from Synopsys. However, let’s face it, today’s SoC designs are dominated by ARM IP, so at the very least we are going to need models of our ARM cores and bus sub-systems. The good news is that ARM supplies such models, going by the name of Fast Models (their trademark), and these are commonly used in virtual platforms worldwide.

Virtual platforms and FPGA-based Emulation: A model marriage?
So our major IP blocks are accounted for, but what about the rest of the SoC? Your differentiating blocks of hardware or new IP will need models, as well, in order to create the fully-populated virtual platform. These may not be available in SystemC from the get-go but the RTL for these blocks may often be available (and even verified) first. What if we could use that RTL in place of some of the missing SystemC models?

We could compile the RTL into a normal simulator and link it to our SystemC virtual platform using a transaction-level interface. Don’t forget, the virtual model is already operating at a transaction level. This combination would indeed solve the missing-model problem, but at the cost of limiting the overall model’s speed to that of the RTL simulator. The RTL is too accurate a model for software at the highest levels of the stack, so we would be running at RTL simulator speed and gaining nothing in return.

The good news is that this slowdown can be minimized by accelerating the RTL simulation with emulation or even replacing it with an FPGA-based model. Let’s use the Aldec name HES (Hardware Emulation Solution) for short, since it encompasses both of these approaches. This combination of transaction-level virtual platform with HES is given the name Hybrid Emulation.

There are a number of use modes in which hybrid emulation platforms can be beneficial; these are listed in Table 1.

Table 1: Seven key use modes for hybrid emulation.

We will consider each in turn, but the one key enabler for all these hybrid emulation use modes is the link between the virtual and the physical parts of the system. Thankfully, there is already a transaction-level method for linking SystemC virtual platforms with emulators, FPGA-based or otherwise; this being the Standard Co-Emulation Modeling Interface, or SCE-MI.

Why is SCE-MI so dreamy?
SCE-MI saves everybody a lot of work and furthermore, it’s an open industry standard. Aldec employs SCE-MI compliant interfaces in order to link HES to a variety of simulation environments, including SystemC-based virtual platforms, as illustrated in Figure 3. Here we can see that the FPGAs in the emulator are implementing the RTL of a subset of the SoC, maybe a new graphics function or high-speed peripheral, while the rest of the SoC is represented by Fast Models.

In the actual SoC design, all the blocks communicate via a bus network, including the function implemented in the FPGAs. Figure 3 shows that in Hybrid Emulation some of the transaction-level communication needs to pass over to the FPGAs. There may be a number of such interfaces in a real hybrid emulation platform (see the partitioning discussion below), but we have only shown one in the diagram.

Transactions relevant to the function implemented in FPGA must be converted into (and from) cycle-accurate signal transitions within the hardware. SCE-MI is not the whole story, however, and readers should scrutinize claims of “don’t worry; we do SCE-MI too” from board suppliers. SCE-MI, along with the extra infrastructure required, should really be considered as Verification IP (VIP).

Think VIP, not just SCE-MI
An important part of the interface VIP is the layer that converts transactions within the virtual platform’s bus models into transactions understood by the SCE-MI transactor on the emulator. Typically, in a virtual platform, the TLM communications and models are written in SystemC, so we also use SystemC to create TLM wrappers for the purpose of integrating the virtual platform with external modules. Aldec, for example, provides a library of adaptors that convert relevant SystemC TLM activities into SCE-MI messages which then communicate with the physical transactors in HES.

We show the generic TLM2SCE-MI block in Figure 3 but in fact, different adaptors are used for different interfaces, specific to each bus protocol or peripheral standard at the VP-Emulator boundary. For example, Aldec’s TLM2SCEMI-AXI Verification IP provides the connection between their HES FPGA-based emulator and a virtual platform employing ARM’s AMBA AXI bus. The more complete the library of VIP, the greater flexibility we have in partitioning the SoC between the virtual platform and the emulator.

Coming in Part II: Seven ways to use hybrid emulation, and partitioning an SoC across virtual and hardware platforms.