Step Up to C for Embedded R&D

Working in C/C++ via SystemC and TLM works, says Guy Bois, president of Space Codesign Systems Inc.

We are all eager to lower the cost of embedded system design, while increasing quality and decreasing time-to-market. However, as embedded systems become more complex and sophisticated, the traditional design process is taking up too much time. It is simply not agile enough to achieve the results as rapidly as we need.

Since the 1990s, efforts to improve the R&D of embedded systems using hardware/software co-design have yielded only limited co-development processes. The R&D has tended to center on specific types of hardware design, with hardware and software still handled by separate departmental teams. As a result, prototypes still require an integration phase, along with the risks that phase incurs, and multiple coding languages are used, creating a constant need for recoding.

The starting point for a more agile approach to development is to work at a higher level of abstraction, in this case, ESL, or electronic system level.

By taking advantage of the SystemC library definitions and TLM-2.0 interface standards -- both part of the IEEE 1666 standard -- we can use a single language, C/C++. This allows us to create a fully modeled, functional software representation of a hardware/software SoC design based on a mix of processors, software, communication links (such as AXI interconnects), memories, and other IP cores. The various tasks of the target embedded application can then be implemented in the SoC as either hardware or software, according to quality-of-results (QoR) requirements based on performance, power consumption, and hardware resource utilization, following a true hardware/software co-design approach.

Under this approach, the hardware/software co-design process is conducted from the very start of an SoC embedded system project. The application is first broken down into multiple tasks; otherwise, a single large task will end up as a single software or hardware implementation, without the benefit of optimization. Before any details of a target architecture platform are introduced, the tasks together form an algorithm, a functional specification of the application, which must be validated through behavioral simulation.

Once the algorithm is deemed sound, the functional specification is mapped to an initial design configuration for a chosen architecture platform of processors (including an RTOS) and interconnects. Some tasks will be targeted for software implementation while others will be targeted for hardware. A design exploration process ensues, in which successive design configurations are analyzed against the QoR constraints. Real co-design takes place while exploring different design configurations and adjusting the hardware/software partitioning.

This process can be facilitated by an automation technology that retargets the same C/C++ models between hardware and software without recoding for the other medium, resulting in a rapid and agile process.

With the efficiency of using the same models to create the hardware and software implementations of an SoC embedded system, and the speed of simulating ESL models for performance analysis of each candidate, it is possible to converge rapidly on a design that fulfills the QoR goals. If no candidate does, one must examine the capabilities of the chosen architecture platform and decide which changes to make -- for example, choose another processor within the same instruction set architecture, or change the platform outright.

Even then, the rapidity of this agile approach results in an iteration loop of mere hours, or at most days, rather than the weeks required by RTL-based approaches.

When a viable configuration is found, the next step after implementation is realization. At this phase, we seek to drive downstream implementation tools such as the Xilinx Vivado tool suite or Altera's Quartus II. The new Xilinx All Programmable Abstractions initiative highlights a number of tools that can be driven this way, such as HLS (high-level synthesis) for silicon realization after design creation, using the true hardware/software co-design flow described above.

In summary, with the suggested method the hardware and software design is far more unified than with current methodologies. In this paradigm, embedded system design becomes so rapid that there is no need for an explicit prototyping phase -- instead, you can start working on your product implementation on day one and proceed by incremental refinement. There is no longer a need for multiple design languages, design teams, and models, and the recoding of functions for hardware or software is a thing of the past.

The approach is practical and the entire process has been realized by the author's company using the Xilinx Zynq "All Programmable SoC" platform.

— Guy Bois is a professor at École Polytechnique de Montréal and Director of the GRM2 Laboratory, where the SPACE program (SystemC Partitioning of Architectures for Co-design of Embedded Systems) was originally conceived. Guy's expertise in leading his research lab has extended to guiding the launch of SpaceStudio from Space Codesign Systems Inc.

It is quite right that co-design will decrease the chances of delay in system design. Traditionally, the hardware was designed first, and later, during software design, many needs to modify the hardware were raised; this approach minimizes those chances.

Dear KB3001, EDA evolution contains many examples where the introduction of new technologies to further automate the design process met with reticence and resistance before their adoption. RTL languages and logic synthesis, for example, faced resistance when they emerged in the 1990s, but are now in routine use. For ESL and co-design, the challenge is even greater because it involves not only hardware designers but also software developers. We are not saying that software developers will replace hardware designers (or vice versa). Anyone who has worked with HLS tools knows that it takes knowledge and experience in hardware to obtain efficient results. Once you know how to use it, HLS saves a lot of time and allows one to focus on algorithm optimization (in collaboration with software developers) rather than on the details of state machines. That is why we say that a common language and platform must be at a higher level of abstraction than RTL, allowing us to bring together "groups of developers with complementary skills"...

Thanks for your input, j_b_. This isn't just musing on paper; we are doing this today with electronic system level (ESL) models in C/C++ with SystemC and TLM-2.0. Working in a common language is not just for system architects; it allows system architects to work with software developers and hardware designers. So the sort of specialist insight that you talk about, and the new perspectives that each group can gain from the others, can help to drive design exploration and achieve an optimal design. The technology does not optimize the design (yet); it merely eases the process by retargeting the same function (within an application) to either a hardware or software implementation.

My experience has been the same. Instead of expecting one person to be expert at everything, we need groups of developers with complementary skills who can understand each other's requirements and work well together in an agile way.

PS. High-level tools are good after the bottom-up work has been done properly.

I second the encouragement of software engineers who understand hardware and hardware engineers who understand software! All too often I have seen software talent being used to design hardware (usually FPGAs, due to the "programming nature" of VHDL/Verilog) who understood neither the hardware nor the implications of their "code". Over time, they learned from their mistakes, migrated toward a software-like understanding of hardware, and, given enough experience, became quite adept at hardware development. The biggest bang for the buck would be to integrate the software and hardware developers more tightly, so that, just like in the Agile software team approach, everyone is responsible for jumping in and making the next release (hardware prototype) successful.

In my opinion, that's something that sounds great on paper but might very well fail in practice. I would prefer hardware developers who can program and programmers who understand hardware. Together they'll do the proper dovetailing much better than such an abstraction monster, while using the correct tools for both sides of the development. At least that's my experience over the last 25 years in this business. It's like automatic routing of PCBs with all the constraints, etc. The theory sounds great, but the results of human professionals are still better than what the machine turns out.