Embedded Systems

Articles

Programming without a Net

Embedded systems programming presents special challenges to engineers unfamiliar with that environment.

George V. Neville-Neil, Neville-Neil Consulting

Embedded systems programming presents special challenges to engineers unfamiliar with that environment. In some ways it is closer to working inside an operating system kernel than writing an application for use on the desktop. Here’s what to look out for.

What if your programs didn’t exit when they accidentally accessed a NULL pointer? What if all their global variables were seen by all the other applications in the system? Do you check how much memory your programs use? Unlike more traditional software platforms, embedded systems provide programmers with little protection against these and many other types of problems. This is not done capriciously, just to make working with them more difficult. Traditional software platforms, those that support a process model, exact a large price in terms of total system complexity, program response time, memory requirements, and execution speed.

The trade-offs between safety and responsiveness form a continuum [see Figure 1]. At one extreme is programming on bare hardware, without any supporting operating system or libraries. Here programmers have total control over every aspect of the system but must also provide all their own runtime checking for program errors.
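On bare hardware there is no MMU to turn a stray NULL or wild pointer into a clean fault, so the checks a process model provides for free must be written by hand. A minimal sketch in C of that kind of hand-rolled runtime checking, assuming a hypothetical RAM window (`RAM_BASE` and `RAM_SIZE` are made-up values for illustration, not from any real part):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Assumed address window for valid RAM on a hypothetical board.
 * On bare metal, dereferencing NULL is often a silent read of
 * address 0 rather than a crash, so the programmer supplies the
 * guard an MMU would otherwise provide. */
#define RAM_BASE 0x20000000u
#define RAM_SIZE 0x00010000u

/* Return nonzero if p points into the (assumed) valid RAM window. */
static int ptr_is_valid(const void *p)
{
    uintptr_t a = (uintptr_t)p;
    return p != NULL && a >= RAM_BASE && a < RAM_BASE + RAM_SIZE;
}

/* Checked read: reject bad pointers instead of quietly reading junk.
 * The caller decides how to recover -- there is no kernel to do it. */
static int checked_read(const int *p, int *out)
{
    if (!ptr_is_valid(p))
        return -1;
    *out = *p;
    return 0;
}
```

The valid-pointer path only makes sense on hardware where the window is real; on a hosted system the point is the rejection path, which is exactly the protection the article says embedded programmers must supply themselves.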

Putting It All Together

Embedded projects are built out of lots of pieces. Are you sure that what you've got at the end is what you wanted when you started?

Component integration is one of the tough challenges in embedded system design. Designers search for conservative design styles and reliable techniques for interfacing and verification.

Rolf Ernst, Technical University of Braunschweig

With the growing complexity of embedded systems, more and more parts of a system are reused or supplied, often from external sources. These parts range from single hardware components or software processes to hardware-software (HW-SW) subsystems. They must cooperate and share resources with newly developed parts such that all of the design constraints are met. This, simply speaking, is the integration task, which ideally should be a plug-and-play procedure. This does not happen in practice, however, not only because of incompatible interfaces and communication standards but also because of specialization.

Take, for example, a signal processing program that has been adapted to a specific digital signal processor (DSP) architecture by carefully rewriting the source code using special functions or subword parallelism, optimizing loops, data transport, and memory access. Reusing such a DSP program means either rewriting that code or reusing the whole DSP architecture or part of it, turning the original SW integration problem into a HW-SW integration problem. A crypto algorithm that runs on an application-specific instruction set processor (ASIP) is another example. DSP and ASIP architectures are great for reaching the performance and power consumption goals, but they make portability and, thus, reuse more difficult.
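The specialization problem can be seen in miniature in something as small as a saturating-add loop. The portable C below is the reusable form; a DSP port would typically rewrite the loop body with architecture-specific subword-parallel intrinsics (for example, four 16-bit lanes per wide register), which is precisely the step that welds the code to one architecture and blocks reuse. The function names here are illustrative, not from any particular codebase:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Portable reference: 16-bit add that clamps at the type limits
 * instead of wrapping -- a staple of signal-processing code. */
static int16_t sat_add16(int16_t a, int16_t b)
{
    int32_t s = (int32_t)a + (int32_t)b;
    if (s > INT16_MAX) return INT16_MAX;
    if (s < INT16_MIN) return INT16_MIN;
    return (int16_t)s;
}

/* The loop a DSP port would replace with subword-parallel
 * intrinsics, trading this portability for speed. */
static void sat_add16_buf(int16_t *dst, const int16_t *x,
                          const int16_t *y, size_t n)
{
    for (size_t i = 0; i < n; i++)
        dst[i] = sat_add16(x[i], y[i]);
}
```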

Interviews

A Conversation with Jim Ready

Linux may well play a significant role in the future of the embedded systems market, where the majority of software is still custom built in-house and no large player has preeminence. The constraints placed on embedded systems are very different from those on the desktop. We caught up with Jim Ready of MontaVista Software to talk about what he sees in the future of Linux as the next embedded operating system (OS).

Ready has a 20-year history with embedded OS development. He founded Ready Systems in 1981 and pioneered the development of one of the first commercially viable realtime operating system (RTOS) products, the VRTX realtime kernel. After merging with Microtec Research and eventually with Mentor Graphics, Ready began his next push by forming MontaVista Software to capitalize on open-source Unix/Linux and its use for the embedded systems market.

Articles

Software development for embedded systems clearly transcends traditional "programming" and requires intimate knowledge of hardware, as well as deep understanding of the underlying application that is to be implemented.

Motivated by technology leading to the availability of many millions of gates on a chip, a new design paradigm is emerging. This new paradigm allows the integration and implementation of entire systems on one chip.

These complex systems typically contain application-specific hardwired parts as well as application-specific programmable parts. The programmable units typically consist of microcontrollers, digital signal processors (DSPs), RISC processors, or the new breed of "reconfigurable" processors. All of these parts and units must work together seamlessly and flawlessly.

Division of Labor in Embedded Systems

Ivan Godard

You can choose among several strategies for partitioning an embedded application over incoherent processor cores. Here’s practical advice on the advantages and pitfalls of each.

Increasingly, embedded applications require more processing power than can be supplied by a single processor, even a heavily pipelined one that uses a high-performance architecture such as very long instruction word (VLIW) or superscalar. Simply driving up the clock is often prohibitive in the embedded world because higher clocks require proportionally more power, a commodity often scarce in embedded systems. Multiprocessing, where the application is run on two or more processors concurrently, is the natural route to ever more processor cycles within a fixed power budget.
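The power argument can be made concrete with a back-of-the-envelope model. Assuming dynamic power follows P ≈ k·V²·f, together with the common simplification that supply voltage must scale roughly with frequency, power grows as the cube of the clock, so two cores at speed f come in far under one core at 2f. Both the model and the voltage-tracks-frequency assumption are simplifications, not figures from the article:

```c
#include <assert.h>

/* Relative dynamic power under P = k * V^2 * f, with the assumed
 * simplification that V scales linearly with the frequency ratio.
 * Result is normalized to one core at the baseline clock (k = 1). */
static double rel_power(double cores, double freq_ratio)
{
    double v = freq_ratio;              /* V tracks f (assumption) */
    return cores * v * v * freq_ratio;  /* cores * V^2 * f */
}
```

Under this model, one core at double the clock costs 8x the baseline power, while two cores at the baseline clock cost only 2x, which is the budget case for multiprocessing made above.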

Today these plural processors usually reside as cores in a single chip as part of a system-on-a-chip (SoC) solution, frequently together with control circuitry, local memory, and large chunks of dedicated non-programmable custom logic, collectively called peripherals. In general, the cores in multicore embedded designs are incoherent; that is, there is no hardware means by which the cores maintain a consistent collective view of the rest of the system, in particular of the contents of memory.
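With incoherent cores, even a one-word handoff must manage memory consistency in software: the producer flushes its cached copy out to shared memory before signaling, and the consumer invalidates before reading. A sketch of a single-slot mailbox in that style, with the platform cache-maintenance hooks left as no-op stubs; the names and the protocol are illustrative assumptions, not from the article:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* One-slot mailbox shared between two incoherent cores. */
typedef struct {
    volatile uint32_t ready;   /* 0 = empty, 1 = full */
    uint32_t payload;
} mailbox_t;

/* Platform cache-maintenance hooks; no-ops here, real cache clean
 * and invalidate operations on actual incoherent hardware. */
static void cache_flush(const void *p, size_t n)      { (void)p; (void)n; }
static void cache_invalidate(const void *p, size_t n) { (void)p; (void)n; }

static void mbox_send(mailbox_t *m, uint32_t v)
{
    m->payload = v;
    cache_flush(&m->payload, sizeof m->payload);     /* publish data first */
    m->ready = 1;
    cache_flush((const void *)&m->ready, sizeof m->ready);
}

static int mbox_recv(mailbox_t *m, uint32_t *out)
{
    cache_invalidate((const void *)&m->ready, sizeof m->ready);
    if (!m->ready)
        return -1;                                   /* nothing yet */
    cache_invalidate(&m->payload, sizeof m->payload);
    *out = m->payload;
    m->ready = 0;                                    /* release the slot */
    cache_flush((const void *)&m->ready, sizeof m->ready);
    return 0;
}
```

On real hardware the flag update may also need a memory barrier so the payload write is visible before the flag; ordering requirements vary by core and interconnect.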

SoC: Software, Hardware, Nightmare, Bliss

System-on-a-chip design offers great promise by shrinking an entire computer to a single chip. But with the promise come challenges that need to be overcome before SoC reaches its full potential.

Telle Whitney, Ph.D., and George Neville-Neil

System-on-a-chip (SoC) design methodology allows a designer to create complex silicon systems from smaller working blocks, or systems. By providing a method for easily supporting proprietary functionality in a larger context that includes many existing design pieces, SoC design opens the craft of silicon design to a much broader audience.

Chip design complexity trends continue to follow Moore's law, well beyond where many pundits thought it would end. Systems using a feature size of 90 nanometers are in design, and chips of 0.13 microns are in production. Complex designs now include 20 million logic gates, or 200 million transistors, on a 1-cm² die. By way of comparison, only a few years ago 2 million logic gates or 10 million transistors were common.