Embedded Software – a blog by Colin Walls (EDA Cafe)

2017-08-15

It is interesting how different parts of my life intersect with one another. I am thinking of my working life in embedded software and an aspect of my personal life: my lifelong interest in photography. Years ago, they were very separate activities, but the move from film to digital has brought them closer together.

A particular incident occurred recently that raised interesting questions about the value of software …

Everyone has heard of Photoshop, which is the definitive software tool used by digital photographers. This product, along with a bunch of other similar tools, is quite expensive – many hundreds of dollars – which, for an amateur photographer, is steep. Although there is a very low cost, “amateur” version available (which is amazing value for money IMHO), many people yearn to have the “real thing”. Most of them dig deep and spend the money. However, there are always some who try to cut corners.

I belong to a local camera club and a new member advertised the availability of a range of pro quality software at a very reasonable price. I expressed interest – I always like a bargain – but said that I was only happy if it was legal, licensed and shrink-wrapped. He got back to me and was quite up-front about the fact that it was not legal – it was a “cracked” pirate copy and that many others did not have my “high standards”. When I said that I (and the club) could not support or condone such an illegal activity, he told me that all he was doing was helping people, who could not afford this fancy software, to get the tools they needed. I pointed out that this was equivalent to stealing cars for people who could only afford to take the bus.

This incident has sparked much discussion among club members about the value and cost of software. On the negative side, there were comments about Adobe “ripping off” customers. This is unfair, as Photoshop is aimed at pros, who are actually getting a good deal. From a positive perspective, club members are now much more aware of the low cost and free options available to them. They now have a clear choice: spend little/nothing, spend a lot, or break the law.

This discussion closely parallels the one that takes place among engineers and managers of embedded software projects. Why should they pay high prices for tools, when there are free options available? Can they “make do” with fewer licenses? There are lots of options, but the bottom line for embedded software developers is the same as it is for photographers: if you have the right tools, you will get the best results. For embedded software engineers, that means that they will introduce fewer bugs and find other bugs more quickly; the saved time quickly pays for the tools.

I have always wondered why many products need to have such sophisticated license management. Surely “real” companies would not pirate software, when they consider the legal implications (= costs) of being caught doing so? Certainly, at Mentor Graphics the rules are clear: using illegal software can be just cause for dismissal. However, we frequently encounter customers who infringe the license terms for both tools and runtime software. Evidently, the value of software really is not clear to everyone.

2017-07-17

For a software developer, the idea of a library is quite simple: It is a file containing a (typically large) number of functions/procedures/subroutines in a special format. At link time, the linker looks in the library (or there may well be more than one, in which case it checks each in turn) to resolve any references to functions not satisfied by the supplied object modules. This means that the programmer just needs to reference commonly used functions and their code is pulled in automatically.

Of course, it is not quite that simple. Also, as with most aspects of embedded programming, libraries present more challenges and options to developers …

When developing software for the desktop, most programmers give little consideration to standard libraries, concentrating only on any special libraries that might be employed for their application. Such libraries are commonly dynamically linked, so they are quite different from anything used in most embedded systems.

The fact that every embedded system is different is often cited (by me anyway) as a reason for much of the complexity of developing embedded software. The use of libraries reflects this. Apart from getting a library that is right for the target device family and the chosen compiler, there may be a wide selection of other variations: chip family member specifics, register relative addressing, PC relative (position independent) code, endianness, size/speed optimization; the list goes on …

A standard C library contains two types of functions:

Functions called by the generated code where the compiler has determined that so doing is preferable to creating inline instructions.

Functions explicitly called by the code (like printf()).
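A concrete example of the first type is 64-bit division in source code built for a 32-bit target. With GNU tools, the compiler typically emits a call to a helper in libgcc (__udivdi3 in this case) rather than generating inline instructions; other toolchains use their own support routines. A sketch:

```c
#include <stdint.h>

/* No single 32-bit CPU instruction performs a 64-bit divide, so the
   compiler calls a library support routine behind the scenes
   (__udivdi3 with GNU tools). The programmer never names it. */
uint64_t average_interval(uint64_t total_ticks, uint64_t events)
{
    return total_ticks / events;   /* may compile to: call __udivdi3 */
}
```

printf(), by contrast, is the second type: its code is pulled in only because the source explicitly names it.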

Commonly a single library is used to contain both these types of function. However, that is not always the case. For GNU compilers, one library (libgcc) contains the compiler support functions. There are a number of options to choose from for the explicitly called functions:

GLIBC is very full-featured and designed to be used with Linux, including not only the ISO C functions, but POSIX support and GNU extensions.

uClibc is also designed to be used with Linux (or uClinux) and contains a subset of GLIBC, so it has a smaller footprint.

Newlib is smaller still and is primarily aimed at embedded applications with no OS.

An interesting question is: what is the benefit of a smaller library, given that only called functions are extracted from it, so the final binary image size will not vary? I can think of three answers:

A smaller library results in faster link time.

Less choice of functions might make a programmer more careful in their selection, thus resulting in more optimal (smaller) code.

A small library may also have smaller (less capable) versions of functions, which will result in a smaller application memory footprint.

2017-06-12

I am not a networking specialist. If you are an expert in this area, this posting will be teaching a grandmother to suck eggs (strange expression – I wonder what it actually means). Obviously, over years of working with embedded systems, I have learned quite a lot about protocols, so learning about a new one is not starting from scratch. For many, networking begins and ends with TCP/IP. However, there are lots of other Internet protocols – FTP, UDP and HTTP, for example. There are also other kinds of connectivity that may or may not be thought of as networking – Wi-Fi, Bluetooth and USB, for example.

It was while studying the operation of the last of these, USB, that I came across a technique that was familiar in form, but unfamiliar in application: bit stuffing …

The term “bit stuffing” broadly refers to a technique whereby extra bits are added to a data stream, which do not themselves carry any information, but either assist in management of the communications or deal with other issues.

The use of bit stuffing, with which I was familiar, was to do with protocol management. Imagine you have a stream of binary data and you periodically want to include a marker to say that a data set has finished. Obviously, you can use a particular bit sequence, but how do you recognize it when any sequence of bits might occur within the data stream? This is where bit stuffing comes in.

For example, we can define a rule that no more than five 1s can occur in a row in the transmitted data. To make this work, the sending software will insert (stuff) an extra 0 after any sequence of five 1s. It can then send a sequence of six 1s specifically to indicate an end of data set. The receiving software, on seeing a sequence of five 1s, checks the next bit. If it is 0, the bit is simply discarded; if it is 1, then it notes that an end of data set has been flagged. This technique is very flexible and can be adapted to various circumstances. It is broadly the same idea as using an “escape” character in byte-oriented protocols.
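That rule can be sketched in a few lines of C. For clarity, the bits here are held one per array element rather than packed into bytes; the five-1s threshold and the end-of-data marker are exactly as described above.

```c
#include <stddef.h>

/* Stuff a 0 after every run of five 1s. Returns the stuffed length.
   'out' must allow for the worst case (n + n/5 bits). */
size_t bit_stuff(const int *in, size_t n, int *out)
{
    size_t run = 0, o = 0;
    for (size_t i = 0; i < n; i++) {
        out[o++] = in[i];
        run = (in[i] == 1) ? run + 1 : 0;
        if (run == 5) {
            out[o++] = 0;   /* stuffed bit: carries no information */
            run = 0;
        }
    }
    return o;
}

/* Reverse the process: after five 1s, a 0 is discarded and a sixth 1
   flags end of data set. Returns the number of data bits recovered. */
size_t bit_destuff(const int *in, size_t n, int *out)
{
    size_t run = 0, o = 0;
    for (size_t i = 0; i < n; i++) {
        if (run == 5) {
            run = 0;
            if (in[i] == 0)
                continue;   /* stuffed bit: simply discard */
            break;          /* six 1s in a row: end-of-data marker */
        }
        out[o++] = in[i];
        run = (in[i] == 1) ? run + 1 : 0;
    }
    return o;
}
```

Stuffing six data 1s produces the seven-bit sequence 1 1 1 1 1 0 1 on the wire, which the receiver restores to the original six bits.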

I was interested to see how and why bit stuffing is used in USB, where it has a different purpose. USB is a synchronous protocol, which means that the sender and receiver must be somewhat synchronized to correctly recognize data on the bus. There is no separate clock line between the devices. The receiver uses transitions in the data stream to get in synch. Because of the way the encoding is implemented, a stream of 0s is not a problem, as there are plenty of transitions. However, a stream of 1s would have no transitions, which might compromise the synchronization. This is fixed by the transmitter stuffing a 0 after any sequence of six 1s. The receiver just discards any bit following a sequence of six 1s. This is a simplified description, but I hope that it conveys the idea.

The downsides of bit stuffing are that it introduces an overhead – though this is minimal (0.8% on random data in USB, for example) – and that the overall data transmission rate is not entirely predictable, as it is (slightly) data dependent.

Interestingly, USB 3.0 also messes with data streams in another way. It has a data scrambling mechanism, which eliminates repetitive patterns in data that would otherwise cause radio interference and compromise compliance with FCC requirements. But that is another story entirely …

2017-05-15

Embedded software development tools are important to all developers and a topic that I frequently discuss. The way such tools are described by vendors is interesting. For example, there might be a reference to an “optimizing compiler”. That is rather meaningless, as all compilers are optimizing to at least some degree. For an embedded compiler, the important factors are the quality of optimization and, more importantly, the degree of control that the user can apply.

Another interesting terminological issue is applied to debuggers and trace tools. They are commonly referred to as “non-intrusive” …

Most people will have heard of Heisenberg’s Uncertainty Principle. Although it has a precise meaning in the realm of quantum physics (which is well beyond the scope of this blog!), it does highlight a basic rule of life: you cannot measure something without affecting the thing that you are trying to measure; i.e. the act of measurement affects the result. This applies in a surprisingly diverse range of circumstances and has useful side benefits.

If you are an electronics designer, you may need to measure a voltage in a circuit. That measurement drains a (very tiny) current, which affects the voltage. If you measure a current, a resistance is introduced, which affects the current. To look at timings and waveforms, an oscilloscope is the standard tool. The application of the ‘scope probes affects the capacitance of the circuit, which, in turn, may modify the timing behavior. Does all this matter? Normally, no, because the effects are so tiny that they may be ignored. Sometimes, however, the effect (of tiny timing changes) may cause the circuit to malfunction. Such a fault can be hard to track down, but it does indicate that the design is rather marginal and some work to make it more robust would be advisable.

Surprisingly, the rule applies to software as well. A debugger may use a hardware probe, which has a small effect upon the timing of the code. There may be a debug agent to facilitate “run mode” debug or extra operating system instrumentation to enable profiling. Both of these affect the size of the code and, more importantly, its real time behavior. Just like with the hardware design, the effects are likely to be tiny. However, any problems that do arise simply serve to highlight poor (i.e. fragile) design.

On a future occasion, maybe I will look at more parallels from nuclear physics, which teach useful principles. Maybe we can discuss the half life of executing software …

2017-04-18

All my working life, I have had a challenge with explaining to people what I actually do for a job. It all starts with defining what an embedded system actually is. This is by no means easy. I thought that this might become simpler over time, as embedded systems become even more ubiquitous, but the reverse is true. The definition is getting even fuzzier.

It has reached a point where software engineers do not necessarily know whether they are working on embedded systems or not …

There are two reasons why it may be unclear [or of no concern] to a software engineer that they are working on an embedded system:

First, software teams have grown drastically in recent years, as systems have become more complex and the code content increased massively. That growth is not simply the addition of more programming “power” – it is not a question of just applying more brute force to write more code. Increasingly, developers are specialized. Some are likely to be “real” embedded engineers, who are comfortable working close to the hardware; others may get no closer than an RTOS API; some application specialists will not even be aware that an RTOS is in use, so their programming context is much like that of a desktop software developer.

The second reason is that many embedded systems are built using desktop-derived OSes, like Linux, and [some] RTOSes are becoming more like desktop OSes. The difference used to be clear. A desktop OS was a stand-alone entity, which was booted up when the machine started. Applications were then started and stopped, as and when required. An embedded OS was bound to the application code as a single, monolithic executable entity; when the device was switched on, the application code began to run [under the control of the OS]. Embedded Linux works in a similar way to its desktop relation: it boots and the application code is then loaded [from somewhere] and started. A facility for the dynamic loading and unloading of application modules is now available for many embedded OSes, which changes the programming paradigm.

So, that explains the confusion. But does it matter? I think it does, because “true” embedded software engineers are not only skilled in the low level stuff, they are very adept at squeezing the last bit of performance out of hardware. The desktop programmer’s attitude is that they have infinite memory and CPU power to play with. This could be one reason, at least, why my phone spends more time on charge than it does in my pocket.

2017-03-15

I have frequently made the observation that a key difference between embedded and desktop system programming is variability: every Windows PC is essentially the same, whereas every embedded system is different. There are a number of implications of this variability: tools need to be more sophisticated and flexible; programmers need to be ready to accommodate the specific requirements of their system; standard programming languages are mostly non-ideal for the job.

I have written on a number of occasions about the non-ideal nature of standard programming languages for embedded applications. A specific aspect that can give trouble is control of optimization …

Optimization is a big topic. Broadly it is a set of processes and algorithms that enable a compiler to advance from translating code from (say) C into assembly language to translating an algorithm expressed in C into a functionally identical one expressed in assembly. This is a subtle, but important difference.

A key aspect of optimization is memory utilization. Typically, a decision must be made in the trade-off between having fast code or small code – it is rare to have the best of both worlds. This decision also applies to data. The way that data is stored in memory affects its access time. With a 32-bit CPU, if everything is aligned with word boundaries, access time is fast; this is termed “unpacked” data. Alternatively, if bytes of data are stored as efficiently as possible, it may take more effort to retrieve data and hence the access time is slower; this is “packed” data. So, you have a choice much the same as with code: compact data that is slow to access, or a bit of wasted memory but fast access to data.

Most embedded compilers have a switch to select what kind of code generation and optimization is required. However, there may be a situation where you decide to have all your data unpacked for speed, but have certain data structures where you would rather save memory by packing. Or perhaps you pack all the data and have certain items which you want unpacked either for speed or for sharing with other software. For these situations, many embedded compilers feature two extension keywords – packed and unpacked – which override the appropriate code generation options. It is unlikely that you would use both keywords in one program, as only one of the two code generation options can be active at any one time.
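The exact spelling of those keywords is toolchain specific; GNU compilers, for instance, express packing as an attribute rather than a plain keyword. A sketch of the trade-off (offsets assume a typical 32-bit alignment model):

```c
#include <stdint.h>

struct unpacked_rec {            /* natural, word-aligned layout */
    uint8_t  tag;                /* offset 0, then 3 padding bytes */
    uint32_t value;              /* offset 4: fast, aligned access */
    uint8_t  flags;              /* offset 8, then 3 trailing pad bytes */
};                               /* sizeof == 12 */

struct __attribute__((packed)) packed_rec {
    uint8_t  tag;                /* offset 0 */
    uint32_t value;              /* offset 1: compact, but unaligned */
    uint8_t  flags;              /* offset 5 */
};                               /* sizeof == 6 */
```

The packed version halves the memory used, but each access to value may cost extra instructions (or, on some CPUs, would fault if attempted naively) – precisely the size/speed trade-off described above.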

2017-02-16

A significant factor in getting any job done properly is having the right tools. This is true whether you are building a kitchen, fixing your car or developing embedded software. Of course, it is the last of these that I am interested in here. I have been evangelizing on this topic for years (decades!). The problem is that there is a similarity – arguably superficial – between programming an embedded system and programming a desktop computer. The same (kind of) languages are used and software design techniques are fairly universal. However, there are really some major differences …

There are three key areas of difference between desktop and embedded programming:

The degree of control required by the embedded developer is much greater, in order to utilize resources (time and memory) effectively.

The approaches to verification and debugging are quite different, as an external connection and/or a selection of instruments needs to be employed. Also, further tools may be needed to optimize the performance of the application.

Every embedded system is different, whereas every PC is basically the same. This means that the tools (like the programmers) need to be much more flexible and adaptable.

Because there are so many desktop programmers, who are all working in the same environment, there is a huge demand for tools. The result is that very good tools are effectively (or literally) free. The apparent similarity of embedded to desktop programming means that developers have a misguided expectation that their tools should be free too – regardless of their specialized needs and much lower demand level. In the electronic design automation (EDA) world, there is no such expectation. Tools are valued and price tickets are commonly in five figures.

There are two ways that embedded developers can currently get tools:

They can purchase commercial tools that are dedicated to the needs of the embedded developer. This is undoubtedly the best approach, but their costs are not insubstantial. There is a reasonable expectation that the tools will work “out of the box” and that technical support is available.

They can take open source (“free”) tools, which have been adapted for embedded use, or do the adaptation themselves. The direct costs are, of course, lower, but the extra time needed to get the tools in shape and to obtain support from the open source community cannot be ignored.

But, maybe there is a third way. What if a vendor were to take the best-in-class open source tools, comprehensively adapt them to the needs of the embedded developer, add some additional tools that fulfill their specialized requirements and offer this as a reasonably priced package? This package might be available in various editions, recognizing the diverse needs of embedded developers, and be available for immediate purchase and download on the Web. High quality technical support would also be available. For users with particularly specialized needs, services would be available to further adapt the tools to fit their specific requirements.

How does that sound?

2017-01-16

I have often talked about the process that might be applied to the selection of an embedded operating system and I hope that I can provide some guidance. However, developers tend to stick with a specific OS [or, at least, with a particular OS vendor] – recent research suggested that only about 20% of developers anticipated a change of OS for their next project.

I started thinking about why there is this apparently high degree of loyalty …

I do not think that there is a single, simple reason why embedded software engineers choose to use the same OS time and again. One motivation is that embedded guys have a pragmatic conservatism: “if it ain’t broke, don’t fix it”. Although that attitude is quite reasonable, I think that we can identify two specific reasons not to change OS:

Vendor satisfaction. If the level of support and quality of documentation is very good or excellent, that is definitely a reason to stick with a particular OS vendor [as it is with almost any product].

Skills and IP lock-in. The technical characteristics of a given OS permeate the application code and the skill set of the team. This has two primary manifestations:

Drivers and middleware are often highly specific to a particular OS. Moving to a new OS implies the acquisition of new skills and the rewriting of a lot of code.

The application program interface [API] ties the application code to the OS and also represents part of the team’s skill set. It is true that many RTOS products have a proprietary API. Moving to another OS would require changes to the application code. Alternatively, many developers use an OS abstraction layer to protect themselves from such a change – only the layer needs to be modified to accommodate a change in OS. Another approach is to embrace a standard, and the common API standard is POSIX. Although this is the native API for Linux, it is also supported by many RTOS products and its use provides a degree of code portability.
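A minimal sketch of such an abstraction layer, using invented osal_* names: the application calls only the wrappers, so a change of OS is confined to this one module. The rtos_a_* branch is hypothetical; the other branch maps onto the POSIX semaphore API mentioned above.

```c
typedef void *osal_sem_t;   /* opaque handle: hides the native OS type */

#ifdef USE_HYPOTHETICAL_RTOS_A
#include "rtos_a.h"         /* imaginary vendor RTOS */

osal_sem_t osal_sem_create(unsigned initial)
{
    return rtos_a_sem_new(initial);
}
void osal_sem_take(osal_sem_t s) { rtos_a_sem_pend(s, RTOS_A_FOREVER); }
void osal_sem_give(osal_sem_t s) { rtos_a_sem_post(s); }

#else                       /* POSIX build: Linux or a POSIX-API RTOS */
#include <semaphore.h>
#include <stdlib.h>

osal_sem_t osal_sem_create(unsigned initial)
{
    sem_t *s = malloc(sizeof *s);
    sem_init(s, 0, initial);
    return s;
}
void osal_sem_take(osal_sem_t s) { sem_wait((sem_t *)s); }
void osal_sem_give(osal_sem_t s) { sem_post((sem_t *)s); }
#endif
```

Porting to a new OS means rewriting only the bodies of these wrappers; the application code above them is untouched.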

If you are sticking with a specific OS for other reasons, I would be interested to hear about your motivations and experiences.

2016-12-15

I am often asked questions about embedded software. Sometimes they are complex; other times they are simple. But frequently, it is the simplest ones that lead to an interesting train of thought. The one that set my brain working recently was something like this: “I have some non-volatile memory in my design, which is used to retain specific parameters through power cycling. The first time the device is used, the memory contains garbage and needs to be initialized. When the software starts up, how can I detect that this is the first time it has executed and an initialization sequence needs to be run?”

My first thought was to suggest that simple inspection of the data would show whether it was valid or not. In some applications, that would certainly be true. In others, perfectly valid data could look like a jumble of ones and zeros. There must be a simple, reliable way to make it clear that the memory/data has been initialized …

There are probably several ways to solve this problem, but I think the best approach is to dedicate a tiny part of the non-volatile memory to be a “signature”.

A signature is just a quickly recognizable sequence of bytes which cannot occur randomly. Of course, this ideal is impossible, as any sequence of bytes, however long, could occur randomly. It is just a matter of minimizing that possibility, whilst still making the check quick and easy. If the signature is just 4 bytes, there is a 4 billion to 1 chance of it occurring randomly. I think that for almost any application I can imagine, that is good enough. And a 32-bit value may be checked very quickly.

By careful choice of the signature values, the chances of an accidental occurrence may be reduced. Intuitively, a sequence of consecutive numbers [say 1, 2, 3, 4] would feel more unlikely than a “random” set. After all, when did the lottery last yield a consecutive sequence of numbers? Of course, such a sequence is just as unlikely as any other. However, by thinking about how memory works, the unlikelihood of a specific sequence may be increased. What values might memory have when it is first powered up? I can think of 4 possibilities:

1. Totally random data.

2. All zeros.

3. All ones [0xff in every byte].

4. Some regular pattern – alternating bits, for example.

If it is (1), then any signature will give us the 4 billion to 1 chance. Any of the others can be detected by use of the right signature. I would suggest the following: 0x00, 0xff, 0xaa, 0x55. This should cover all of (2), (3) and (4) and still be just 32 bits.

Some care is needed with the initialization sequence. It is essential to set up valid data and then initialize the signature as the very last thing in the procedure.

Of course, the use of a signature does not guarantee the integrity of the data. It may be wise to use a checksum or CRC for error checking or even a mechanism for self-correction of data. This results in the start-up sequence:

if signature invalid
    initialize
else
    if data invalid
        initialize
    endif
endif
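In C, the whole start-up sequence might look something like this. The nv_block layout and the simple additive checksum are invented for illustration; real code would read and write the actual non-volatile device.

```c
#include <stdint.h>
#include <string.h>

#define SIG_LEN 4
static const uint8_t nv_signature[SIG_LEN] = { 0x00, 0xff, 0xaa, 0x55 };

struct nv_block {
    uint8_t sig[SIG_LEN];     /* "memory has been initialized" marker */
    uint8_t params[16];       /* the retained parameters */
    uint8_t checksum;         /* integrity check on params */
};

static uint8_t nv_checksum(const uint8_t *p, size_t n)
{
    uint8_t sum = 0;
    while (n--)
        sum += *p++;
    return sum;
}

static void nv_initialize(struct nv_block *nv)
{
    memset(nv->params, 0, sizeof nv->params);   /* default values */
    nv->checksum = nv_checksum(nv->params, sizeof nv->params);
    /* write the signature as the very last step */
    memcpy(nv->sig, nv_signature, SIG_LEN);
}

void nv_startup(struct nv_block *nv)
{
    if (memcmp(nv->sig, nv_signature, SIG_LEN) != 0)
        nv_initialize(nv);    /* first ever power-up: memory is garbage */
    else if (nv->checksum != nv_checksum(nv->params, sizeof nv->params))
        nv_initialize(nv);    /* signature present, but data corrupted */
}
```

Note that nv_initialize() writes the signature last, as discussed, so a power failure mid-initialization leaves the block looking uninitialized rather than half-valid.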

2016-11-15

I was recently approached for help by a Mentor Graphics customer, who was planning a new project and needed to select an operating system. They wanted guidance with that choice. Of course, one is tempted to say that it does not matter which of our products they chose (as, between them, Nucleus RTOS and Mentor Embedded Linux do cover most possibilities), but I felt they needed something more objective.

There is actually a huge choice. Given that the decision has been made to purchase an OS, rather than develop something in-house (an expensive option which rarely makes sense), there is a choice between the “heavyweight” OSes, like Windows CE and various flavors of Linux, and around 200 other, mostly real-time (RTOS), products. What the customer was after was a simple, decision-driven process, like a flowchart …

I did not know of the existence of such a tool and thought about whether I could create one. However, I quickly realized that too many of the parameters were interconnected for a straightforward flowchart to handle. Instead, I thought it would be better to formulate a concise list of key questions, the aggregate answers to which would lead to a decision. This is a topic that I commonly address in web seminars, conference papers, etc. Here, I can only give a taste of the kinds of questions to be asked:

Is your application real time? In other words, does it need to respond to external events in a very predictable fashion? If the answer is yes, then looking at a true RTOS may be your best option. Although real-time extensions may be available for other OSes, I might question the benefits of making such a choice.

Is memory size an issue? All embedded systems have some kind of limit to how much memory they can have, but if this is quite stringent, the choice of one of the heavyweight OSes may not be wise.

Do you have plenty of CPU power? Or is the processor you have only just about powerful enough for the application? Efficient CPU usage – low overhead, if you like – is a trademark of most RTOSes, which makes them a good choice if you do not have power to spare.

Does your device have power consumption issues? Particularly, but not exclusively, with portable devices, power consumption is a key factor. The previous two parameters – memory and CPU power – are both significant here. You may also be looking for power management facilities within the OS.

Do you have a requirement to support obscure devices or unusual communications protocols? Although most RTOS products tend to have a wide range of middleware and drivers, Linux is likely to trump them when it comes to anything out of the ordinary. The validation of protocols should be checked, of course.

Does (or could) your target system have a memory management unit (MMU)? If not, the heavyweight OSes are unlikely to be an option, as they do require an MMU. Typically, RTOSes (with a few exceptions) are thread based (not process), so an MMU is not mandatory. However, in many cases, an available MMU can be used to provide inter-task protection.

What kind of security does your device need? How important is it to protect tasks from one another? If this is a requirement, then a process model OS may be a good choice. This includes all the heavyweights and a few RTOS products. Other RTOSes may, as I mentioned above, use an MMU to facilitate “thread protected mode”, which offers some security, with a lower overhead than process model.

Do you need to apply for certification for your application? This is mandatory for certain types of product in particular industries, like aerospace and medical. The certification process tends to be complex and expensive. It requires the availability of source code. The cost is also very sensitive to the volume of code to be processed, which militates in favor of the leaner RTOS products.

Do you require interoperability with enterprise software systems? If so, this may imply that one of the Microsoft products may be a good choice, as they are strong in this area.

What is the sale price of your product and what volumes do you anticipate shipping? Cost models for OSes vary. They may be royalty based, perhaps with a sliding scale on volume. They may be royalty free, with just an up-front charge per device/project. They may be “free”, requiring up-front tooling expenditure and perhaps an ongoing service/support charge. The OS cost must be factored into the overall cost of development and production. Furthermore, if the OS reduces memory and CPU power requirements, it helps minimize the bill of materials (BOM) of the product.

What is your past experience of embedded OSes? Do you have in-house expertise with specific APIs, like POSIX? If so, this can affect your choice. Linux uses POSIX, but this API is also available with many RTOSes. Your past experience of an OS vendor’s support, documentation, etc. is also well worth factoring in; I know that this is a strong driver for repeat business at Mentor Embedded.

This is a very broad guide, but I believe that if you methodically address the above questions, your OS choice will become, at least, clearer.