Monday, May 30, 2016

The biggest problems I see in industry code reviews are code complexity, real time performance, code quality, weak development process, and dependability gaps. Here's an index into blog postings and other sources that explain the problems and how to deal with them.

-----------------------------

Several times a year I fly or drive (or webex) to visit an
embedded system design team and give them some feedback on their
embedded software. I've done this perhaps 175 times so far (and counting).
Every project is different and I ask different questions every time. But
the following are the top five areas I've found that need attention in the past few years. "(Blog)" pointers will send you to my previous blog postings on these topics.

(1) Is your code too complex?

In embedded systems, much of the time you should be using a state machine design approach instead of a flow chart (or no chart) design approach. (Book Chapter 13) Or perhaps you need to untangle your exception handling. (Blog)
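As a sketch of the idea, here is a minimal state machine for a hypothetical coin-operated turnstile, written as a switch on an enumerated state rather than as tangled nested conditionals (the turnstile, its states, and its events are invented for illustration):

```c
/* Hypothetical turnstile: locked until a coin arrives; locks again on push. */
typedef enum { ST_LOCKED, ST_UNLOCKED } state_t;
typedef enum { EV_COIN, EV_PUSH } event_t;

/* One transition function: current state + event -> next state. */
state_t next_state(state_t s, event_t e)
{
    switch (s) {
    case ST_LOCKED:
        return (e == EV_COIN) ? ST_UNLOCKED : ST_LOCKED;
    case ST_UNLOCKED:
        return (e == EV_PUSH) ? ST_LOCKED : ST_UNLOCKED;
    default:
        return ST_LOCKED;   /* defensive: recover from a corrupted state value */
    }
}
```

Because every state/event pair appears explicitly, a reviewer can check the transition table against the design diagram line by line, which is much harder to do with nested if/then/else logic.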

If you have very high cyclomatic complexity, you're pretty much guaranteed to have bugs that you won't find in unit test or peer review. (Blog)

Did you follow an appropriate style guideline and use a static analysis tool?

Your code should compile with zero warnings for an appropriate warning set. Consider using the MISRA C rule set (Blog) and a good static analysis tool. (Blog)

Do you limit variable scope aggressively, or is your code full of global variables?
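A minimal sketch of aggressive scope limiting in C (the names are illustrative): keep the variable static at file scope and export only narrow accessor functions, instead of declaring a bare global that any file can modify.

```c
/* Instead of:  int error_count;   (visible to every file via extern) */
static int error_count = 0;   /* file scope only; untouchable from other files */

void error_count_increment(void)
{
    error_count++;
}

int error_count_get(void)
{
    return error_count;
}
```

Now every write to the counter goes through one function, so a reviewer or debugger has exactly one place to look when the value goes wrong.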

(2) Are you meeting your real time deadlines?

Less than 100% CPU usage does not mean you'll meet deadlines unless you can verify that you meet some special conditions, and you probably don't meet those conditions if you didn't know what they were. (Blog)

If you have one long-running task that ties up the CPU only once per day, then you'll miss deadlines when it runs. But perhaps you get lucky on timing most days and don't notice this in testing. (Blog)

Did you follow good practices for interrupts?

Interrupts should be short -- really short. (Blog) So short you aren't tempted to re-enable interrupts in them. (Blog)
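A common pattern that keeps ISRs short is to have the handler do nothing but drop the incoming byte into a ring buffer and return, leaving all real processing to the main loop. A sketch (the UART naming is illustrative; on real hardware the byte would be read from a device register inside the handler):

```c
#include <stdint.h>
#include <stdbool.h>

#define RX_BUF_SIZE 64u

static volatile uint8_t rx_buf[RX_BUF_SIZE];
static volatile uint8_t rx_head = 0u;
static volatile uint8_t rx_tail = 0u;
static volatile bool    rx_overflow = false;

/* ISR body: a handful of instructions, no loops, no processing,
 * and no re-enabling of interrupts. */
void uart_rx_isr(uint8_t byte)
{
    uint8_t next = (uint8_t)((rx_head + 1u) % RX_BUF_SIZE);
    if (next != rx_tail) {
        rx_buf[rx_head] = byte;
        rx_head = next;
    } else {
        rx_overflow = true;   /* record the problem; handle it outside the ISR */
    }
}

/* Called from the main loop to drain the buffer. */
bool uart_rx_pop(uint8_t *out)
{
    if (rx_tail == rx_head) {
        return false;         /* buffer empty */
    }
    *out = rx_buf[rx_tail];
    rx_tail = (uint8_t)((rx_tail + 1u) % RX_BUF_SIZE);
    return true;
}
```

The single-producer (ISR) / single-consumer (main loop) split means the head and tail indices each have exactly one writer, which is what makes this buffer safe without locking on most single-core targets.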

(3) Is your software quality good enough?

If you haven't exercised, say, 95% of your code in unit test, you're waiting to find those bugs until later, when it's more expensive to find them. (Blog) (There is an assumption that the remaining 5% are exception cases that should "never" happen, but it's even better to exercise them too.)

In general, you should have coverage metrics and traceability for all your testing to make sure you are actually getting what you want out of testing. (Blog)

What's your peer review coverage?

Peer review finds half the defects for 10% of the project cost. (Blog) But only if you do the reviews! (Blog)

Are your peer reviews finding at least 50% of your defects?

If you're finding more than 50% of your defects in test instead of peer review, then your peer reviews are broken. It's as simple as that. (Blog)

Do your style guidelines include not just cosmetics, but also technical practices such as disabling task switching or using a mutex when accessing a shared variable? (Blog) Or avoiding stack overflow? (Blog)
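The idea behind that shared-variable rule, sketched here with POSIX threads (on a bare-metal RTOS the lock might instead be a brief interrupt-disable or task-switch-disable window; the names are illustrative):

```c
#include <pthread.h>
#include <stdint.h>

static uint32_t shared_count = 0u;
static pthread_mutex_t count_lock = PTHREAD_MUTEX_INITIALIZER;

/* A read-modify-write of a shared variable is a race unless guarded. */
void count_increment(void)
{
    pthread_mutex_lock(&count_lock);
    shared_count++;               /* critical section kept as short as possible */
    pthread_mutex_unlock(&count_lock);
}

uint32_t count_get(void)
{
    pthread_mutex_lock(&count_lock);
    uint32_t v = shared_count;
    pthread_mutex_unlock(&count_lock);
    return v;
}
```

The point of putting this in the style guideline is that every access, including "harmless" reads, goes through the same guarded functions, so a reviewer can spot an unguarded access mechanically.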

(4) Is your software process methodical and rigorous enough?

Do you have a picture showing the steps in your software and problem fix process?

If it's just in your head then probably every developer has a different mental picture and you're not all following the same process. (Book Chapter 2)

Are there gaps in the process that are causing you pain or leading to problems?

Very often technical defects trace back to cutting corners in the development process or skipping review/test steps.

Skipping peer reviews and unit test in the hopes that product testing catches all the problems is a dangerous game. In the end, cutting corners on the development process takes at least as long and tends to result in products that ship with higher defect rates.

Are you doing a real usability analysis instead of just having your engineers wing it?

Engineers are a poor proxy for users. Take human usability seriously. (Blog; Book Chapter 15)

Do you have configuration management, version control, bug tracking, and other basic software development practices in place?

You'd think we would not have to ask. But we find that we do.

Do you prioritize bugs based on value to project rather than severity of symptoms? (Blog)

Is your test to development effort ratio appropriate? Usually you should spend twice as many hours on test+reviews as on creating the design and implementation.

Time and again when we poll companies doing a reasonable job on embedded software of decent quality we find the following ratios. One tester for every developer (1:1 head count ratio). Two test/review hours (including unit test and peer review) for every development hour (2:1 effort ratio). The companies that go light on test/review usually pay for it with poor code quality. (Blog)

Do you have the right amount of paperwork (neither too heavy nor too light)?

Yes, you need to have some paper even if you're doing Agile. (Blog) It's like having ballast in a ship. Too little and you capsize. Too much and you sink. (Probably you have too little unless you work on military/aerospace projects.) And you need the right paper, not just paper for paper's sake. (Book Chapters 3-4)

(5) What about dependability aspects?

Have you considered maintenance issues, such as patch deployment?

If your product is not disposable, what happens when you need to update the firmware?

Have you done stress testing and other evaluation of robustness?

If you sell a lot of units they will see things in the field you never imagined and will (you hope) run without rebooting for years in many cases. What's your plan for testing that? (Blog)

If this is the first time you've dealt with safety and security, you should probably either consult an internal expert or get external help. Some critical aspects of safety and security take experience to understand and get right, such as avoiding security pitfalls (Blog) and eliminating single points of failure. (Blog)

And while we're at it, you do have written, complete, and measurable requirements for everything, don't you? (Book Chapters 5-9)

For another take on these types of issues, see my presentation on Top 43 embedded software risk areas (Blog). There is more to shipping a great embedded system than answering all the above questions. (Blog) And I'm sure everyone has their own list of things they like to look for that can be added. But, if you struggle with the above topics, then getting everything else right isn't going to be enough.

Monday, May 16, 2016

I have been doing research in the area of robustness testing for many years, and once in a while I have to explain how that approach to testing fits into the bigger umbrella of fault injection and related ideas. Here's a summary of typical approaches (there are many variations and extensions beyond these as you might imagine). At the end is a description of the robustness testing work my group has been doing over many years.

Mutation Testing:
Goal: Evaluate coverage/effectiveness of an existing test suite. (Also known as "bebugging.")
Approach: Modify the System under Test (SuT) with a hypothetical bug and see if an existing test suite finds it.
Narrative: I have a test suite. I wonder how thorough it is? Let me put a bug (mutation) into my code and see if my test suite finds it. If it finds all the mutations I insert, maybe my test suite is thorough.
Fault Model: Source code bug that is undetected by testing.
Strengths: Can find problems even if code was already 100% branch-covered in the test suite (e.g., mutate a comparison to > instead of >= to see if the test suite exercises the equality case for that comparison).
Limitations: Requires an existing test suite (but can be combined with automated test generation to create additional tests automatically). Effectiveness heavily depends on inserting realistic mutations, which is not necessarily so easy.
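As a toy illustration of how a mutation exposes a weak test suite (the function and threshold are invented for this example): mutate `>=` to `>` and see whether any existing test notices.

```c
/* Original code: engage at or above the 40 km/h threshold. */
int should_engage(int speed_kph)
{
    return speed_kph >= 40;
}

/* Mutant: >= changed to >. A test suite that only checks speeds such as
 * 30 and 50 passes against both versions, so it fails to "kill" this
 * mutant. Only a test at exactly 40 distinguishes them. */
int should_engage_mutant(int speed_kph)
{
    return speed_kph > 40;
}
```

If the mutant survives your whole suite, you have learned that the boundary value was never tested, even if branch coverage was already 100%.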

Classical Fault Injection Testing:
Goal: Determine robustness of the SuT when its code or data is corrupted.
Approach: Corrupt the binary image of the SuT code or corrupt data during run-time to see if the system crashes, is unsafe, or tolerates the fault.
Narrative: I have a running system. I wonder what happens if I flip a bit in the data -- does the system crash? I wonder what happens if I corrupt the compiled code -- does the system act in an unsafe way?
Fault Model: Hardware bit flip or software-based memory corruption.
Strengths: Can find realistic failures caused by single-event upsets, hardware faults, and software defects that corrupt computational state.
Limitations: The fault model is memory and program bit-level corruption. Fault injection testing is useful for high-integrity systems deployed in large volume (they have to survive even very infrequent faults), and essential for aviation and space systems that will see high rates of single event upsets. But it is arguably a bit excessive for non-safety-critical applications.
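A minimal software-implemented data fault injector is just an XOR that flips one chosen bit of the state under test, after which you check whether downstream logic still behaves sanely (the helper below is a sketch, not from any particular tool):

```c
#include <stdint.h>
#include <stddef.h>

/* Flip a single bit of an arbitrary object, simulating a single-event upset. */
void inject_bit_flip(void *data, size_t byte_index, unsigned bit_index)
{
    ((uint8_t *)data)[byte_index] ^= (uint8_t)(1u << bit_index);
}
```

Because XOR is its own inverse, flipping the same bit twice restores the original value, which makes it easy to sweep every bit of a state variable in a test loop and restore the system between injections.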

Robustness Testing:
Goal: Determine robustness of the SuT when it is fed exceptional or unusual values.
Approach: Corrupt the inputs to the SuT during run-time to see if the system crashes, is unsafe, or tolerates the fault.
Narrative: I have a running system or subsystem. I wonder what happens if the inputs from other components or sensors have garbage, unusual, or random values -- does the system crash or act in an unsafe way?
Fault Model: Some module other than the SuT has a bug that results in exceptional data being sent to the SuT.
Strengths: Can find realistic failures caused by likely run-time faults such as null pointers, NaNs (Not-a-Number floating point values), corrupted input data, and in general faults in modules that are not the SuT, but rather other software or sensors present in the system that might have bugs that generate exceptional data.
Limitations: The fault model is generally that some other piece of software has a bug and that bug will generate bad data that kills the SuT. You have to decide how likely that is and whether it's OK in such a case for the SuT to misbehave. We have found many situations in which such test results are important, even in systems that are not safety critical.
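A tiny example of what robustness testing exercises, using an invented sensor-averaging function as the SuT: the interesting test inputs are the exceptional ones a buggy upstream module could plausibly deliver, such as a NULL buffer, a zero length, or NaN samples.

```c
#include <math.h>
#include <stddef.h>

/* SuT: average of a sensor buffer, written to survive exceptional inputs. */
double robust_average(const double *buf, size_t n)
{
    if (buf == NULL || n == 0u) {
        return 0.0;               /* explicit handling instead of a crash */
    }
    double sum = 0.0;
    size_t valid = 0u;
    for (size_t i = 0u; i < n; i++) {
        if (!isnan(buf[i])) {     /* skip NaN garbage from a faulty sensor */
            sum += buf[i];
            valid++;
        }
    }
    return (valid > 0u) ? sum / (double)valid : 0.0;
}
```

A non-robust version of this function (no NULL check, no NaN filtering) would pass every nominal-input test and still die the first time a sensor driver misbehaves in the field.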

Fuzzing:
A classical form of robustness testing is "fuzzing," in which random inputs are tossed into a system to see what happens rather than carefully selected specific input values. My research group's work centers on finding efficient ways to do robustness testing so that fewer tests are needed to find system-killer values.

Ballista:
The Ballista project pioneered efficient robustness testing in the late 1990s, and the approach is still in use today for stress testing robots and autonomous vehicles.

Two key ideas of Ballista are:

Have a dictionary of interesting exceptional values so you don't have to stumble onto them by chance (e.g., just try a NULL pointer straight out rather than wait for a random number generator to happen to generate a zero value as a fuzzing input value)

Make it easy to generate tests by basing that dictionary on the data types taken by a function call instead of the function being performed. So we don't care if it is a memory management function or a file write being tested - we just say for example that if it's a memory pointer, let's try NULL as an input value to the function. This gets us excellent scalability and portability across systems we test.
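A sketch of the type-based dictionary idea (the values, function names, and pass/fail convention are invented for illustration): every function taking a pointer and a length gets the same exceptional-value combinations, regardless of what the function actually does.

```c
#include <stddef.h>
#include <stdint.h>

/* Per-type dictionaries of exceptional values. */
static void  *ptr_dict[] = { NULL, (void *)1 /* misaligned */ };
static size_t len_dict[] = { 0u, 1u, SIZE_MAX };

/* Run a pointer+length SuT against every dictionary combination.
 * The SuT returns nonzero if it handled the input gracefully. */
int count_survived(int (*sut)(void *, size_t))
{
    int survived = 0;
    for (size_t i = 0u; i < sizeof ptr_dict / sizeof ptr_dict[0]; i++) {
        for (size_t j = 0u; j < sizeof len_dict / sizeof len_dict[0]; j++) {
            if (sut(ptr_dict[i], len_dict[j])) {
                survived++;
            }
        }
    }
    return survived;
}

/* Example SuT: validates the pointer but never checks the length. */
int naive_sut(void *p, size_t n)
{
    (void)n;
    return p != NULL;   /* "survives" only when the pointer is non-NULL */
}
```

Because the dictionaries are keyed to parameter types rather than to any particular function, the same harness can be pointed at a memory call, a file call, or a robot control API with no per-function test design, which is where the scalability comes from.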

A key benefit of Ballista and other robustness testing approaches is that they look for holes in the code itself, rather than holes in the test cases. Consider that most test coverage approaches (including mutation testing) are interested in testing all the code that is there (which is a good thing!). In contrast, robustness testing goes beyond code coverage to find the places where you should have had code to handle exceptional situations, but that code is missing. In other words, robustness testing often finds bugs due to missing code that should have been there. We find that it is pretty typical for software to be non-robust unless this type of testing has been done to identify such problems.

You can find more about our research at the Stress Tests for Autonomy Architectures (STAA) project page, which includes video of what goes wrong when you stress test a couple of robotic systems.

About Me

I've done embedded systems for big industry, the US military, startup companies, and now Carnegie Mellon University. I'm the author of the book Better Embedded System Software, which goes into more detail on most of the topics discussed in my corresponding blog. As with any blog, these posts often contain speculative and partially formed thoughts, and should not be interpreted as a fully considered opinion unless stated otherwise.

Key pages:
Academic home page at CMU
Embedded Software Blog
Checksum and CRC Blog