1.5 Ease the development of safety-compliant software

Hello. Today, we will present some principles and techniques that we can use to simplify and accelerate software development in a functional safety environment. To understand this, we will first look at some of the basic principles of functional safety software development, and we will touch upon risk management-- what can go wrong; system partition principles-- how do we understand our system, which is composed of many, many parts.
This will lead us to the concept of the SafeTI Compliance Support Package, CSP, which can be used as a safe foundation for safe software. And we will look at it in the context of the CSP execution process, and how this fits into the broader themes of software safety.
To begin with, look at the graphic on the screen. What could go wrong? How big would the impact be? Those are the key principles that we need to look at in all of our functional safety domains. When we look at the block diagrams of our system and when we look at the system requirements, we see that our systems are often composed of many parts that are built on top of each other. And so not only must we consider the probability of failure, but also the impact of failure. So these are what we're going to look at as we consider the elements of functional safety.
So what are the elements of functional safety? The key idea is what makes a system safe, what makes software safe. To understand this, we need to think that a safe system is a system that does what it is supposed to do. And that's the functional part of functional safety, understanding the function of the system and understanding ways to make sure it does what it is supposed to do.
As our systems are increasingly driven by software, as more and more electromechanical and electrical components are replaced by software systems that do many things and have many more states than we can possibly imagine, this is a paramount concern in system safety. IEC 61508 is a key driver of this, and introduces concepts such as safety integrity level and mean time between failure in the context of safe systems, which we will look at going forward.
To understand IEC 61508, we should consider where it fits in the framework of software safety. DO-178B is often referred to as one of the principal standards in software safety because it is a little older than the others, and also it has a lot of very specific guidelines in terms of what to do at various levels of software safety. IEC 61508 draws on many of the principles of DO-178B and elaborates on them so that those principles can be applied in other domains, and so other standards-- such as the railway, nuclear, automotive, medical device, and process standards-- borrow heavily from IEC 61508 in terms of best principles for what to do for safe software.
The other part here is that, when we're thinking about how to apply our efforts, we should think of both the typical trend line-- finding errors, finding failures late in the test and development process, as well as the preferred trend line, finding them early. And the reason this is important is the concept of cost. Cost goes up exponentially as we find our errors later and later in the software development process. In fact, many software errors, especially in functional safety, have to do with requirements, making sure you understand what the system is supposed to do. And therefore, finding errors early saves us a lot of time and money.
Another concept is safety integrity level. How safe do we need our software to be? And sometimes in domains such as IEC 61508, this is classed in terms of mean time between failures and understanding how hazardous the effects of the failure are. In DO-178B, for instance, software safety levels go from E, where a failure has no effect, to A, where a failure is catastrophic.
In other standards, such as IEC 61508, safety integrity levels go from one to four, where each safety level is intended to reduce the risk of failure by a factor of 10. So how do we do this? What are the principal things that we should do?
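The factor-of-10 idea can be sketched numerically. Below is a minimal illustration, assuming the low-demand-mode average probability of failure on demand (PFDavg) bands that IEC 61508 associates with each SIL; the function name is made up for this example.

```c
#include <assert.h>

/* Illustrative sketch only (not a certification tool): maps a
 * low-demand-mode PFDavg to an IEC 61508 SIL. Note that each SIL
 * band is exactly a factor of 10 below the one before it. */
static int sil_for_pfd(double pfd)
{
    if (pfd >= 1e-5 && pfd < 1e-4) { return 4; } /* SIL 4: [1e-5, 1e-4) */
    if (pfd >= 1e-4 && pfd < 1e-3) { return 3; } /* SIL 3: [1e-4, 1e-3) */
    if (pfd >= 1e-3 && pfd < 1e-2) { return 2; } /* SIL 2: [1e-3, 1e-2) */
    if (pfd >= 1e-2 && pfd < 1e-1) { return 1; } /* SIL 1: [1e-2, 1e-1) */
    return 0; /* outside the defined SIL bands */
}
```

For example, a safety function with a PFDavg of 5e-3 would fall in the SIL 2 band; tightening it by one order of magnitude moves it up one level.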
Well, they are the activities that are typically part of a software design life cycle. So we should think about risk management in terms of the whole software life cycle-- finding failures early, flowing from system design through specification and implementation, and back up through verification and test to the system as a whole. Shown is the V diagram for ISO 26262, which is derived from IEC 61508.
The V diagram is a classic system engineering model that can be applied to most engineering processes. But in the software safety realm, it also has another element, which is those arrows that you see on the screen. In terms of software safety in the design life cycle, the arrows, understanding the connections between the boxes, is as important as the boxes. Making sure you understand the connections between your requirements, your software, and your test verification is of particular importance, because these are some of the key principles that allow us to make sure our software does what it is supposed to do.
Let's also think about verification activities. In terms of software safety, verification activities do vary based on the safety standard. Shown are the verification activities for IEC 62304 and how they relate to IEC 61508, from the broadest aspects, such as life cycle processes, safety integrity level, and tasks, down to more specific design and coding standards, such as limiting use of interrupts, pointers, and recursion.
Depending on your safety level, these rules may be either recommended or highly recommended-- R or HR-- on the screen. There are also dynamic analysis and test recommendations that, again, do vary based on the safety standards, including specific requirements for structural coverage, such as statement, branch, and MCDC coverage. And also, static analysis rule checks, such as understanding data and control flow statically so that you know that the different pieces of your system execute and fit together the way you think that they should.
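To make the coverage criteria concrete, here is a made-up example (not from any standard) showing why MC/DC is stronger than branch coverage on a compound condition.

```c
#include <stdbool.h>

/* Hypothetical interlock decision with two conditions.
 *
 * Branch coverage needs only one true and one false outcome,
 * e.g. (true, true) and (false, false) -- two tests.
 *
 * MC/DC additionally requires showing that each condition
 * independently affects the decision:
 *   (true, true) vs (false, true) isolates door_closed;
 *   (true, true) vs (true, false) isolates speed_zero.
 * So MC/DC needs three tests here: (T,T), (F,T), and (T,F). */
static bool interlock_release(bool door_closed, bool speed_zero)
{
    return door_closed && speed_zero;
}
```

For a decision with N independent conditions, MC/DC typically needs N+1 tests, whereas branch coverage needs only two regardless of N, which is why the higher safety levels ask for MC/DC.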
The other part that is important in many safety standards, such as IEC 62304, is the concept of system partition. Our systems are often composed of many parts. And as we consider those many parts and how we will design our system, we need to remember that a software system inherits the safety class of its worst component, as failures in the worst safety class can lead to failures in the system as a whole. So this is the principle that we will think about when we think about a safe foundation for safe software.
And on the screen, you see a slide that shows the concept that we're going to show in terms of the Compliance Support Package and the SafeTI diagnostic library. In this context, we will show how the SafeTI framework can be used in a Hercules Safety MCU as a safe foundation. And Siddharth will walk us through the specifics here.
Thank you so much, Jay. Hello, everyone. This is Siddharth from TI.
Here we see a typical software stack. The customer application is built on top of low-level drivers and application libraries. Typically, the software components of the software stack are developed by different vendors, which are then integrated by the system developers into a system or application.
For safety of a critical system, it's very important to have a safe foundation to start with. TI provides a safe foundation through HALCoGen and the SafeTI Diagnostic Library.
The software development process for these software components has recently been certified by TÜV NORD to meet the ASIL D and SIL 3 levels of safety integrity.
The SafeTI Compliance Support Packages package up the artifacts of this development process and are now available for HALCoGen and the SafeTI Diagnostic Library.
I'll talk about the software components brought in by TI before explaining the contents of the CSP.
HALCoGen, or the Hardware Abstraction Layer Code Generation tool, is a graphical user interface tool used to generate low-level drivers for Hercules safety MCUs. The tool provides options for the user to configure the peripherals, interrupts, and other MCU parameters.
Depending on the configuration, HALCoGen generates C code for device initialization and peripheral drivers. The tool supports multiple IDEs like CCS, IAR, and GHS. The code generated by this tool can be directly imported into these IDEs. It also includes example code and has an interactive help system.
The Compliance Support Package for HALCoGen is now available on ti.com at the link specified here.
The other software component provided by TI is the SafeTI Diagnostic Library. This is the diagnostic library for the Hercules safety MCUs.
It helps the customer to easily use the various safety and diagnostic features of the Hercules MCU. It provides a collection of software functions and response handlers for the various safety features.
It includes safety diagnostic APIs, which can be used to do periodic diagnostics and latent fault checks. It also includes APIs for fault injection, which can be used to test the application's fault handling. It also provides exception and error handlers to capture faults.
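The pattern behind diagnostic plus fault-injection APIs can be sketched generically. The code below is a hypothetical illustration only, not the SafeTI Diagnostic Library API: a periodic diagnostic checksums a configuration block, and a fault-injection hook corrupts it so the application can prove its fault handling actually fires.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical sketch (all names made up, not the SafeTI API). */
static uint8_t config_block[4] = { 0x12u, 0x34u, 0x56u, 0x78u };
static uint8_t golden_sum;

/* Simple XOR checksum over a memory block. */
static uint8_t checksum(const uint8_t *p, int n)
{
    uint8_t s = 0u;
    int i;
    for (i = 0; i < n; i++) { s ^= p[i]; }
    return s;
}

/* Record the reference checksum at startup. */
static void diag_init(void) { golden_sum = checksum(config_block, 4); }

/* Periodic diagnostic: returns true while the block is intact. */
static bool diag_check(void) { return checksum(config_block, 4) == golden_sum; }

/* Fault injection: corrupt one byte to exercise the fault path. */
static void diag_inject_fault(void) { config_block[0] ^= 0xFFu; }
```

In a real system the check would run from a periodic task, and a failed check would trigger the application's safe-state response; the injection call lets you verify that response during testing.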
The device safety manual lists all the safety features in the Hercules MCU. The diagnostic library provides optimized APIs mapping to these safety features. The CSP for the SafeTI Diagnostic Library is available on ti.com at the link specified here.
Moving on, I'll talk about the Compliance Support Package. Artifacts included in the Compliance Support Package can be classified into three categories. The first one is requirements and design. It includes the following documents-- software safety requirements specification, software architecture document, and the software safety manual.
The software safety requirements specification provides purpose, scope, and requirements of the software unit. It lists all the software requirements assumed during development. And each of the safety requirements is assigned an ASIL and a SIL level.
The software architecture document provides an overview of the architecture design and the services it offers. The software architecture is designed so that it is modular, has minimal complexity, and provides abstraction to the upper layers. The software safety manual provides detailed instructions on how to integrate the software in a safety-critical system.
The second category of artifacts is the test reports. This includes the following documents-- the detailed static analysis report, a detailed dynamic analysis report, a test results report, and the traceability matrix.
I'll describe the generation process of these test reports when I talk about the CSP execution process.
The third category is the Test Automation Unit or the TAU. This allows the customer to execute test cases based on their configuration. I'll provide an overview and demo of the TAU in the coming slides.
Apart from these reports, the CSP includes standard documents like the software user guide, various notes, and the data sheet.
When the system is sent for safety certification, at an item or an element level, all the software components in the system are assessed to determine compliance with functional safety standards. This is a Herculean task for system developers if they have to provide all the artifacts for all the software components in the system.
The SafeTI Compliance Support Packages make this task easier for the system developer. The artifacts included in the CSP provide evidence of the safety software development practices for the software modules at the unit or component level.
The CSP can also provide a helpful starting point for customers who need to provide similar evidence for their functional safety software.
If you are familiar with the safety standards, they impose a lot of clauses and work products to be provided as evidence of compliance. For example, here you see the different clauses and work products as mandated by the ISO 26262 and IEC 61508 standards.
All the CSP artifacts are listed and mapped to the corresponding clauses and work products as specified in the standards. These artifacts provide evidence of adherence to these clauses in the standards.
Moving ahead to the CSP execution process, TI uses the LDRA tool suite in the development of the CSPs. The CSP execution process has three phases.
The first one is the static analysis phase, and the LDRA tool suite is used to perform this. TI has a custom coding standard, which is primarily derived from the MISRA C:2004 guidelines. The enforcement of this coding standard improves the quality of the code and restricts usage of dangerous language features.
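As a concrete example of restricting a dangerous language feature: MISRA C:2004 Rule 16.2 forbids functions from calling themselves, directly or indirectly, because recursion makes worst-case stack usage hard to bound on an embedded target. The sketch below (a generic textbook example, not TI code) shows the compliant iterative form.

```c
/* Non-compliant (MISRA C:2004 Rule 16.2): recursion, with stack
 * depth proportional to n:
 *     return (n <= 1u) ? 1ul : n * factorial(n - 1u);
 *
 * Compliant rewrite: iterative, with constant stack usage. */
static unsigned long factorial(unsigned int n)
{
    unsigned long result = 1ul;
    unsigned int i;
    for (i = 2u; i <= n; i++) {
        result *= (unsigned long)i;
    }
    return result;
}
```

The behavior is unchanged, but the worst-case stack depth is now a compile-time constant, which a static analyzer can verify.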
Apart from this, quality metrics are also measured for the source code. These metrics are a subset of HIS quality metrics. HIS is a consortium of five major automotive manufacturers, Audi, BMW, DaimlerChrysler, Porsche, and Volkswagen. This group has specified a fundamental set of metrics to be used in the evaluation of software. Comment density, cyclomatic complexity, fan in/fan out, number of global variables are some of the metrics used to determine the quality of the source code.
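To illustrate one of those metrics, here is a small made-up function annotated with its cyclomatic complexity, counting each decision point (including the short-circuit `&&`, as many analysis tools do).

```c
/* Cyclomatic complexity V(G) = number of decision points + 1.
 * Decisions here: the for-loop condition (1), the if (2), and
 * the short-circuit && (3), so V(G) = 3 + 1 = 4.
 * HIS-style guidelines typically flag functions whose V(G)
 * exceeds a threshold for refactoring. */
static int count_in_range(const int *v, int n, int lo, int hi)
{
    int count = 0;
    int i;
    for (i = 0; i < n; i++) {
        if (v[i] >= lo && v[i] <= hi) {
            count++;
        }
    }
    return count;
}
```

A static analysis tool computes this number from the control-flow graph; keeping it low keeps the number of test paths, and therefore the verification effort, manageable.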
The static analysis report is generated at the end of this phase and provides information about the software quality metrics and the source code violations.
The next phase is the dynamic analysis phase. The CSP features a Test Automation Unit, which is used to perform this. The Test Automation Unit, or the TAU, uses LDRAunit to perform the dynamic analysis.
LDRAunit analyzes the program execution information to determine the code coverage attained in the execution, and it generates the dynamic analysis report. This report indicates which statements, branch conditions, and MC/DC conditions in the code are covered and not covered in the test run.
These code coverage metrics help to evaluate the completeness of the test cases and demonstrate that there is no unintended functionality or unreachable code. It also generates the regression report, which mentions the input and output parameters and a test result for each test case.
The third phase is bi-directional traceability. The traceability matrix report provides the traceability between different phases of software development. It lists the requirements, the work products, and the implementation and verification of those requirements.
All these reports generated in these three phases are included in the CSP test reports. I'll explain the TAU in the demo.
As mentioned earlier, HALCoGen allows you to configure and create customized peripheral drivers. The default configuration is tested by TI, but users may configure it as per their use case. Also, their build environment may be different from what TI used for testing these software components.
Hence, fixed test results and code coverage reports will not suit customer needs. The Test Automation Unit helps in this regard and gives the customer the flexibility to execute the unit test cases with their particular configuration.
It generates the dynamic and regression reports for that particular configuration. The customer can change to a different compiler version, or change the build options used to compile the test cases.
The unit-level test cases are included in the TAU. These test cases are provided in Excel format, which helps the user easily add or modify test cases.
Moving ahead, I'll explain the functional blocks of the TAU. So the TAU comprises all the unit test cases in Excel format. This Excel template lists all the APIs in the source file, and the input and output parameters for each of the APIs. The user can use this template to specify the input and output parameters and create a test case.
The TAU GUI lists all the available test cases. A user can select the test cases that are relevant to their configuration. It also has a script engine, which then converts the selected test cases in Excel to Test Case File format, or the TCF format.
This format is read by LDRAunit. LDRAunit then executes this Test Case File on the target via JTAG. The program execution information is then analyzed to determine the code coverage at the end of the test run, and a dynamic and a regression report are generated based on this information.
So before progressing further, I'll walk you through the CSP package and give a short demo on the TAU.
So this is the HALCoGen CSP. The CSP artifacts are available in this folder. The index page lists all the artifacts which I just mentioned, and can be used to browse and view the artifacts.
So this is what the TAU viewer looks like. It provides options for the user to select their HALCoGen project. Sample projects are included in the CSP. I have a Hercules TMS570LS3137 board connected to my PC.
So I'll go ahead and choose a sample project for this device. Once I choose the HALCoGen project, I can add the test cases by clicking on this button. The available test cases are displayed here.
You can change the HALCoGen version, or you can change the target configuration file by using this option. You can also change the build options used to compile the test cases. The build options are specified in a text file. But you can also change the compiler version used to compile the test case.
You can select or deselect the modules by right-clicking on the window here. Clicking on any of the modules lists all the available test cases for that module. It also provides a description for the modules and for the test cases.
Again, just select the test cases here. Let me select one test case and start the execution. So it will take a couple of minutes for the test case execution to complete.
The output printout here displays the progress of the test case execution. It generates the test case file, depending on the selected test cases, and starts the compilation of the test cases.
During compilation, LDRAunit automatically examines the source file and inserts probes at strategic points into the source file. These instrumented code probes are simple function calls, which store the program information about the program execution.
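The probe idea can be shown in miniature. The sketch below is hand-written for illustration only; LDRAunit inserts equivalent calls automatically during instrumentation, and the probe names here are made up.

```c
#include <stdbool.h>

/* Coverage bookkeeping: one flag per instrumented point.
 * A real tool writes these records out after the test run. */
static bool probe_hit[4];
static void probe(int id) { probe_hit[id] = true; }

/* A function instrumented with probes at its entry and on
 * each branch outcome, so the tool can tell which paths ran. */
static int clamp(int x, int lo, int hi)
{
    probe(0);                       /* function entry */
    if (x < lo) { probe(1); return lo; }
    if (x > hi) { probe(2); return hi; }
    probe(3);                       /* fall-through path */
    return x;
}
```

After the tests run, any probe flag still false marks a branch the test cases never reached, which is exactly what the dynamic analysis report surfaces as uncovered code.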
So it takes a couple of minutes for the compilation to complete. Once the compilation completes successfully, LDRAunit downloads the tests onto the target via JTAG and prints the test execution information in this window.
So it mentions the test case execution information-- the description of the test, the input parameters, the output parameters, and the result-- in this window.
So once the test execution is complete, the program execution information is analyzed by LDRAunit to determine the code coverage attained in the test. You can open the reports folder using this button.
So the TAU generates the dynamic coverage reports. This includes a summary report and an individual report for each of the source files. The summary report provides a summary of the code coverage metrics at the file level. The individual report can be opened by clicking the file name in the summary report, and it provides the coverage metrics at an API level.
For each API, within the source file, you get the statement and the branch coverage. This indicates the statement executed in the current run.
So the TAU also generates a regression report for each test sequence. Here also, a high-level summary report and individual reports are generated. The summary report provides a summary of the total number of test cases executed, the number of test cases which have passed and failed.
The individual report for each sequence can be browsed by clicking on the sequence name in the summary report. It provides detailed information about the test cases executed-- the test case description, the input/output parameters, and the test result for each test in the sequence.
So now going back to the presentation, I just went over these reports. These snapshots are included in the slides for your reference. I'll go over some of the reports generated in the CSP.
The static analysis report is one of them. This report provides a summary of the software quality metrics and the source code violations as per the programming guidelines. It also lists the minimum and maximum values for all the quality metrics, and whenever there is a deviation, an appropriate justification is provided for each metric.
The traceability matrix report is another report included in the CSP. This helps in showing completeness of the requirements coverage. This is a very comprehensive report and details the traceability from the requirements to the design, the design to the source, and the requirements to the test cases. It also provides bi-directional traceability.
So as we have seen, at each phase of the software development life cycle in the V model, the CSP artifacts help in providing evidence for the safety standards.
The CSP provides a helpful starting point for customers who need to provide similar evidence for their functional safety software. The CSP reduces the customer software validation effort and provides work products that can assist them in their end system functional safety certification. Thus, the CSPs will help the customer to simplify the development of their functional safety software.
I'll now pass the control back to Jay for the concluding slides.
So let's look at these pieces in terms of how tools can be used in the process. LDRA tools include traceability management, static analysis, automated unit tests, integration testing, and test verification, all of which fit into key parts of the SDLC.
To conclude, developing functionally safe systems is difficult. Functional safety introduces a slew of new requirements in system design. Risk management is a key part of the process, understanding how your system can fail, and what you do to mitigate that failure.
Software system partitioning can be used to make sure that you can build the system to be as safe as you need it to be. And starting with a strong foundation really helps you know that your system can comply with your safety requirements as they evolve and as you need them.
The SafeTI CSP and the LDRA Tool Suite simplify this process and act as the safe foundation that allow you to quickly and easily develop software in a functional safety environment. Thank you.

Details

Date:
March 24, 2015

This is the final of four videos in the functional safety training series. In this video, we partner with industry expert LDRA to address the challenges of developing functional safety compliant software and review how utilizing SafeTI™ Compliance Support Packages helps ease the software development process.