My last discussion of Off-The-Shelf software validation only considered the high-level regulatory requirements. What I want to do now is dig deeper into the strategies for answering question #5:

How do you know it works?

This is the tough one. The other questions are important, but relative to #5 they are pretty easy to answer. How to answer this question (i.e. how to actually accomplish this validation) is the source of a lot of confusion.

There are many business and technical considerations that go into the decision to use OTS or SOUP software as part of a medical device. Articles and books are available that provide guidance and general OTS validation approaches. For example, Off-the-Shelf Software: A Broader Picture (warning: PDF) is very informative in this regard:

Define the business’s use of the system, ideally including use cases and explicit clarification of in-scope and out-of-scope functionality

Determine validation deliverables set based on system type, system risk, project scope, and degree of system modification

Review existing vendor system and validation documentation

Devise strategy for validation that leverages vendor documentation/systems as applicable

Put in place system use, administration, and maintenance procedures to ensure the system is used as intended and remains in a validated state

This is great stuff, but unfortunately it does not help you answer question #5 for a particular type of software. That’s what I want to try to do here.

OTS really implies commercial off-the-shelf (COTS) software. The “commercial” part is important because it presumes that the software in question is a purchased product (typically in a “shrink-wrapped” package) that is designed, developed, and supported by a real company. You can presumably find out what design controls and quality systems are in place for the production of that software and incorporate these findings into your own OTS validation. If you can’t, then the product is essentially SOUP (keep reading).

Contrast OTS with Software of Unknown Provenance (SOUP). It is very unlikely that you can determine how this software was developed, so it’s up to you to validate that it does what it’s supposed to do. In some instances this may be legacy custom software, but these days it probably means the integration of an open source program or library into your product.

The following list is by no means complete. It is only meant to provide some typical software categories and the strategies used for validating them. Some notes:

I’ve included a Hazard Analysis section in each category because the amount of validation necessary depends on the level of concern.

The example requirements are not comprehensive. I just wanted to give you a flavor for what is expected.

Always remember: requirements must be testable. The test protocol has to include a pass/fail criterion for each requirement. This is QA 101, but it is often forgotten.

I have not included any example test protocol steps or reports. If you’re going to continue reading, you probably don’t need help in that area.

Operating Systems

Examples:

Windows XP SP3

Windows 7 32-bit and 64-bit

Red Hat Linux

Approach:

Hazard Analysis: Do a full assessment of the risks associated with each OS.

Pay particular attention to the hazards associated with device and device driver interactions.

List all hazard mitigations.

Provide a residual Level of Concern (LOC) assessment after mitigation — hopefully this will be negligible.

If the residual LOC is major, then Special Documentation can still be provided to justify its use.

Use your full product verification as proof that the OS meets the OTS requirements. This is valid because your product will probably use only a small subset of the full capabilities of the OS. All of the other functionality that the OS provides would be out of scope for your product.

This means that a complete re-validation of your product is required for any OS updates.

There is no test protocol or report with this approach. The OS is considered validated when the product verification has been successfully completed.

Compilers

Examples:

Visual Studio 2010 (C# or C++)

Approach:

Hazard Analysis:

In the vast majority of cases, I think it is safe to say that a compiler does not directly affect the functioning of the software or the integrity of the data. What a program does (or doesn’t do) depends on the source code, not on the compiled version of that code.

The compiler is also not responsible for faults that may occur in the devices the application controls. The application just needs to be written so that it handles these conditions properly.

For some embedded applications that use specialized hardware and an associated compiler, the above will not necessarily be true. All functionality of the compiler must be validated in these cases.

For widely used compilers (like Microsoft products), full product verification can be used as proof that the OTS requirements are met.

Validation of a new compiler version (e.g. upgrading from VS 2008 to VS 2010): showing that the same code base compiles and that all unit tests pass under both versions can be used as proof. This assumes, of course, that the old version was previously validated.
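As a sketch of what that looks like in practice, suppose the unit tests are NUnit-based (any framework will do): the identical source and test suite are built and run under both tool chains, and matching results support the claim that the compiler upgrade did not change program behavior.

    // Sketch only: compile and run this same test assembly under both the
    // old and new compiler versions. The assertions are illustrative; pin
    // whatever behavior your product actually depends on.
    using System;
    using System.Globalization;
    using NUnit.Framework;

    [TestFixture]
    public class CompilerRegressionTests
    {
        [Test]
        public void DecimalArithmetic_IsUnchanged()
        {
            Assert.AreEqual(0.3m, 0.1m + 0.2m);
        }

        [Test]
        public void DateFormatting_IsUnchanged()
        {
            DateTime d = new DateTime(2012, 2, 29);
            Assert.AreEqual("2012-02-29",
                d.ToString("yyyy-MM-dd", CultureInfo.InvariantCulture));
        }
    }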

The compiler is considered fit for use after the product verification has passed, so there is no separate test protocol or report in this case either.

Integrated Libraries

Examples:

A logging library

A database access library

Approach:

Hazard Analysis: Open source libraries like these are integrated directly into the product software, so their impact on product functioning, and on data integrity in particular, must be fully assessed.

First, list the requirements for the functionality that you will actually be using. For example, typical logging requirements might include the following (a test sketch follows the list):

The logging system shall be able to post an entry labeled as INFO in a text file.

The logging system shall be able to post an entry labeled as INFO in a LEVEL column of a SQL Server database.

… same for ERROR, DEBUG, WARN, etc.

The logging system shall include time/date and other process information, formatted as “YYYY-MM-DD HH:MM:SS…”, for each log entry.

The logging system shall be able to log exceptions at all log levels, and include full stack traces.
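To make this concrete, here is a minimal sketch of the kind of unit test I have in mind for the INFO and exception requirements. It assumes log4net as the logging library and NUnit as the test framework; substitute whatever you actually integrate. A MemoryAppender keeps the sketch self-contained, but the real test protocol would also exercise the configured text file and SQL Server appenders.

    // Sketch only: log4net + NUnit are assumptions, not requirements.
    using System;
    using log4net;
    using log4net.Appender;
    using log4net.Config;
    using log4net.Core;
    using NUnit.Framework;

    [TestFixture]
    public class LoggingRequirementTests
    {
        private MemoryAppender _appender;
        private ILog _log;

        [SetUp]
        public void SetUp()
        {
            // Attach a fresh in-memory appender so each test can inspect
            // exactly what was logged.
            _appender = new MemoryAppender();
            BasicConfigurator.Configure(_appender);
            _log = LogManager.GetLogger(typeof(LoggingRequirementTests));
        }

        [Test]
        public void LogSystem_PostsEntryLabeledInfo()
        {
            _log.Info("validation message");

            LoggingEvent[] events = _appender.GetEvents();
            Assert.AreEqual(1, events.Length);            // pass/fail: one entry posted
            Assert.AreEqual(Level.Info, events[0].Level); // pass/fail: labeled as INFO
            Assert.AreEqual("validation message", events[0].RenderedMessage);
        }

        [Test]
        public void LogSystem_LogsExceptionWithStackTrace()
        {
            try
            {
                throw new InvalidOperationException("induced failure");
            }
            catch (InvalidOperationException ex)
            {
                _log.Error("caught", ex);
            }

            LoggingEvent[] events = _appender.GetEvents();
            Assert.AreEqual(Level.Error, events[0].Level);
            // pass/fail: the exception text (with stack trace) was captured
            StringAssert.Contains("InvalidOperationException",
                events[0].GetExceptionString());
        }
    }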

For database functionality, listing basic CRUD (create, read, update, delete) requirements plus any other specialized needs can be done in the same way.
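As a sketch, a basic CRUD check might look like the test below. It assumes plain ADO.NET talking to the SQL Server database mentioned above; the connection string and the Patients table are hypothetical stand-ins for whatever data access library and schema your product actually uses.

    // Sketch only: the connection string and schema are hypothetical.
    using System.Data.SqlClient;
    using NUnit.Framework;

    [TestFixture]
    public class DatabaseRequirementTests
    {
        private const string ConnStr =
            @"Server=.\SQLEXPRESS;Database=ValidationTest;Integrated Security=true";

        [Test]
        public void Database_SupportsCreateReadUpdateDelete()
        {
            using (var conn = new SqlConnection(ConnStr))
            {
                conn.Open();

                // Create
                Execute(conn, "INSERT INTO Patients (Id, Name) VALUES (1, 'A')");
                // Read (pass/fail: the inserted row comes back)
                Assert.AreEqual("A", Scalar(conn,
                    "SELECT Name FROM Patients WHERE Id = 1"));
                // Update (pass/fail: the change is visible)
                Execute(conn, "UPDATE Patients SET Name = 'B' WHERE Id = 1");
                Assert.AreEqual("B", Scalar(conn,
                    "SELECT Name FROM Patients WHERE Id = 1"));
                // Delete (pass/fail: the row is gone)
                Execute(conn, "DELETE FROM Patients WHERE Id = 1");
                Assert.AreEqual(0, (int)Scalar(conn,
                    "SELECT COUNT(*) FROM Patients WHERE Id = 1"));
            }
        }

        private static void Execute(SqlConnection conn, string sql)
        {
            using (var cmd = new SqlCommand(sql, conn)) { cmd.ExecuteNonQuery(); }
        }

        private static object Scalar(SqlConnection conn, string sql)
        {
            using (var cmd = new SqlCommand(sql, conn)) { return cmd.ExecuteScalar(); }
        }
    }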

As the sketches above suggest, I have found that the easiest way to test these kinds of requirements is to simply write unit tests that prove the library performs the desired functionality. The unit tests are essentially the test protocol, and a report showing that all asserts have passed is a great artifact.