For non-functional requirements such as performance, one can find simple metrics to measure software quality. If you want to know, for example, whether your system's performance got better or worse after some code changes, you can compare the time taken to perform certain actions (ideally all actions) before and after those changes.
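For instance, a minimal sketch of that before-and-after comparison might look like the following (the action names and timings are purely illustrative, not taken from any real system):

    import time

    def time_action(action, *args, repeats=5):
        # Return the median wall-clock time of calling `action` with `args`.
        samples = []
        for _ in range(repeats):
            start = time.perf_counter()
            action(*args)
            samples.append(time.perf_counter() - start)
        samples.sort()
        return samples[len(samples) // 2]

    # Timings (in seconds) collected against the old and new builds; illustrative only.
    baseline = {"search": 0.120, "checkout": 0.450}
    current = {"search": 0.095, "checkout": 0.610}

    for name in baseline:
        delta = current[name] - baseline[name]
        print(f"{name}: {delta:+.3f}s ({delta / baseline[name]:+.0%})")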

For functional requirements, however, whether the system got better or worse since the previous version is a more subjective issue. A simple comparison of the number of failed test cases across versions may yield misleading results, as it accounts for neither the importance of each test case nor the impact of the problem on the user experience. One could try to assign weights to the test cases according to their importance, but I think the rules for assigning those weights would themselves be subjective.
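To make the weighting idea concrete, here is a small sketch of a weighted pass rate, assuming weights have already been assigned to each test case (the test names and weights are made up; choosing them is exactly the subjective part):

    test_results = [
        # (test case id, weight reflecting importance, passed?)
        ("login",        5, True),
        ("checkout",     5, False),
        ("export-csv",   1, True),
        ("profile-edit", 2, True),
    ]

    total_weight = sum(w for _, w, _ in test_results)
    passed_weight = sum(w for _, w, passed in test_results if passed)

    weighted_pass_rate = passed_weight / total_weight
    print(f"Weighted pass rate: {weighted_pass_rate:.0%}")  # 62% for the data above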

What kind of metrics can be used as an indicator of how well a piece of software meets its functional requirements?

Judging from the description, is this the question you really want to ask: "What kind of metrics can be used to indicate how well a post-1.0 piece of software meets its functional requirements?"
– user246, May 12 '11 at 15:34

4 Answers

When I define my test cases (normally in Team Foundation Server), I cross-reference them to the requirements. Then, as they pass or fail, I use that as a way to determine whether a requirement has been delivered or not.

Once all the test cases have passed, it is the testers, not the developers, who move the requirement from 'Ready for test' to 'Built'.

That is how I get the two key measures you refer to in your question.

The trend of requirements being set to 'Built', which we follow with UAT and then set to 'Verified by customer', is the indication of how well the software meets its requirements.

The trend in test results over time gives the long-term, version-to-version view of how the software maintains that quality.
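Outside of any particular tool, a rough sketch of that cross-referencing idea might look like the following (the requirement and test-case identifiers and results are invented for illustration; TFS tracks this for you in practice). A requirement only counts as 'Built' once every test case linked to it has passed:

    requirement_tests = {
        "REQ-001": ["TC-01", "TC-02"],
        "REQ-002": ["TC-03"],
        "REQ-003": ["TC-04", "TC-05", "TC-06"],
    }
    test_passed = {"TC-01": True, "TC-02": True, "TC-03": False,
                   "TC-04": True, "TC-05": True, "TC-06": True}

    built = [req for req, tcs in requirement_tests.items()
             if all(test_passed.get(tc, False) for tc in tcs)]

    print(f"Requirements built: {len(built)}/{len(requirement_tests)}")  # 2/3 here

Tracking that ratio per version gives the trend described above.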

This is a great question, and it's an issue that every product company faces. The answer is very subjective -- it depends on your product, your organization, and your customer base. Although it is subjective, there are trends you should keep in mind when trying to evaluate whether a product meets its functional requirements.

As time passes, not only does your product evolve, but your customer base evolves too. Your first customers are more likely to be early adopters. As your business grows, you will probably take on larger, more risk-averse customers. Your functional requirements will need to evolve with your customer base. Bugs and ease-of-use features that were low-priority to your early adopter customers may be high-priority to your newer customers.

If you host your own product, you may also have operational needs that evolve as well, e.g. scalability requirements and diagnostic/troubleshooting capabilities.

So what does this mean for QA metrics? You need to revisit your bug list on a regular basis to ensure that priorities evolve with your customer needs. You also need to judge any product metrics you collect, e.g. capacity measurements, responsiveness, or speed, in the context of your customer needs.

Of course, you never have enough information to know with full certainty what the functional requirements need to be. You take your best guess, and when you're wrong, your customers will tell you.

Bottom line: everything changes, including functional requirements. Whatever you decided constituted high quality in the last release needs to be re-evaluated for this release.

I don't think there is a single, magic metric that will give you that.
You can measure some common metrics and combine them somehow to determine the quality of the software: for example, the number of covered requirements, as mentioned above (very misleading by itself, since we don't know how good the coverage is), or the number of low-, medium-, and high-severity bugs found in this version and whether they have all been closed (though it is not clear how to interpret those results).
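As a purely illustrative sketch, combining such metrics into one number might look something like this (the metric values, weights, and normalisation below are assumptions, and choosing them is the subjective step mentioned above):

    metrics = {
        "requirements_covered": 0.90,  # fraction of requirements with passing tests
        "open_high_severity":   2,     # count of open high-severity bugs
        "open_low_severity":    14,    # count of open low-severity bugs
    }

    # Squash the bug counts into a 0..1 "health" value (weights are arbitrary).
    bug_health = 1.0 / (1.0 + metrics["open_high_severity"] * 3
                            + metrics["open_low_severity"] * 0.2)

    weights = {"coverage": 0.6, "bugs": 0.4}
    score = (weights["coverage"] * metrics["requirements_covered"]
             + weights["bugs"] * bug_health)
    print(f"Composite quality score: {score:.2f}")  # 0.58 for the numbers above

The composite number is only as meaningful as the weights behind it, which brings you back to the subjectivity the question raises.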