The vast majority of software testing is functional verification testing. In this scenario, you have a test case which consists of one or more inputs to the system under test, and an expected value of some sort. The result of the test case is pass or fail, nothing else. However, in some testing scenarios, you must assign metrics. A good example is performance testing, where you measure the time required for some component of your system under test. But what if you need to assign a metric to some component which is inherently subjective? For example, suppose you are writing a search engine and you want to measure how good your system's results are relative to previous builds of your system. A very elegant mathematical construct called a rank order centroid (ROC) can do this for you. Suppose your current build is #00012 and you want to measure its quality against previous builds #00011, #00010, and #00009. You simply rank the four builds from best to worst — say, #00012 is best, #00009 is 2nd best, #00011 is 3rd, and #00010 is 4th. You can convert the ranks into ratings like this:

#00012: (1 + 1/2 + 1/3 + 1/4) / 4 = 0.5208

#00009: (0 + 1/2 + 1/3 + 1/4) / 4 = 0.2708

#00011: (0 + 0 + 1/3 + 1/4) / 4 = 0.1458

#00010: (0 + 0 + 0 + 1/4) / 4 = 0.0625
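The computation above is easy to automate. Here is a minimal Python sketch; the `roc_weights` helper is a hypothetical name, not part of any standard library:

```python
def roc_weights(n):
    """Rank order centroid weights for n ranked items.

    The weight for rank k (1-based) is the average of
    0, 0, ..., 1/k, 1/(k+1), ..., 1/n -- that is,
    (sum of 1/i for i = k..n) divided by n.
    """
    return [sum(1.0 / i for i in range(k, n + 1)) / n
            for k in range(1, n + 1)]

# Builds listed from best (rank 1) to worst (rank 4)
builds = ["#00012", "#00009", "#00011", "#00010"]
for build, w in zip(builds, roc_weights(len(builds))):
    print(f"{build}: {w:.4f}")
```

Running this reproduces the four ratings shown above (0.5208, 0.2708, 0.1458, 0.0625), and the weights always sum to 1.0 regardless of n.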

The pattern should be clear to you. Notice that the weights sum to 1.0 (subject to rounding error). So, rank order centroids are a neat way to turn subjective test results in the form of ranks (1st, 2nd, etc.) into ratings (0.5208, 0.2708, etc.) which can be analyzed mathematically and tracked over time. Check out my MSDN Magazine article "Competitive Analysis Using MAGIQ" at http://msdn.microsoft.com/msdnmag/issues/06/10/TestRun/default.aspx for more information.