I will later post a companion video summarizing the DRE/DREw presentation for those who were not able to attend. (Not as much fun as comparing Swan and Dolphin test strategies, but the content should help.) The core slideset is at this link.

DRE Query. Here is the 'code' behind the DRE query, primarily so you can see where to place the parentheses. Note the underlying assumption: the cq database in which I write the query contains all and only the defects of interest. If that assumption is not valid, just add the appropriate filters when building the query.
SELECT 100 *
-- numerator: distinct defects raised in the earlier (pre-release) window
(SELECT COUNT(DISTINCT T1.dbid) FROM enttable T1 WHERE T1.dbid <> 0 AND (T1.date_raised BETWEEN {ts '2008-01-01 05:00:00'} AND {ts '2008-10-25 03:59:59'}))
/
-- denominator: distinct defects raised over the full window, including post-release
(SELECT COUNT(DISTINCT T1.dbid) FROM enttable T1 WHERE T1.dbid <> 0 AND (T1.date_raised BETWEEN {ts '2008-01-01 05:00:00'} AND {ts '2009-05-01 04:59:59'}))

And finally, the Testing Effectiveness query. Notice that the statements here are all just subsets of, or 'tweaks' on, the same statements generated to create the queries above. Although it looks long and complicated, it is made up of simple parts.
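As a minimal sketch of that 'tweak' pattern (not the full query, and assuming a hypothetical phase_found field on enttable that records where each defect was detected):

SELECT 100 *
-- defects found by testing (phase_found and its 'Test' value are hypothetical)
(SELECT COUNT(DISTINCT T1.dbid) FROM enttable T1 WHERE T1.dbid <> 0 AND T1.phase_found = 'Test' AND (T1.date_raised BETWEEN {ts '2008-01-01 05:00:00'} AND {ts '2009-05-01 04:59:59'}))
/
-- all defects found in the same window, from any phase
(SELECT COUNT(DISTINCT T1.dbid) FROM enttable T1 WHERE T1.dbid <> 0 AND (T1.date_raised BETWEEN {ts '2008-01-01 05:00:00'} AND {ts '2009-05-01 04:59:59'}))

As with the DRE query, the numerator is just the denominator with one extra filter.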

Today I had the privilege and pleasure of attending a VoiCE discussion on Rational Method Composer (RMC) which included some discussion on the IBM Measured Capability Improvement Framework (MCIF). Dr Chris Sibbald presented, then opened the forum for discussion. What happens in VoiCE stays in VoiCE, so I won't touch on those items. Instead, let me share two separate (but related) requirements for successful process improvement: metrics and measures.

From a process engineer's perspective, business and operational objectives are a given, whether derived via MCIF or some other method. We certainly escalate suggestions about the benefits to be gained by adopting one or more practices when an opportunity for change appears, but we are typically tasked with attacking the shortcomings currently apparent to executives. For this posting, let's agree that we are given objectives.

Our task is then to find a set of practices which will move the teams toward achieving those objectives. Reqt # 1. We cannot select practices without metrics.

In particular, I strongly assert that metrics which represent the current state, and which are expected to indicate the value (or loss) from process change, must be expressed before any attempt is made at practice selection. How else am I to compare available practices and select from among them? We might implement any of them without baselining*, but we would afterwards be unable to tell 'different' from 'better.'

Let's now assume that a successful GQM or other technique provides some metrics with which to select and assess process changes. MCIF would then prioritize these and define a roadmap for their adoption.

Metrics are necessary but insufficient for achieving process improvement. Reqt # 2. Metrics must be decomposed into measures, and those measures (as well as their relationship to the other components of the metrics) must be communicated to the development team. Please note: this does not imply quotas for measures.

The distinction is important because members of a software development team have direct control over product and process measures but may have no ability to control (or even to view) metrics. An analyst may not know the average cost to deliver test results per use case point, but can directly affect the minutes required to outline the scenario currently under development. Process engineers need to provide the team with measures that the team members themselves can view, track, and control. Importantly, fluctuations in those measures due to special causes can then be identified immediately by the team and communicated to the process engineer.
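To make the distinction concrete, here is a minimal sketch of a team-facing measure. The scenario_log table and its columns are hypothetical (not part of the cq schema); the point is that everything in it can be viewed, tracked, and controlled by the analysts themselves:

-- one hypothetical row per scenario outlined: scenario_id, minutes_spent, day_completed
SELECT day_completed,
AVG(minutes_spent) AS avg_minutes_to_outline,
MAX(minutes_spent) AS longest_outline
FROM scenario_log
GROUP BY day_completed
ORDER BY day_completed

A sudden spike in longest_outline is exactly the kind of special-cause fluctuation the team can spot and report immediately; the cost-per-use-case-point metric, by contrast, also folds in rates and staffing data the team may never see.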

- Descriptive statistics are a special form of metric that relate a set of measurements to itself in order to predict expected values for such measurements (longest running transaction, typical defect severity); see the sketch below.
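Both parenthesized examples reduce to simple aggregates. A sketch, assuming a hypothetical txn_log table and assuming the defect records in enttable carry a severity field:

-- longest running transaction (txn_log and duration_ms are hypothetical)
SELECT MAX(duration_ms) AS longest_txn FROM txn_log;
-- typical (modal) defect severity: the top row of the result is the mode
SELECT severity, COUNT(*) AS n FROM enttable WHERE dbid <> 0 GROUP BY severity ORDER BY n DESC;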