Thursday, January 17, 2008

We are a ten-year-old product company and would like to conduct a CMMI Appraisal. We want to use the Staged Representation, but we thought we had to be appraised first at Maturity Level 2, then 3, then 4, and finally 5. Can't we just go right for ML 5?

Sure, you could try to do that, meaning that it is not explicitly forbidden by the SEI, but I have to ask: why would you want to? The entire purpose of the Staged Representation is to first build a foundation, then institutionalize it, then identify the variation in the process, and then optimize it. I don't see any value in skipping any of these steps; in fact, you would not be successful if you tried to skip any of them!

Even though you may skip appraisals, you will never be able to achieve ML5 without performing the processes, building the foundation, gathering and analyzing the data, and so on. So, there really is no "skipping."

If you choose to do all of the work, but skip the appraisals and attempt to achieve ML5 during your first SCAMPI A, you should be aware of two things:

First, even though it's an ML5 appraisal, you will still be appraised on the ML2, 3, and 4 process areas, just in a more rigorous fashion than if you had stopped at ML2. This is because once we pass ML2, we apply GP 3.1 and GP 3.2 to all of the ML2 Process Areas as well.

Second, and this is really the kicker: the SEI has said that it will be scrutinizing and auditing EVERY ML4/5 appraisal, and giving special attention to those that attempt to go directly to ML4 or ML5. Red flag.

One of the reasons for this is that the SEI has stated that no competent and ethical consultant or Lead Appraiser would recommend such an approach, except under the most extreme circumstances.

I worked with a company once that was ISO 9001 certified, had achieved CMM ML5, had adopted ITIL, and was rated at SPICE Level 3. They were ready for ML5, and didn't require the appraisals at ML2, 3, and 4.

If this is your situation, then great! If not, I wonder if your CMMI Consultant really is "top notch!"

Tuesday, January 8, 2008

We are preparing for a SCAMPI ML 3 Appraisal and the appraising company says that "the Quality Head of a company cannot be an Appraisal Team Member, due to the objectivity and interests attached." I'm not convinced. What do you think?

The SCAMPI Method Definition Document outlines the requirements for Appraisal Team Members, and it doesn't speak to whether ANY particular job function is excluded. It does, however, urge us to avoid real and perceived conflicts of interest, and specifically to avoid "chain of command" conflicts. What this means is that if you're a SW Development VP and everyone being interviewed works for you, you "shouldn't" be an ATM. I say "shouldn't" because I have observed situations where it works, it's just not that common or easy to do.

The problem, for those who have not had the experience, is that if your boss is in the room you are more likely to say what he/she wants to hear, instead of what actually occurred. And that taints the outcome of the appraisal.

You didn't say how your company was organized, but if you're the head of a quality department and you have several people on your team who may be interviewed, you can always leave the room and abstain from providing any input to that practice being evaluated. On the other hand, if you're the "author" of all of the processes, and you are the individual being measured on whether the appraisal is successful, I would discourage you from participating.

The two questions that the appraisal company should be concerned with are: 1) will you be interviewing people who work for you (not recommended); and 2) will you be in a position to evaluate your own work?

If the answer to both of these questions is "no" then I don't see a problem with it as long as you can show that objectivity is maintained. And the SCAMPI MDD has no issue with it either.

Friday, January 4, 2008

Isn't CAR (Causal Analysis and Resolution) just issue and defect prevention? Why is it ML5?

It’s true that CAR can be applied to “any issue,” but that’s not the whole story. CAR is much more than simple “defect prevention” or “root cause analysis,” and this PA is often misunderstood.

To understand CAR we need to put it into the context of “high maturity” and remember that the quantitative data we establish and analyze using OPP feeds the identification of the actual cause of the problem, or even of the problem itself (sometimes we don’t really know what it is, right?). CAR then guides us through the problem-solving process (select data, analyze causes, implement action proposals, evaluate the effect, record data), much as DAR guides us through a formal decision-making process.

CAR is intended to help us manage a “process lever” that we turn, pull, push, and otherwise manipulate to adjust the process, so that problems and inefficiencies are avoided, rather than fixed after the fact, by improving the process itself. That lever is created and pulled by a grand trio, OPP, CAR, and OID, working in concert. In the end, we update our baselines (OPP) so we know that the change is indeed making us perform better. True, the CMMI Intro class says we can use CAR in a “low maturity” organization, but it also points out that the benefits will be minimized. So, as you can see, “defect prevention” in high-maturity terms is an integrated process that includes OPP, CAR, and OID; CAR is only one of three important components.

I once worked with a CIO who was unhappy because his “estimate to actual” report on project delivery was all over the map (it was a scatter diagram). After slamming his fist on the table and insisting that it get “fixed,” he proclaimed, “we need a better estimating process!” And so they went off and spent close to $1 million developing an estimating process and deploying tools. Guess what? It was the same chart the next year. Estimating wasn’t his problem. After developing some baselines and models (using OPP), we learned that requirements churn was one of the likely causes, and we piloted a number of small, innovative process changes using CAR and OID, all the while updating our baselines and models. We even simulated some outcomes using different process components that were already sitting in the PAL, and guess what? After the improvement was deployed, his scatter chart had the dots all along the mid-line.

So, you see, CAR is most valuable when performed using data generated from OPP. Like many of the other PA's in the CMMI, it's not as simple as one Process Area. You need to think big!

I am told by some people that companies that follow Agile methodologies can achieve only up to CMMI ML 3, and that they cannot aim for or achieve CMMI ML 4 or 5.

Is this true?

As my mother-in-law would say: "I'd agree with you if you were right." There is much confusion and misinformation about Agile in the industry. Everyone talks about it, but few really understand it or can execute on the methods. Everyone thinks it means "no documentation," and that is just wrong. It does mean we should make intelligent decisions about the amount and type of documents that are produced, and that we should remain vigilant about focusing on what's important (the software and the customer), but that is a different issue.

Conversely, there are too many people that think the CMMI means "lots of documents" and that ML4/5 need even more. This too is untrue. The CMMI tells us that if we perform a process there will be evidence of it being performed. That's different than saying "if we create documents we are performing a process."

I am working with one such Agile organization now that is about to complete a ML4 appraisal - and they are finding ML4 much MORE valuable to them than they did ML2 and ML3.

One reason for this is that at ML4 we have good data to tell us how we can minimize and optimize our process. Agile proponents always talk about "just enough" (and I think that's great) but until you reach ML4 and have collected the right data to perform OPP (statistical analysis) how do you KNOW what "just enough" is? The answer is "you don't."

This organization now knows which process assets bring them value, which ones don't, and where they should insert (or remove) a process component to further optimize their development process.
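To make the "just enough" point concrete, here is a minimal sketch of the ML4 idea: establish a process performance baseline from historical data, and let the numbers, rather than opinion, say whether a leaner process variant is still within the process's demonstrated capability. All numbers and names below are invented for illustration.

```python
import statistics

# Hypothetical historical measurements of effort (hours) for one subprocess,
# e.g., a peer review step. These numbers are made up for the example.
review_effort_hours = [4.0, 5.5, 3.8, 4.6, 5.1, 4.3, 4.9, 5.2, 4.4, 4.7]

mean = statistics.mean(review_effort_hours)
sigma = statistics.stdev(review_effort_hours)

# Three-sigma control limits bound the process's natural variation.
ucl = mean + 3 * sigma  # upper control limit
lcl = mean - 3 * sigma  # lower control limit

print(f"baseline: {mean:.2f}h, control limits [{lcl:.2f}, {ucl:.2f}]")

def within_baseline(hours):
    """An observation inside the limits is consistent with the process's
    demonstrated performance; one far outside is a signal to investigate."""
    return lcl <= hours <= ucl
```

Until you have a baseline like this, "just enough" is a guess; with one, it is a measured statement about what the process can actually do.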

So, there is no truth to what you've heard . . . plenty of "agile" organizations adopt ML4 and ML5. The question I would ask first is, are the companies you're hearing about really "agile" or are they that other, new-fangled methodology: "lazy?"

Ah, the ol' "bi-directional traceability" question! This one is a topic of much discussion at Lead Appraiser geek fests, and most people have their own particular spin on it. Here's mine.

Traceability helps us to understand the relationships between work products, whether they be requirements, code, designs, tests, or others, and helps ensure that those relationships have integrity.

For Requirements, traceability helps us to understand the link between the elicited customer need and the product, sub-product, and test (VER) requirements (often a 1:many relationship). The ability to start "at the top" and trace all the way down to the many test cases, or to start at a single test case (or sub-product requirement) and trace it "up to the top," is what we call "bi-directional traceability." This is also an example of vertical traceability, and it is what "requirements traceability" implies in the CMMI.
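As a concrete (and entirely hypothetical) sketch, bi-directional traceability amounts to maintaining the downward links plus an index that lets you walk them in reverse. The requirement IDs below are invented for illustration.

```python
from collections import defaultdict

# Downward ("vertical") links: customer need -> product requirements -> tests.
# All IDs here are hypothetical.
links = {
    "NEED-1": ["REQ-1", "REQ-2"],
    "REQ-1": ["TEST-1", "TEST-2"],
    "REQ-2": ["TEST-3"],
}

# Build the reverse index so we can also trace "up to the top."
parents = defaultdict(list)
for parent, children in links.items():
    for child in children:
        parents[child].append(parent)

def trace_down(item):
    """All items derived from `item`, walking top-down."""
    result = []
    for child in links.get(item, []):
        result.append(child)
        result.extend(trace_down(child))
    return result

def trace_up(item):
    """The chain from `item` back toward the originating need."""
    result = []
    for parent in parents.get(item, []):
        result.append(parent)
        result.extend(trace_up(parent))
    return result

print(trace_down("NEED-1"))  # -> ['REQ-1', 'TEST-1', 'TEST-2', 'REQ-2', 'TEST-3']
print(trace_up("TEST-3"))    # -> ['REQ-2', 'NEED-1']
```

Being able to answer both questions, "what implements and verifies this need?" and "why does this test case exist?", is the integrity that REQM is after.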

If you're a software engineer it helps to think of these relationships as an object model. A master object has below it many others that directly support it and inherit its attributes and behaviors, and these can all be traced together as part of the object model taxonomy. The same goes for requirements.

Horizontal traceability is less common (although equally important), and applies to the tracing of functionality across multiple related components, such as interfaces or data access components, and helps to enable more effective integration testing, troubleshooting, and implementation.

Most seem to agree that vertical traceability is what REQM.SP1.4 is all about (in its bi-directional form) and that horizontal traceability, though valuable, is not specifically expected by the CMMI.

We have a defined template for SRS & DDD in our organization. For some of the projects, the customer is providing the requirements and hence they are using their own template and format. Kindly clarify whether we can accept this modification / removal of the existing approved template in our QMS.

You haven't indicated at what CMMI level your organization is performing, but it sounds as if your customer is providing an alternative to your "standard" template that projects are expected to use.

Both OPD and IPM provide for the introduction of new or alternative "process assets" that are introduced at the project level, and this situation is one of the reasons those SP's exist in the model.

Assuming that projects know this at the beginning, and that this was planned for in their Configuration Planning, AND your tailoring guidelines allow for it (if you're ML3 and you have tailoring guidelines) AND the customer-provided template has all or most of the attributes required (such as traceability) then this is a perfectly acceptable alternative.

As a matter of fact, successful demonstration and management of this practice shows a higher level of process maturity than forcing everyone to use the same document regardless of the situation they are in.

I see organizations usually implement GP 2.8 with product and process metrics. But now I am working in an organization that says they have talked with lead appraisers who told them that it is possible to implement GP 2.8 without any metrics: just systematically review the process execution, generate a qualitative report, identify issues, take corrective actions, and follow them up to closure, without mentioning any metric. I think this is not the intention of the practice. Am I wrong?

I love questions that are "hot topics" anytime a bunch of Lead Appraisers get together - and this is one of them! Yes, unfortunately we are geeks and we argue a lot about process. There are those LA's who would agree with you and say that the intention of the practice is to use a metric. I know one at the SEI in particular who insists that this is required. But there are those who say, no, that isn't always the case. I say "it depends." So, how to decide?

The Generic Practices are frustrating because little or no guidance is given (compared to SP's) as to what is really expected. But the authors do provide the answer, albeit indirectly. So I cracked open my “signed by the author” copy of the book and checked it out for myself. In order to understand the GPs, we need to consider the reason why the GP is there in the first place, and then map back to the PA from which it came. In the case of GP2.8, it clearly maps to PMC, not to MA, because we want to know "how is it going?"

Can the Specific Goals in PMC be satisfied without a metric? While metrics are often presented as part of the objective evidence for PMC during an appraisal, there are many parts of PMC that do not require metrics to be successful. As a matter of fact, the SP’s don’t speak to metrics at all! However, there is another clue – and it’s right in your original question (great questions often have the answer in them don’t they?).

In your question you say the client has told you that they "systematically review the process execution, generate a qualitative report, identify issues, take corrective actions and follow them upon closure." That reads a LOT like the SPs in PMC and all of those could certainly be performed without a metric couldn't they? As a matter of fact, the SG’s in PMC could be satisfied without a formal metric (unless you consider schedule and budget part of a metric . . . which it kind-of sort-of is). Now, as to whether they actually DID all of this or not is another question, and a metric about this would be great evidence that this actually occurred. Darn! That CMMI is so circular!

So, using Socratic reasoning as my guide: GP2.8 is derived from PMC, and PMC is largely about “systematically review the . . .” and not about “metrics.” THEREFORE, it’s logical to assume that, yes, it is possible for GP2.8 to be satisfied without a metric, ASSUMING they are really doing all these things they claim to be doing (which may require a metric to show!).

Question answered? Not so fast!

Now, let’s fast forward to ML4/5 where we all start wishing we attached a metric to GP2.8 back when we were ML2. Why? Because that data gathered from GP2.8 happens to be the very same data we desperately need in order to perform OPP as we develop models and baselines in order to begin QPM.

This is where the real power of the CMMI becomes evident and it is a shame that many organizations get to this point and are stopped dead in their tracks because they have no historical data about the performance of the process.
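A toy illustration of that point: even simple, "qualitative" monitoring records, if captured consistently under GP 2.8 at ML2/3, quietly accumulate into the historical series OPP needs later. The record fields and numbers here are hypothetical.

```python
from datetime import date

# Hypothetical GP 2.8 monitoring records: each periodic status review logged
# issues found and corrective actions closed on time. No "metric" was the
# goal at the time; the records were just kept consistently.
monitoring_records = [
    {"period": date(2007, 10, 1), "issues_found": 3, "actions_closed_on_time": 2},
    {"period": date(2007, 11, 1), "issues_found": 5, "actions_closed_on_time": 5},
    {"period": date(2007, 12, 1), "issues_found": 2, "actions_closed_on_time": 1},
]

# Years later, a baseline-ready series falls out of those same records:
# on-time closure rate per period.
closure_rates = [
    r["actions_closed_on_time"] / r["issues_found"] for r in monitoring_records
]
print([f"{rate:.0%}" for rate in closure_rates])
```

The organization that kept such records has the raw material for OPP baselines; the one that only held meetings does not.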

So there you go. The analysis tells us we don’t HAVE to have a metric, but that if we don’t, we’ll be sorry. Go figure.