At worst, the CMM is a whitewash
that obscures the true dynamics of software engineering, suppresses
alternative models. If an organization follows it for its own sake,
rather than simply as a requirement mandated by a particular government
contract, it may very well lead to the collapse of that company's
competitive potential.

-- James Bach

"In times of universal deceit, telling the truth will be a
revolutionary act."

-- George Orwell
[Eric Arthur Blair] (1903-1950) British author

The Software Capability Maturity Model (CMM) is a software development
methodology that is as close to a scam as ISO 9000. The current version was
released in December 2001 by the Software Engineering Institute and is often
called version 1.1 of the Capability Maturity Model Integration (CMMI).

More politically inclined authors would claim that this is a variant
of "Brezhnev socialism" applied to software engineering (or, worse, a variant
of Lysenkoism, since there is some government pressure to obtain the
certification), but that's another story. Labels aside, the fact that an
organization is CMM-certified (and it does not matter at what level -- see
below) should, in the current environment, probably be viewed as a slick
marketing trick (one especially useful for outsourcers).

The initial development of CMM is attributed to
Watts Humphrey,
who founded the Software Process Program of the Software Engineering Institute
(SEI) at Carnegie Mellon University. From 1959 to 1986 he worked for IBM.
He holds a bachelor's degree in physics from the University of Chicago,
a master's degree in physics from the Illinois Institute of Technology,
and a master's degree in business administration from the University of
Chicago. It appears that the CMM originated from a 1987 document written
by Watts S. Humphrey (Evolution of CMM and CMMI, from the SEI):

1987, September - Watts S. Humphrey
authored the first CMM, A Method for Assessing the Software Engineering
Capability of Contractors, CMU/SEI-87-TR-23.

This was a 40-page document containing a list of questions to
be used as an assessment tool. Each question was mapped to the five
levels, still present today. To achieve a level, an organization
had to demonstrate they could answer "Yes" to 90% of the "starred"
questions and 80% of all questions for that level.

However, there was also a second "Technology" dimension, with
two levels, A and B, which was displayed vertically (with the 5 maturity
levels horizontal). The technology dimension assessed the level of
automation present. Organizations "matured" from 1A to 5B.
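The pass rule from the 1987 assessment can be sketched as a small check.
This is a hypothetical illustration only; the question data below is
invented for the example and is not taken from CMU/SEI-87-TR-23:

```python
def achieves_level(answers):
    """Check the 1987 assessment rule for one maturity level.

    answers: list of (is_starred, answered_yes) tuples, one per question.
    Passing requires "Yes" on at least 90% of the "starred" questions
    and on at least 80% of all questions for that level.
    """
    if not answers:
        return False
    starred = [yes for is_starred, yes in answers if is_starred]
    all_ok = sum(yes for _, yes in answers) / len(answers) >= 0.80
    starred_ok = (not starred) or (sum(starred) / len(starred) >= 0.90)
    return all_ok and starred_ok

# Illustrative data: 10 starred questions (9 answered "Yes") plus 10
# unstarred questions (all "Yes") -> 90% starred, 95% overall -> pass.
sample = [(True, True)] * 9 + [(True, False)] + [(False, True)] * 10
```

With `sample` as above, `achieves_level(sample)` passes, while a set with
only half of the starred questions answered "Yes" would fail the 90% bar
regardless of the overall score.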

As is often the case with questionable doctrines, his views of the CMM
are more realistic than those of many of his followers, and he had second
thoughts about the effectiveness of his creation (Sidebar:
Watts Humphrey on Software Quality):

Is CMM the only quality tool software developers need?

The CMM framework is essentially aimed at how do you establish a
good management environment for doing engineering work. It's about the
planning you need, the configuration management, the practices, the
policies -- all that stuff. It doesn't talk about how you do things.

When I looked at organizations that were at high [CMM] levels, I
discovered that the engineering practices hadn't changed that much.
I had naively assumed that when we put good practices in place, like
planning and measurement and quality management, that it would seep
down to the engineers [programmers], and they'd start to use it in their
personal work. It didn't happen.

"The essence of a software entity is a construct of interlocking
concepts ... I believe the hard part of building software to be the
specification, design, and testing of this conceptual construct, not
the labor of representing it and testing the fidelity of the
representation ... If this is true, building software will always
be hard. There is inherently no silver bullet."

-- Frederick P. Brooks, Jr., "No Silver Bullet"

The road to hell is paved with good intentions. Good work can be done under
any software development model, but excessive bureaucratization
stimulates bad work by redirecting energy from worthy goals to
"raising the flag and marching with the banner" activities. Once excessive
bureaucratization is in place, it does not much matter which software
development methodology is used: you are screwed anyway.

What is really bad in all this CMM junk is that it shifts focus from
improving the real capabilities of a software organization to the creation
of useless, excessive, expensive, and time-consuming bureaucratic
perversions. Excessive, I would say obsessive, as the focus is on formal
procedures as well as on the illusory goal of "process improvement".

The latter is the most detrimental
and dangerous feature of the CMM. This naive (or crooked) approach directly
encourages excessive bureaucratization and mandates wasteful paperwork in
the best "mature socialism" style. Some problems associated with the CMM
are similar to the problems of the traditional waterfall approach to
software development, the software life-cycle model most detached from
reality. In many ways the CMM's activity-based measurement approach mimics
the sequential paradigm inherent in the waterfall software development
model. Here is a relevant quote from
CMM vs. CMMI: from Conventional to Modern Software Management, an article
originally published in The Rational Edge, February 2002:

Is the CMM Obsolete?

Some issues associated with the practice of the CMM are also recurring
symptoms of traditional waterfall approaches and overly process-based
management. The CMM's activity-based measurement approach is very much
in alignment with the sequential, activity-based management paradigm
of the waterfall process (i.e., do requirements activities, then design
activities, then coding activities, then unit testing activities, then
integration activities, then system acceptance testing).
This probably
explains why many organizations' perspectives on the CMM are anchored
in the waterfall mentality.

Alternatively, iterative development techniques, software industry
best practices, and economic motivations drive organizations to take
a more results-based approach: Develop the business case, vision, and
prototype solution; elaborate into a baseline architecture; elaborate
into usable releases; and then finalize into fieldable releases. Although
the CMMI remains an activity-based approach (and this is a fundamental
flaw), it does integrate many of the industry's modern best practices,
and it discourages much of the default alignment with the waterfall
mentality.

One way to analyze CMM and CMMI alignment with the waterfall model
and iterative development, respectively, is to look at whether each
model's KPAs motivate sound software management principles for these
two different development approaches. First, we will define those software
management principles. Over the last ten years, I have compiled two
sets: one for succeeding with the conventional, waterfall approach and
one for succeeding with a modern, iterative approach. Admittedly, these
"Top Ten Principles" have no scientific basis and provide only a coarse
description of patterns for success with their respective management
approaches. Nevertheless, they do provide a suitable framework for my
view that the CMM is aligned with the waterfall mentality, whereas the
CMMI is more aligned with an iterative mentality.

As one critic of the CMM aptly noted, the initial level of the CMM
(Level 2, "Repeatable" in the CMM, "Managed" in the CMMI) is actually a
certification "of the ability to stand upright
and make fire" applied to software development. This is so basic (and fuzzy)
that any software development organization can legitimately claim CMM Level
2 readiness.

The most insightful critique of the CMM was provided by James Bach in his
article The Immaturity of CMM, originally published in the September 1994
issue of American Programmer. Here is one relevant quote from Bach's paper
(I strongly encourage you to read it in full), which dispels the
"institutionalization" myth that is the cornerstone of the CMM:

The idea that process makes up for mediocrity
is a pillar of the CMM, wherein humans are apparently subordinated to
defined processes. But, where is the justification for
this? To render excellence less important the problem solving tasks
would somehow have to be embodied in the process itself. I've never
seen such a process, but if one exists, it would have to be quite complex.
Imagine a process definition for playing
a repeatably good chess game. Such a process exists, but is useful only
to computers; a process useful to humans has neither been documented
nor taught as a series of unambiguous steps. Aren't software problems
at least as complex as chess problems?

The CMM reveres institutionalization of process for its own sake.
Since the CMM is principally concerned with an organization's ability
to commit, such a bias is understandable. But, an organization's ability to commit is merely an expression
of a project team's ability to execute. Even if necessary
processes are not institutionalized formally, they may very well be
in place, informally, by virtue of the skill of the team members.

Institutionalization guarantees nothing,
and efforts to institutionalize often lead to a bifurcation between
an oversimplified public process and a rich private process that must
be practiced undercover. Even if institutionalization
is useful, why not instead institutionalize a system for identifying
and keeping key contributors in the organization, and leave processes
up to them? The CMM contains very little information on process dynamics.

In other words, the right organizational processes can improve the output
of a group of talented software developers, but they cannot create one.
By ignoring this critical, fundamental fact, the CMM
completely loses credibility with anyone experienced
with a wide range of software development projects.

In his review of Bach's groundbreaking and courageous paper, Kelly Nehowig
observed:

The author describes six basic problem areas that he has identified
with the CMM:

The CMM has no formal theoretical basis and in fact is based
on the experience “of very knowledgeable people”. Because of
this lack of theoretical proof, any other model based on experiences
of other experts would have equal veracity.

The CMM does not have good empirical support and this same
empirical support could also be construed to support other models.
Without a comparison of alternative process models under a controlled
study, an empirical case cannot be built to substantiate the SEI’s
claims regarding the CMM. Primarily, the model is based on the experiences
of large government contractors and on Watts Humphrey's own experience
in the mainframe world. It does not represent the successful experiences
of many shrink-wrap companies that are judged to be a “level 1”
organization by the CMM.

The CMM ignores the importance of people involved with the
software process by assuming that processes can somehow render individual
excellence less important. In order for this to be the case,
problem-solving tasks would somehow have to be included in the process
itself, which the CMM does not begin to address.

The CMM reveres the institutionalization of process for its
own sake. This guarantees nothing and in some cases, the institutionalization
of processes may lead to oversimplified public processes, ignoring
the actual successful practice of the organization.

The CMM does not effectively describe any information on
process dynamics, which confuses the study of the relationships
between practices and levels within the CMM. The CMM does not
perceive or adapt to the conditions of the client organization.
Arguably, most and perhaps all of the key practices of the CMM at
its various levels could be performed usefully at level 1, depending
on the particular dynamics of an organization. Instead of modeling
these process dynamics, the CMM merely stratifies them.

The CMM encourages the achievement of a higher maturity level
in some cases by displacing the true mission, which is improving
the process and overall software quality. This may effectively
“blind” an organization to the most effective use of its resources.

The author's most compelling argument against the CMM is the existence
of many successful software companies that, according to the CMM, should
not exist. Many
software companies that provide “shrink wrap” software such as Microsoft,
Symantec, and Lotus would definitely be classified by the CMM as level
1 companies. In these companies, innovation
reigns supreme, and it is from the perspective of the innovator that
the CMM seems lost.

The author claims that innovation
per se does not appear in the CMM at all, and is only suggested by level
5. Preoccupied with predictability, the CMM is ignorant of the
dynamics of innovation. In fact, where innovators advise companies
to be flexible, to push authority down into the organization, and to
recommend constant constructive innovation, the CMM mistakes all of
these attributes for the chaos it associates with level 1 companies.
Because the CMM is distrustful of personal
contributions, ignorant of the environment needed to nurture innovative
thinking, and content to bury organizations under an ineffective superstructure,
achieving level 2 on the CMM scale may actually destroy the very thing
that caused the company to be successful in the first place.

The highest level (Level 5) has nothing to do with software quality:
what it really means is the ability to keep a double set of
books and produce a lot of bogus paperwork in English. As such,
it has tremendous
marketing value, especially if the other side is represented by PHBs. It
creates a really nice opening for outsourcers (and outsourcers play an
important, maybe critical, role in keeping the CMM afloat), who can tout
Level 5 certification as the best thing in software development
since sliced bread. For
that reason this level of CMM certification is simply loved by outsourcers.
Nine Indian firms claim Level 5 certification, and not without a reason
:-).

If you look at Usenet discussions of the CMM hype, the strongest defenders
of this marketing trick are people connected to outsourcers. The level of
argumentation reminds me of the USSR Communist Party Congresses, with
their long applause turning into a standing ovation for each monstrous
stupidity invented by the Politburo jerks ;-).

Sometimes CMM compliance is mandated. In that case, the less effort spent
on obtaining it, the better. In fact, anyone can proclaim themselves to be
at whatever CMM level they want without any significant changes to the
actual software development process. This is a "paper tiger" type of
certification: all that is needed is (bogus) paperwork.

At the same time, too much zeal in achieving CMM compliance can be very
destructive for the organization. The CMM absolutizes the value of formal
processes but ignores people. And it is people, the software developers,
who are the key to success. This is readily apparent to anyone who is
familiar with the work of Gerald Weinberg on the psychology of programming.
The net result of excessive zeal in achieving CMM compliance can be the
proliferation of a dangerous and clueless "software development
bureaucracy" and of micromanagers. If possible, I would recommend that the
CIO find volunteers to work on CMM compliance, create an appropriate
organizational unit, and, once compliance is achieved, dismantle or
outsource the unit and let go the people who were the most enthusiastic
about the whole process ;-). They were extremely dangerous
for the organization's health anyway.

All in all, obtaining CMM certification is by and large a waste of
organizational resources, but you might need to do it in order to
participate in government contracts. If you do, please take it easy and
understand that this is a pretty much useless exercise. Still, the world
is not perfect, and sometimes you need to play the game. The most
constructive way to play the CMM game is to concentrate on introducing
automation tools such as bug-tracking software (for example, Bugzilla),
test automation tools (for example, DejaGnu), and compilation and linkage
automation software (for example, OMU can be adapted
for this purpose).
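In that spirit, even a minimal test-automation harness does more for real
capability than a binder of process documents. The sketch below is a
hypothetical illustration of the idea (it is not DejaGnu's actual API):
run each command in a regression suite and compare its output against an
expected result.

```python
import subprocess

# Hypothetical regression suite: (name, command, expected stdout).
# The entries are illustrative only; real suites would run build and
# test commands for the product under development.
CASES = [
    ("echo-hello", ["echo", "hello"], "hello\n"),
    ("true-silent", ["true"], ""),
]

def run_suite(cases):
    """Run each case and report PASS/FAIL based on exact stdout match."""
    results = {}
    for name, cmd, expected in cases:
        proc = subprocess.run(cmd, capture_output=True, text=True)
        results[name] = "PASS" if proc.stdout == expected else "FAIL"
    return results
```

On a POSIX system, `run_suite(CASES)` should report PASS for both sample
cases; plugging in real build or test commands turns this into the kind of
repeatable check that the tools named above provide at scale.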

But the key issue here is to block the promotion into management ranks
of a special category of people who thrive in the organizational atmosphere
of "software development socialism". Those people are the most dangerous
and destructive for any software development organization, and the CMM can
serve as a litmus test for exposing them. If the CMM process helps them
rise in the management ranks, everything is lost; if the opposite is true
(as reflected in the shrewd suggestion above that the CMM-compliance unit
should first be created and then outsourced :-), then the CMM process can
even be useful.

Remember the danger of "software development socialism", folks
;-). Such side effects of a typical CMM adoption as bureaucratization,
micromanagement, and the promotion of the wrong type of people should
never be overlooked, as they kill software developers' creativity and any
capability for innovation within the organization. Everything becomes way
too predictable, as in "predictable failure". And you know what happened
to the USSR with its "mature socialism", don't you?

Software companies that try to push the technology envelope would be
better off ignoring the CMM. As Bach noted:

"Studies alleging that the CMM is valuable
don't consider alternatives, and leave out critical data that would allow
a full analysis of what's going on in companies that claim to have moved
up in CMM levels and to have benefited for that reason."

"My thesis, in this essay, is that the CMM is a particular mythology
of software process evolution that cannot legitimately claim to be a
natural or essential representation of software processes."

This article was originally published in the September 1994 issue
of American Programmer.

The Software Engineering Institute's (SEI) Capability Maturity Model
(CMM) gets a lot of publicity. Given that the institute is funded by
the US Department of Defense to the tune of tens of millions of dollars
each year [1], this should come as no surprise: the folks at the SEI
are the official process mavens of the military, and have the resources
to spread the word about what they do. But, given also that the CMM
is a broad, and increasingly deep, set of assertions as to what constitutes
good software development practice, it's reasonable to ask where those
assertions come from, and whether they are in fact complete and correct.

My thesis, in this essay, is that the CMM is a particular mythology
of software process evolution that cannot legitimately claim to be a
natural or essential representation of software processes.

The CMM is at best a consensus among a particular group of software
engineering theorists and practitioners concerning a collection of effective
practices grouped according to a simple model of organizational evolution.
As such, it is potentially valuable for those companies that completely
lack software savvy, or for those who have a lot of it and thus can
avoid its pitfalls.

At worst, the CMM is a whitewash that obscures the true dynamics
of software engineering, suppresses alternative models. If an organization
follows it for its own sake, rather than simply as a requirement mandated
by a particular government contract, it may very well lead to the collapse
of that company's competitive potential. For these reasons, the CMM
is unpopular among many of the highly competitive and innovative companies
producing commercial shrink-wrap software.

A short description of the CMM

The CMM [7] was conceived by Watts Humphrey, who based it on the
earlier work of Phil Crosby. Active development of the model by the
SEI began in 1986.

It consists of a group of "key practices", neither new nor unique
to CMM, which are divided into five levels representing the stages that
organizations should go through on the way to becoming "mature". The
SEI has defined a rigorous process assessment method to appraise how
well an organization satisfies the goals associated with each level.
The assessment is supposed to be led by an authorized lead assessor.

The maturity levels are:

1. Initial (chaotic, ad hoc, heroic)

2. Repeatable (project management, process discipline)

3. Defined (institutionalized)

4. Managed (quantified)

5. Optimizing (process improvement)

One way companies are supposed to use the model is first to assess
their maturity level and then form a specific plan to get to the next
level. Skipping levels is not allowed.

The CMM was originally meant as a tool to evaluate the ability of
government contractors to perform a contracted software project. It
may be suited for that purpose; I don't know. My concern is that it
is also touted as a general model for software process improvement.
In that application, the CMM has serious weaknesses.

Shrink-wrap companies, which have also been called commercial off-the-shelf
firms or software package firms, include Borland, Claris, Apple, Symantec,
Microsoft, and Lotus, among others. Many such companies rarely if ever
manage their requirements documents as formally as the CMM describes.
This is a requirement to achieve level 2, and so all of these companies
would probably fall into level 1 of the model.

Criticism of the CMM

A comprehensive survey of criticism of the CMM is outside the scope
of this article. However, Capers Jones and Gerald Weinberg are two noteworthy
critics.

In his book Assessment & Control of Software Risks [6], Jones
discusses his own model, Software Productivity Research (SPR), which
was developed independently from CMM at around the same time and competes
with it today. Jones devotes a chapter to outlining the weaknesses of
the CMM. SPR accounts for many factors that the CMM currently ignores,
such as those contributing to the productivity of individual engineers.

In the two volumes of his Quality Software Management series
[12,13], Weinberg takes issue with the very concept of maturity as applied
to software processes, and instead suggests a paradigm based on patterns
of behavior. Weinberg models software processes as interactions between
humans, rather than between formal constructs. His approach suggests
an evolution of "problem-solving leadership" rather than canned processes.

General problems with CMM

I don't have the space to expand fully on all the problems I see
in the CMM. Here are the biggest ones from my point of view as a process
specialist in the shrink-wrap world:

The CMM has no formal theoretical basis. It's based on the experience
of "very knowledgeable people". Hence, the de facto underlying theory
seems to be that experts know what they're doing. According to such
a principle, any other model based on experiences of other knowledgeable
people has equal veracity.

The CMM has only vague empirical support. That is, the empirical
support for CMM could also be construed to support other models.
The model is based mainly on experience of large government contractors,
and Watts Humphrey's own experience in the mainframe world. It does
not account for the success of shrink-wrap companies, and levels
1, 4, and 5 are not well represented in the data: the first because
it is misrepresented, the latter two because there are so few organizations
at those levels. The SEI's Mark Paulk can cite numerous experience
reports supporting the CMM, and he tells me that a formal validation
study is underway. That's all well and good, but the anecdotal reports
I've seen and heard regarding success using the CMM could be interpreted
as evidence for the success of people working together to achieve
anything. In other words, without a comparison of
alternative process models under controlled conditions, the empirical
case can never be closed. On the contrary, the case is kept wide
open by ongoing counterexamples in the form of successful level
1 organizations, and by the curious lack of data regarding failures
of the CMM (which may be due to natural reluctance on the part of
companies to dwell on their mistakes, or of the SEI to record them).

The CMM reveres process, but ignores people. This is readily
apparent to anyone who is familiar with the work of Gerald Weinberg,
for whom the problems of human interaction define engineering. By
contrast, both Humphrey and CMM mention people in passing [5], but
both also decry them as unreliable and assume that defined processes
can somehow render individual excellence less important. The idea
that process makes up for mediocrity is a pillar of the CMM, wherein
humans are apparently subordinated to defined processes. But, where
is the justification for this? To render excellence less important
the problem solving tasks would somehow have to be embodied in the
process itself. I've never seen such a process, but if one exists,
it would have to be quite complex. Imagine a process definition
for playing a repeatably good chess game. Such a process exists,
but is useful only to computers; a process useful to humans has
neither been documented nor taught as a series of unambiguous steps.
Aren't software problems at least as complex as chess problems?

The CMM reveres institutionalization of process for its own
sake. Since the CMM is principally concerned with an organization's
ability to commit, such a bias is understandable. But, an organization's
ability to commit is merely an expression of a project team's ability
to execute. Even if necessary processes are not institutionalized
formally, they may very well be in place, informally, by virtue
of the skill of the team members. Institutionalization guarantees
nothing, and efforts to institutionalize often lead to a bifurcation
between an oversimplified public process and a rich private process
that must be practiced undercover. Even if institutionalization
is useful, why not instead institutionalize a system for identifying
and keeping key contributors in the organization, and leave processes
up to them?

The CMM contains very little information on process dynamics.
This makes it confusing to discuss the relationship between practices
and levels with a CMM proponent, because of all the hidden assumptions.
For instance, why isn't training on level 1 instead? Training is
especially important at level 1, where it may take the form of mentoring
or of generic training in any of the skills of software engineering.
The answer seems to be that nothing is placed at level 1,
because level 1 is defined merely as not being at level 2. The hidden
assumption here is that who we are, what problems we face, and what
we're already doing doesn't matter: just get to level 2.
In other words, the CMM doesn't perceive or adapt to the conditions
of the client organization. Therefore training or any other informal
practice at level 1, no matter how effective it is, could be squashed
accidentally by a blind and static CMM. Another example: Why is
defect prevention a level 5 practice? We use project post mortems
at Borland to analyze and improve our processes -- isn't that a
form of defect prevention? There are many such examples I could
cite, based on a reading of the CMM 1.1 document (although I did
not review the voluminous Key Practices document) and the appendix
of Humphrey's Managing the Software Process [5]. Basically,
most and perhaps all of the key practices could be performed usefully
at level 1, depending on the particular dynamics of the particular
organization. Instead of actually modeling those process dynamics,
the way Weinberg does in his work, the CMM merely stratifies them.

The CMM encourages displacement of goals from the
true mission of improving process to the artificial mission of achieving
a higher maturity level. I call this "level envy", and it generally
has the effect of blinding an organization to the most effective
use of its resources. The SEI itself recognizes this as a problem
and has taken some steps to correct it. The problem is built in
to the very structure of the model, however, and will be very hard
to exorcise.

The world of technology thrives best when individuals are left
alone to be different, creative, and disobedient. -- Don Valentine,
Silicon Valley Venture Capitalist [8]

Apart from the concerns mentioned above, the most powerful argument
against the CMM as an effective prescription for software processes
is the many successful companies that, according to the CMM, should not
exist. This point is most easily made against the backdrop of the Silicon
Valley.

Tom Peters's Thriving on Chaos [9] amounts to a manifesto
for Silicon Valley. It places innovation, non-linearity, and ongoing
revolution
at the center of its world view. Here in the Valley, innovation reigns
supreme, and it is from the vantage point of the innovator that the
CMM seems most lost. Personal experience at Apple and Borland, and contact
with many others in the decade I've spent here, support this view.

Proponents of the CMM commonly mistake its critics as being anti-process,
and some of us are. But a lot of us, including me, are process specialists.
We believe in the kinds of processes that support innovation. Our emphasis
is on systematic problem-solving leadership to enable innovation, rather
than mere process control to enable cookie-cutter solutions.

Innovation per se does not appear in the CMM at all, and it is only
suggested by level 5. This is shocking, in that the most innovative
firms in the software industry (e.g., General Magic, a pioneer in personal
digital communication technology) operate at level 1, according to the
model. This includes Microsoft, too, and certainly Borland [2]. Yet,
in terms of the CMM, these companies are considered no different than
any failed startup or paralyzed steel company. By contrast, companies
like IBM, which by all accounts has made a real mess of the Federal
Aviation Administration's Advanced Automation Project, score high in
terms of maturity (according to a member of a government audit team
with whom I spoke).

Now, the SEI argues that innovation is outside of its scope, and
that the CMM merely establishes a framework within which innovation
may more freely occur. According to the literature of innovation, however,
nothing could be further from the truth. Preoccupied with predictability,
the CMM is profoundly ignorant of the dynamics of innovation.

Such dynamics are documented in Thriving on Chaos, Reengineering
the Corporation [4], and The Fifth Discipline [10], three
well known books on business innovation. Where innovators advise companies
to get flexible, the CMM advises them to get predictable. Where the
innovators suggest pushing authority down in the organization, the CMM
pushes it upward. Where the innovators recommend constant constructive
innovation, the CMM mistakes it for chaos at level 1. Where the innovators
depend on a trail of learning experiences, the CMM depends on a trail
of paper.

Nowhere is the schism between these opposing world-views more apparent
than on the matter of heroism. The SEI regards heroism as an unsustainable
sacrifice on the part of particular individuals who have special gifts.
It considers heroism the sole reason that level 1 companies succeed,
when they succeed at all.

The heroism more commonly practiced in successful level 1 companies
is something much less mystical. Our heroism means taking initiative
to solve ambiguous problems. This does not mean burning people up and
tossing them out, as the SEI claims. Heroism is a definable and teachable
set of behaviors that enhance and honor creativity (as a unit of United
Technologies Microelectronics Center has shown [3]). It is communication,
and mutual respect. It means the selective deployment of processes,
not according to management mandate, but according to the skills of
the team.

Personal mastery is at the center of heroism, yet it too has no place
in the CMM, except through the institution of a formal training program.
Peter Senge [10] has this to say about mastery:

"There are obvious reasons why companies resist encouraging personal
mastery. It is 'soft', based in part on unquantifiable concepts such
as intuition and personal vision. No one will ever be able to measure
to three decimal places how much personal mastery contributes to productivity
and the bottom line. In a materialistic culture such as ours, it is
difficult even to discuss some of the premises of personal mastery.
'Why do people even need to talk about this stuff?' someone may ask.
'Isn't it obvious? Don't we already know it?'"

This is, I believe, the heart of the problem, and the reason why
CMM is dangerous to any company founded upon innovation. Because the
CMM is distrustful of personal contributions, ignorant of the conditions
needed to nurture non-linear ideas, and content to bury them beneath
a constraining superstructure, achieving level 2 on the CMM scale may
very well stamp out the only flame that lit the company to begin with.

I don't doubt that such companies become more predictable, in the
way that life becomes predictable if we resolve never to leave our beds.
I do doubt that such companies can succeed for long in a dynamic world
if they work in their pajamas.

An alternative to CMM

If not the maturity model, then by what framework can we guide genuine
process improvement?

Alternative frameworks can be found in generic form in Thriving
on Chaos, which contains 45 "prescriptions", or The Fifth Discipline,
which presents--not surprisingly--five disciplines. The prescriptions
of Thriving on Chaos are embodied in an organizational tool called
The Excellence Audit, and The Fifth Discipline Fieldbook
[11], which provides additional guidance in creating learning organizations,
is now available.

An advantage of these models is that they provide direction, without
mandating a particular shape to the organization. They actually provide
guidance in creating organizational change.

Specific to software engineering, I'm working on a process model
at Borland that consists of a seven-dimensional framework for analyzing
problems and identifying necessary processes. These dimensions are:
business factors, market factors, project deliverables, four primary
processes (commitment, planning, implementation, convergence), teams,
project infrastructure, and milestones. The framework connects to a
set of scalable "process cycles". The process cycles are repeatable,
step-by-step recipes for performing certain common tasks.

The framework is essentially a situational repository of heuristics
for conducting successful projects. It is meant to be a quick reference
to aid experienced practitioners in deciding the best course of action.

The key to this model is that the process cycles are subordinated
to the heuristic framework. The whole thing is an aid to judgment,
not a prescription for institutional formalisms. The structure
of the framework, as a set of two-dimensional grids, assists in process
tailoring and asking "what if...?"

In terms of this model, maturity means recognizing problems (through
the analysis of experience and use of metrics) and solving them (through
selective definition and deployment of formal and informal processes),
and that means developing judgment and cooperation within teams. Unlike
the CMM, there is no a priori declaration either of the problems, or
the solutions. That determination remains firmly in the hands of the
team.

The disadvantage of this alternative model is that it's more complex,
and therefore less marketable. There are no easy answers, and our progress
cannot be plotted on the fingers of one hand. But we must resist the
temptation to turn away from the unmeasurable and sometimes ineffable
reality of software innovation.

After all, that would be immature.

Postscript 02/99

In the five years since I wrote this article, neither the CMM situation,
nor my assessment of it, has changed much. The defense industry continues
to support the CMM. Some commercial IT organizations follow it, many
others don't. Software companies pursuing the great technological goldrush
of our time, the Internet, are ignoring it in droves. Studies alleging
that the CMM is valuable don't consider alternatives, and leave out
critical data that would allow a full analysis of what's going on in
companies that claim to have moved up in CMM levels and to have benefited
as a result.

One thing about my opinion has shifted. I've become more comfortable
with the distinction between the CMM philosophy, and the CMM issue list.
As a list of issues worth addressing in the course of software process
improvement, the CMM is useful and benign. I would argue that it's incomplete
and confusing in places, but that's no big deal. The problem begins
when the CMM is adopted as a philosophy for good software engineering.

Still, it has become a lot clearer to me why the CMM philosophy is
so much more popular than it deserves to be. It gives hope, and an illusion
of control, to management. Faced with the depressing reality that software
development success is contingent upon so many subtle and dynamic factors
and judgments, the CMM provides a step by step plan to do something
unsubtle and create something solid. The sad part is that this
step-by-step plan usually becomes a substitute for genuine education
in engineering management, and genuine process improvement.

Over the last few years, I've been through Jerry Weinberg's classes
on management and change artistry: Problem Solving Leadership,
and the Change Shop. I've become a part of his Software Engineering
Management Development Group program, and the SHAPE forum.
Information about all of these are available at http://www.geraldmweinberg.com.
In my view, Jerry's work continues to offer an excellent alternative
to the whole paradigm of the CMM: managers must first learn to see,
hear, and think about human systems before they can hope to control
them. Software projects are human systems; deal with it.

One last plug. Add to your reading list The Logic of Failure,
by Dietrich Dorner. Dorner analyzes how people cope with managing complex
systems. Without mentioning software development or capability maturity,
it's as eloquent an argument against CMM philosophy as you'll find.

The paper being reviewed was written to support the thesis that the
Software Engineering Institute's Capability Maturity Model (SEI CMM)
is a collection of software engineering practices, organized according
to a simple process-evolution model, that is not completely effective
in every software organization. The author makes
his case by describing six areas in which he has general problems with
the CMM. This is followed by a section outlining the author’s claim
that a “level 1” organization is completely misunderstood by the SEI
and that effective software can be (and actually is) created by many
level 1 organizations. Finally, the author briefly describes an alternative
to CMM that can be used as a framework for process improvement.

For the most part, I believe that the author has accurately critiqued
the CMM and, from my experience, I would agree with the problems he
discusses. In my mind, the CMM is a good theoretical guideline for establishing
a basic understanding of the characteristics of a good software development
organization, but by stringently following its processes and procedures
to the letter, an organization is not guaranteed to be successful. The
CMM does not deal effectively with innovation issues and people issues.
It also does not reconcile the fact that many successful software organizations
can claim various attributes associated with four (or sometimes all
five) of the CMM levels but, due to the rules established by the CMM,
would officially be designated a Level 1 organization, which unfairly
describes the organization’s capabilities.

Summary of the Reviewed Article

The article is broken into several sections that describe the CMM in
general, the problems that the author has with the CMM, an alternative
to the CMM, and a postscript that was added to the original paper in
February of 1999.

Brief description of the CMM

The author describes the CMM as a group of key practices that are
divided into five levels representing various maturity levels that organizations
should go through on their way to becoming “mature”.

The author lists the CMM levels as follows:

1. Initial (chaotic, ad hoc, heroic)

2. Repeatable (project management, process discipline)

3. Defined (institutionalized)

4. Managed (quantified)

5. Optimizing (process improvement)

The author states that the original intent of the CMM was that of
a tool to evaluate the ability of government contractors to perform
a contracted software project. His primary concern is that many tout
the CMM as a general model for process improvement and he believes that
in this area, it has many weaknesses.

General Problems with the CMM

The author describes six basic problem areas that he has identified
with the CMM:

The CMM has no formal theoretical basis and in fact is based
on the experience “of very knowledgeable people”. Because of this
lack of theoretical grounding, any other model based on the experiences
of other experts would have equal validity.

The CMM does not have good empirical support and this same empirical
support could also be construed to support other models. Without
a comparison of alternative process models under a controlled study,
an empirical case cannot be built to substantiate the SEI’s claims
regarding the CMM. Primarily, the model is based on the experiences
of large government contractors and of Watts Humphrey’s own experience
in the mainframe world. It does not represent the successful experiences
of many shrink-wrap companies that are judged to be a “level 1”
organization by the CMM.

The CMM ignores the importance of people involved with the software
process by assuming that processes can somehow render individual
excellence less important. In order for this to be the case, problem-solving
tasks would somehow have to be included in the process itself, which
the CMM does not begin to address.

The CMM reveres the institutionalization of process for its
own sake. This guarantees nothing and in some cases, the institutionalization
of processes may lead to oversimplified public processes, ignoring
the actual successful practice of the organization.

The CMM does not effectively describe any information on process
dynamics, which confuses the study of the relationships between
practices and levels within the CMM. The CMM does not perceive or
adapt to the conditions of the client organization. Arguably, most
and perhaps all of the key practices of the CMM at its various levels
could be performed usefully at level 1, depending on the particular
dynamics of an organization. Instead of modeling these process dynamics,
the CMM merely stratifies them.

The CMM encourages the achievement of a higher maturity level
in some cases by displacing the true mission, which is improving
the process and overall software quality. This may effectively “blind”
an organization to the most effective use of its resources.

The author’s most compelling argument against the CMM is the many successful
software companies that, according to the CMM, should not exist. Many
software companies that provide “shrink wrap” software such as Microsoft,
Symantec, and Lotus would definitely be classified by the CMM as level
1 companies. In these companies, innovation reigns supreme, and it is
from the perspective of the innovator that the CMM seems lost.

The author claims that innovation per se does not appear in the CMM
at all, and is only suggested by level 5. Preoccupied with predictability,
the CMM is ignorant of the dynamics of innovation. In fact,
where innovators advise companies to be flexible, to push authority
down into the organization, and to recommend constant constructive innovation,
the CMM mistakes all of these attributes for the chaos that it sees
in level 1 companies. Because the CMM is distrustful of personal contributions,
ignorant of the environment needed to nurture innovative thinking, and
content to bury organizations under an ineffective superstructure, achieving
level 2 on the CMM scale may actually destroy the very thing that caused
the company to be successful in the first place.

The author discusses the issue of “heroism”, defined as the individual
effort beyond the call of duty to make a project successful. The SEI
regards heroism as a negative: an unsustainable sacrifice on the part
of people who have special gifts. It considers heroism the sole reason that Level
1 companies can survive. The author claims a different definition for
heroism – taking initiative to solve ambiguous problems. He claims that
this is a definable and teachable set of behaviors that enhance creativity,
which leads to personal mastery of the subject matter. In his opinion,
it is not a negative, but it is a requirement of most successful organizations.

As an alternative to the CMM, the author introduces the idea of a
framework based on heuristics for conducting successful projects. The
key to this model is that it is an aid for judgment, not a prescription
for institutional formalisms. In this model, maturity means recognizing
problems through the analysis of experience and the use of metrics and
to solve them through selective definition and deployment of processes.

This process model consists of a seven-dimensional framework for
analyzing problems and identifying the correct processes. These dimensions
include: business factors, market factors, project deliverables, four
primary processes (commitment, planning, implementation, and convergence),
teams, project infrastructure, and milestones. This framework connects
to a set of processes that are repeatable for performing certain common
tasks.

In an addendum to the original thesis, the author comments that not
much has changed in his opinion on the CMM in the five years since originally
writing his paper. Some software companies are successfully using the
CMM in their organizations, but many, including most of the newer Internet-based
software companies, are not using the CMM.

The author does comment on one shift in his thinking – he has become
more comfortable with the distinction between the CMM philosophy and
the CMM issues list. He now believes that using CMM to identify a list
of issues worth addressing in the course of overall software improvement
may be useful, but that it should not be adopted as a philosophy for
good software engineering.

For the most part, I agree with the author’s assessment of the CMM.
Some of his arguments seem weaker than others, but I believe they are
valid.

Because the CMM has no theoretical basis and no empirical proof, it
loses value from an academic point of view.
Although this (in my opinion) is one of the author’s weaker arguments,
it is important from the aspect of substantiation of the claims made
by SEI. Without the theoretical proof and the lack of empirical support
based on comparison of alternative models under a controlled study,
the SEI’s case for promoting CMM as the optimal model for software development
is weakened.

The author’s implication that the CMM institutionalizes process for
its own sake without regard to current practices is an accurate assessment
in my view. I have seen organizations that implement policies without
regard to current organizational practices (some of which are quite
successful). The result is a confused development group that gets a
series of mixed messages from “management” which do not necessarily
improve the development process.

Another key fault in the CMM described by the author is the overriding
pressure to move to the next maturity level, sometimes by ignoring the
true mission, which is the quality of the software product. The factors
important in moving up to the next level may or may not necessarily
benefit the organization and its products because all subjectivity is
removed. What benefits one organization may not have the same effect
in another organization.

The author’s claims about heroism are interesting, but I differ slightly
with the conclusions that he draws. I agree that heroism, as defined
by taking the initiative to solve ambiguous problems, is critical to
the success of an organization. However, in my experience, heroism is
a trait that is difficult to teach. I believe it is inherent within
the individual for the most part and, without a good process model,
can be abused by the organization in order to accomplish its goals.

Probably one of the more important points that the author touches
on is the CMM’s implied claim of the importance of process over people.
It has been my experience that processes are not direct substitutes
for the quality of the development team personnel. In other words, the
right organizational processes can improve the output of a group of
talented software developers, but they cannot create such a group. By ignoring
this critical item, the CMM loses credibility with anyone experienced
with a wide range of development teams.

A striking example that is prevalent throughout the author’s thesis
is the number of software companies that probably exist at CMM Level
1, but that are incredibly successful. Microsoft is a prime example
– although they do not model their organization in a manner that the
SEI considers to be “mature”, their products mostly meet or exceed the
customer’s needs and this creates a very successful company.

In considering the alternatives to the CMM, I believe that the author
is correct in his assertion that a model based on past experience and
the use of metrics is probably more effective in practice as compared
to the CMM. The implementation of such a model is based on selective
definition of problems and the selective deployment of specific processes.

Description: As global competitiveness comes to
the software development industry, the search is on for a better way
to create first-class software rapidly, repeatedly, and reliably. Lean
initiatives in manufacturing, logistics, and services have led to dramatic
improvements in cost, quality and delivery time; can they do the same
for software development? The short answer is “Absolutely!”
Of the many methods that have arisen to improve software development,
Lean is emerging as one that is grounded in decades of work understanding
how to make processes better. Lean thinking focuses on giving customers
what they want, when and where they want it, without a wasted motion
or wasted minute.

You will learn how to:

Develop a value stream map for your current software development
organization, and then create a new map for the future.

Create customer-focused value that lasts over time.

Reorganize the software development process around iteration
cycles and implement responsibility-based planning and control.

Assess the state of the basic disciplines which determine a
software development process capability.

Drive software quality by moving testing to the front and center
of the development process.

Organize so as to most effectively deliver superb software rapidly
and at minimum cost.

Apply queuing theory to effectively manage the software development
pipeline.

Create a financial model for a software development project
and use it to make optimal tradeoff decisions.
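The queuing-theory item in the list above usually comes down to Little's Law (work in progress = arrival rate × cycle time). As a hedged illustration, not something from the course description itself, here is a minimal sketch of how the law applies to a development pipeline; the function name and the example numbers are invented for illustration:

```python
# Little's Law: wip = throughput * cycle_time, so cycle_time = wip / throughput.
# A minimal, illustrative sketch of applying it to a software pipeline;
# the numbers below are hypothetical.

def avg_cycle_time(wip: float, throughput: float) -> float:
    """Average time a work item spends in the pipeline.

    wip        -- average number of items in progress at once
    throughput -- completed items per unit time (e.g. per week)
    """
    return wip / throughput

# A team with 30 stories in flight, finishing 5 per week, averages
# 6 weeks from start to delivery for any one story.
print(avg_cycle_time(wip=30, throughput=5))   # 6.0

# Halving work in progress (throughput unchanged) halves cycle time,
# which is why Lean methods push so hard on limiting WIP.
print(avg_cycle_time(wip=15, throughput=5))   # 3.0
```

The design point the sketch makes: you can shorten delivery time without hiring anyone, simply by starting fewer things at once.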

Software Process Improvement centers around three goals: Productivity,
quality and predictability. Productivity we normally understand
as a measure of the effort required to deliver a product. Quality is
related to meeting requirements and defects are deviations from requirements.
Predictability concerns our ability to predict process performance using
historical data and principles of statistical process control. In CMMI
we thus see process performance as a measure of actual results achieved
by following a process. As examples of process measures, CMMI identifies
effort, cycle time, and defect removal efficiency, while product
measures could be reliability, defect density, or response time. Obviously
this means that traditional SPI is about establishing a known base of
established practices, and improved process performance is about discrete
variations in otherwise repeated practices from this base. Planning,
stability and repetition are cornerstones in professional software development
according to this view.
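To make the measures named above concrete, here is a small sketch of two of them, defect removal efficiency and defect density. The formulas are the commonly used ones; the function names and sample figures are illustrative, not taken from CMMI itself:

```python
# Illustrative sketch of two common SPI measures mentioned in the text.
# Function names and the example numbers are hypothetical.

def defect_removal_efficiency(found_before_release: int,
                              found_after_release: int) -> float:
    """Fraction of total known defects caught before release."""
    total = found_before_release + found_after_release
    return found_before_release / total if total else 1.0

def defect_density(defects: int, kloc: float) -> float:
    """Defects per thousand lines of code (KLOC)."""
    return defects / kloc

# A team that finds 95 defects in test and 5 in the field caught 95%:
print(defect_removal_efficiency(95, 5))   # 0.95

# 40 defects in an 80 KLOC product is 0.5 defects per KLOC:
print(defect_density(40, 80.0))           # 0.5
```

Note the caveat these metrics carry, which the forum discussion below also raises: both depend on how thoroughly defects are counted, so organizations that measure more rigorously can look worse on paper.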

My questions are simple, perhaps even naive:

Are maturity models dating back 20 years or more still a useful
proposition to software developing organizations using contemporary
technologies and organizational principles?

Will high maturity levels in high-cost countries increase the
competitiveness of IT-companies competing in a global economy?

Will competitiveness depend on productivity more than on adding
value to our customers?

Should we still see quality as synonymous with following specifications?

Is predictability more important than accountability and response-ability?

Without commenting on the fraud aspect of the article, it still hurts
to realize how CXOs believe a CMM certification has anything to do with
the quality of the software process... As someone mentioned in a past
CMM assessment recap meeting, CMM (as well as ISO 900X) is about making
sure you're following the process you say you're following. Nothing
more, nothing less.

Yeah, at my previous job we were bought out by a very large defense
contractor with a CMM 3 rating. So, in order to make sure we also operated
at CMM level 3, they gave everyone in the
company 2 days of training. Then they declared that we were a CMM level 3 shop.
Woohoo!

I worked with a level 5 certified vendor. After a little while we realized
the certification does not matter (at least with that vendor) and we had
to rewrite a lot of stuff due to their thousand internal process problems,
turnover of employees, etc.

It's easy to regress back to Tayloristic thinking once "your team"
becomes "your IT organization" or "your software factory" or "your offshore
center". If you're holding a CMM level 5
assessment, you're definitely not the smallest headcount in town and
you probably don't know even half of your coworkers by face, let alone
by name. In that setting, it's difficult to keep in mind that software
development is about people first and foremost.

You can have a defined, repeatable, optimized process in place, but
still have bad programmers.

In India, there is a fashion: companies like to advertise themselves
as CMM level 5. I wonder if it's a scam (either it is easy to get such
certification, or these companies are lying). I had joined such a company
last year, and I was shocked by the way they worked. I was assigned to
write a proof-of-concept for an Ajax library, which was nothing but code
taken from a website (I was told their team had developed it). There were
other documents that I had to prepare.
I wonder if you also find such companies in the US. How good are they?

jam
Friday, February 09, 2007

====

India has a huge drive for companies to get CMM level 5 accreditation,
because it is seen as a requirement in the west. The Dilbert managers
may not have a clue about the benefits (and losses) involved in outsourcing,
but they are easy to buy out with Gartner reports and CMM accreditation.
So while it may not be easy to get CMM5, I think overall it has very
little bearing on the internal messiness of a company.

"So while it may not be easy to get
CMM5, I think overall it has very little bearing on the internal messiness
of a company. "

Actually, it has MUCH bearing on their
internal organization...the development process is what is judged. Yes,
it is very difficult. I work for an upper fortune 50 company and our
local dev shop worked very very hard to get CMM level 3. Either all
those shops are lying that they have level 5, or the CMM judges in India
are easily bought. I am not saying that there are not CMM L5 companies
in India....just that if there are...they are very few and far between.

DH
Saturday, February 10, 2007

====

When I was working at a large company as a contractor, we outsourced
some gui components to a CMM 5 company in India. I was not impressed
with their quality. There was a lot of turnover in the company, both
developers and managers. Code reviews showed very junior coding. Project
wizards did not function the way they should.

To their credit, they
did EXACTLY what the specifications said even if they did not make sense
for a developer. Our management was just bad at writing the specs ;-)

Remember CMM is about the process of making software, making it repeatable
and optimizing it. You can have a defined, repeatable, optimized process
in place, but still have bad programmers. They just have to be able
to follow the process that is in place.

SteveM
Saturday, February 10, 2007

====

The cio.com article is pretty good, but the softpanorama.org article
is pretty selective about its quotes. He quotes:

"In fact, the study found that Level 5 companies on average had higher
defect rates than anyone else."

The full quote says:

"In fact, the study found that Level 5 companies on average had higher
defect rates than anyone else. But Reasoning did see a difference when
it sent the code back to the developers for repairs and then tested
it again. The second time around, the code from CMM companies improved,
while the code from the non-CMM companies showed no improvement."

Of course both articles fail to mention that the reason CMM-5 companies
show more defects per line of code than CMM-1 companies is that the
CMM-5 companies actually know how many bugs they have, because they have
an actual repeatable and accurate quality control process, and the CMM-1
company doesn't. What would you rather have? More defects from a company
that actually measures how many defects they have, or 'fewer defects'
from a company that has absolutely no idea how many defects they have,
so they are just making up a random number?

Nice quote: "They said they were Level 4, but in fact they had never
been assessed"

Truth in Advertising

Stories about false claims abound. Ron Radice, a longtime lead appraiser
and former official with the SEI, worked with a Chicago company that
was duped in 2003 by an offshore service provider that falsely claimed
to have a CMM rating. "They said they were Level 4, but in fact they
had never been assessed," says Radice, who declined to name the guilty
provider.

... ... ...

How Much for That
Certification?

Appraisers continue to cheat too, according to their colleagues.
The pressure on appraisers, in fact, is higher than ever today, especially
with offshore providers competing in the outsourcing market. Frank Koch,
a lead appraiser with Process Strategies Inc., another software services
consultancy, says some Chinese consulting companies he dealt with promised
a certain CMM level to clients and then expected him to give it to them.
"We don't do work for certain [consultancies in China] because their
motives are a whole lot less than wholesome," he says. "They'd say we're
sure [certain clients] are a Level 2 or 3 and that's unreasonable, to
say nothing of unethical. The term is called selling a rating."

... ... ...

A quick Nexis search revealed four companies — Cognizant, Patni,
Satyam and Zensar —claiming "enterprise CMM 5," with no explanation
of where the assessments were conducted or how many projects were assessed,
or by whom. Dozens more companies trumpet their CMM levels with
little or no explanation.

Indeed, all of the services companies we interviewed for this
story claimed that their CMM assessments applied across the company
when in fact only 10 percent to 30 percent of their projects were assessed.

" They then got CMM level 4 rating and now CMM level 4 ... all fraudulently.
They create a database of training records and classes... populate it and
show the auditor LOOK at all the classes...after they get the level they
cancel ALL training and proceed with biddness as usual. Pathetic. "

I am subcontractor at a large defense corp. They claimed to be CMM level
3 when i started 5 years ago. Yet they didn't do training or peer reviews.
They then got CMM level 4 rating and now CMM level 4 ... all fraudulently.
They create a database of training records and classes..populate it
and show the auditor LOOK at all the classes...after they get the level
they cancel ALL training and proceed with biddness as usual. Pathetic.
They are now trying for level 5, despite not having ANY business for
any products other than level 3. Our quality has not changed one bit
in 5 years. The workers are more miserable now and spend less time on
the PRODUCT than they do on CMM bullshat though. It's only dedicated
WORKERS that ensure the customer still gets a quality product. Management
is insane with this CMM quest even though it has NO ADDED VALUE whatsoever
and none of our current customers care about paying for anything other
than level 3.

" CMM cert process is itself subject to manipulation
and fraud by the fact that anybody can submit any project
(even one they didn't do) for review to the people at Carnegie Mellon. "

If it's not clear, I meant to say that the CMM cert process is itself subject to manipulation and fraud
by the fact that anybody can submit any project (even
one they didn't do) for review to the people at Carnegie Mellon.

The "true believers" refers to those at Carnegie Mellon and elsewhere who
continue to preach "Software Engineering" when the vast majority of
its adherents cannot reliably or even consistently produce success from
project to project. No one who has far more failures than successes
when using their own methods is in a position to lecture others on the
"right way" to make successful software. Once again, the emperor
has no software project magic fix, and processes which demand innate
skill cannot be mass-produced in a population without that innate skill.
Get over it.

Durba, your idiotic generalization will make you
nice fodder for the next c

by markusbaccus OCT 09, 2003 02:23:05 AM

The CMM is a cert in that it rates a
company's adoption of an apparently unquestionable methodology which
has a 2/3 rate of failure. It is the logical equivalent
of saying, "If you don't blow on that die three times before you roll
it, you only have a one in six chance of rolling a six." Umm-- prove
it.

Do me a favor, learn how to recognize logically fallacious arguments
like an "appeal to authority" or a "non sequitur". ("Why isn't the SEI
doing something about it?" == fallacious belief that SEI is in a position
to adequately identify fraud merely because it is a recognizable authority,
or that it would even have an incentive to do so. E.g., "He is an expert
in physics so he would never lie to protect his project's funding.")
Oh, and since we're on it, you implicitly made an error of misplaced
deduction when you missed my point. (E.g.: "I lit one match, so all matches
will light." I.e., it may be true that ONE project met the standards
of the Capability Maturity Model Level 5, but that is not an indicator
of whether that company really lives up to those standards, which rest
upon a statistically insignificant sampling of people (one guy who is
self-selected to be non-technical, or else they would have no need to
offshore their work to your company, now would they???). Duh!

Here's a clue Durba: Offshoring is not due to a shortage of American
talent, it's due to a shortage of American talent who could afford to
live in America on $10 per hour. Now, drawing upon my many years of
experience with teams from many nationalities, it may surprise you to
know that I would estimate that about one in ten IT workers are worth
their pay, the other nine are worthless or a menace, and this ratio
holds true regardless of their nationality (Although Eastern Europeans
do seem to do much better than 10%). Since you guys merely adopted our
IT training and introduced no new methods (unlike the communist bloc
countries), I would suggest that this should surprise no one who thought
about it.

Continuation for Durba so he can catch the clue train.

by markusbaccus OCT 09, 2003 02:26:21 AM

If you want to go down the road of idiotic generalizations about
particular nationalities, I could tell many stories of *real* one-dimensional
thinking by Indian techs which led to far more catastrophic results
than inconveniencing you with a non-consequential question. If such
a trivial issue is your idea of bad, it makes me wonder if you even
know what bad is. Since you're using a web browser (undoubtedly IE)
as your FTP client, I can only imagine how lost your team would be if
you Windows-jockeys had to rely upon a command line FTP client, which
of course would never have such a problem and would have superior performance
to IE's lame-ass implementation. Maybe the guy didn't know to look in
his browser settings because he actually is used to using a different
and better tool for the job than you are?

That wouldn't surprise me, because I've met many Indians who seem
to have a special gift for assuming they know better than people with
many times their experience and ignoring what they are told until after
the predictable disaster strikes, at which time they usually act like
they have discovered something remarkable all by themselves or become
strangely silent as they scramble to fix their opus to fuckology. People
like that will almost inevitably need to rely upon protectionism, nationalistic
prejudice, and nepotism if they want to keep their jobs in the face of
global competition.

Which, since we're on the topic, Durba, let me ask you a simple question:
How are you going to keep your job when you have to compete with people
who will work for $3.00 USD per hour, or worse, $7 a day? What worth
will your four year degree be then, genius? Get it yet? Think about
it. Wipro is already working the Vietnam angle for when you guys get
uppity. Given that little reality, your heyday won't last for four decades
like ours did. Maybe an American will bail you out when someone finally
convinces a critical mass of managers that development quality, not
cost, is what leads to better ROI. Then only the truly skilled will
do well.

Past history supports Alan's view

by gerbilinheat OCT 06, 2003 09:58:51 AM

Most of us recall the flight of aircraft engineers
/ aerospace technicians in the late 1980's after the meltdown of the
Reagan Perpetual War Budget that resulted in the Reagan and Bush tax
increases on the middle class.

Ultimately, we wound up with Lockheed retiring from the commercial
aircraft business entirely, and McDonnell Douglas and Boeing both suffering
in worldwide sales from the British-French consortium Aerospatiale
and its world-class Airbus series.

Currently, China, Thailand, Burma, Peru and several U.S. carriers are
going Airbus.
All these steps, and these identical results, occurred in the steel,
aluminum, automobile, shipbuilding and textile industries. NONE have
returned to significant and lasting profitability to date.

Simply, if you let go of your expertise, you let go of your market.

The economy!

by Harley OCT 06, 2003 02:58:20 PM

Ignoring the issue of religion (we really don't need to travel down
that rabbit hole), the real issue that no one has talked about here is
the impact on the economy. Simple math: replace a 100K software job
with a 30K job, and the baker, butcher, laundry, auto, home repair, etc.
that the 100K software job supported are gone also. This is simple
trickle-down poverty for America! For heaven's sake, the US government
is sending contract software jobs overseas while millions of unemployed
Americans are capable of doing the work. Overseas outsourcing needs to
be controlled now! Whether you believe Wall Street or not, the economy
has not hit bottom yet, and I believe it is just taking a breath before
it plunges much further. Sometimes people need to hear the radical extreme
to open their eyes to what could happen.

Bursting the CMM Hype - Software Quality - CIO Magazine, Mar 1, 2004, by
Christopher Koch. U.S. CIOs want to do business with offshore companies with
high CMM ratings. But some outsourcers exaggerate and even lie about their
Capability Maturity Model scores. Why CIOs should never take CMM ratings
at face value: only by asking tough questions will CIOs be able to distinguish
between the companies that are exaggerating their CMM claims and those that
are focused on real improvement.

The paper being reviewed was written to support the thesis that the
Software Engineering Institute's Capability Maturity Model (SEI CMM)
is a collection of software engineering practices, organized according
to a simple model of process evolution, that is not completely
effective in every software organization. The author makes
his case by describing six areas in which he has general problems with
the CMM. This is followed by a section outlining the author’s claim
that a “level 1” organization is completely misunderstood by the SEI
and that effective software can be (and actually is) created by many
level 1 organizations. Finally, the author briefly describes an alternative
to CMM that can be used as a framework for process improvement.

For the most part, I believe that the author has accurately critiqued
the CMM and, from my experience, I would agree with the problems he
discusses. In my mind, the CMM is a good theoretical guideline for establishing
a basic understanding of the characteristics of a good software development
organization, but by stringently following its processes and procedures
to the letter, an organization is not guaranteed to be successful. The
CMM does not deal effectively with innovation issues and people issues.
It also does not reconcile the fact that many successful software organizations
can claim various attributes associated with four (or sometimes all
five) of the CMM levels but, due to the rules established by the CMM,
would officially be designated a Level 1 organization, which unfairly
describes the organization’s capabilities.

Summary of the Reviewed Article

The article is broken into several sections that describe the CMM in
general, the problems that the author has with the CMM, an alternative
to the CMM, and a postscript that was added to the original paper in
February of 1999.

Brief description of the CMM

The author describes the CMM as a group of key practices that are
divided into five levels representing various maturity levels that organizations
should go through on their way to becoming “mature”.

The author lists the CMM levels as follows:

Initial (chaotic, ad hoc, heroic)

Repeatable (project management, process discipline)

Defined (institutionalized)

Managed (quantified)

Optimizing (process improvement)

The author states that the original intent of the CMM was that of
a tool to evaluate the ability of government contractors to perform
a contracted software project. His primary concern is that many tout
the CMM as a general model for process improvement and he believes that
in this area, it has many weaknesses.

General Problems with the CMM

The author describes six basic problem areas that he has identified
with the CMM:

The CMM has no formal theoretical basis and in fact is based
on the experience “of very knowledgeable people”. Because of this
lack of theoretical proof, any other model based on experiences
of other experts would have equal veracity.

The CMM does not have good empirical support and this same empirical
support could also be construed to support other models. Without
a comparison of alternative process models under a controlled study,
an empirical case cannot be built to substantiate the SEI’s claims
regarding the CMM. Primarily, the model is based on the experiences
of large government contractors and of Watts Humphrey's own experience
in the mainframe world. It does not represent the successful experiences
of many shrink-wrap companies that are judged to be a “level 1”
organization by the CMM.

The CMM ignores the importance of people involved with the software
process by assuming that processes can somehow render individual
excellence less important. In order for this to be the case, problem-solving
tasks would somehow have to be included in the process itself, which
the CMM does not begin to address.

The CMM reveres the institutionalization of process for its
own sake. This guarantees nothing and in some cases, the institutionalization
of processes may lead to oversimplified public processes, ignoring
the actual successful practice of the organization.

The CMM does not effectively describe any information on process
dynamics, which confuses the study of the relationships between
practices and levels within the CMM. The CMM does not perceive or
adapt to the conditions of the client organization. Arguably, most
and perhaps all of the key practices of the CMM at its various levels
could be performed usefully at level 1, depending on the particular
dynamics of an organization. Instead of modeling these process dynamics,
the CMM merely stratifies them.

The CMM encourages the achievement of a higher maturity level
in some cases by displacing the true mission, which is improving
the process and overall software quality. This may effectively “blind”
an organization to the most effective use of its resources.

The author’s most compelling argument against the CMM is the many successful
software companies that, according to the CMM, should not exist. Many
software companies that provide “shrink wrap” software such as Microsoft,
Symantec, and Lotus would definitely be classified by the CMM as level
1 companies. In these companies, innovation reigns supreme, and it is
from the perspective of the innovator that the CMM seems lost.

The
author claims that innovation per se does not appear in the CMM at all,
and is only suggested by level 5. Preoccupied with predictability,
the CMM is ignorant of the dynamics of innovation. In fact,
where innovators advise companies to be flexible, to push authority
down into the organization, and to recommend constant constructive innovation,
the CMM mistakes all of these attributes for the chaos that it represents
in level 1 companies. Because the CMM is distrustful of personal contributions,
ignorant of the environment needed to nurture innovative thinking, and
content to bury organizations under an ineffective superstructure, achieving
level 2 on the CMM scale may actually destroy the very thing that caused
the company to be successful in the first place.

The author discusses the issue of “heroism”, defined as the individual
effort beyond the call of duty to make a project successful. The SEI
regards heroism as a negative: an unsustainable sacrifice demanded of people
with special gifts. It considers heroism the sole reason that Level
1 companies can survive. The author claims a different definition for
heroism – taking initiative to solve ambiguous problems. He claims that
this is a definable and teachable set of behaviors that enhance creativity,
which leads to personal mastery of the subject matter. In his opinion,
it is not a negative, but it is a requirement of most successful organizations.

As an alternative to the CMM, the author introduces the idea of a
framework based on heuristics for conducting successful projects. The
key to this model is that it is an aid for judgement, not a prescription
for institutional formalisms. In this model, maturity means recognizing
problems through the analysis of experience and the use of metrics and
to solve them through selective definition and deployment of processes.

This process model consists of a seven-dimensional framework for
analyzing problems and identifying the correct processes. These dimensions
include: business factors, market factors, project deliverables, four
primary processes (commitment, planning, implementation, and convergence),
teams, project infrastructure, and milestones. This framework connects
to a set of processes that are repeatable for performing certain common
tasks.

In an addendum to the original thesis, the author comments that not
much has changed in his opinion on the CMM in the five years since originally
writing his paper. Some software companies are successfully using the
CMM in their organizations, but many, including most of the newer Internet-based
software companies, are not using the CMM.

The author does comment on one shift in his thinking – that is the
fact that he has become more comfortable with the idea of using CMM
as a basic philosophy and not as an issues list. He now believes that
using CMM to identify a list of issues worth addressing in the course
of overall software improvement may be useful, but that it should not
be adapted as a philosophy for good software engineering.

For the most part, I agree with the author’s assessment of the CMM.
Some of his arguments seem weaker than others, but I believe they are
valid.

Because the CMM has no theoretical basis and no empirical proof, it
loses value from an academic point of view.
Although this (in my opinion) is one of the author’s weaker arguments,
it is important from the aspect of substantiation of the claims made
by SEI. Without theoretical proof, and without empirical support
based on comparison of alternative models under a controlled study,
the SEI’s case for promoting CMM as the optimal model for software development
is weakened.

The author’s implication that the CMM institutionalizes process for
its own sake without regard to current practices is an accurate assessment
in my view. I have seen organizations that implement policies without
regard to current organizational practices (some of which are quite
successful). The result is a confused development group that gets a
series of mixed messages from “management” which do not necessarily
improve the development process.

Another key fault in the CMM described by the author is the overriding
pressure to move to the next maturity level, sometimes by ignoring the
true mission, which is the quality of the software product. The factors
important in moving up to the next level may not necessarily
benefit the organization and its products because all subjectivity is
removed. What benefits one organization may not have the same effect
in another organization.

The author’s claims about heroism are interesting, but I differ slightly
with the conclusions that he draws. I agree that heroism, as defined
by taking the initiative to solve ambiguous problems, is critical to
the success of an organization. However, in my experience, heroism is
a trait that is difficult to teach. I believe it is inherent within
the individual for the most part and, without a good process model,
can be abused by the organization in order to accomplish its goals.

Probably one of the more important points that the author touches
on is the CMM’s implied claim of the importance of process over people.
It has been my experience that processes are not direct substitutes
for the quality of the development team personnel. In other words, the
right organizational processes can improve the output of a group of
talented software developers, but they do not create one. By ignoring
this critical item, the CMM loses credibility with anyone experienced
with a wide range of development teams.

A striking example that is prevalent throughout the author’s thesis
is the number of software companies that probably exist at CMM Level
1, but that are incredibly successful. Microsoft is a prime example
– although they do not model their organization in a manner that the
SEI considers to be “mature”, their products mostly meet or exceed the
customer’s needs and this creates a very successful company.

In considering the alternatives to the CMM, I believe that the author
is correct in his assertion that a model based on past experience and
the use of metrics is probably more effective in practice as compared
to the CMM. The implementation of such a model is based on selective
definition of problems and the selective deployment of specific processes.

CMMI Version 1.1 Tutorial
[also available in PDF]
Mike Phillips
This slide presentation addresses: why to focus on Process, why to use
a model, CMMI structure, comparisons with SW-CMM v1.1, SE-CMM, and EIA/IS
731, a process areas overview, appraisal methodology, and training.

CMMI-SE/SW V1.1 to SW-CMM V1.1 Mapping [PDF]
USAF Software Technology Support Center (STSC)
Contains mappings of the Capability Maturity Model for Software (SW-CMM)
Version 1.1 to and from the Capability Maturity Model- Integrated -
Systems Engineering/Software (CMMISE/SW/IPPD) Version 1.1 by the Software
Technology Support Center to answer the questions "What does this mean
to me?" and "How does this compare to what I am already doing with regard
to an existing model?". Also includes sections on SW-CMM key process
areas, CMMI-SE/SW specific practices and how to read the maps.

Using the Software CMM® in Small Organizations [PDF]
Mark C. Paulk
This paper discusses how to use the CMM in any business environment
but focuses on the small organization with the use of examples. The
conclusion of this paper is that using the CMM may differ in degree
between small and large projects or organizations, but not in kind.

Using the Software CMM® With Good Judgement [PDF]
Mark C. Paulk
This paper discusses how to use the CMM in any organization but focuses
on the small organization, rapid prototyping projects, maintenance shops,
R&D outfits, and other environments with the use of examples. This paper
concludes that the issues of interpreting the CMM are the same for any
organization, they may be different in degree but they are not different
in kind.

India became a software outsourcing hub by reassuring multinational
clients it could compete on quality as well as on cost. Now that quality
movement is rapidly spreading around the globe, as other countries pursue
the same strategy.

The emphasis on quality is almost a no-brainer when it comes to outsourcing
such demanding work as software development, where a small error can
undermine an entire project. No matter the cost savings, turning that
work over to strangers would be impractical without some means to control
against quality risks.

"You're entering the unknown," says Neil McKearney, a software manager
with the Swiss arm of France Telecom SA's cellphone unit, Orange, which
recently signed a software-development contract with Tata Consultancy
Services, the Bombay outsourcing titan. No one in his group has ever
traveled to India to meet the software developers working on the project,
and Mr. McKearney says no one needs to, because the quality of Tata's
work has been certified.

The gold standard in the quality-certification business is the Capability
Maturity Model, or CMM, which sets out specific steps needed for an
effective development process to be completed. The CMM was conceived
by the Software Engineering Institute at Carnegie Mellon University
in Pittsburgh, a group funded by the U.S. government because it wanted
a standardized way to assess the work of contractors. CMM certification
is awarded by consulting firms that can charge companies to evaluate
and train their personnel in CMM methods.

CMM has become so popular in India that a technician on a recent
visit there saw the CMM logo stamped on a burlap bag of basmati rice,
presumably endorsing the grain.

"If you don't have the quality certification, you're not even considered,"
says Dion Wiggins, a Hong Kong-based analyst at the Stamford, Conn.,
consultancy Gartner Inc. "It's a must-have."

But now the same standards that allowed Indian companies to lure
business from Europe and the U.S. have begun to migrate, helping upstarts
from Chile to Egypt to Vietnam chip away at India's outsourcing empire.
Despite a shortage of English speakers and skilled programmers, China
is pre-eminent among them.

In 2002, only 18 Chinese companies were CMM-certified, compared with
153 Indian ones, according to the Software Engineering Institute. Now
that number has climbed to 243, compared with the 387 Indian companies
that are accredited.

One of the companies certified at CMM's highest level is Bamboo Networks,
which is based in Hong Kong while most of its employees are in mainland
China. Bamboo's client list now includes the likes of Credit Suisse
Group's Credit Suisse First Boston and Bank of America Corp.

When Bamboo was founded in 1999, costs were out of control and some
projects were unprofitable, says Gene T. Kim, the company's 35-year-old
chief executive. So Mr. Kim, who had left a hedge fund in New York to
found the software company, asked three of his top managers to spend
a month researching what the Indian companies were doing that Bamboo
wasn't. They came back to him and said that a quality certification
was a must.

"It's the passport to the global market," says Mr. Kim.

As part of its bid to become certified, Bamboo created more than
250 types of forms and checklists that programmers need to fill out
as they type code. Some resisted the transition to the regimented process,
Mr. Kim says -- and were fired.

Now the company is profitable and looking to expand. When Bamboo
began, it found it could bill its customers at only $14 an hour. After
accreditation, its rate shot up to $20 an hour. In wooing new clients,
Mr. Kim says, CMM is "the first thing we mention and the last thing
we mention."

Critics of CMM complain that companies boast of being CMM-rated when
perhaps only one or two divisions have earned the distinction. The Software
Engineering Institute has received complaints of fraud from corporate
clients.

"Is it a perfect answer? No, it's not," says Michael Phillips, a
program manager at the institute. "The opportunity for abuse is there."
Indeed, the institute says clients themselves must investigate the quality
claims that an outsourcing company makes for its work.

A cottage industry of quality watchdogs approved by the institute
has cropped up across the region to rank and certify companies. The
appraisal process can take anywhere from a week to a few months, depending
on the size of the organization being evaluated. But it isn't uncommon
for companies to spend a year or more overhauling their entire software
development process.

Strategic Planning for Software Process Improvement
The CMMI, like its predecessors, contains the essential elements of
effective processes for specific disciplines. That is, it reflects what
the community currently considers to be "best practices" for software
engineering, systems engineering, integrated product development, and
systems acquisition. In that capacity, the CMMI provides guidance and
a frame of reference for organizations that are developing or improving
their processes. It also provides a benchmark against which organizations
can assess their processes.

After the SW-CMM was first released in 1991, the SEI developed maturity
models for several other disciplines, including systems engineering,
acquisition, workforce management, and integrated product development.
Although each model was focused on a particular discipline, there was
considerable overlap in content – after all, "a project is a project
is a project". Further, two different structural representations were
used: the systems engineering model used a "continuous" structure; the
other models used a "staged" structure.

As the SEI was preparing the next
generation of its maturity models, the SEI's sponsor directed the SEI
to establish a single model that integrates the practices found in the
various discipline-specific models. The CMMI development team was initially
charged with combining three source models: (1) the Capability Maturity Model
for Software (SW-CMM) v2.0 draft C, (2) the Electronic Industries Association
Interim Standard (EIA/IS) 731, and (3) the Integrated Product Development
Capability Maturity Model (IPD-CMM) v0.98, for use by organizations pursuing
enterprise-wide process improvement. More recently, the effort was expanded
to include the supplier sourcing discipline. There was also an objective
of ensuring that the new model would be compatible with ISO 15504, an
international standard for software process assessment.

For organizations that have been
using SW-CMM v1.1, the CMMI represents a significant advancement. It
incorporates most of the current thinking on software management practices
and corrects many of the shortcomings of the SW-CMM. However, decisions
regarding if, when, and how to make the transition to the CMMI should
not be taken lightly.

Major Changes (relative to the
SW-CMM v1.1)

The major changes found in the
CMMI fall into three categories: disciplines covered, maturity levels
and process areas, and model structure.

Multiple Disciplines

For those who are familiar with
any of the source models, the most obvious change is that the CMMI covers
multiple bodies of knowledge or "disciplines". Currently the CMMI addresses
four disciplines:

Software Engineering (SW)

Software engineering covers
the development of software systems. Software engineers focus on
applying systematic, disciplined, and quantifiable approaches to
the development, operation, and maintenance of software.

Systems Engineering (SE)

Systems engineering
deals with the development of total systems,
which may or may not include software. Systems engineers focus on
transforming customer needs, expectations, and constraints into
product solutions and supporting these product solutions throughout
the life of the product.

Integrated Product and Process Development (IPPD)

Integrated product and process
development is a systematic approach that achieves a timely collaboration
of relevant stakeholders throughout the life of the product to better
satisfy customer needs, expectations, and requirements. If a project
or organization chooses an IPPD approach, it performs IPPD-specific
practices concurrently with other specific practices to produce
products.

Supplier Sourcing (SS)

The supplier sourcing discipline
is applicable to projects that use suppliers to perform functions
that are critical to the success of the project. Supplier sourcing
deals with identifying and evaluating potential sources for products,
selecting the sources for the products to be acquired, monitoring
and analyzing supplier processes, evaluating supplier work products,
and revising the supplier agreement or relationships as appropriate.

An organization may adopt the CMMI
for software engineering, systems engineering, or both. The IPPD and
Supplier Sourcing disciplines are used in conjunction with SW and SE.
For example, a software-only organization might select the CMMI for
SW, an equipment manufacturer might select the CMMI for SE and SS, while
a systems integration organization might choose the CMMI for SW, SE,
and IPPD.

Most practices in the CMMI are
applicable to each of the disciplines. For example, the practice "Define
Project Life Cycle" in the Project Planning process area is applicable
to both software engineering projects and systems engineering projects.
Implementations of the practice in the two disciplines are likely to
be quite different, however. The model provides "discipline amplifications",
which contain information relevant to a particular discipline, to aid
in understanding a practice in the context of a specific discipline.
If, for instance, you want to find a discipline amplification for software
engineering, you would look in the model for items labeled “For Software
Engineering”.

Figure 1

Maturity Levels and Process Areas

The CMMI's maturity levels have
the same definitions as in the earlier models, although some changes
to the names of the levels were made. Levels 1, 3, and 5 retained their
names, i.e., Initial, Defined, and Optimizing,
but Levels 2 and 4 are now named Managed and Quantitatively
Managed, respectively, perhaps to more clearly emphasize the evolution
of the management processes from a qualitative focus to a quantitative
focus.

The CMMI contains twenty-five process
areas for the four disciplines currently covered (see Figure 1). (By
comparison, the SW-CMM contained eighteen process areas.) Although many
of the process areas found in the CMMI are essentially the same as their
counterparts in the SW-CMM, some reflect significant changes in scope
and focus and others cover processes not previously addressed.

Level 2 survived the transition
to the CMMI relatively unscathed. Software Subcontract Management has been
renamed Supplier Agreement Management and covers a broader range of acquisition
and contracting situations. Measurement and Analysis is a new
process area that primarily consolidates the practices previously found
under the SW-CMM's Measurement and Analysis Common Feature into a single
process area.

Level 3 has seen the most
reconstruction. Software Product Engineering, which, in the SW-CMM,
covered nearly the entire range of engineering practices, has exploded
into five process areas. Requirements Development addresses analysis
of all levels of requirements. Technical Solution covers design
and construction. Product Integration addresses the assembly
and integration of components into a final, deliverable product.
Verification covers practices such as testing and peer reviews that
demonstrate that a product reflects its specified requirements (i.e.,
"was the thing built right?") and Validation covers practices
such as customer acceptance testing that demonstrate that a product
fulfills its intended use (i.e., "was the right thing built?").

Integrated Project Management
covers what was addressed by Integrated Software Management and Intergroup
Coordination in the SW-CMM. Risk Management is a new process
area, as is Decision Analysis and Resolution, which focuses on
a supporting process for identifying and evaluating alternative solutions
for a specific issue.

The Supplier Sourcing discipline
adds Integrated Supplier Management, which builds upon Supplier
Agreement Management (Level 2) by specifying practices that emphasize
proactively identifying sources of products that may be used to satisfy
a project’s requirements and maintaining cooperative project-supplier
relationships.

Level 4 of the CMMI states more
clearly what is expected in a quantitatively controlled process. Specifically,
statistical and other quantitative techniques are expected to be used
on selected processes (i.e., those that are critical from a business
objectives perspective) to achieve statistically predictable quality
and process performance. Software Quality Management and Quantitative
Process Management in the SW-CMM have been replaced with two new process
areas. Organizational Process Performance involves establishing
and maintaining measurement baselines and models that characterize the
expected performance of the organization's standard processes. Quantitative
Project Management focuses on using the baselines and models to
establish plans and performance objectives and on using statistical
and quantitative techniques to monitor and control project performance.

The focus and intent of Level 5
has not changed dramatically with the release of CMMI. Process Change
Management and Technology Change Management from the SW-CMM have been
combined into one process area, Organizational Innovation and Deployment,
which builds upon Organizational Process Focus (Level 3)
by emphasizing the use of high-maturity techniques in process improvement.
Defect Prevention has been renamed Causal Analysis and Resolution.

With the increase in the number
of process areas and practices, the CMMI is significantly larger than
the SW-CMM—the Staged Representation of the CMMI-SE/SW has a total of
80 goals and 411 practices, while the SW-CMM has 52 goals and 316 practices.
Early adopters of the CMMI have found that this model inflation has
a significant impact on both improvement efforts and assessments.

Structural Changes

As mentioned in the introduction, each of the source models was defined
as either a staged model, which focuses on maturity, or as a continuous
model, which focuses on capability. Use of the terms maturity
and capability can be confusing initially—maturity levels
relate to an entire organization; capability levels relate to
individual process areas. The CMMI provides both representations!

The actual contents (i.e., goals,
practices, subpractices, etc.) are essentially the same in each representation
(in the continuous representation there are a few additional practices
that are needed to provide a sufficient degree of granularity in process
area capability). The representations primarily differ in how they are
organized and presented:

Staged Representation

In the staged representation,
each process area is associated with one of five maturity
levels, as shown in Figure 2. The maturity levels and their
process areas represent a recommended path for process improvement.
The maturity levels serve as benchmarks that can be used
to characterize an organization's overall process maturity.
An organization achieves a maturity level when it has successfully
implemented all applicable process areas that exist at and
below that level.

Figure 2
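The staged rule above is essentially cumulative: an organization is at level N only if every process area at levels 2 through N is satisfied. A simplified sketch of that rule (the process-area mapping below is abridged for illustration, not the full CMMI list):

```python
# Simplified sketch of the staged-representation rule: the maturity level
# is the highest N such that ALL process areas at levels 2..N are satisfied.
# The process-area mapping is abridged; the real model has many more PAs.

STAGED_PAS = {
    2: ["Requirements Management", "Project Planning",
        "Supplier Agreement Management"],
    3: ["Risk Management", "Decision Analysis and Resolution"],
    4: ["Organizational Process Performance",
        "Quantitative Project Management"],
    5: ["Organizational Innovation and Deployment",
        "Causal Analysis and Resolution"],
}

def maturity_level(satisfied):
    """Highest maturity level for which all PAs at and below are satisfied."""
    level = 1  # Level 1 ("Initial") has no process areas
    for n in sorted(STAGED_PAS):
        if all(pa in satisfied for pa in STAGED_PAS[n]):
            level = n
        else:
            break  # a gap at level n blocks all higher levels
    return level

done = set(STAGED_PAS[2]) | set(STAGED_PAS[3])
print(maturity_level(done))  # all of levels 2 and 3 satisfied -> 3
```

Note the `break`: satisfying Level 4 process areas while missing a Level 3 one still leaves the organization at Level 2, which is why the levels form a single recommended path.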

Continuous Representation

In the continuous representation,
maturity levels do not exist. Instead, as shown in Figure
3, capability levels are designated for process areas, from
"Incomplete" (capability level 0) to "Optimizing" (capability
level 5) and, thus, provide a recommended order for approaching
process improvement within each process area.

A continuous representation
promotes flexibility in the order in which the process areas
are addressed. A technique called "equivalent staging" may
be used to relate the process areas’ capability levels to
a staged representation’s maturity levels.

Figure 3

Both representations recognize
that the process areas may be grouped into four general categories:
project management, engineering, support, and process management. This
grouping is helpful in discussing the interactions among process areas.

An organization adopting the CMMI
must decide which representation would be most useful. It is anticipated
that most organizations that have experience with the SW-CMM will choose
the Staged Representation, since maturity level ratings are a widely used
means of benchmarking and comparing organizations.

The ways in which goals and practices
are used as model components have improved in the CMMI in two respects.
The first affects the mapping between goals and practices. In the SW-CMM,
a practice may be mapped to more than one goal; in the CMMI practices
are defined such that they map to one, and only one, goal. The second
improvement is the introduction of generic goals, which address process
institutionalization. In the SW-CMM the institutionalization practices
are not explicitly mapped to goals. In the CMMI, institutionalization
practices are called generic practices and are mapped to generic goals.
By clarifying the relationship between goals and practices, the CMMI
is easier to use as an improvement guide and data management in assessments
is simplified.
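The one-to-one practice-to-goal mapping described above can be checked mechanically, which is part of why it simplifies assessment data management. A sketch with hypothetical goal and practice identifiers:

```python
# Sketch: verify the CMMI-style constraint that every practice maps to
# exactly one goal (in the SW-CMM a practice could support several goals).
# Goal/practice identifiers below are hypothetical.

goal_to_practices = {
    "SG1": ["SP1.1", "SP1.2"],
    "SG2": ["SP2.1"],
    "GG2": ["GP2.1", "GP2.2"],   # generic goal with generic practices
}

def practices_map_uniquely(mapping):
    """True if no practice appears under more than one goal."""
    seen = set()
    for practices in mapping.values():
        for p in practices:
            if p in seen:        # practice mapped to a second goal
                return False
            seen.add(p)
    return True

print(practices_map_uniquely(goal_to_practices))  # True for this mapping
```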

The Future of the SW-CMM

The SEI has stated that there will
be no further changes to the SW-CMM model or the CBA-IPI assessment
method. The SEI will continue to offer training in the SW-CMM for two
years after the release of CMMI v1.1 and will train CBA-IPI Lead Assessors
through December 2003. Data from SEI-authorized assessments against
the SW-CMM will continue to be accepted and included in the maturity
profiles reported by the SEI.

Migrating to the CMMI

Should your organization adopt
the CMMI? The simple answer is "yes, sometime". Since the CMMI is intended
to be a replacement for the SW-CMM and the other source models, the
real question is "When is the right time for us to migrate to the CMMI?"
Like most significant management decisions, the best answer may not
be obvious.

If your organization has not yet
initiated a CMM-based process improvement program or has only made limited
progress towards Maturity Level 2, we recommend that you consider adopting
the CMMI as your process framework now. The improvements in the CMMI
make it a clear choice over the SW-CMM.

For organizations that have made
significant investments in CMM-based improvement, the decision to adopt
the CMMI is not trivial. It's similar to deciding whether to upgrade
to the newest version of Windows—it seems likely that the migration
from the SW-CMM to the CMMI will be painful in the short term but worthwhile
in the long run. We offer the following suggestions for those facing
the decision:

If your organization is approaching
Level 2 or 3, we recommend that you continue to use the SW-CMM until
you reach your current goal, then begin the transition to the CMMI.
This allows your organization to maintain its process improvement
momentum and avoid the risk of missing its goal due to the interruption
caused by introducing the CMMI.

If your organization recently
achieved Level 2 or above, start now to migrate to the CMMI. The
benefits of making the transition to the CMMI at this point will
outweigh any impact on the current process improvement plan.

Once the decision to adopt the
CMMI is made, the organization must choose a representation, i.e., staged
or continuous. If your organization plans its process improvements based
on business objectives, risks, expected benefit, or other such factors,
then the Continuous Representation may be more useful. If, however,
your organization tends to follow the path indicated by maturity levels
or is focused on achieving maturity level ratings, then the Staged Representation
may be more appropriate. We suggest that you also consider a third approach:
use the Continuous Representation for improvement and use the Staged
Representation for assessment.

Conclusion

The CMMI is a long-overdue and
necessary upgrade to the earlier, single discipline CMMs. Although the
CMMI doesn't have the same software engineering flavor as the SW-CMM,
the changes in structure, scope, and content are significant and we
are confident that it will prove to be an important framework for organizations
who develop software and systems.

There is a significant learning curve ahead for those
who adopt the CMMI, although not unlike the experience most of us had
with the SW-CMM. The CMMI model documents and other CMMI information
are available from the SEI's web site (www.sei.cmu.edu/cmmi/), and SEI
Transition Partners are providing CMMI training. We encourage you to
take advantage of these resources to guide your decisions about using
the CMMI in your organization.

The CMM fraud was a perverted response to the US Air Force's
frustration with its software buying process in the 1980s. Like any
large bureaucracy, the Air Force was milked by unscrupulous contractors, and
it's understandable that it had trouble figuring out which companies
to pick ;-). Cronies at Carnegie Mellon University in Pittsburgh won
a bid to create an organization, the SEI, to improve the vendor vetting
process. They hired Humphrey, IBM's former software development chief,
to participate in this effort in 1986. This genius decided immediately
that the Air Force was chasing the wrong problem. "We were focused
on identifying competent people, but we saw that all the projects [the
Air Force] had were in trouble—it didn't matter who they had doing the
work," he recalls. "So we said let's focus on improving the work rather
than just the proposals."

The first version of CMM in 1987 was a questionnaire designed to
identify good software practices within the companies doing the bidding.
It was a bogus test that was easy to fake. "It was easy to cram for
the test," says Jesse Martak, former head of a development group for
the defense contracting arm of Westinghouse, which is now owned by Northrop
Grumman. "We knew how to work the system."

So the SEI "refined" it in 1991 into a monstrous pseudo-scientific
perversion (or slick marketing trick, if you wish) that supposedly provides
a detailed model of software development best practices. Compliance is
verified by a group of cronies at the SEI: lead appraisers. A lead appraiser
heads up a team of people from inside the company being assessed (usually
three to seven, depending on the size of the company). Together, they
look for proof that the company is implementing the policies and procedures
of CMM across a "representative" subset (10-30%) of the company's software
projects. There are also other perversions, like interviews with project
managers and developers who are pre-selected and coached on what to say.

Internal people, of course, will fake everything they can. A lead appraiser
who asked to remain anonymous noted: "They have conflicting objectives.
They need to be objective, but the organization wants to be assessed
at a certain level."

The depth and wisdom of the CMM itself is extremely questionable.
Hiring a company with a higher maturity level does not reduce the risk
over hiring a company with no CMM level at all. But for a contractor the certification
has huge marketing value. If you are a Level 5 organization you can
win some nice contracts even if you always produce software that is
complete garbage.

A recent survey of 89 different software applications by Reasoning,
an automated software inspection company, found on average no difference
in the number of code defects between software from companies that identified
themselves as being on one of the CMM levels and those that did not.
In fact, the study found that Level 5 companies on average had higher
defect rates than anyone else.

... ... ...

Even if we set aside the fact that the certification is complete nonsense,
it's actually pretty difficult to discover whether an
organization's certification is genuine and whether an actual assessment
was ever performed. Appraisers are required to submit formal documentation
of all their assessments to the SEI and to customers. Lead appraisers
must write up something called a Final Findings Report that includes
"areas for improvement" if the appraiser finds any (they usually do,
even with Level 5 companies). But there is no requirement for the content
or format of the reports to be consistent across appraisers or companies.
The report can be easily faked. According to one appraiser who asked
not to be named, companies will often ask appraisers to "roll up" the
detailed findings into shallow PowerPoint presentations that conceal
the actual picture of the company and its software development processes.
"The purpose of the report is to tell companies where they need to improve—that's
the whole point of CMM," she says. "But they make us write these fluffernutters
that can gloss over important details." She conveniently forgot to mention
that she is a willing accomplice in this scheme.

The Final Findings Report is what company officials present internally
to the big brass and to customers knowledgeable enough to ask for it.
But there's no obligation to do it. They can declare their CMM
level without producing any evidence. They can even hire their
own lead appraisers inside the company and assess their CMM capabilities
themselves. They don't have to hire a lead appraiser from the
outside who might be under less pressure to give a good assessment.
And they can characterize their CMM level any way they want
in their marketing materials and press releases.

FAIR USE NOTICE This site contains
copyrighted material the use of which has not always been specifically
authorized by the copyright owner. We are making such material available
in our efforts to advance understanding of environmental, political,
human rights, economic, democracy, scientific, and social justice
issues, etc. We believe this constitutes a 'fair use' of any such
copyrighted material as provided for in section 107 of the US Copyright
Law. In accordance with Title 17 U.S.C. Section 107, the material on
this site is distributed without profit exclusively for research and educational purposes. If you wish to use
copyrighted material from this site for purposes of your own that go
beyond 'fair use', you must obtain permission from the copyright owner.

ABUSE: IPs or network segments from which we detect a stream of probes might be blocked for no
less than 90 days. Multiple types of probes increase this period.

The site uses AdSense, so you need to be aware of Google's privacy policy. If you do not want to be
tracked by Google, please disable Javascript for this site. This site is perfectly usable without
Javascript.

Original materials copyright belong
to respective owners. Quotes are made for educational purposes only
in compliance with the fair use doctrine.

This is a Spartan WHYFF (We Help You For Free)
site written by people for whom English is not a native language. Grammar and spelling errors should
be expected. The site contains some broken links as it develops like a living tree...

You can use PayPal to make a contribution to support the development
of this site and speed up access. In case softpanorama.org is down, there are currently
two functional mirrors: softpanorama.info (the fastest) and softpanorama.net.

Disclaimer:

The statements, views and opinions presented on this web page are those of the author (or
referenced source) and are
not endorsed by, nor do they necessarily reflect, the opinions of the author's present and former employers, SDNP, or any other organization the author may be associated with. We do not warrant the correctness
of the information provided or its fitness for any purpose.