This introductory chapter reviews the leading models for software development and proposes a robust software development model based on the best practices of the past, while incorporating the promise of more recent programming technology.

Quality is a many-splendored thing, and every improvement of its attributes is at once an advance and an advantage.

—C. V. Ramamoorthy

Overview

Both personal productivity and enterprise server software are routinely shipped to their users with defects, called bugs from the early days of computing. This error rate and its consequent failures in operation would not be tolerated for any manufactured or "hardware" product sold today. But software is not a manufactured product in the same sense as a mechanical device or household appliance, even a desktop computer. Since programming began as an intellectual and economic activity with the ENIAC in 1946, a great deal of attention has been given to making software programs as reliable as the computer hardware they run on. Unlike most manufactured goods, software undergoes continual redesign and upgrading in practice because the system component adapts the general-purpose computer to its varied and often-changing, special-purpose applications. As needs change, so must the software programs that were designed to meet them. A large body of technology has developed over the past 50 years to make software more reliable and hence trustworthy.

This introductory chapter reviews the leading models for software development and proposes a robust software development model based on the best practices of the past, while incorporating the promise of more recent programming technology. The Robust Software Development Model (RSDM) recognizes that although software is designed and "engineered," it is not manufactured in the usual sense of that word. Furthermore, it recognizes an even stronger need in software development to address quality problems upstream, because that is where almost all software defects are introduced. Design for Trustworthy Software (DFTS) addresses the challenges of producing trustworthy software using a combination of the iterative Robust Software Development Model, Software Design Optimization Engineering, and Object-Oriented Design Technology.

Chapter
Outline

Software Development: The Need for a New Paradigm

Software Development Strategies and Life-Cycle Models

Software Process Improvement

ADR Method

Seven Components of the Robust Software Development Process

Robust Software Development Model

Key Points

Additional Resources

Internet Exercises

Review Questions

Discussion Questions and Projects

Endnotes

Software Development: The Need for a New Paradigm

Computing has been the fastest-growing technology in human history. The
performance of computing hardware has increased by more than a factor of
10¹⁰ (10,000 million times) since the commercial exploitation of the
electronic technology developed for the ENIAC 50 years ago, first by the
Eckert-Mauchly Computer Corporation, later by IBM, and eventually by many others. In the same amount
of time, programming performance, a highly labor-intensive activity, has
increased by about 500 times. A productivity increase of this magnitude for a
labor-intensive activity in only 50 years is truly amazing, but unfortunately it
is dwarfed by productivity gains in hardware. It’s further marred by low
customer satisfaction resulting from high cost, low reliability, and
unacceptable development delays. In addition, the incredible increase in
available computer hardware cycles has forced a demand for more and better
software. Much of the increase in programming productivity has, as you might
expect, been due to increased automation in computer software production.
Increased internal use of this enormous hardware largesse to offset shortcomings
in software and "manware" has accounted for most of the gain.
Programmers are not 500 times more productive today because they can program
faster or better, but because they have more sophisticated tools such as
compilers, operating systems, program development environments, and integrated
development environments. They also employ more sophisticated organizational
concepts in the cooperative development of programs and employ more
sophisticated programming language constructs such as Object-Oriented
Programming (OOP), class libraries, and object frameworks. The first automation
tools developed in the 1950s by people such as Betty Holberton¹ at
the Harvard Computation Laboratory (the sort-merge generator) and Mandalay
Grems² at the Boeing Airplane Company (interpretive programming
systems) have emerged again. Now they take the form of automatic program
generation, round-tripping, and of course the ubiquitous Java Virtual Machine,
itself an interpretive programming system.

Over the years, a number of rules of thumb or best practices have developed
among enterprise software developers, both in-house and commercial or
third-party vendors. Enterprise software is the set of programs that a firm,
small or large, uses to run its business. It is usually conceded that it costs
ten times as much to prepare (or "bulletproof") an enterprise
application for the marketplace as it costs to get it running in the
"lab." It costs another factor of 2 from that point to market a software
package to the break-even point. The high cost of software development in both
time and dollars, not to mention political or career costs (software development
is often referred to as an "electropolitical" problem, and a high-risk
project as a "death march"), has encouraged the rise of the third-party
application software industry and its many vendors. Our experience with leading
both in-house and third-party vendor enterprise software development indicates
that the cost of maintaining a software system over its typical five-year life
cycle is equal to its original development cost.
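
To see how these rules of thumb compound, consider a purely illustrative example (the figures are hypothetical, not drawn from any actual project). If a prototype costs $100,000 to get running in the lab, the tenfold rule puts its market-ready development cost near $1 million; marketing it to the break-even point adds roughly another factor of 2, or $2 million; and the maintenance rule of thumb implies about $1 million more, equal to the development cost, over a typical five-year life cycle.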

Each of the steps in the software life cycle, as shown in Figure
1.1, is
supported by numerous methods and approaches, all well-documented by textbooks
and taught in university and industrial courses. The steps are also supported by
numerous consulting firms, each having a custom or proprietary methodology, and
by practitioners well-trained in it. In spite of all of this experience
supported by both computing and organizational technology, the question remains:
"Why does software have bugs?" In the past two decades it has been
popular to employ an analogy between hardware design and manufacture and
software design and development. Software "engineering" has become a
topic of intense interest in an effort to learn from the proven practices of
hardware engineering—that is, how we might design and build bug-free
software. After all, no reputable hardware manufacturer would ship products
known to have flaws, yet software developers do this routinely. Why?

One response is that software is intrinsically more complex than hardware
because it has more states, or modes of behavior. No machine has 1,000 operating
modes, but any integrated enterprise business application system is likely to
have 2,500 or more input forms. Software complexity is conventionally described
as proportional to some factor—say, N—depending on the type
of program, times the number of inputs, I, multiplied by the number of
outputs, O, to some power, P. Thus

software complexity = N × I × O^P

This can be thought of as increasing linearly with the number of input
parameters but growing as a power of the number of output results.
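
As a minimal sketch, the heuristic can be written directly in code. The class name, method name, and all numeric values below are assumptions for illustration only; the text does not fix N or P for any particular type of program.

public class ComplexityEstimate {

    // software complexity = N * I * O^P, per the heuristic above
    static double complexity(double n, int inputs, int outputs, double p) {
        return n * inputs * Math.pow(outputs, p);
    }

    public static void main(String[] args) {
        // Assumed sample values: N = 10, I = 2,500 input forms (the figure
        // cited above for an integrated enterprise system), O = 1,000
        // outputs, P = 2.
        double estimate = complexity(10.0, 2500, 1000, 2.0);
        System.out.printf("estimated complexity: %.2e%n", estimate);
    }
}

With P held fixed, doubling the number of inputs doubles the estimate, while doubling the number of outputs multiplies it by 2^P; this asymmetry is what the preceding sentence describes.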

Computers, controlled by software, naturally have more states—that is,
they have larger performance envelopes than do other, essentially mechanical,
systems. Thus, they are more complex.

Sidebar 1.1: Computer Complexity

When one of the authors of this book went from being an aircraft designer to
a computer architect in 1967, he was confronted by the complexity of the then
newly developing multiprocessor computer. At the time, Marshall McLuhan’s
book Understanding Media was a popular read. In it, this Canadian
professor of English literature stated that a supersonic air transport plane is
far simpler than a multiprocessor computer system. This was an amazing insight
for a professor of English literature, but he was correct.

One of the authors of this book worked on the structural optimization of the
Concorde and on a structural aspect of the swing-wing of the Boeing
SST. In 1968 he was responsible for making the Univac 1108 function as a
three-way multiprocessor. Every night at midnight he reported to the Univac test
floor in Roseville, Minnesota, where he was assigned three 1108 mainframe
computers. He connected the new multiprocessor CRT console he had designed and
loaded a copy of the Exec 8 operating system modified for this new
functionality. Ten times in a row the OS crashed at a different step of the
bootstrap process. He began to wonder if this machine were a finite automaton
after all. Of course it was, and the diverse halting points were a consequence
of interrupt races, but he took much comfort from reading Marshall McLuhan.
Today, highly parallel machines are commonplace in business, industry, and the
scientific laboratory—and they are indeed far more complex than supersonic
transport aircraft (none of which are still flying now that the
Concorde has been taken out of service).

Although software engineering has become a popular subject of many books and
is taught in many university computing curricula, we find the
engineering/manufacturing metaphor to be a bit weak for software development.
Most of a hardware product’s potential problems become apparent in
testing. Almost all of them can be corrected by tuning the hardware
manufacturing process to reduce product and/or process variability. Software is
different. Few potential problems can be detected in testing due to the
complexity difference between software and hardware. None of them can be
corrected by tuning the manufacturing process, because software has no
manufacturing process! Making copies of eight CD-ROMs for shipment to the
next customer along with a box of installation and user manuals offers little
chance for fine-tuning and in any case introduces no variability. It is more
like book publishing, in which you can at most slip an errata sheet into the
misprinted book before shipping, or, in the case of software, an upgrade or
fix-disk.

So, what is the solution? Our contention is that because errors in software
are almost all created well upstream in the design process, and because software
is all design and development, with no true manufacturing component, everything
that can be done to create bug-free software must be done as far upstream in the
design process as possible. Hence our advocacy of Taguchi Methods (see Chapters
2, 15, and 17) for robust software architecture. Software development is an
immensely more taxing process than hardware development. Although there is no
silver bullet, we contend that the Taguchi Methods described in the next chapter
can be deployed as a key instrument in addressing software product quality
upstream at the design stage. Processes are often described as having upstream
activities such as design and downstream activities such as testing. This book
advocates moving the quality-related aspects of development as far upstream in
the development process as possible. The RSDM presented in this book
provides a powerful framework to develop trustworthy software in a
time- and cost-effective manner.

This introductory chapter is an overview of the software development
situation today in the view of one of the authors. Although he has been
developing both systems and applications software since 1957, no single
individual’s career can encompass the entire spectrum of software design
and development possibilities. We have tried in this chapter to indicate when we
are speaking from personal experience and sharing our personal opinions, and
when we are referring to the experience of others.