Summary
Bertrand Meyer talks with Bill Venners about the increasing importance of
software quality, the commercial forces on quality, and
the challenges of complexity.

Bertrand Meyer is a software pioneer whose activities have spanned both the academic and business worlds.
He is currently the Chair of Software Engineering at ETH Zurich, the Swiss Federal Institute of Technology.
He is the author of numerous papers and many books, including the classic Object-Oriented
Software Construction (Prentice Hall, 1988; second edition, 1997).
In 1985, he founded Interactive Software Engineering, Inc., now called Eiffel Software, Inc., a company
which offers Eiffel-based software tools, training, and consulting.

On September 28, 2003, Bill Venners conducted a phone interview with Bertrand Meyer.
In this interview, which will be published in multiple installments on Artima.com,
Meyer gives insights into many software-related topics, including quality, complexity,
design by contract, and test-driven development.
In this initial installment, Meyer discusses the increasing importance of
software quality, the commercial forces on quality, and
the challenges of complexity.

The Importance of Software Quality

Bill Venners: In a 2001 interview with InformIT, you
said, "The going has been so good that the software profession has been able to treat
quality as one issue among many. Increasingly it will become the dominant issue." Why?

Bertrand Meyer:
As the use of computers pervades more and more of what society does, the effects of
non-quality software simply become unacceptable.
Software is becoming more ambitious, and we rely on it more and more. Problems that could be dismissed quite
easily before are now coming to the forefront.

There is a very revealing quote by Alan
Perlis in his foreword to the MIT book on Scheme, Structure and Interpretation
of Computer Programs, by Abelson and Sussman. Perlis wrote:

I think that it's extraordinarily important that we in computer science keep fun in
computing. When it started out, it was an awful lot of fun. Of course, the paying
customers got shafted every now and then, and after a while we began to take
their complaints seriously. We began to feel as if we really were responsible for
the successful, error-free perfect use of these machines. I don't think we are. I
think we're responsible for stretching them, setting them off in new directions, and
keeping fun in the house.

That is typical of the kind of attitude that says "Sure, we can do whatever we like. If
there's a problem we'll fix it." But that's simply not true anymore. People depend on
software far too fundamentally to accept this kind of attitude. In a way we had it even easier
during the dot-com boom years, between 1996 and 2000, but this is not
1998 anymore. The kind of free ride that some people were getting in past years
simply doesn't exist anymore.

The Harvard Business Review published an article in May 2003, "IT
Doesn't Matter" by Nicholas Carr, that stated that IT hasn't delivered on its promises. It
is a quite telling sign of how society at large is expecting much more seriousness and is
holding us to our promises much more than used to be the case. Even though it may still
seem like we do have a free ride, in fact that era is coming to a close. People are watching
much more carefully what we're doing and whether they're getting any return for their
money. And the heart of that is quality.

Commercial Forces on Software Quality

Bill Venners: In your paper, The Grand Challenge of Trusted
Components, you write "There is a quality incentive, but it only leads to the
acceptability point: the stage at which remaining deficiencies do not endanger [the product's]
usefulness to the market. Beyond that point, most managers consider that further quality
enhancing measures yield a quickly diminishing return on investment." How do
commercial pressures affect software quality?

Bertrand Meyer: Commercial pressures affect software quality partly positively
and partly negatively. It is almost tempting to use as an analogy the Laffer Curve, which
was popular for a while in so-called Reaganomics. I'm not an economist, and I hear that
the theory has by now been discredited, so I really don't want to imply that the Laffer
Curve is fundamentally true in economics. Nevertheless, the Laffer Curve is the idea that
if you tax people at zero percent, the state suffers because it doesn't get any revenue. If
you tax people at 100%, it's in the end no better, because if people are not making any
money, they will stop working, and the state will also not get any revenue. It's a rather
simplistic argument. Although it is pretty clear the Laffer Curve has an element of truth,
I'm not sure how precise or accurate it is in economics. But as an analogy, it describes
well the commercial pressures on software quality.

If you produce a software system that has terrible quality, you lose because no one will
want to buy it. If on the other hand you spend infinite time, extremely large effort, and
huge sums of money to build the absolutely perfect piece of software, then it's going to
take so long to complete and it will be so expensive to produce that you'll be out of
business anyway. Either you missed the market window, or you simply exhausted all
your resources. So people in industry try to get to that magical middle ground where the
product is good enough not to be rejected right away, such as during evaluation, but
also not the object of so much perfectionism and so much work that it would take too
long or cost too much to complete.

The Challenge of Complexity

Bill Venners: You said in your book, Object-Oriented Software
Construction, "The single biggest enemy of reliability and perhaps of software
quality in general is complexity." Could you talk a bit about that?

Bertrand Meyer: I think we build in software some of the most complex artifacts
that have ever been envisioned by humankind, and in some cases they just overwhelm us.
The only way we can build really big and satisfactory systems is to keep a hold on
complexity, to maintain a grasp on complexity. Something like Windows XP, which is 45
million lines of code or so, is really beyond any single person's ability to comprehend or
even imagine. The only way to keep on top of things, the only way to have any hope for
a modicum of reliability, is to get rid of unnecessary complexity and tame the remaining
complexity through all means possible.

Taming complexity is really fundamental in the Eiffel approach. Eiffel is there really to
help people build complex, difficult things. You can certainly build easy or moderately
difficult systems using Eiffel better than using other approaches, but where Eiffel really
starts to shine is when you have a problem that is more complex than you would like and
you have to find some way of taming its complexity. This is where, for example, having
some relatively strict rules of object modularity and information hiding is absolutely
fundamental. The mechanisms for circumventing information hiding that you find in just
about every other approach don't exist in Eiffel. Such strict rules sometimes irritate
programmers at first, because they want to do things and they feel they can't, or they
have to write a little more code to achieve the result. But the strictness is really a major
guard against the catastrophes that start to come up when you're scaling up your design.

For example, in just about every recent object-oriented language, you have the ability,
with some restrictions, to assign directly to a field of an object: x.a = 1,
where x is an object and a is a field. Everyone who has been
exposed to the basics of modern methodology and object technology understands why
this is wrong. And then almost everyone says, "Yes, but in many cases I don't care. I
know exactly what I'm doing. They are my objects and my classes. I control all the
accesses to them, so don't bother me. Don't force me to write a special routine to
encapsulate the modification of field a." And on the surface, people are
correct. In the short term, on a small scale, it's true. Who cares?

But direct assignment is a typical kind of little problem that takes on a completely new
dimension as you start having tens of thousands, hundreds of thousands, or millions of
lines of code; thousands or tens of thousands of classes; many people working on the
project; the project undergoing many changes, many revisions, and ports to different
platforms. This kind of thing, direct assignment of object fields, completely messes up
the architecture. So it's a small problem that becomes a huge one.

The problem is small in the sense that fixing it is very easy in the source. You just
prohibit, as in Eiffel, any direct access to fields and require that these things be
encapsulated in simple procedures that perform the job—procedures which, of course,
may then have contracts. So it's really a problem that is quite easy to nip in the bud. But
if you don't nip it in the bud, then it grows to proportions where it can kill you.
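
As a concrete illustration, here is a minimal Eiffel sketch of such an encapsulated,
contracted procedure; the class and feature names are invented for the example and do
not come from the interview:

    class
        ACCOUNT

    feature -- Access

        balance: INTEGER
                -- Current balance. Clients may read this attribute, but a
                -- qualified assignment such as acc.balance := 100 is
                -- rejected by the compiler.

    feature -- Element change

        deposit (amount: INTEGER)
                -- Add `amount' to the balance.
            require
                amount_positive: amount > 0
            do
                balance := balance + amount
            ensure
                balance_updated: balance = old balance + amount
            end

    end

A client holding an ACCOUNT must go through deposit, whose contract states exactly
what the procedure expects and guarantees. The extra few lines are the price of an
architecture that still holds together at a million lines of code.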

Another example is overloading: giving the same name to different operations within the
same class. I know this is controversial. People have been so brainwashed into believing
that overloading is a good thing that it is kind of dangerous to go back to the basics and
say that it's not a good thing. Again, every recent language has overloading. Their libraries tend
to make an orgy of overloading, giving the same name to dozens of different operations.
This kind of apparent short-term convenience buys a lot of long-term complexity,
because you have to find out in each particular case what exactly is the signature of every
variant of an operation. The mechanisms of dynamic binding, as they exist in object
technology and of course in Eiffel, are much more effective than overloading at providing
the kind of flexibility that people really want in the end.
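
As a small sketch of the alternative (the classes here are invented for illustration),
dynamic binding gives you a single feature name whose implementation is selected at run
time by the type of the object, not by puzzling over compile-time signatures:

    deferred class
        SHAPE

    feature

        area: REAL_64
                -- Surface area; each descendant provides its own version.
            deferred
            end

    end

    class
        CIRCLE

    inherit
        SHAPE

    create
        make

    feature

        radius: REAL_64

        make (r: REAL_64)
                -- Initialize with radius `r'.
            do
                radius := r
            end

        area: REAL_64
                -- Version for circles, chosen through dynamic binding.
            do
                Result := 3.14159 * radius * radius
            end

    end

A client declared as s: SHAPE simply calls s.area, and the version matching the object
actually attached to s is executed: one name, one signature, rather than a dozen
same-named variants to disambiguate.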

So these are examples of cases in which being a little more careful in the language design
can make a large contribution to the goal of taming complexity. It's also sometimes why
people haven't believed our claims about Eiffel. The use of Eiffel is quite simple, but the
examples that we publish are simple not necessarily because the underlying problems are
simple, but because the solutions are. Eiffel is really a tool for removing artificial
complexity and finding the essential simplicity that often lies below it. What we realize
now is that sometimes people just don't believe it. They don't believe in simple solutions.
They think we must be either hiding something or that the language and methods don't
actually solve the real practical problems of software development, because they know
there has to be more complexity there. As this horrible cliche goes, "if it looks too good to
be true, then it must not be true," which is certainly the stupidest utterance ever proffered
by humankind. This is the kind of cliche we hear from many people, and it's just wrong in
the case of Eiffel. If you have the right tools for approaching problems, then you can get
rid of unnecessary complexity and find the essential simplicity behind it.

This is the key issue that anyone building a significant system is facing day in and day
out: how to organize complexity both by removing unnecessary, artificial, self-imposed
complexity, and by organizing what remains of inevitable complexity. This is where the
concepts of inheritance, contracts, genericity, object-oriented development in general, and
Eiffel in particular, can play a role.

Bill Venners: It sounds to me as though you're talking about two things: getting rid of
unnecessary complexity and dealing with inherent complexity. I can see that tools, such as
object-oriented techniques and languages, can help us deal with inherent
complexity. But how can tools help us get rid of self-imposed complexity? What did you
mean by "getting at the simplicity behind the complexity?"

Bertrand Meyer: Look at modern operating systems. People bash the
complexity of Windows, for example, but I'm not sure the competition is that much
better. There's no need for any bashing of any particular vendor, but it's clear that some of
these systems are just too messy. A clean, fresh look at some of the issues would result
in much better architecture. On the other hand, it's also true that an operating system—be
it Windows XP, Red Hat Linux, or Solaris—has to deal with Unicode, with providing a
user interface in 100 different languages. Especially in the case of Windows, it has to deal
with hundreds of thousands of different devices from lots of different manufacturers. This
is not a kind of self-inflicted complexity that academics like to criticize. In the real world,
we have to deal with requirements that are imposed on us from the outside. So there are
two kinds of complexity: inherent complexity, which we have to find ways to deal with
through organization, through information hiding, through modularization; and artificial
complexity, which we should just get rid of by simplifying the problem.

Next Week

Come back Monday, November 3 for the fifth installment of a conversation with
C# creator Anders Hejlsberg.
If you'd like to receive a brief weekly email
announcing new articles at Artima.com, please subscribe to
the Artima Newsletter.

Talk Back!

Have an opinion about the design principles presented in this article?
Discuss this article in the News & Ideas Forum topic,
The Demand for Software Quality.