Since the mid-1980s most new language designs have included
native support for object-oriented programming
(OO). Recall that in object-oriented programming, the functions that
act on a particular data structure are encapsulated with the data in
an object that can be treated as a unit. By contrast, modules in
non-OO languages make the association between data and the functions
that act on it rather accidental, and modules frequently leak data or
bits of their internals into each other.

The OO design concept initially proved valuable in the design of
graphics systems, graphical user interfaces, and certain kinds of
simulation. To the surprise and gradual disillusionment of many, it
has proven difficult to demonstrate significant benefits of OO outside
those areas. It's worth trying to understand why.

There is some tension and conflict between the Unix tradition of
modularity and the usage patterns that have developed around OO
languages. Unix programmers have always tended to be a bit more
skeptical about OO than their counterparts elsewhere. Part of this is
because of the Rule of Diversity; OO has far too often been promoted
as the One True Solution to the software-complexity problem. But
there is something else behind it as well, an issue which is worth
exploring as background before we evaluate specific OO
languages in Chapter 14. It will also help throw some
characteristics of the Unix style of non-OO programming into sharper
relief.

OO languages make abstraction easy — perhaps too easy.
They encourage architectures with thick glue and elaborate layers.
This can be good when the problem domain is truly complex and demands
a lot of abstraction, but it can backfire badly if coders end up doing
simple things in complex ways just because they can.

This tendency is probably exacerbated because a lot of
programming courses teach thick layering as a way to satisfy the Rule
of Representation. In this view, having lots of classes is
equated with embedding knowledge in your data. The problem with this
is that too often, the ‘smart data’ in the glue layers is
not actually about any natural entity in whatever the program is
manipulating — it's just about being glue. (One sure sign of
this is a proliferation of abstract subclasses or
‘mixins’.)
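The symptom is easy to sketch. In the hypothetical Python fragment below (all class names invented for illustration), the abstract layers and mixins carry no domain meaning at all; only the concrete class at the bottom says anything about the real world:

```python
# Hypothetical names, invented for illustration. Nothing in this
# tower of abstract classes and mixins models an entity in the
# problem domain -- each layer exists only to glue the others together.
class SerializableMixin:
    pass

class ObservableMixin:
    pass

class AbstractManagedResource(SerializableMixin, ObservableMixin):
    pass

class AbstractPooledManagedResource(AbstractManagedResource):
    pass

# The one concrete class at the bottom is the only place any actual
# domain knowledge (a file on disk) appears.
class DiskFile(AbstractPooledManagedResource):
    def __init__(self, path):
        self.path = path

# Six entries in the method resolution order (counting object) to
# represent a single concept:
print(len(DiskFile.__mro__))  # -> 6
```

When most of a class diagram looks like this, the classes are describing the glue rather than the problem.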

Another side effect of OO abstraction is that opportunities for
optimization
tend to disappear. For example, a + a + a + a can become a * 4 and even
a << 2 if a is an integer. But if one creates a class with
operators, there is nothing to indicate whether they are commutative,
distributive, or associative. Since one isn't supposed to look inside
the object, it's not possible to know which of two equivalent
expressions is more efficient. This isn't in itself a good reason to
avoid using OO techniques on new projects; that would be premature
optimization.
But it is reason to think twice before transforming non-OO code into a
class hierarchy.
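A minimal Python sketch makes the point concrete (the Opaque class is hypothetical, invented for illustration). Once '+' is hidden behind an overloaded operator, the language has no license to rewrite the expression, so every addition must actually be performed:

```python
class Opaque:
    """A wrapper with an overloaded '+'. Nothing tells the runtime
    whether this '+' is associative or commutative, so it cannot
    safely rewrite a + a + a + a as a * 4, let alone a << 2."""

    def __init__(self, value, adds=0):
        self.value = value
        self.adds = adds  # how many '+' operations built this value

    def __add__(self, other):
        return Opaque(self.value + other.value,
                      self.adds + other.adds + 1)

a = Opaque(3)
total = a + a + a + a        # three separate __add__ calls must run
print(total.value, total.adds)  # -> 12 3
```

An optimizing compiler faced with a plain integer would have strength-reduced the whole expression; faced with Opaque, it can do nothing, because the algebraic laws live inside the object where it is not supposed to look.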

One reason that OO has succeeded most where it has (GUIs,
simulation, graphics) may be because it's relatively difficult to get
the ontology of types wrong in those domains. In GUIs and graphics, for
example, there is generally a rather natural mapping between
manipulable visual objects and classes. If you find yourself
proliferating classes that have no obvious mapping to what goes
on in the display, it is correspondingly easy to notice that the glue
has gotten too thick.
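For contrast, a hypothetical GUI sketch in Python (class names invented for illustration) shows how naturally classes map onto visible objects in that domain; a class with no on-screen counterpart would stand out immediately:

```python
# Each class corresponds to something the user can see and point at,
# so the type ontology is hard to get wrong.
class Widget:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def draw(self):
        raise NotImplementedError

class Button(Widget):        # maps to a clickable rectangle on screen
    def __init__(self, x, y, label):
        super().__init__(x, y)
        self.label = label

    def draw(self):
        return f"[ {self.label} ]"

class Label(Widget):         # maps to a run of text on screen
    def __init__(self, x, y, text):
        super().__init__(x, y)
        self.text = text

    def draw(self):
        return self.text

screen = [Button(0, 0, "OK"), Label(0, 1, "Ready")]
print([w.draw() for w in screen])  # -> ['[ OK ]', 'Ready']
```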

One of the central challenges of design in the Unix style is how
to combine the virtue of detachment (simplifying and generalizing
problems from their original context) with the virtue of thin glue and
shallow, flat, transparent hierarchies of code and
design.

We'll return to some of these points and apply them when we
discuss object-oriented languages in Chapter 14.