In the interests of saving time, and keeping my post to a palatable
length, I stated my contentions as unsupported assertions. I intended
to elaborate only those points that aroused debate. Unfortunately,
people have already misconstrued or protested every single statement
I've made, so this thread has grown beyond the point I want to deal
with in detail.
Let me just try to explain what I think the essential distinction
between OO and FP programming is. In FP you ideally capture the essence
of your data in your data structures. In other words, they are ideally
wholly characterized by their data type. That is why conciseness is
considered a virtue in FP: verbosity is likely evidence of a
conceptual error. That's not so in OO (with its associated problem
space), where you model things whose existence you know, but whose
essence you may not be able to capture in the language, its type and
class systems, or even in your head.
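To make that first point concrete, here's a minimal OCaml sketch (my own illustration, not something from the discussion): a data structure whose essence is wholly captured by its type declaration, with operations that simply follow the type's shape.

```ocaml
(* Illustrative sketch: a binary tree whose essence is wholly
   captured by its type declaration. *)
type 'a tree =
  | Leaf
  | Node of 'a tree * 'a * 'a tree

(* Operations follow the shape of the type; needing much more
   code than this would suggest a conceptual error somewhere. *)
let rec size = function
  | Leaf -> 0
  | Node (l, _, r) -> 1 + size l + size r
```
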
FP languages can handle such things to a greater or lesser extent,
but FP languages have many technical and practical disadvantages,
so it often doesn't make sense to use them in such a way. And
the further you deviate from the ideal, the less a rigid type
system helps keep your software sound, and the more it gets in your
way.
One of the areas where I do much of my programming is in mathematical
software. I view this as one of the most difficult areas, at the
opposite extreme from simple domains like formal methods and language
tools. The problems with characterizing mathematical objects come
up peripherally here with representations of numbers. This always leads
to endless, inconclusive discussions of issues like operator
overloading, inheritance and multiple inheritance, multiple dispatch,
multiple views of a data structure, constraints, etc., etc. And that's
just the tip of the iceberg. I find that the facilities of a language
like OCaml are so inadequate to represent this domain, that using
OCaml can be a distinct impediment. Sometimes Java is a better fit.
Not always, but sometimes.
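A tiny OCaml sketch (hypothetical, with made-up names) of the number-representation problem just mentioned: once several representations coexist, every operation must dispatch over the cases by hand, since there is no operator overloading or multiple dispatch to lean on.

```ocaml
(* Hypothetical sketch: several number representations forced
   under one variant. *)
type num =
  | Int of int
  | Rat of int * int      (* numerator, denominator *)
  | Float of float

let to_float = function
  | Int i -> float_of_int i
  | Rat (n, d) -> float_of_int n /. float_of_int d
  | Float f -> f

(* Mixed arithmetic must enumerate cases by hand; the fallback
   coerces to float and silently loses exactness.  (Rationals are
   not reduced here; this is only a sketch.) *)
let add a b =
  match a, b with
  | Int x, Int y -> Int (x + y)
  | Rat (n1, d1), Rat (n2, d2) -> Rat ((n1 * d2) + (n2 * d1), d1 * d2)
  | x, y -> Float (to_float x +. to_float y)   (* lossy fallback *)
```
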
The fundamental point is that OO puts interaction at the
center of the paradigm, not abstract characterization. That has
huge consequences and it's what makes the whole idea of a "theory"
of large-scale design meaningful. OO commits to a certain way of
organizing interaction, state changes, and behavior changes, one that's
attractive in practice (but requires deeper theory than FP).
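As a minimal sketch of what "interaction at the center" means (illustrative, using OCaml's own object layer): the object below is characterized by how it responds to messages over time, not by any abstract description of its data.

```ocaml
(* Illustrative sketch: an object characterized by interaction.
   Its observable state is a product of its message history, not
   of a static characterization of its contents. *)
let counter () = object
  val mutable n = 0
  method bump = n <- n + 1
  method read = n
end

let () =
  let c = counter () in
  c#bump; c#bump;
  assert (c#read = 2)
```
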
Consider the following historical example. Milner, the creator of ML,
began trying to grapple with the concept of interaction and concurrency
in the early 1970s. His first attempt was to use a functional approach
with higher order functions. That was a failure. Ultimately, he ended
up creating the process algebra CCS (and later went on to the
pi-calculus). Milner's CCS, and his account of it, have a striking
resemblance to the OO interactive paradigm, and completely abandon FP
concepts. This despite the fact that he created ML in the very same
period, and had no involvement with the OO projects of the time. In
the past decade, concurrent object calculi have been proposed as
foundations of the OO paradigm. They look very much akin to things like
the pi-calculus. Haskell's concurrency also has a pi-calculus like
semantics.
Some other historical perspectives ...
1. At the same time, Smalltalk was underway at Xerox PARC, and within
a few years, the Mac hit the
scene with its OO influenced interactive GUI -- arguably the most
important development in the history of software development.
2. Simula, the first OO language, and one of the most influential
languages of all time, was developed about a decade earlier.
Simula's objects had a built-in notion of concurrency, intended to
model interactive queuing networks.
3. Within a decade or so, OO based RAD visual development tools like
Visual Basic hit the scene and quickly gained enormous influence.
Is there even an FP analog of building software that way?
4. Over the decades, both hardware and software technologies have
inexorably evolved toward networked, decentralized, componentized
systems. Even CPUs are now trending toward multicore designs. This
makes the sequential computational model that FP is based on seem less
and less central.
Part of what I think accounts for the popularity of OO in the real
world is that it allows you to program in a very sloppy way in the early
stages of development without blowing your ability to muck with the way
things interact, and alter their behavior later in the development
process. Ideally, it allows you to build big, sloppy, poorly planned
systems that are nevertheless manageable to maintain. In the real
world, almost all programmers are hackers, with limited capacity for
forethought, so this is an attractive feature.
It's sometimes claimed that OCaml's type system gives you a similar
advantage, allowing you to quickly pinpoint all of the consequences
of a design change. But that's a rather different technique.
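For what it's worth, the technique usually meant is something like this (a hypothetical example): change a variant type, and the exhaustiveness checker points at every match expression that has to be revisited.

```ocaml
(* Hypothetical example: adding a constructor to this variant makes
   the compiler flag every now-inexhaustive match (warning 8),
   pinpointing the consequences of the design change. *)
type shape =
  | Circle of float
  | Square of float
  (* | Ellipse of float * float   <- uncomment and recompile to
        see every affected match reported *)

let area = function
  | Circle r -> 4.0 *. atan 1.0 *. r *. r   (* pi * r^2 *)
  | Square s -> s *. s
```
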