> My claim was that one would have to compile such programs with a C++
> compiler, and the compiler would not be as "optimizing" as a C compiler.

> My reasons were:
> 1) In practice the ultimate size of a compiler is constrained, either
> by the man-power available to code, test and debug the compiler, or
> even given unlimited resources, by the fact that one can only
> add so much to a program (compiler) before it starts breaking
> under the weight of the cumulative changes.

> Given that C++ has more features than C, more (most?) of a C++
> compiler will be devoted to "getting the features right" than
> to optimizations - more work will be needed in the front-end
> than in the back-end.

Though I have little love for C++, I think your reasoning here is not
entirely correct. On some (not all) of the compilers that I've worked
on, there's been a sufficiently clean separation of concerns that
different groups of people can work mostly independently on the
front, middle, and back ends of the compiler, and each group only
needs to deal with the complexity inherent to their part of the
problem.

For C++, this is helped by the implementation-as-design approach to
the language that began with cfront, but it also appears to hold true
for Java; I'm working on pieces of a bytecode-to-native compiler, and
I generally see neither the source code nor the bytecodes. It helps
very much to pick the right abstractions, and it also helps to not get
too upset at a small finite number of hacks.

> 2) The effort of implementing an optimization may be higher in C++,
> because it has to work correctly in more contexts...

Again, you have to choose the right abstractions. "C" is not too bad
as a starting point, though it is not quite everything you need. If
you choose the right abstractions, most of the "different" contexts
vanish.

My reasoning goes somewhat like this:

1. there's nothing magic about templates, except for the swamps of
instance code that they generate and the contortions to get rid of
it. So, templates aren't a problem for an optimizer.

2. (Synchronous) Exceptions are just control flow. I've worked on two
different compilers that were completely happy treating them that
way.

3. There's only a little bit of magic about inheritance, and that is
only in the multiple case, when it is necessary to adjust the "this"
pointer. There's nothing magic about "this"; it's just a parameter.
There's nothing magic about adjusting it in a so-called thunk (not
a real thunk, just a C++-jargon thunk; many might call it just a
wrapper) if one is needed. It helps to compile it into a form
where tail-call elimination can get rid of the extra stack frame
when that is possible.

So, basically, there's no magic, as long as you simplify, and keep a
semantic model of program execution firmly in mind. Picking the right
model is, of course, quite helpful :-).

> 3) To get "OO"-ish written in C++ programs to work efficiently
> requires optimizations that are irrelevant in C (consider
> inter-procedural type propagation to convert virtual function calls
> to function calls).

No, for two reasons.

Not irrelevant. Constant propagation is constant propagation, and the
more constants you can propagate (including procedure addresses) the
better. You do need a new abstraction (or analysis) to discover that
a word in a vtable is in fact constant, but if it were C, couldn't the
slot be declared "const"?

And if it is an OO-specific optimization, you can perform it upstream
of the simplified representation.

> Do you agree?

Obviously not.

> Can you provide concrete examples?

One is Sun's C++ system; I worked on the backend of that system (which
also processed Fortran, C, and sometimes Pascal) from 1990 through
most of 1993. We did modify our abstractions to handle C++ more
cleanly and efficiently, but it was not a train wreck (perhaps people
who stayed on to work with it will disagree :-), and if you wrote C in
C++, it tended to run just like C, because the two compilers were the
same compiler after you peeled off the front-end. (I thought IBM's
compilers were supposed to be pretty clean; certainly, the "right way"
that I was taught had its origins at IBM.)

Another is the system I'm working on now, which processes bytecodes.
Again, I'm working in code generation, analysis, and runtime design, and
the system is designed to (and to some extent, does) do both
high-level OO optimizations and low-level bit-twiddling optimizations.
From time to time a high-level analysis yields interesting information
to pass down to the low level, but the increase in complexity is
incremental, not multiplicative.

I don't feel like I can talk in great detail about either system; I
did write an article for JCLT some years ago on exception-handling
that attempted to present the "right" abstractions for doing it in a
language like C++, and take them all the way down to the machine level
on Sparc.

On the other hand, if you choose the wrong abstractions, you're sunk,
and your theory of compilers-are-finite will hold. And, given some of
the stuff I read on the net, I think it is entirely likely that some
people will choose the wrong abstractions.

And, for C++, there is one other complication, which is that the
language is too complicated for most programmers to understand, and
some of those programmers remember dirty tricks from their C days, and
expect some of them to still work, or don't even know that the tricks
are dirty. There are few tools to detect these dirty tricks, and
there are limits, both technical and culturally imposed, to what they
can catch (*). The continuing existence of not-C++
programs being fed to C++ compilers, and the expectation that these
programs will continue to work, creates hassles for any
compiler-writer.

(*) For instance, how many programs do you think are "clean" with
respect to "char", "unsigned char" and "signed char", and pointers to
same? How many programs contain casts to integer types for purposes
of reinventing memmove or memcpy? Which of these puns are legal ANSI
C, which are not? Etc, etc, etc.