These are questions about C++ Style and Technique that people ask me often.
If you have better questions or comments on the answers,
feel free to email me (bs at cs dot tamu dot edu).
Please remember that I can't spend all of my time improving my homepages.

I have contributed to the new, unified,
isocpp.org C++ FAQ
maintained by
The C++ Foundation
of which I am a director.
The maintenance of this FAQ is likely to become increasingly sporadic.

Please note that these are just a collection of questions and answers. They are not
a substitute for a carefully selected sequence of examples and explanations
as you would find in a good textbook. Nor do they offer detailed and precise
specifications as you would find in a reference manual or the standard.
See
The Design and Evolution of C++ for questions
related to the design of C++.
See The C++ Programming Language for questions
about the use of C++ and its standard library.

Often, especially at the start of semesters, I get a lot of questions about
how to write very simple programs. Typically, the problem to be solved is
to read in a few numbers, do something with them, and write out an answer.
Here is a sample program that does that:
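A sketch along these lines (reading doubles into a vector, then writing the count and the elements in reverse order):

#include <iostream>
#include <vector>
#include <algorithm>
using namespace std;

int main()
{
    vector<double> v;

    double d;
    while (cin >> d) v.push_back(d); // read elements
    if (!cin.eof()) {                // check if input failed
        cerr << "format error\n";
        return 1;                    // error return
    }

    cout << "read " << v.size() << " elements\n";

    reverse(v.begin(), v.end());
    cout << "elements in reverse order:\n";
    for (int i = 0; i < v.size(); ++i) cout << v[i] << '\n';

    return 0;                        // success return
}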

This is a Standard ISO C++ program using the standard library.
Standard library facilities are declared in namespace std in headers
without a .h suffix.

If you want to compile this on a Windows machine, you need to compile it as
a "console application".
Remember to give your source file the .cpp suffix or the compiler might think
that it is C (not C++) source.

Reading into a standard vector guarantees that you don't overflow some
arbitrary buffer.
Reading into an array without making a "silly error" is beyond the ability
of complete novices - by the time you get that right, you are no longer
a complete novice.
If you doubt this claim, I suggest you read my paper
"Learning Standard C++ as a New Language", which you can download from
my publications list.

The !cin.eof() is a test of the stream's format.
Specifically, it tests whether the loop ended by finding end-of-file
(if not, you didn't get input of the expected type/format).
For more information, look up "stream state" in your C++ textbook.

A vector knows its size, so I don't have to count elements.

Yes, I know that I could declare i to be a vector<double>::size_type
rather than plain int to quiet
warnings from some hyper-suspicious compilers,
but in this case, I consider that too pedantic and distracting.

This program contains no explicit memory management, and it does not
leak memory.
A vector keeps track of the memory it uses to store its elements.
When a vector needs more memory for elements, it allocates more;
when a vector goes out of scope, it frees that memory.
Therefore, the user need not be concerned with the allocation and
deallocation of memory for vector elements.

The program ends reading input when it sees "end of file".
If you run the program from the keyboard on a Unix machine "end of file"
is Ctrl-D.
If you are on a Windows machine that because of a bug
doesn't recognize an end-of-file character, you might prefer this slightly more
complicated version of the program that terminates input with the word "end":
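A sketch of that variant:

#include <iostream>
#include <vector>
#include <algorithm>
#include <string>
using namespace std;

int main()
{
    vector<double> v;

    double d;
    while (cin >> d) v.push_back(d); // read elements
    if (!cin.eof()) {                // the read stopped before end-of-file
        cin.clear();                 // clear the stream state
        string s;
        cin >> s;                    // look for the terminator word
        if (s != "end") {
            cerr << "format error\n";
            return 1;                // error return
        }
    }

    // ... use v as before ...

    return 0;
}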

The main point of a C++ coding standard is to provide a set of rules for using
C++ for a particular purpose in a particular environment. It follows that there
cannot be one coding standard for all uses and all users.
For a given application (or company, application area, etc.), a good coding standard
is better than no coding standard. On the other hand, I have seen many examples that
demonstrate that a bad coding standard is worse than no coding standard.

Please choose your rules with care and with solid knowledge of your application
area. Some of the worst coding standards (I won't mention
names "to protect the guilty") were written by people without solid knowledge
of C++ together with a relative ignorance of the application area (they were
"experts" rather than developers) and a misguided conviction that more restrictions
are necessarily better than fewer. The counter example to that last misconception
is that some features exist to help programmers having to use even worse features.
Anyway, please remember that safety, productivity, etc. is the sum of all parts of
the design and development process - and not of individual language features, or even
of whole languages.

With those caveats, I recommend three things:

Look at Sutter and Alexandrescu:
"C++ coding standards". Addison-Wesley, ISBN 0-321-11358-.
It has good rules, but look upon it primarily as a set of meta-rules.
That is, consider it a guide to what a good, more specific, set of coding rules
should look like.
If you are writing a coding standard, you ignore this book at your peril.

Look at
the JSF air vehicle C++ coding standards. I consider it
a pretty good set of rules for safety critical and performance critical code.
If you do embedded systems programming, you should consider it. Caveat: I had a
hand in the formulation of these rules, so you could consider me biased. On the
other hand, please send me constructive comments about it. Such comments might
lead to improvements - all good standards are regularly reviewed and updated
based on experience and on changes in the work environment.
If you don't build hard-real time systems or safety critical systems, you'll find
these rules overly restrictive - because then those rules are not for you
(at least not all of those rules).

Don't use C coding standards (even if slightly modified for C++) and don't use ten-year-old
C++ coding standards (even if good for their time).
C++ isn't (just) C and Standard C++ is not (just) pre-standard C++.

You may have a problem with your compiler. It may be old, you may have it
installed wrongly, or your computer might be an antique.
I can't help you with such problems.

However, it is more likely that the program that you are trying to compile
is poorly designed, so that compiling it involves the compiler examining
hundreds of header files and tens of thousands of lines of code.
In principle, this can be avoided.
If this problem is in your library vendor's design, there isn't much you
can do (except changing to a better library/vendor), but you can structure
your own code to minimize re-compilation after changes.
Designs that do that are typically better, more maintainable, designs because
they exhibit better separation of concerns.
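
For concreteness, here is a sketch of the classic style of hierarchy that the following paragraphs criticize (Point and Color are assumed supporting types):

class Shape {
public:                     // interface to users of Shapes
    virtual void draw() const;
    virtual void rotate(int degrees);
    // ...
protected:                  // common data, for implementers of Shapes
    Point center;
    Color col;
    // ...
};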

The idea is that users manipulate shapes through Shape's public interface,
and that implementers of derived classes (such as Circle and Triangle)
share aspects of the implementation represented by the protected members.

There are three serious problems with this apparently simple idea:

It is not easy to define shared aspects of the implementation that are
helpful to all derived classes. For that reason, the set of protected members
is likely to need changes far more often than the public interface.
For example, even though "center" is arguably a valid concept for all Shapes,
it is a nuisance to have to maintain a point "center" for a Triangle - for
triangles, it makes more sense to calculate the center if and only if someone
expresses interest in it.

The protected members are likely to depend on "implementation" details that
the users of Shapes would rather not have to depend on. For example, much
(most?) code using a Shape will be logically independent of the definition
of "Color", yet the presence of Color in the definition of Shape will probably
require compilation of header files defining the operating system's notion of
color.

When something in the protected part changes, users of Shape have to
recompile -
even though only implementers of derived classes have access to the protected
members.

Thus, the presence of "information helpful to implementers" in the base class
that also acts as the interface to users is the source of instability in the
implementation, spurious recompilation of user code (when implementation
information changes), and excess inclusion of header files into user code
(because the "information helpful to implementers" needs those headers).
This is sometimes known as the "brittle base class problem."

The obvious solution is to omit the "information helpful to implementers" for
classes that are used as interfaces to users. That is, to make interfaces,
pure interfaces. That is, to represent interfaces as abstract classes:
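A sketch of what that looks like for Shape:

class Shape {               // pure interface
public:
    virtual void draw() const = 0;
    virtual void rotate(int degrees) = 0;
    // ...
    virtual ~Shape() { }
    // no data, no "information helpful to implementers"
};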

The users are now insulated from changes to implementations of derived classes.
I have seen this technique decrease build times by orders of magnitude.

But what if there really is some information that is common to all derived
classes (or simply to several derived classes)?
Simply make that information a class and derive the implementation classes
from that also:
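A sketch (Shape_common is a hypothetical name for such a class):

class Shape_common {        // shared implementation details; not an interface
protected:
    Point center;
    Color col;
    // ...
};

class Circle : public Shape, protected Shape_common {
public:
    void draw() const;
    void rotate(int degrees);
    // ... implemented using Shape_common's data ...
};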

A base class subobject of an empty class need not occupy any space in a derived class object.
This optimization is safe and can be most useful. It allows a programmer
to use empty classes to represent very simple concepts without overhead.
Some current compilers provide this "empty base class optimization".
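
A minimal sketch of the optimization:

struct Empty { };

struct X : Empty {
    int a;
    // ...
};
// on implementations with the empty base class optimization,
// sizeof(X) == sizeof(int): the Empty base occupies no storage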

A type such as the standard library complex is designed to be used much as a built-in type, and its representation
is needed in the declaration to make it possible to create genuinely local
objects (i.e. objects that are allocated on the stack and not on a heap) and
to ensure proper inlining of simple operations. Genuinely local objects and
inlining are necessary to get the performance of complex close to what is
provided in languages with a built-in complex type.

Because many classes are not designed to be used as base classes.
For example, see class complex.

Also, objects of a class with a virtual function require space needed by the
virtual function call mechanism - typically one word per object. This overhead
can be significant, and can get in the way of layout compatibility with
data from other languages (e.g. C and Fortran).

Because many classes are not designed to be used as base classes.
Virtual functions make sense only in classes meant to act as interfaces to
objects of derived classes (typically allocated on a heap and accessed through
pointers or references).

So when should I declare a destructor virtual? Whenever the class has at
least one virtual function.
Having virtual functions indicates that a class is meant to act as an
interface to derived classes, and when it is, an object of a derived class
may be destroyed through a pointer to the base.
For example:
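A sketch:

#include <string>

class Base {
public:
    virtual void f() = 0;
    virtual ~Base() { }     // virtual destructor: Base is an interface
};

class Derived : public Base {
    std::string s;
public:
    void f() { /* ... */ }
};

void user()
{
    Base* p = new Derived;
    // ...
    delete p;   // invokes Derived's destructor (destroying s), then Base's
}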

A virtual call is a mechanism to get work done given partial information.
In particular, "virtual" allows us to call a function knowing only an
interface and not the exact type of the object.
To create an object you need complete information. In particular,
you need to know the exact type of what you want to create.
Consequently, a "call to a constructor" cannot be virtual.

Techniques for using an indirection when you ask to create an object are
often referred to as "Virtual constructors". For example, see TC++PL3 15.6.2.

For example, here is a technique for generating an object of an appropriate
type using an abstract class:
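A sketch (make_shape is a hypothetical name; Shape serves as the interface):

class F {                   // interface to object-creation functions (a "factory")
public:
    virtual Shape* make_shape() const = 0;
    // ...
};

void user(const F& factory)
{
    Shape* p = factory.make_shape();    // a Shape of the appropriate derived type,
                                        // without user() knowing that type
    // ...
}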

In other words, there is no overload resolution between D and B. The compiler
looks into the scope of D, finds the single function "double f(double)" and
calls it. It never bothers with the (enclosing) scope of B. In C++, there is
no overloading across scopes - derived class scopes are not an exception to
this general rule. (See
D&E or
TC++PL3 for details).
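
The kind of code in question might look like this (a sketch):

class B {
public:
    int f(int i) { return i + 1; }
};

class D : public B {
public:
    double f(double d) { return d + 1.3; }  // hides B::f
};

void g(D* pd)
{
    pd->f(2);       // calls D::f(double): 2 is converted to 2.0
    pd->f(2.3);     // calls D::f(double)
}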

But what if I want to create an overload set of all my f() functions from
my base and derived class? That's easily done using a using-declaration:
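A sketch:

class D : public B {
public:
    using B::f;     // make every f from B available in D's scope
    double f(double d) { return d + 1.3; }
};

void g(D* pd)
{
    pd->f(2);       // now calls B::f(int)
    pd->f(2.3);     // calls D::f(double)
}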

The clumsy use of "new" for z3 is unnecessary and slow compared with
the idiomatic use of a local variable (z2).
You don't need to use "new" to create an object if you also "delete" that object in the same scope;
such an object should be a local variable.
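
The kind of code in question might look like this, using the standard library complex as an example:

#include <complex>
using namespace std;

void f()
{
    complex<double> z1(1, 2);
    complex<double> z2 = z1;                        // idiomatic: a local variable
    complex<double>* z3 = new complex<double>(z1);  // clumsy, slow, and easy to leak
    // ...
    delete z3;  // needed in the same scope: z3 should have been a local variable
}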

Yes, but be careful. It may not do what you expect. In a constructor,
the virtual call mechanism is disabled because overriding from derived
classes hasn't yet happened. Objects are constructed from the base up,
"base before derived".

Note: B::f is called, not D::f.
Consider what would happen if the rule were different so that D::f() was
called from B::B(): Because the constructor D::D() hadn't yet been run,
D::f() would try to assign its argument to an uninitialized string s.
The result would most likely be an immediate crash.

Destruction is done "derived class before base class", so virtual functions
behave as in constructors: Only the local definitions are used - and no
calls are made to overriding functions to avoid touching the (now destroyed)
derived class part of the object.

It has been suggested that this rule is an implementation artifact. It is
not so. In fact, it would be noticeably easier to implement the unsafe rule
of calling virtual functions from constructors exactly as from other functions.
However, that would imply that no virtual function could be written to rely
on invariants established by base classes. That would be a terrible mess.

But how can we later delete those objects correctly? The reason that there is
no built-in "placement delete" to match placement new is that there is no
general way of assuring that it would be used correctly. Nothing in the C++
type system allows us to deduce that p1 points to an object allocated in
Arena a1. A pointer to any X allocated anywhere can be assigned to p1.
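
What we can do is destroy the object explicitly and return its memory to the arena we know it came from. A sketch, assuming an Arena class with allocate() and deallocate() and objects created by a placement new such as X* p1 = new(a1) X;:

void destroy(X* p, Arena& a)
{
    p->~X();            // explicitly invoke the destructor
    a.deallocate(p);    // return the memory to the arena it came from
}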

for efficiency: to avoid my function calls being virtual

for safety: to ensure that my class is not used as a base class
(for example, to be sure that I can copy objects without fear
of slicing)

In my experience, the efficiency reason is usually misplaced fear.
In C++, virtual function calls are so fast that their real-world use for
a class designed with virtual functions does not produce measurable
run-time overheads compared to alternative solutions using ordinary
function calls.
Note that the virtual function call mechanism is typically used only when
calling through a pointer or a reference.
When calling a function directly for a named object, the virtual function
call overhead is easily optimized away.

If there is a genuine need for "capping" a class hierarchy to avoid virtual
function calls, one might ask why those functions are virtual in the first
place. I have seen examples where performance-critical functions had been
made virtual for no good reason, just because "that's the way we usually do it".

The other variant of this problem, how to prevent derivation for logical
reasons, has a solution.
Unfortunately, that solution is not pretty.
It relies on the fact that the most derived class in a hierarchy must construct
a virtual base.
For example:
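A sketch of that technique:

class Usable;

class Usable_lock {
    friend class Usable;
private:
    Usable_lock() { }
    Usable_lock(const Usable_lock&) { }
};

class Usable : public virtual Usable_lock {
    // ...
public:
    Usable();
    // ...
};

Usable a;       // fine

class DD : public Usable { };

DD dd;  // error: DD::DD() cannot access Usable_lock::Usable_lock(): private member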

These containers are described in all good C++ textbooks, and should be preferred over
arrays
and "home cooked" containers unless there is a good reason not to.

These containers are homogeneous; that is, they hold elements of the same type. If you want
a container to hold elements of several different types, you must express that either as a union
or (usually much better) as a container of pointers to a polymorphic type.
The classical example is:

vector<Shape*> vi; // vector of pointers to Shapes

Here, vi can hold elements of any type derived from Shape. That is, vi is homogeneous in that
all its elements are Shapes (to be precise, pointers to Shapes) and heterogeneous in the sense
that vi can hold elements of a wide variety of Shapes, such as Circles, Triangles, etc.

So, in a sense all containers (in every language) are homogeneous because to use them there must
be a common interface to all elements for users to rely on. Languages that provide containers
deemed heterogeneous simply provide
containers of elements that all provide a standard interface. For example, Java collections provide
containers of (references to) Objects and you use the (common) Object interface to discover the
real type of an element.

The C++ standard library provides homogeneous containers because those are the easiest to use in the
vast majority of cases, give the best compile-time error messages, and impose no unnecessary
run-time overheads.

If you need a heterogeneous container in C++, define a common interface for all the elements and
make a container of those. For example:

class Io_obj { /* ... */ }; // the interface needed to take part in object I/O
vector<Io_obj*> vio; // if you want to manage the pointers directly
vector< Handle<Io_obj> > v2; // if you want a "smart pointer" to handle the objects

Don't drop to the lowest level of implementation detail unless you have to:

vector<void*> memory; // rarely needed

A good indication that you have "gone too low level" is that your code gets littered with casts.

Using an Any class, such as boost::any, can be an alternative in some programs:
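A sketch:

#include <boost/any.hpp>
#include <vector>
#include <string>
using namespace std;

vector<boost::any> v;   // can hold values of (almost) any type

void f()
{
    v.push_back(42);                        // an int
    v.push_back(string("hello"));           // a string
    int i = boost::any_cast<int>(v[0]);     // extract; checked at run time
}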

They are not.
Probably "compared to what?" is a more useful answer.
When people complain about standard-library container performance,
I usually find one of three genuine problems (or one of the many myths and red herrings):

I suffer copy overhead

I suffer slow speed for lookup tables

My hand-coded (intrusive) lists are much faster than std::list

Before trying to optimize, consider if you have a genuine performance problem.
In most cases sent to me, the performance problem is theoretical or imaginary:
First measure, then optimize only if needed.

Let's look at those problems in turn.
Often, a vector<X> is slower than somebody's specialized My_container<X>
because My_container<X> is implemented as a container of pointers to X.
The standard containers hold copies of values, and copy a value when you put it into the container.
This is essentially unbeatable for small values, but can be quite unsuitable for huge objects:

Now, if portrait.jpg is a couple of megabytes and Image has value semantics
(i.e., copy assignment and copy construction make copies) then vim.push_back(im) will indeed
be expensive.
But -- as the saying goes -- if it hurts so much, just don't do it.
Instead, either use a container of handles
or a container of pointers. For example, if Image had reference semantics, the code above
would incur only the cost of a copy constructor call, which would be trivial compared to most
image manipulation operators.
If some class, say Image again, does have copy semantics for good reasons, a container of pointers
is often a reasonable solution:
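A sketch:

vector<Image*> vim2;
// ...
vim2.push_back(new Image("portrait.jpg")); // copies only a pointer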

Naturally, if you use pointers, you have to think about resource management,
but containers of pointers can themselves be effective and cheap resource handles
(often, you need a container with a destructor for deleting the "owned" objects).

The second frequently occurring genuine performance problem is the use of a map<string,X> for
a large number of (string,X) pairs.
Maps are fine for relatively small containers
(say a few hundred or few thousand elements -- access to an element of a map of 10000
elements costs about 9 comparisons), where less-than is cheap, and where no
good hash-function can be constructed. If you have lots of strings and a good hash function,
use a hash table.
The unordered_map from the standard committee's Technical Report is now widely
available and is far better than most people's homebrew.
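
A sketch (before C++11, the exact header and namespace varied by implementation; std::tr1::unordered_map in <tr1/unordered_map> is one common spelling):

#include <string>
#include <tr1/unordered_map>    // header location varies by implementation

std::tr1::unordered_map<std::string, int> m;    // hashed lookup

void f()
{
    m["hello"] = 42;    // near constant-time access, even for large tables
}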

Sometimes, you can speed up things by using (const char*,X) pairs rather than (string,X) pairs,
but remember that < doesn't do lexicographical comparison for C-style strings. Also, if X is large,
you may have the copy problem also (solve it in one of the usual ways).

Intrusive lists can be really fast.
However, consider whether you need a list at all: a vector is more
compact and is therefore smaller and faster in many cases - even when you do inserts and erases.
For example, if you logically have a list of a few integer elements, a vector is significantly
faster than a list (any list).
Also, intrusive lists cannot hold built-in types directly (an int does not have a link member).
So, assume that you really need
a list and that you can supply a link field for every element type. The standard-library list
by default performs an allocation followed by a copy for each operation inserting an element
(and a deallocation for each operation removing an element). For std::list with the
default allocator, this can be significant. For small elements where the copy overhead is not
significant, consider using an optimized allocator. Use hand-crafted intrusive lists only
where a list is needed and the last ounce of performance matters.

People sometimes worry about the cost of std::vector growing incrementally.
I used to worry about that and used reserve() to optimize the growth.
After measuring my code and repeatedly having trouble finding the performance benefits of reserve()
in real programs,
I stopped using it except where it is needed to avoid iterator invalidation (a rare case in my code).
Again: measure before you optimize.

No.
It does not.
"Friend" is an explicit mechanism for granting access, just like membership.
You cannot (in a standard conforming program) grant yourself access to a
class without modifying its source.
For example:
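A sketch:

class X {
    int i;
public:
    void m();           // X grants access to X::m()
    friend void f(X&);  // X grants access to f(X&)
    // ...
};

void X::m() { i++; }        // fine: m is a member
void f(X& x) { x.i++; }     // fine: f is a friend
void g(X& x) { x.i++; }     // error: i is private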

Now consider a class Handle with a string member name and an owning pointer member p,
and a function f() in which a Handle h2 is initialized as a copy of a Handle h1.
Here, the default copy gives us h2.name==h1.name and h2.p==h1.p.
This leads to disaster: when we exit f() the destructors for h1 and h2 are invoked and the object
pointed to by h1.p and h2.p is deleted twice.

How do we avoid this?
The simplest solution is to prevent copying by making the operations that copy private:
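A sketch (Handle and Thing are hypothetical; Handle owns the Thing its p points to):

#include <string>
using namespace std;

class Handle {
    string name;
    Thing* p;
    Handle(const Handle&);              // private: prevent copying
    Handle& operator=(const Handle&);   // private: prevent copying
public:
    Handle(const string& n, Thing* pp) : name(n), p(pp) { }
    ~Handle() { delete p; }
    // ...
};

Leaving the two copy operations declared but undefined also catches accidental copies made from within the class: they fail to link.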

People provide default arguments to get the convenience used for orig and p1.
Then, some are surprised by the conversion of 2 to Point(2,0) in the call of f().
A constructor taking a single argument defines a conversion.
By default that's an implicit conversion.
To require such a conversion to be explicit, declare the constructor explicit:
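A sketch of both versions:

class Point {
    int x, y;
public:
    explicit Point(int xx = 0, int yy = 0) : x(xx), y(yy) { }
};

void f(Point);

void g()
{
    Point orig;     // fine: Point(0,0)
    Point p1(2);    // fine: an explicit call of Point(2,0)
    f(2);           // error: no implicit conversion from int to Point
    f(Point(2));    // fine
}

Without the explicit, f(2) would quietly mean f(Point(2,0)).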

C++ inherited pointers from C, so I couldn't remove them without causing
serious compatibility problems.
References are useful for several things, but the direct reason I introduced
them in C++ was to support operator overloading.
For example:
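A sketch (Matrix is a hypothetical, large class):

class Matrix { /* ... */ };

Matrix operator+(const Matrix& a, const Matrix& b); // operands passed without copying

void g(Matrix& x, const Matrix& y, const Matrix& z)
{
    x = y + z;  // natural notation: no &s or *s, and no copying of whole Matrixes
}

Overloaded operators must take class-type operands, so without references the operands of a user-defined + would have to be passed (and copied) by value.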

More generally, if you want to have both the functionality of pointers and the functionality of references, you need either two different types (as in C++)
or two different sets of operations on a single type.
For example, with a single type you need both an operation to assign to the
object referred to and an operation to assign to the reference/pointer.
This can be done using separate operators (as in Simula). For example:
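In C++ terms, a pointer provides the two distinct operations directly (Simula's := and :- correspond roughly to the middle two lines):

int x = 0;
int* p = &x;    // bind p to x
*p = 7;         // assign through p: changes x
p = 0;          // rebind p itself: a different operation, a different notation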

I think that for a reader, incr2() is easier to understand.
That is, incr1() is more likely to lead to mistakes and errors.
So, I'd prefer the style that returns a new value over the one that modifies a value
as long as the creation and copy of a new value isn't expensive.
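
The two styles might look like this (a sketch):

void incr1(int& x) { ++x; }             // modify the argument
int  incr2(int x)  { return x + 1; }    // return a new value

void g()
{
    int v = 2;
    incr1(v);       // v becomes 3
    v = incr2(v);   // v becomes 4
}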

If I do want to change the argument, should I use a pointer or should I use a reference?
I don't know a strong logical reason.
If passing ``not an object'' (e.g. a null pointer) is acceptable, using a pointer makes sense.
My personal style is to use a pointer when I want to modify an object because in some contexts
that makes it easier to spot that a modification is possible.

Note also that a call of a member function is essentially a call-by-reference on the object,
so we often use member functions when we want to modify the value/state of an object.

In terms of time and space,
an array is just about the optimal construct for accessing a sequence of objects in memory.
It is, however, also a very low level data structure with a vast potential for misuse and errors
and in essentially all cases there are better alternatives. By "better" I mean easier to write,
easier to read, less error prone, and as fast.

The two fundamental problems with arrays are that

an array doesn't know its own size

the name of an array converts to a pointer to its first element at the slightest provocation
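
The resulting code might look like this (a sketch; see the discussion right after it):

void f(int* p, int size)    // f cannot know the real size of the array *p
{
    for (int i = 0; i < size; ++i) p[i] = i;
}

void g()
{
    int arr1[20];
    int arr2[10];
    f(arr1, 20);    // fine
    f(arr2, 20);    // second call: lies about arr2's size
}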

The second call will scribble all over memory that doesn't belong to arr2.
Naturally, a programmer usually gets the size right, but it's extra work
and every so often someone makes the mistake.
I prefer the simpler and cleaner version using the standard library vector:
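A sketch:

#include <vector>
using namespace std;

void f(vector<int>& v)      // v knows its own size
{
    for (int i = 0; i < v.size(); ++i) v[i] = i;
}

void g()
{
    vector<int> v1(20);
    vector<int> v2(10);
    f(v1);
    f(v2);  // no separate size argument to get wrong
}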

In the last call, the Derived[] is treated as a Base[] and the subscripting no longer works correctly when sizeof(Derived)!=sizeof(Base) -- as will be the case in most cases of interest.
If we used vectors instead, the error would be caught at compile time:
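A sketch of both the problematic array version referred to above and the vector alternative (Base and Derived are hypothetical; Derived adds data members, so sizeof(Derived)!=sizeof(Base)):

void h(Base* p, int n)      // works on any array of at least n Bases - or does it?
{
    for (int i = 0; i < n; ++i) p[i].f();
}

void g()
{
    Base ab[20];
    Derived ad[20];
    h(ab, 20);      // fine
    h(ad, 20);      // the last call: a Derived[] treated as a Base[]
}

void h2(vector<Base>& v);

void g2()
{
    vector<Derived> vd(20);
    h2(vd);         // error: cannot convert vector<Derived>& to vector<Base>&
}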

In C++, the definition of NULL is 0, so there is only an aesthetic difference.
I prefer to avoid macros, so I use 0.
Another problem with NULL is that people sometimes mistakenly believe that it
is different from 0 and/or not an integer.
In pre-standard code, NULL was/is sometimes defined to something unsuitable
and therefore had/has to be avoided. That's less common these days.

If you have to name the null pointer, call it nullptr; that's what it's called in C++11.
Then, "nullptr" will be a keyword.

Like C, C++ doesn't define layouts, just semantic constraints that must
be met. Therefore different implementations do things differently.
Unfortunately, the best explanation I know of is in a book that is
otherwise outdated and doesn't describe any current C++ implementation:
The Annotated C++ Reference Manual
(usually called the ARM). It has diagrams of key layout examples.
There is a very brief explanation in Chapter 2 of
TC++PL.
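
For example, consider this minimal sketch:

struct A { int a, b; };
struct B : A { int c; };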

Here, an object of class B is represented by an A followed by an int; that is,
by three ints next to each other.

Virtual functions are typically implemented by adding a pointer (the vptr)
to each object of a class with virtual functions. This pointer points to
the appropriate table of functions (the vtbl). Each class has its own vtbl
shared by all objects of that class.
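
A sketch of the effect on object size (on typical implementations):

struct S { int a; };                    // no virtual functions
struct V { int a; virtual void f(); };  // needs a vptr

// typically, sizeof(V) == sizeof(S) + sizeof(void*)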

It's undefined.
Basically, in C and C++, if you read a variable twice in an expression where you also
write it, the result is undefined.
Don't do that.
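
The kind of expression in question might look like this (a sketch):

int i = 3;
int n = i + i++;    // undefined: i is read twice and also written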
Another example is:

v[i] = i++;

Related example:

f(v[i],i++);

Here, the result is undefined because the order of evaluation of function arguments is unspecified.

Having the order of evaluation undefined is claimed to yield better performing code.
Compilers could warn about such examples, which are typically subtle bugs (or potential subtle bugs).
I'm disappointed that after decades, most compilers still don't warn, leaving that job to
specialized, separate, and underused tools.

Because machines differ and because C left many things undefined.
For details, including definitions
of the terms "undefined", "unspecified", "implementation defined", and "well-formed";
see the ISO C++ standard.
Note that the meaning of those terms differs from their definitions in the ISO C standard
and from some common usage.
You can get wonderfully confused discussions when people don't realize that not everybody
shares those definitions.

This is a correct, if unsatisfactory, answer.
Like C, C++ is meant to exploit hardware directly and efficiently. This implies that C++ must deal
with hardware entities such as bits, bytes, words, addresses, integer computations, and
floating-point computations the way they are on a given machine, rather than how we might
like them to be.
Note that many "things" that people refer to as "undefined" are in fact "implementation defined",
so that we can write perfectly specified code as long as we know which machine we are running on.
Sizes of integers and the rounding behaviour of floating-point computations fall into that category.

Consider what is probably the best known and most infamous example of undefined behavior:
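The out-of-range access (a sketch; the names a and p reappear in the discussion below):

int a[10];
a[100] = 7;     // out of range: undefined behavior

int* p = a;
// ...
p[100] = 7;     // also undefined behavior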

The C++ (and C) notion of array and pointer are direct representations of a machine's notion
of memory and addresses, provided with no overhead. The primitive operations on pointers map
directly onto machine instructions. In particular, no range checking is done. Doing range checking
would impose a cost in terms of run time and code size. C was designed to outcompete assembly code
for operating systems tasks, so that was a necessary decision. Also, C -- unlike C++ --
has no reasonable way of reporting a violation had a compiler decided to generate code to detect it:
There are no exceptions in C.
C++ followed C for reasons of
compatibility and because C++ also competes directly with assembler (in OS, embedded systems, and
some numeric computation areas).
If you want range checking, use a suitable checked class (vector, smart pointer, string, etc.).
A good compiler could catch the range error for a[100] at compile time, catching the one for p[100]
is far more difficult, and in general it is impossible to catch every range error at compile time.

Other examples of undefined behavior stem from the compilation model.
A compiler cannot detect an inconsistent definition of an object or a function in separately-compiled
translation units. For example:
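A sketch of such an inconsistency:

// file1.c:
int S;      // S defined as an int

// file2.c:
double S;   // inconsistent definition of S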

Compiling file1.c and file2.c and linking the results into the same program is illegal
in both C and C++.
A linker could catch the inconsistent definition of S, but is not obliged to do so (and most don't).
In many cases, it can be quite difficult to catch inconsistencies between separately
compiled translation units.
Consistent use of header files helps minimize such problems and there are some signs that linkers
are improving.
Note that C++ linkers do catch almost all errors related to inconsistently declared
functions.

Finally, we have the apparently unnecessary and rather annoying undefined behavior of individual
expressions. For example:
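A sketch, based on the ++i+i++ example mentioned below:

int i = 5;
int j = ++i + i++;  // the value of j depends on the order of evaluation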

The value of j is unspecified to allow compilers to produce optimal code. It is claimed that the
difference between what can be produced giving the compiler this freedom and requiring
"ordinary left-to-right evaluation" can be significant. I'm unconvinced, but with innumerable compilers
"out there" taking advantage of the freedom and some people passionately defending that freedom, a
change would be difficult and could take decades to penetrate to the distant corners of the C and C++
worlds.
I am disappointed that not all compilers warn against code such as ++i+i++.
Similarly, the order of evaluation of arguments is unspecified.

IMO far too many "things" are left undefined, unspecified, implementation-defined, etc.
However, that's easy to say and even to give examples of, but hard to fix.
It should also be noted that it is not all that difficult to avoid most of the problems and produce
portable code.
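
Consider a template that calls draw() for every element of a container; a sketch, assuming the Shape class from the earlier examples:

#include <algorithm>
#include <functional>
using namespace std;

template<class Container>
void draw_all(Container& c)
{
    for_each(c.begin(), c.end(), mem_fun(&Shape::draw));
}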

If there is a type error, it will be in the resolution of the fairly
complicated for_each() call. For example, if the element type of the
container is an int,
then we get some kind of obscure error related to the for_each()
call (because we can't invoke Shape::draw() for an int).
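
A common trick is to add a line that compiles only for the intended element type; a sketch:

template<class Container>
void draw_all(Container& c)
{
    Shape* p = c.front();   // accept only containers of Shape*s
    for_each(c.begin(), c.end(), mem_fun(&Shape::draw));
}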

The initialization of the spurious variable "p" will trigger a comprehensible
error message from most current compilers. Tricks like this are common
in all languages and have to be developed for all novel constructs.
In production code, I'd probably write something like:
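Something along these lines:

template<class Container>
void draw_all(Container& c)
{
    typedef typename Container::value_type T;
    Can_copy<T, Shape*>();  // accept containers of only Shape*s
    for_each(c.begin(), c.end(), mem_fun(&Shape::draw));
}

where Can_copy can be defined like this:

template<class T1, class T2> struct Can_copy {
    static void constraints(T1 a, T2 b) { T2 c = a; b = a; }
    Can_copy() { void (*p)(T1, T2) = constraints; }
};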

Can_copy checks (at compile time) that a T1 can be assigned to a T2.
Can_copy<T,Shape*> checks that T is a Shape* or a pointer to
a class publicly derived from Shape or a type with a user-defined conversion
to Shape*.
Note that the definition is close to minimal:

one line to name the constraints to be checked and the types for which
to check them

one line to list the specific constraints checked (the constraints() function)

one line to provide a way to trigger the check (the constructor)

Note also that the definition has the desirable properties that

You can express constraints without declaring or copying variables,
thus the writer of a constraint doesn't have to make assumptions about
how a type is initialized, whether objects can be copied, destroyed, etc.
(unless, of course, those are the properties being tested by the constraint)

No code is generated for a constraint using current compilers

No macros are needed to define or use constraints

Current compilers give acceptable error messages for a failed constraint,
including the word "constraints" (to give the reader a clue), the name of
the constraints, and the specific error that caused the failure (e.g. "cannot
initialize Shape* by double*")

So why is something like Can_copy() - or something even more elegant - not in
the language?
D&E
contains an analysis of the difficulties involved in expressing general
constraints for C++.
Since then, many ideas have emerged for making these constraints classes
easier to write and still trigger good error messages. For example,
I believe the use of a pointer to function the way I do in Can_copy
originates with Alex Stepanov and Jeremy Siek.
I don't think that
Can_copy() is quite ready for standardization - it needs more use.
Also, different forms of constraints are in use in the C++ community;
there is not yet a consensus on exactly what form of constraints templates
is the most effective over a wide range of uses.

However, the idea is very general, more general than language facilities
that have been proposed and provided specifically for constraints checking.
After all, when we write
a template we have the full expressive power of C++ available.
Consider:
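Sketches of other constraints written in the same style:

template<class T, class B> struct Derived_from {
    static void constraints(T* p) { B* pb = p; }
    Derived_from() { void (*p)(T*) = constraints; }
};

template<class T1, class T2 = T1> struct Can_compare {
    static void constraints(T1 a, T2 b) { a == b; a != b; a < b; }
    Can_compare() { void (*p)(T1, T2) = constraints; }
};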

To an expert, the fact that sort()
tends to be faster than qsort()
for the same elements and the same comparison criteria is often significant.
Also, sort() is generic, so that it can be used for any reasonable
combination of container type, element type, and comparison criterion.
For example:
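A sketch:

#include <vector>
#include <algorithm>
#include <functional>
using namespace std;

void f(vector<int>& v, int a[], int asize)
{
    sort(v.begin(), v.end());                   // sort using <
    sort(v.begin(), v.end(), greater<int>());   // sort using a supplied criterion
    sort(a, a + asize);                         // sort a built-in array
}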

An object that in some way behaves like a function, of course.
Typically, that would mean an object of a class that defines the application
operator - operator().

A function object is a more general concept
than a function because a function object
can have state that persists across several calls (like a static local variable)
and can be initialized and examined from outside the object (unlike a static
local variable).
For example:
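A sketch of a function object that accumulates a sum:

#include <vector>
#include <algorithm>
#include <iostream>
using namespace std;

class Sum {
    int val;
public:
    Sum(int i) : val(i) { }
    operator int() const { return val; }        // extract the value
    int operator()(int i) { return val += i; }  // application
};

void f(vector<int>& v)
{
    Sum s = 0;                              // initial value 0
    s = for_each(v.begin(), v.end(), s);    // gather the sum of all elements
    cout << "the sum is " << int(s) << '\n';
}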

Note that a function object with an inline application operator inlines
beautifully because there are no pointers involved that might confuse
optimizers. To contrast: current optimizers are rarely (never?) able to
inline a call through a pointer to function.

Function objects are extensively used to provide flexibility in the standard
library.

By writing code that doesn't have any. Clearly, if your code has new
operations, delete operations, and pointer arithmetic all over the place,
you are going to mess up
somewhere and get leaks, stray pointers, etc.
This is true independently of how conscientious you are with your allocations:
eventually the complexity
of the code will overcome the time and effort you can afford.
It follows that successful
techniques rely on hiding allocation and deallocation inside more manageable
types.
Good examples are the standard containers. They manage memory for their
elements better than you could without disproportionate effort.
Consider writing this without the help of string and vector:
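The program in question might look something like this (read whitespace-separated words, sort them, and concatenate them):

#include <vector>
#include <string>
#include <iostream>
#include <algorithm>
using namespace std;

int main()  // small program messing around with strings
{
    cout << "enter some whitespace-separated words:\n";
    vector<string> v;
    string s;
    while (cin >> s) v.push_back(s);

    sort(v.begin(), v.end());

    string cat;
    typedef vector<string>::const_iterator Iter;
    for (Iter p = v.begin(); p != v.end(); ++p) cat += *p + "+";
    cout << cat << '\n';
}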

What would be your chance of getting it right the first time? And how would
you know you didn't have a leak?

Note the absence of explicit memory
management, macros, casts, overflow checks, explicit size limits, and
pointers. By using a function object and a standard algorithm, I could have
eliminated the pointer-like use of the iterator, but that seemed overkill
for such a tiny program.

These techniques are not perfect and it is not always easy to use them
systematically. However, they apply surprisingly widely and by reducing
the number of explicit allocations and deallocations you make the remaining
examples much easier to keep track of.
As early as 1981, I pointed out that by reducing the number of objects that
I had to keep track of explicitly from many tens of thousands to a few dozens,
I had reduced the intellectual effort needed to get the program right from
a Herculean task to something manageable, or even easy.

If your application area doesn't have libraries that make programming that
minimizes explicit memory management easy, then the fastest way of getting
your program complete and correct might be to first build such a library.

Templates and the standard libraries make this use of containers, resource
handles, etc., much easier than it was even a few years ago. The use of
exceptions makes it close to essential.

If you cannot handle allocation/deallocation implicitly as part of an object
you need in your application anyway, you can use a resource handle
to minimize the chance of a leak.
Here is an example where I need to return an object allocated on the free store
from a function.
This is an opportunity to forget to delete that object.
After all, we cannot tell just by looking at a pointer whether it needs to be
deallocated and, if so, who is responsible for that.
Using a resource handle, here the standard library auto_ptr, makes it clear
where the responsibility lies:
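A sketch (Shape and Circle as in the earlier examples; make_shape is a hypothetical factory function):

#include <memory>
using namespace std;

auto_ptr<Shape> make_shape()
{
    return auto_ptr<Shape>(new Circle); // the auto_ptr owns the Circle
}

void user()
{
    auto_ptr<Shape> p = make_shape();
    p->draw();
    // ...
}   // the Shape is deleted when p goes out of scope - even if an exception is thrown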

If systematic application of these techniques is not possible in your
environment (you have to use code from elsewhere, part of your program
was written by Neanderthals, etc.), be sure to use a memory leak detector
as part of your standard development procedure, or plug in a garbage
collector.

In other words, why doesn't C++ provide a primitive for returning to the
point from which an exception was thrown and continuing execution from there?

Basically, someone resuming from an exception handler can never be sure that
the code after the point of throw was written to deal with the execution
just continuing as if nothing had happened. An exception handler cannot know
how much context to "get right" before resuming.
To get such code right, the writer of the throw and the writer of the catch
need intimate knowledge of each others code and context. This creates a
complicated mutual dependency that wherever it has been allowed has led to
serious maintenance problems.

I seriously considered the possibility
of allowing resumption when I designed the C++ exception handling mechanism
and this issue was discussed in quite some detail during standardization.
See the exception handling chapter of
The Design and Evolution of C++.

If you want to check to see if you can fix a problem before throwing an
exception, call a function that checks and then throws only if the problem
cannot be dealt with locally. A new_handler is an example of this.
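
A sketch of that idea:

#include <new>
#include <cstdlib>

void out_of_store()     // called by new before it gives up
{
    // try to make more memory available here; if that is impossible:
    std::exit(1);       // or throw std::bad_alloc()
}

int main()
{
    std::set_new_handler(out_of_store);
    // ...
}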

If you want to, you can of course use realloc().
However, realloc() is only guaranteed to work on arrays allocated by malloc()
(and similar functions)
containing objects without user-defined copy constructors.
Also, please remember that contrary to naive expectations,
realloc() occasionally does copy its argument array.

In C++, a better way of dealing with reallocation is to use a standard
library container, such as vector, and
let it grow naturally.

What good can using exceptions do for me?
The basic answer is:
Using exceptions for error handling makes your code simpler, cleaner,
and less likely to miss errors.
But what's wrong with "good old errno and if-statements"?
The basic answer is:
Using those, your error handling and your normal code are closely intertwined.
That way, your code gets messy and it becomes hard to ensure that you have dealt with all errors
(think "spaghetti code" or a "rat's nest of tests").

First of all there are things that just can't be done right without exceptions.
Consider an error detected in a constructor; how do you report the error?
You throw an exception.
That's the basis of
RAII (Resource Acquisition Is Initialization),
which is the basis of some of the most effective modern C++ design techniques:
A constructor's job is to establish the invariant for the class
(create the environment in which the member functions are to run)
and that often requires the acquisition of resources, such as memory, locks, files, sockets, etc.

Imagine that we did not have exceptions, how would you deal with an error detected in a constructor?
Remember that constructors are often invoked to initialize/construct objects in variables:
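A sketch (assuming <vector> and <fstream>):

void f(const char* name, int sz)
{
    vector<double> v(sz);   // the constructor must acquire memory
    ofstream os(name);      // the constructor must open the file
    // ...
}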

The vector or ofstream (output file stream) constructor
could set the variable into
a "bad" state (as ifstream does by default) so that every subsequent operation fails.
That's not ideal.
For example, in the case of ofstream,
your output simply disappears if you forget to check that the
open operation succeeded. For most classes, the results are even worse.
At least, we would have to write:
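Something like this (a sketch; the bad() test for vector is hypothetical - vector has no such "bad state" to test, which is rather the point):

void f(const char* name, int sz)
{
    vector<double> v(sz);
    if (v.bad()) { /* handle the error */ }     // hypothetical test
    ofstream os(name);
    if (!os) { /* handle the error */ }         // and so on after every construction
    // ...
}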

So writing constructors can be tricky without exceptions, but what about plain old functions?
We can either return an error code or set a non-local variable (e.g. errno).
Setting a global variable doesn't work too well unless you test it immediately
(or some other function might have re-set it).
Don't even think of that technique if you might have multiple threads accessing
the global variable.
The trouble with return values is that choosing the error return value
can require cleverness and can be impossible:
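A sketch of the function in question:

#include <climits>

int my_negate(int n)
{
    // every int is the correct result for some argument,
    // and -INT_MIN is not representable in twos-complement
    return -n;
}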

There is no possible value for my_negate() to return:
Every possible int is the correct answer for some int
and there is no correct answer for the most negative number in the twos-complement representation.
In such cases, we would need to return pairs of values (and, as usual, remember to test).
See my Beginning programming book
for more examples and explanations.

Common objections to the use of exceptions:

but exceptions are expensive!: Not really.
Modern C++ implementations reduce the overhead
of using exceptions to a few percent (say, 3%) and that's compared to no error handling.
Writing code with error-return codes and tests is not free either.
As a rule of thumb, exception handling is extremely cheap when you don't throw an exception.
It costs nothing on some implementations.
All the cost is incurred when you throw an exception:
that is, "normal code" is faster than code using error-return codes and tests.
You incur cost only when you have an error.

but in JSF++
you yourself ban exceptions outright!:
JSF++ is for hard-real time and safety-critical applications (flight control software).
If a computation takes too long someone may die.
For that reason, we have to guarantee response times, and we can't -
with the current level of tool support - do that for exceptions.
In that context, even free store allocation is banned!
Actually, the JSF++ recommendations for error handling simulate the use of exceptions in anticipation
of the day where we have the tools to do things right, i.e. using exceptions.

but throwing an exception from a constructor invoked by new causes a memory leak!:
Nonsense! That's an old wives' tale caused by a bug in one compiler -
and that bug was immediately fixed over a decade ago.

Do not use exceptions as simply another way to return a value from a function.
Most users assume - as the language definition encourages them to - that exception-handling
code is error-handling code,
and implementations are optimized to reflect that assumption.
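
The discussion below refers to code along these lines (Fruit, Apple, and Orange are hypothetical classes, with Apple and Orange derived from Fruit):

void f(vector<Fruit*>& v)
{
    v.push_back(new Orange);    // fine: an Orange is a Fruit
}

void h(vector<Apple*>& v)       // operates on the elements as Apples
{
    for (int i = 0; i < v.size(); ++i) { /* use v[i] as an Apple */ }
}

void g()
{
    vector<Apple*> v;
    f(v);   // error: cannot convert vector<Apple*>& to vector<Fruit*>&
    h(v);
}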

Had the call f(v) been legal, we would have had an Orange pretending to be an Apple.

An alternative language design decision would have been to allow the unsafe conversion,
but rely on dynamic
checking. That would have required a run-time check for each access to v's members, and h()
would have had to throw an exception upon encountering the last element of v.

Not really.
We can do without multiple inheritance by using workarounds, exactly as we can do
without single inheritance by using workarounds.
We can even do without classes by using workarounds.
C is a proof of that contention.
However, every modern language with static type checking and inheritance provides
some form of multiple inheritance.
In C++, abstract classes often serve as interfaces and a class can have many interfaces.
Other languages - often deemed "not MI" - simply have a separate name for their equivalent
to a pure abstract class: an interface.
The reason languages provide inheritance (both single and multiple) is that language-supported
inheritance is typically superior to workarounds (e.g. use of forwarding functions to sub-objects
or separately allocated objects) for ease of programming, for detecting
logical problems, for maintainability, and often for performance.

For a brief introduction to standard library facilities, such as iostream and
string, see Chapter 3 of TC++PL3 (available online).
For a detailed comparison of simple uses of C and C++ I/O, see "Learning Standard C++ as a New Language", which you can download from my
publications list.

No. Generics are primarily syntactic sugar for abstract classes; that is, with generics
(whether Java or C# generics), you program against
precisely defined interfaces and typically pay the cost of virtual function calls and/or dynamic
casts to use arguments.

Templates support generic programming, template metaprogramming, etc. through a combination
of features such as integer template arguments, specialization, and uniform treatment of built-in
and user-defined types. The result is flexibility, generality, and performance unmatched by
"generics". The STL is the prime example.

A less desirable result of the flexibility is late detection of errors and horrendously
bad error messages. This is currently being addressed indirectly with
constraints classes.

Yes: You should throw an exception from a constructor whenever you cannot properly
initialize (construct) an object.
There is no really satisfactory alternative to exiting a constructor by a throw.

Not really: You can throw an exception in a destructor, but that exception must not leave the
destructor; if a destructor exits by a throw, all kinds of bad things are likely to happen
because the basic rules of the standard library and the language itself will be violated.
Don't do it.

Because C++ supports an alternative that is almost always better:
The "resource acquisition is initialization" technique (TC++PL3 section 14.4).
The basic idea is to represent a resource by a local object, so that the
local object's destructor will release the resource. That way, the programmer
cannot forget to release the resource.
For example:
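A sketch (Open_error is a hypothetical exception type):

#include <cstdio>
#include <cerrno>

class File_handle {
    FILE* p;
public:
    File_handle(const char* name, const char* mode)
    {
        p = fopen(name, mode);
        if (p == 0) throw Open_error(errno);
    }
    ~File_handle() { fclose(p); }   // the destructor releases the resource

    operator FILE*() { return p; }  // use a File_handle where a FILE* is expected
    // ...
};

void f(const char* name)
{
    File_handle fh(name, "r");
    // ... use the file through fh ...
}   // the file is closed here, even if an exception is thrown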

In a system, we need a "resource handle" class for each resource. However,
we don't have to have a "finally" clause for each acquisition of a resource.
In realistic systems, there are far more resource acquisitions than kinds
of resources, so the "resource acquisition is initialization" technique leads
to less code than use of a "finally" construct.

An auto_ptr is an example of a very simple handle class, defined in <memory>,
supporting exception safety using the
resource acquisition is initialization technique.
An auto_ptr holds a pointer, can be used as a pointer, and deletes the object
pointed to at the end of its scope.
For example:
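A sketch (X is a hypothetical class):

#include <memory>
using namespace std;

void f()
{
    auto_ptr<X> p(new X);   // p owns its X
    X* q = new X;           // q must be deleted explicitly
    // ...
    delete q;               // never executed if the ... part throws
}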

If an exception is thrown in the ... part, the object held by p is correctly
deleted by auto_ptr's destructor while the X pointed to by q is leaked.
See TC++PL 14.4.2 for details.

Auto_ptr is a very lightweight class. In particular, it is *not* a reference
counted pointer. If you "copy" one auto_ptr into another, the assigned-to
auto_ptr holds the pointer and the assigned-from auto_ptr holds 0.
For example:
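A sketch:

auto_ptr<X> p1(new X);
auto_ptr<X> p2 = p1;    // the X is now owned by p2; p1 holds 0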

This "move semantics" differs from the usual "copy semantics", and can be
surprising. In particular, never use an auto_ptr as a member of a standard
container. The standard containers require the usual copy semantics.
For example:

std::vector<auto_ptr<X> >v; // error

An auto_ptr holds a pointer to an individual element, not a pointer to an array:

void f(int n)
{
auto_ptr<X> p(new X[n]); // error
// ...
}

This is an error because the destructor will delete the pointer using delete
rather than delete[] and will fail to invoke the destructor for the last n-1
Xs.

So should we use an auto_array to hold arrays? No. There is no auto_array.
The reason is that there isn't a need for one. A better solution is to use
a vector:

void f(int n)
{
vector<X> v(n);
// ...
}

Should an exception occur in the ... part, v's destructor will be correctly
invoked.

C++ exceptions are designed to support error handling.
Use throw only to signal an error and catch only to specify error handling actions.
There are other uses of exceptions - popular in other languages - but not idiomatic in C++
and deliberately not supported well by C++ implementations (those implementations are optimized
based on the assumption that exceptions are used for error handling).

In particular, throw is not simply an alternative way of returning a value
from a function (similar to return).
Doing so will be slow and will confuse most C++ programmers used to seeing exceptions used only for
error handling.
Similarly, throw is not a good way of getting out of a loop.

malloc() is a function that takes a number (of bytes) as its argument;
it returns a void* pointing to uninitialized storage.
new is an operator that takes a type and (optionally) a set of initializers for that type
as its arguments;
it returns a pointer to an (optionally) initialized object of its type.
The difference is most obvious when you want to allocate an object of a user-defined type
with non-trivial initialization semantics.
Examples:
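Sketches of the difference (X is a hypothetical class with a constructor taking an int):

void f()
{
    void* p1 = malloc(40);  // allocate 40 (uninitialized) bytes
    int* p2 = new int[10];  // allocate 10 uninitialized ints
    int* p3 = new int(10);  // allocate 1 int initialized to 10
    int* p4 = new int();    // allocate 1 int initialized to 0
    X* p5 = new X(2);       // allocate an X constructed with the argument 2
    // ...
}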

Note that when you specify an initializer using the "(value)" notation,
you get initialization with that value.
Unfortunately, you cannot specify that for an array.
Often, a vector is a better alternative to a free-store-allocated array
(e.g., consider exception safety).

Whenever you use malloc() you must consider initialization and conversion
of the return pointer to
a proper type. You will also have to consider if you got the number of bytes right for your use.
There is no performance difference between malloc() and new when you take
initialization into account.

Yes, in the sense that you can use malloc() and new in the same program.

No, in the sense that you cannot allocate an object with malloc() and free it
using delete. Nor can you allocate with new and delete with free() or use
realloc() on an array allocated by new.

The C++ operators new and delete guarantee proper construction and destruction;
where constructors or destructors need to be invoked, they are. The C-style
functions malloc(), calloc(), free(), and realloc() don't ensure that.
Furthermore, there is no guarantee that the mechanism used by new and delete
to acquire and release raw memory is compatible with malloc() and free().
If mixing styles works on your system, you were simply "lucky" - for now.

If you feel the need for realloc() - and many do - then consider
using a standard library vector.
For example:
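A sketch:

vector<int> v(100); // start with 100 ints
// ...
v.resize(200);      // now v has 200 elements (the first 100 preserved)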

The effects of using a T* that doesn't point to a T can be disastrous.
Consequently, in C++, to get a T* from a void* you need an explicit cast.
For example, to get the undesirable effects of the program above, you have to
write:

int* pp = (int*)q;

or, using a new style cast to make the unchecked type conversion operation more visible:

int* pp = static_cast<int*>(q);

Casts are best avoided.

One of the most common uses of this unsafe conversion in C is to assign the
result of malloc() to a suitable pointer. For example:

int* p = malloc(sizeof(int));

In C++, use the typesafe new operator:

int* p = new int;

Incidentally, the new operator offers additional advantages over malloc(): it requires no cast of the result, it invokes constructors, and it reports allocation failure by throwing a bad_alloc exception rather than by returning 0.
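
The declarations discussed in the following paragraphs might look like this (a sketch):

class X {
    static const int c1 = 7;    // in-class initialization
    enum { c2 = 19 };           // the "enum trick"
    // ...
};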

At first glance, the declaration of c1 seems cleaner, but note that to use
that in-class initialization syntax, the constant must be a static const of
integral or enumeration type initialized by a constant expression.
That's quite restrictive:
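A sketch of what is and isn't allowed (under the pre-C++11 rules described here):

class Y {
    const int c1 = 7;           // error: not static
    static int c2 = 7;          // error: not const
    static const float c3 = 7;  // error: not of integral or enumeration type
    static const int c4 = 7;    // ok
};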

I tend to use the "enum trick" because it's portable and doesn't tempt me
to use non-standard extensions of the in-class initialization syntax.

So why do these inconvenient restrictions exist?
A class is typically declared in a header file and a header file is typically
included into many translation units.
However, to avoid complicated linker rules, C++ requires that every object
has a unique definition.
That rule would be broken if C++ allowed in-class definition of entities
that needed to be stored in memory as objects.
See
D&E for an explanation of C++'s design tradeoffs.

You have more flexibility if the const isn't needed for use in a constant
expression:
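A sketch:

class Z {
    static const char* p;   // initialize in the definition, in one source file
    const int i;            // initialize in a constructor
public:
    Z(int ii) : i(ii) { }
    // ...
};

const char* Z::p = "hello, world";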

If the ... part doesn't touch p then the second "delete p;" is a serious error
that a C++ implementation cannot effectively protect itself against (without
unusual precautions).
Since deleting a zero pointer is harmless by definition, a simple solution
would be for "delete p;" to do a "p=0;" after it has done whatever else
is required.
However, C++ doesn't guarantee that.
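
The situation discussed above might look like this (T is a hypothetical type):

void f()
{
    T* p = new T;
    // ...
    delete p;
    // ... (code that doesn't touch p) ...
    delete p;   // error: p deleted twice
}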

One reason is that the operand of delete need not be an lvalue. Consider:

delete p+1;
delete f(x);

Here, the implementation of delete does not have a pointer to which it can
assign zero.
These examples may be rare, but they do imply that it is not possible to
guarantee that ``any pointer to a deleted object is 0.''
A simpler way of bypassing that ``rule'' is to have two pointers to an
object:

T* p = new T;
T* q = p;
delete p;
delete q; // ouch!

C++ explicitly allows an implementation of delete to zero out an lvalue
operand, and I
had hoped that implementations would do that, but that idea doesn't
seem to have become popular with implementers.

If you consider zeroing out pointers important, consider using a destroy
function:

template<class T> inline void destroy(T*& p) { delete p; p = 0; }

Consider this yet another reason to minimize explicit use of
new and delete by relying on standard library containers, handles, etc.

Note that passing the pointer as a reference (to allow the pointer to be
zero'd out) has the added benefit of preventing destroy() from being called
for an rvalue:
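A sketch:

int* f();
int* p;
// ...
destroy(f());   // error: trying to pass an rvalue by non-const reference
destroy(p+1);   // error: trying to pass an rvalue by non-const reference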

The definition void main() { /* ... */ } is not and never has been C++, nor has it even been C.
See the ISO C++ standard 3.6.1[2] or the ISO C standard 5.1.2.2.1.
A conforming implementation accepts

int main() { /* ... */ }

and

int main(int argc, char* argv[]) { /* ... */ }

A conforming implementation may provide more versions of main(),
but they must all have return type int.
The int returned by main() is a way for a program to return a value
to "the system" that invokes it. On systems that doesn't provide such a
facility the return value is ignored, but that doesn't make "void main()"
legal C++ or legal C.
Even if your compiler accepts "void main()" avoid it, or risk being considered
ignorant by C and C++ programmers.

In C++, main() need not contain an explicit return statement. In that case, the
value returned is 0, meaning successful execution.
For example:
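A sketch:

#include <iostream>

int main()
{
    std::cout << "This program returns the integer value 0\n";
}   // no explicit return: main() returns 0, meaning success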

Note also that neither ISO C++ nor C99 allows you to leave the type out of a
declaration. That is, in contrast to C89 and ARM C++, "int" is not assumed
where a type is missing in a declaration.
Consequently:
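A sketch of what is now rejected:

#include <iostream>

main() { std::cout << "Hello, World!\n"; }  // error: no return type for main()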

There is no fundamental reason to disallow overloading of ?:.
I just didn't see the need to introduce the special case of overloading
a ternary operator.
Note that a function overloading expr1?expr2:expr3 would not be able to
guarantee that only one of expr2 and expr3 was executed.

Sizeof cannot be overloaded because built-in operations, such as incrementing
a pointer into an array, implicitly depend on it. Consider:
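A sketch (X is a hypothetical type):

X a[10];
X* p = &a[3];   // point to a[3]
p++;            // p now points to a[4];
                // the value of p must have been incremented by sizeof(X)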

Thus, sizeof(X) could not be given a new and different meaning by the
programmer without violating basic language rules.

In N::m, neither N nor m is an expression with a value; N and m are names known
to the compiler and :: performs a (compile time) scope resolution rather
than an expression evaluation. One could imagine allowing overloading of x::y
where x is an object rather than a namespace or a class, but that would - contrary to
first appearances - involve introducing new syntax (to allow expr::expr).
It is not obvious what benefits such a complication would bring.

Operator . (dot) could in principle be overloaded using the same technique as
used for ->.
However, doing so can lead to questions about whether an operation is
meant for the object overloading . or an object referred to by . For example:
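A sketch, pretending for a moment that . could be overloaded:

class Y {
public:
    void f();
    // ...
};

class X {   // assume that you can overload . (you can't; this is not real C++)
    Y* p;
    Y& operator.() { return *p; }
public:
    void f();
    // ...
};

void g(X& x)
{
    x.f();  // is that X::f() or (through X::operator.()) Y::f()?
}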

Sorry, no.
The possibility has been considered several times, but each time I/we decided
that the likely problems outweighed the likely benefits.

It's not a language-technical problem.
Even when I first considered it in 1983, I knew how it could be implemented.
However, my experience has been that when we go beyond the most trivial examples
people seem to have subtly different opinions of "the obvious" meaning of uses
of an operator. A classical example is a**b**c. Assume that ** has been made to mean
exponentiation. Now should a**b**c mean (a**b)**c or a**(b**c)?
I thought the answer was obvious and my friends agreed - and then we found that we
didn't agree on which resolution was the obvious one.
My conjecture is that such problems would lead to subtle bugs.

Both are "right" in the sense that both are valid C and C++ and both
have exactly the same meaning. As far as the language definitions and
the compilers are concerned
we could just as well say ``int*p;'' or ``int * p;''

The choice between ``int* p;'' and ``int *p;'' is not about right and
wrong,
but about style and emphasis.
C emphasized expressions; declarations were
often considered little more than a necessary evil. C++, on the other
hand, has a heavy emphasis on types.

A ``typical C programmer'' writes ``int *p;'' and explains it ``*p is
what is the int'', emphasizing syntax, and may point to
the C (and C++) declaration grammar to argue for the correctness of the
style. Indeed, the * binds to the name p in the grammar.

A ``typical C++ programmer'' writes ``int* p;'' and explains it ``p is
a pointer to an int'', emphasizing type. Indeed, the type of p is int*.
I clearly prefer that emphasis
and see it as important for using the more advanced parts of C++ well.

The critical confusion comes (only) when people try to declare several
pointers with a single declaration:

int* p, p1; // probable error: p1 is not an int*

Placing the * closer to the name does not make this kind of error
significantly less likely.

int *p, p1; // probable error?

Declaring one name per declaration minimizes the
problem - in particular when we initialize the variables. People are
far less likely to write:

int* p = &i;
int p1 = p; // error: int initialized by int*

And if they do, the compiler will complain.

Whenever something can be done in two ways, someone will be confused.
Whenever something is a matter of taste, discussions can drag on
forever. Stick to one pointer per declaration and always initialize
variables,
and the source of confusion disappears.
See
The Design and Evolution of C++
for a longer discussion of the C declaration syntax.

Such style issues are a matter of personal taste.
Often, opinions about code layout are strongly held, but
probably consistency matters more than any particular style.
Like most people, I'd have a hard time constructing a solid logical argument
for my preferences.

I personally use what is often called "K&R" style. When you add conventions
for constructs not found in C, that becomes what is sometimes called
"Stroustrup" style.
For example:
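
class C : public B {		// the opening brace of a class goes on the same line
public:
	// ...
};

void f(int* p, int max)		// the opening brace of a function goes on its own line
{
	if (p) {
		// ...
	}

	for (int i = 0; i < max; ++i) {
		// ...
	}
}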

This style conserves vertical space better than most layout styles,
and I like to
fit as much as is reasonable onto a screen. Placing the opening brace of a
function on a new line
helps me distinguish function definitions from class definitions
at a glance.

No, I don't recommend "Hungarian".
I regard "Hungarian" (embedding an abbreviated version of a type in a variable name)
as a technique that can be useful in untyped languages, but completely unsuitable
for a language that supports generic programming and object-oriented programming
- both of which emphasize selection of operations based on the types of arguments
(known to the language or to the run-time support).
In this case, "building the type of an object into names" simply complicates the code and
undermines abstraction.
To varying extents, I have similar problems with every scheme that embeds information
about language-technical details (e.g., scope, storage class, syntactic category)
into names.
I agree that in some cases, building type hints into variable names can be helpful,
but in general, and especially as software evolves, this becomes a maintenance hazard and a
serious detriment to good code. Avoid it like the plague.

So, I don't like naming a variable after its type; what do I like and recommend?
Name a variable (function, type, whatever) based on what it is or does.
Choose meaningful names; that is, choose names that will help people understand
your program. Even you will have problems understanding what your
program is supposed to do if you litter it with variables with
easy-to-type names like x1, x2, s3, and p7. Abbreviations and
acronyms can confuse people, so use them sparingly.
Consider mtbf, TLA, myw, RTFM, and NBV. They may seem obvious when you write them,
but wait a few months and even I will have forgotten at least one.

Short names, such as x and i, are meaningful when used
conventionally; that is, x should be a local variable or a parameter
and i should be a loop index.

Don't use overly long names; they are hard to type, make lines so long
that they don't fit on a screen, and are hard to read quickly. These are
probably ok:

partial_sum element_count staple_partition

These are probably too long:

the_number_of_elements remaining_free_slots_in_symbol_table

I prefer to use underscores to separate words in an
identifier (e.g., element_count) rather than alternatives,
such as elementCount and ElementCount.
Never use names with all capital
letters (e.g., BEGIN_TRANSACTION) because that's conventionally reserved for macros.
Even if you don't use macros, someone might have littered your header files with them.
Use an initial capital letter for types (e.g., Square and Graph).
The C++ language and standard library don't use capital letters, so it's
int rather than Int and string rather than String.
That way, you can recognize the standard types.

Avoid names that are easy to mistype, misread, or confuse. For example

name names nameS
foo f00
fl f1 fI fi

The characters 0, o, O, 1, l, and I are particularly prone to cause trouble.

Often, your choice of naming conventions is limited by local style rules.
Remember that maintaining a consistent style is often more important than
doing every little detail in the way you think best.

I put it before, but that's a matter of taste. "const T"
and "T const" were - and are - (both) allowed and equivalent.
For example:

const int a = 1; // ok
int const b = 2; // also ok

My guess is that using the first version will confuse fewer programmers
(``is more idiomatic'').

Why? When I invented "const" (initially named "readonly", with a
corresponding "writeonly"), I allowed it to go before or after the type
because I could do so without ambiguity. Pre-standard C and C++ imposed few
(if any) ordering rules on specifiers.

I don't remember any deep thoughts or involved discussions about
the order at the time. A few of the early users - notably me - simply
liked the look of

const int c = 10;

better than

int const c = 10;

at the time.

I may have been influenced by the fact that my earliest examples
were written using "readonly" and

readonly int c = 10;

does read better than

int readonly c = 10;

The earliest (C or C++) code using "const" appears to have been
created (by me) by a global substitution of "const" for "readonly".

I remember discussing syntax alternatives with several people -
incl. Dennis Ritchie - but I don't remember which languages I looked at then.

Note that in const pointers, "const" always comes after the "*". For example:
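
int x = 7;
int* const p1 = &x;	// const pointer to (non-const) int
const int* p2 = &x;	// pointer to const int
int const* p3 = &x;	// also pointer to const int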

Casts are generally best avoided.
With the exception of dynamic_cast, their use implies the possibility of a type error or
the truncation of a numeric value. Even an innocent-looking cast can become a serious
problem if, during
development or maintenance, one of the types involved is changed.
For example, what does this mean?:

x = (T)y;

We don't know. It depends on the type T and the types of x and y.
T could be the name of a class, a typedef, or maybe a template parameter.
Maybe x and y are scalar variables and (T) represents a value conversion.
Maybe x is of a class derived from y's class and (T) is a downcast.
Maybe x and y are unrelated pointer types.
Because the C-style cast (T) can be used to express many logically different operations,
the compiler has only the barest chance to catch misuses.
For the same reason, a programmer may not know exactly what a cast does. This is sometimes
considered an advantage by novice programmers and is a source of subtle errors when the
novice guessed wrong.

The "new-style casts" were introduced to give programmers a chance to state their intentions
more clearly and for the compiler to catch more errors. For example:
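
int a = 7;
double* p1 = (double*) &a;			// ok as far as the compiler is concerned (but a is not a double)
double* p2 = static_cast<double*>(&a);		// error: caught at compile time
double* p3 = reinterpret_cast<double*>(&a);	// ok: I really mean it

const int c = 7;
int* q1 = (int*) &c;			// ok as far as the compiler is concerned (but writing through q1 is undefined)
int* q2 = static_cast<int*>(&c);	// error: static_cast doesn't cast away const
int* q3 = const_cast<int*>(&c);		// ok: I really mean it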

The idea is that conversions allowed by static_cast are somewhat less likely to lead to errors
than those that require reinterpret_cast.
In principle, it is possible to use the result of a static_cast without casting it back to its
original type, whereas you should always cast the result of a reinterpret_cast back to its original
type before using it to ensure portability.

A secondary reason for introducing the new-style cast was that C-style casts are very hard
to spot in a program. For example, you can't conveniently search for casts using an ordinary
editor or word processor. This near-invisibility of C-style casts is especially unfortunate
because they are so potentially damaging. An ugly operation should have an ugly syntactic form.
That observation was part of the reason for choosing the syntax for the new-style casts.
A further reason was for the new-style casts to match the template notation, so that programmers
can write their own casts, especially run-time checked casts.

Maybe, because static_cast is so ugly and so relatively hard to type, you're more likely
to think twice before using one? That would be good, because casts really are mostly avoidable
in modern C++.

Macros do not obey the C++ scope and type rules.
This is often the cause of subtle and not-so-subtle problems.
Consequently, C++ provides alternatives that fit better with the rest of C++,
such as inline functions, templates, and namespaces.

Consider:

#include "someheader.h"
struct S {
	int alpha;
	int beta;
};

If someone (unwisely) has written a macro called "alpha" or a macro called
"beta", this may not compile or (worse) compile into something unexpected.
For example, "someheader.h" may contain:

#define alpha 'a'
#define beta b[2]

Conventions such as having macros (and only macros) in ALLCAPS help,
but there is no language-level protection against macros.
For example, the fact
that the member names were in the scope of the struct didn't help:
macros operate on
a program as a stream of characters before the compiler proper sees it.
This, incidentally, is a major reason why C and C++ program development
environments and tools have been unsophisticated: the human and the compiler
see different things.

Unfortunately, you cannot assume that other programmers consistently
avoid what you
consider "really stupid". For example, someone recently reported to me that
they had encountered a macro containing a goto. I have seen that also and
heard arguments that might - in a weak moment - appear to make sense.
For example:
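
// a sketch of the kind of thing I mean; the names are invented
void get_ready();
void do_something();
void cleanup();

#define prefix get_ready(); int ret__
#define Return(i) ret__ = i; do_something(); goto exit
#define suffix exit: cleanup(); return ret__

int f()
{
	prefix;
	// ...
	Return(10);
	// ...
	suffix;
}

Imagine being presented with that as a maintenance programmer; "hiding" the macros in a header file - as is not uncommon - makes this kind of "magic" harder to spot.

A more common subtle problem is that a function-style macro doesn't obey the rules of function argument passing. For example:

#define square(x) (x*x)	/* danger */

void g(double d, int i)
{
	square(d+1);	// means (d+1*d+1); that is, (d+d+1)
	square(i++);	// means (i++*i++); that is, i is incremented twice
}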

The "d+1" problem is solved by adding parentheses in the "call" or in the
macro definition:

#define square(x) ((x)*(x)) /* better */

However, the problem with the (presumably unintended) double evaluation of i++
remains.

And yes, I do know that there are things known as macros that don't suffer
from the problems of C/C++ preprocessor macros.
However, I have no ambitions for improving C++ macros.
Instead, I recommend the use of facilities from the C++ language proper,
such as inline functions, templates, constructors (for initialization),
destructors (for cleanup), exceptions (for exiting contexts), etc.

"char" is usually pronounced "tchar", not "kar". This may seem illogical because "character"
is pronounced "ka-rak-ter", but nobody ever accused English pronunciation (not "pronounciation" :-)
and spelling of
being logical.