Const_Cast: An Offspring from the Dark Side of C++

Karsten Weihe is a faculty member of the department of computer and information science at the University of Konstanz in Germany and can be reached at karsten.weihe@uni-konstanz.de.

Const-correctness is a nice feature of C++. When applied with care, it can make your code much more reliable. However, there is an opponent built into C++ whose sole purpose is to break const-correctness: the keyword const_cast. In this feature, I will discuss two points:

when to use const_cast to break const-correctness deliberately

andeven more importantwhen not.

Remember: In C++, you can declare a parameter of a function to be constant inside that function. For normal value parameters that does not make sense. However, it often makes sense for parameters of pointer or reference types. Consider this:
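The original listing is not reproduced here; reconstructed from the discussion that follows, the six declarations are presumably along these lines (the bodies are illustrative assumptions, added only so the contrast is visible):

```cpp
void f1 (int n)           { n++; }        // pass by value: caller's variable unaffected
void f2 (int& n)          { n++; }        // pass by reference: may change caller's variable
void f3 (int* nptr)       { (*nptr)++; }  // pass via pointer: may change caller's variable
void f4 (const int n)     { /* n is read-only inside f4 */ }
void f5 (const int& n)    { /* compiler rejects any change to n */ }
void f6 (const int* nptr) { /* compiler rejects any change to *nptr */ }
```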

The calls to f2 and f3 may potentially change the value of m, because m is passed by reference to f2 and via a pointer to f3. The identifier n inside f2 and the pointer expression *nptr inside f3 refer to the same piece of storage as m does outside. Hence, whenever the value of n in f2 or the value of *nptr in f3 is changed, the value of m changes correspondingly.

On the other hand, we can blindly assume that the calls to f1 and f4 do not affect m at all. In fact, m is passed by value in either case, so the formal parameter n is a copy of the actual parameter m, and no operation inside f1 or f4 has access to m. For a user of f4, the const does not make a difference. It merely ensures that n is not changed inside f4. However, the user is interested in m, not in n.

For f5 and f6, we know that m cannot be modified, although m is passed by reference or pointer: parameter n of f5 is declared to be constant, so the compiler will refuse any attempt to change n inside f5. Analogously, the position of the const in the parameter list of f6 means that *nptr is declared to be constant. (To declare the pointer nptr itself as a constant, the const must appear after the * in the declaration of f6.)
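To illustrate the parenthetical remark, here is the contrast between the two placements of const; f7 is not part of the original listing and is added here only for comparison:

```cpp
void f6 (const int* nptr)  { /* *nptr is read-only; nptr itself may be reassigned */ }
void f7 (int* const nptr)  { (*nptr)++; }  // nptr is fixed, but *nptr may be changed
```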

This is what const-correctness means: An object may be declared as being constant inside some component, and the compiler watches "Argus-eyed" that the implementation of this component does not violate its own declaration. Applying const-correctness rigorously means to declare every parameter of every function/method const, unless it is indeed changed by this function/method (or by any of the functions/methods invoked inside). By the way, you should not overlook the possibility of const-declaring a method as a whole:
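The listing for a const-declared method is missing here; a minimal sketch, using a hypothetical class, might look like this:

```cpp
class Account {
    int balance_;
public:
    Account (int initial) : balance_(initial) {}
    void deposit (int amount) { balance_ += amount; }  // non-const: modifies the object
    int  balance () const     { return balance_; }     // const as a whole: the compiler
                                                       // rejects any change to a member
};
```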

So, as we did for f1 and f4, we may blindly assume that m is not changed in f5 or f6, right? Unfortunately, this is by no means true. Consider this hostile implementation of f5:

void f5 (const int& n)
{
int& n_ref = const_cast<int&>(n);
n_ref++;
}

Now guess what the following main routine prints:

int main ()
{
int m = 2;
f5 (m);
cout << m;
}

You are absolutely right: it prints a 3, not a 2! Why? Because f5 breaks const-correctness: n_ref refers to the same piece of storage as n, but it is not declared to be constant. The const_cast casts constness away.

It looks like the situation with f5 and f6 is as bad as with f2 and f3: To make sure that m is not changed by a call to either function, we have to inspect the code, or simply trust the person who implemented f5 or f6. So if const-correctness can always be broken, why should we use it? A const-declaration of a function parameter may lull us into a false sense of security. If we dropped const-correctness, every look at a function would remind us of the sad fact that we cannot assume anything. So isn't dropping const-correctness the better way to go?

The answer isas usual"it depends." Perhaps you are one of these unlucky developers whose colleagues write terrible code, code that you never ever want to see, because you know that you would not be able to sleep well anymore. In this case you should ignore every occurrence of the keyword const in the declarations of the functions those bad guys contribute; otherwise, you'll be playing with fire. If you need to use such an untrustworthy function f but you definitely want to avoid having your parameters changed by f, you have to pay a price, with performance as the currency: you have to copy each parameter and call f with these copies. Clearly you need not worry about what f does with your copies, because your own code proceeds with the original objects.

However, if you feel that your colleagues are like you (sensible, responsible, circumspect, etc.), then const-correctness will do a great job for your team:

The assumption that a value is constant throughout certain operations greatly helps you verify the correctness of your program by code review.

Many unintended value-changing operations are caught by the compiler.

Even more important: value-changing operations at unexpected places are often hints to hidden design errors.

Last but not least: every const is a piece of documentation, and a reliable piece of documentation, because its validity is guaranteed by the compiler (well...).

Do you think achieving const-correctness is an easy task? Just declare every parameter of a function or method const unless you definitely want to change its value inside? Maybe you think it is even simpler than that: you do not have to think about where to place a const and where not. Just declare everything const, and whenever the compiler complains about a const, remove it. That's it!

OK, so much for theory. As usual, practice is very different. Let's see what happens in the extreme case, when you declare everything const and ask the compiler which of these consts are wrong. In the first round, the compiler complains about a few occurrences where you actually change the values of some const-declared parameters. You remove the consts and run the compiler again, and the error listing on the screen is gigantic. This is not surprising. Many functions/methods call the functions/methods from which you removed a few consts. The compiler now complains about the consts in all of these functions/methods.

Next round: You remove the consts in all of these functions as well, then you run the compiler again and start praying... But the compiler is merciless: the error listing is very large again. Even worse, like a virus, the errors quickly infect file after file, and after a few compiler runs, all files in your project are infected by this "virus."

At this point, you have three choices:

You give up, drop the idea of const-correctness, and remove all consts.

You apply const_cast to interrupt the distribution of the "virus."

You regard this virus as a warning telling you that the design of your program is not as clean as you thought. Consequently, you initiate a major redesign effort to get things right once and for all.

Clearly, if your program is supposed to do an important task for a significant period of time, there is only one choice: #3. This choice is painful and expensive, but in the long run it will pay off. Fortunately, the "virus" will help you find the right redesign steps: look at its paths through the program!

Choice #2 is probably never a good idea. If you apply const_cast to stop the "virus," your program will be left in a highly inconsistent state. It will be hard for you to decide which promises of constness are still kept and which are not. Hence, it is probably always better to prefer choice #1 over #2, which simply means that nothing is promised anymore. There are even situations in which choice #1 is preferable to choice #3:

The time until the deadline of your project is desperately short.

You're only building a quick prototype.

Clearly, in the first situation there is no alternative. However, in the second situation, you should think twice. Do you really believe that your sloppy program will have a short life time? Remember that this attitude was the source of the Y2K problem!

So far, we have only seen situations in which const_cast is a very bad idea. So, what is const_cast good for? Is it good for anything? The answer is two-fold: Applying const_cast is never good; but sometimes it is necessary. Consider this function:

bool find (const int* A, int size, int value);

A is assumed to point to an int-array, whose size is given by the second parameter. Then find should return true if and only if at least one of A[0]...A[size-1] equals value. The state of array A immediately after the call to find should exactly match its state immediately before the call. Thus, it is reasonable to const-declare A. This is no problem if find is implemented in the naive way:
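The naive implementation is not reproduced in the text; it is presumably just a linear scan along these lines:

```cpp
bool find (const int* A, int size, int value)
{
    for (int i = 0; i < size; i++)   // the test i < size runs on every iteration
        if (A[i] == value)
            return true;
    return false;
}
```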

However, the performance of find can be significantly improved by applying the following trick: The last component of A is temporarily replaced by value, which removes the need for the test i<size. Here is an implementation:
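The sentinel implementation is also missing from the text; a sketch of the trick described might look like this. Note that the const_cast is legal at runtime only because the underlying array object is assumed not to be declared const itself; writing through the cast to a genuinely const object would be undefined behavior:

```cpp
bool find (const int* A, int size, int value)
{
    if (size <= 0) return false;
    int* B = const_cast<int*>(A);  // cast constness away -- locally only
    int last = B[size-1];          // save the last component ...
    B[size-1] = value;             // ... and install the sentinel
    int i = 0;
    while (B[i] != value)          // no i < size test: the sentinel stops the loop
        i++;
    B[size-1] = last;              // restore A before returning
    return i < size-1 || last == value;
}
```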

This application of const_cast does not cause any problems, since all modifications of A are undone at the end of the function. In other words, the const in the declaration of parameter A keeps its promise, eventually.

Since the const does not cheat, we do not declare A non-const, even though we manipulate A inside find. Otherwise, we could not use find in a context in which const-correctness is applied rigorously and thus A is constant (or we would have to spoil each such context by inserting a const_cast). And this would be a pity, wouldn't it?

Caveat: You should think twice before applying this kind of trick inside templates:

template <class T> bool find (const T* A, int size, T value);

If you do not have full control over all possible types T that may instantiate find, there is a chance that the two calls to T's assignment operator (to copy A[size-1] back and forth) may yield unexpected, strange side effects.

Now we have seen that const_cast is useful if we want to hack a function for better performance but do not want to give up const-correctness outside the function that we hacked. Is this the only rationale for const_cast?

There is yet another scenario, which might be even more common. Suppose the above function find was not implemented by you, but found in a third-party library. Further suppose that the designer of this function simply forgot to declare argument A constant (unfortunately, this assumption is all too realistic). You want to apply find in your program, but of course your program is fully const-correct. In particular, A is declared to be constant at the point where you want to apply find, because this point is not supposed to change A. If you definitely do not want to drop const-correctness from your own code but you do want to use find, there is only one way out: encapsulate find in a self-defined function my_find, which "fakes" const-correctness.
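A sketch of such a wrapper follows. The body of find here is only a stand-in for the third-party function; the point is the interface of my_find:

```cpp
// Stand-in for the third-party find whose designer forgot the const
bool find (int* A, int size, int value)
{
    for (int i = 0; i < size; i++)
        if (A[i] == value)
            return true;
    return false;
}

// my_find "fakes" const-correctness at the interface. The cast is
// legitimate only because we trust (or have verified) that find
// does not actually modify A; otherwise the cast would be a lie.
inline bool my_find (const int* A, int size, int value)
{
    return find (const_cast<int*>(A), size, value);
}
```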