Making an Exception

An interesting debate about C++ exceptions took place a few weeks ago on the C++ MVPs discussion list. The trigger was something as innocent and specific as "would you just throw std::runtime_error with some error message string, or would you define a new exception hierarchy? What's your opinion on AtlThrow/AtlThrowLastWin32?" But, as typically happens, replies motivated new questions and justifications, moving the point around and providing yet another example of how impossible it is in this sour software industry to find that "holy grail", the one-size-fits-all answer to any possible question (current or future); in this case, about exception generation and its later handling. Despite that, the arguments were so enriching that I decided to compile some of them and share them here with the broader community.

A Brief Intro

Best practices for generating and handling exceptions are a matter of debate in any programming language that supports them, although in the C++ case the complexity is arguably aggravated by the fact that exceptions were not present in its first versions, so alternative approaches emerged in the meantime (some may remember the old, macro-based MFC exception model). Furthermore, C++ is probably the language most widely used across developer generations: it's known and used by developers who predate Java and .NET, while still being among the most widely taught languages in academia, ahead of Java and C#. Add the fact that C++ remains among the languages with the most job openings, and the result is that large C++ projects are built by developers with very different backgrounds and approaches, both to exception handling and to development as a whole.

Exception hierarchy

The original question mentioned std::runtime_error as a potential root of a custom hierarchy, but somebody preferred to use std::exception instead, considering that the standard derivatives such as std::runtime_error have been too abused to form an effective hierarchy. If this practice is applied project-wide, it's also an argument against using the catch-all clause, catch (...), since that one intercepts non-C++ exceptions like Windows SEH as well (unless that was exactly what you wanted). [UPDATE: Windows SEH exceptions are not caught when compiled with the /EHs or /EHsc options instead of /EHa.]

Checking my copy of Stroustrup's book on C++, I found no explicit advice regarding root candidates for a custom exception hierarchy; many of his examples even throw plain objects. No advice in the opposite direction either, though. In his 14th chapter, devoted to exceptions, he introduces the standard exception hierarchy as a family of exceptions thrown by standard library components, noting that classes you or other developers on your project build may extend this family. But what Stroustrup avoided doing, the influential Boost group actually did in a brief article on this matter: there, they consider std::exception a reasonable base class for your own hierarchy and encourage you to seriously consider it.
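To make the idea concrete, here is a minimal sketch of such a hierarchy, rooted in std::exception through std::runtime_error. The class names (OrderError, OrderNotFound) are hypothetical, invented for illustration; they did not come up in the discussion.

```cpp
#include <stdexcept>
#include <string>

// Hypothetical domain hierarchy rooted in std::exception (via std::runtime_error).
class OrderError : public std::runtime_error {
public:
    explicit OrderError(const std::string& msg) : std::runtime_error(msg) {}
};

// A more specific failure that still lands in any catch (const std::exception&).
class OrderNotFound : public OrderError {
public:
    explicit OrderNotFound(int id)
        : OrderError("Order " + std::to_string(id) + " not found") {}
};
```

A project-wide handler can then catch (const std::exception& e) and log e.what(), without ever resorting to catch (...).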

Another MVP mentioned that he works mostly on MFC applications and that, in such a specific context, basing your exceptions on MFC's CException is more convenient than the standard alternative, because the MFC message dispatcher has a try/catch whose scope is limited to its own exception hierarchy. In his own words, "there is one school of thought that says if you have managed to let the exception get back to the MFC dispatcher, your code is already erroneous." He affirmed a strong sympathy with that position.

Most MVPs agreed that, despite not being illegal C++, throwing primitive types like int or long, or Windows-based ones like HRESULT, is a coding horror: failing to catch those in the proper place will crash the application and leave a hard post-mortem investigation to determine where they originated. One went further, stating that "for me it should be a reason for termination of employment (at least the second time it is done)."
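A tiny sketch of the difference. Both functions and the message text are illustrative only:

```cpp
#include <stdexcept>

// Throwing a bare int: the catch site learns nothing about what happened or where.
void risky_legacy() {
    throw 42;  // which 42? an errno? an HRESULT? a line number?
}

// Throwing a typed exception: self-describing at any catch site in the program.
void risky_better() {
    throw std::runtime_error("config parse failed at line 42");
}
```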

Yet there was common agreement that, while an exception hierarchy helps avoid the problem just mentioned, it may still occur when exception messages are not explanatory enough (e.g. "Record not found" instead of something like "<Entity> <ID> not found").

Bulletproofing

As a side comment, someone criticized the MFC design because the CFileException::GetErrorMessage() method takes a pointer-based buffer as an argument instead of an std::string reference, as the current C++ school recommends (implicitly, the criticism reaches the design of std::exception as well, since its what() method returns an old, C-style char*). But somebody else recalled a piece of wisdom delivered ever since the exception handling mechanism was added to C++: exception operations should not themselves throw exceptions. In that sense, the standard string implementation could do exactly that (e.g. if it can't allocate memory), eventually leading to a call to std::terminate().

It sounds reasonable, but the response of the original thinker was original as well: if your application is in such a state that there's no memory left for a simple exception message, incurring an unconditional termination wouldn't be that terrible after all.

Not that crazy: in Java there's a distinction between Errors and Exceptions. The latter are what we all know and are discussing here, but Errors belong to a category considered so fatal that attempting to catch them makes no sense. Thus, java.lang.OutOfMemoryError is an Error, not an Exception.

In any case, this MVP clarified that he expects such an abnormal condition (failure to create an std::string because there's not enough memory) to occur with low frequency, although there was no answer when a third participant asked, "even in mobile apps?"

Exception Declarations: a Comparison with Java

As the debate went on, yet more hallways were opened every time somebody needed to justify an opinion or provide examples. Thus, one participant dared to defend dynamic exception declarations (that is, a specification of the possible exceptions that a function or method might throw), putting, in my opinion erroneously, the Java exception declaration schema forward as a model to imitate. I say erroneously for a couple of reasons:

In Java, declaring those so-called checked exceptions has a different purpose than in C++: to force the caller of the declared method to either catch those exception types or declare them itself as potentially thrown. If it does neither, the caller won't compile. In ISO C++, instead, the purpose of this dynamic declaration is to confirm, at runtime (not during compilation), that any thrown exception belongs to one of those types (or their descendants). Otherwise, std::unexpected() is called (although, in Visual C++, this last behavior is not implemented as the standard specifies). Exception declarations in C++ don't impose conditions on the caller, unlike the Java case mentioned above.

Despite being different species, there's today an extended belief in both languages that declaring exceptions is a practice to be avoided rather than recommended. In the Java case, the reason is mostly the coupling that results. In the C++ case, it's the maintenance pain for the function whose exceptions are declared: the declaration must be updated whenever new exceptions start being thrown by the functions or methods it calls.

I'm personally with the current stream of thought, but I still found interesting the justification offered by the defender of exception declarations: in his vision, by declaring you keep control of what could fail where, instead of letting anything fail anywhere. Without that control, the problem of finding where an exception was thrown becomes exponential. In that sense, he prefers to pay the penalty of keeping exception declarations updated.

I don't believe I'd buy the concept, although that approach could have a sweet spot when you have full control of the whole code base of your application (which could well be the case for this MVP). If you depend on components delivered by other teams (or third parties), keeping such a degree of synchronism between a caller and the methods or functions it calls may soon become a heavy burden.

This in fact leads to the final aspect I'll cover in this post, related to exception handling.

Exception Handling

So far we have been talking about the moment an exception is thrown. But once that happens, who should take care of it? The old notion that the immediate caller should be directly involved in its capture is fortunately gone, and the sounder idea of dealing with the exception where, and only where, it's possible to reestablish the application state (otherwise let it fly and keep unwinding) is now widespread.

There was also a common understanding that in an n-layer (eventually n-tier) application, if nothing else can be done at a given layer, some action should still be taken before the exception leaves it, as an attempt to reestablish the application state. This action could be partial (like logging the original failure for diagnostic tools) while masking or transforming the exception into something more meaningful to be thrown back to the immediate outer layer. There are special cases where throwing anything back is not possible because the "outer world" is a tier where exceptions aren't supported. Someone offered as an example the case when we are about to cross the boundary of a COM method; there, a possible solution consists in mapping the exception into some form of HRESULT.
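A sketch of that boundary translation. The HRESULT type and constants are defined locally here so the fragment stands alone (in real COM code they come from the Windows headers), and DoWork is a hypothetical method body, not actual COM plumbing:

```cpp
#include <exception>
#include <new>

// Local stand-ins for the Windows types/constants, so the sketch is self-contained.
typedef long HRESULT;
const HRESULT S_OK          = 0;
const HRESULT E_OUTOFMEMORY = static_cast<HRESULT>(0x8007000EUL);
const HRESULT E_FAIL        = static_cast<HRESULT>(0x80004005UL);

// Hypothetical COM-style method: no C++ exception may cross the boundary,
// so everything is translated into an HRESULT before returning.
HRESULT DoWork() {
    try {
        // ... real work that may throw ...
        return S_OK;
    } catch (const std::bad_alloc&) {
        return E_OUTOFMEMORY;   // out-of-memory gets its dedicated code
    } catch (const std::exception&) {
        return E_FAIL;          // anything else collapses to a generic failure
    } catch (...) {
        return E_FAIL;          // and nothing, ever, escapes the boundary
    }
}
```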

At the resource level, someone reminded us, cleanup shouldn't depend on exception handling: by just applying the RAII technique (Resource Acquisition Is Initialization), compensating actions happen in the resource destructor as soon as the local resource goes out of scope.

Exception generation and handling are topics that fill whole chapters in books, so even though this post (based on a spontaneous discussion initiated by a single question) offers some insights, you may have lots of other recommendations on the matter. Would you like to share them with the rest of us?

Appendix: Exceptions in C++0x

As a side comment in the discussion, it was mentioned that dynamic exception specifications are to be deprecated in the upcoming C++0x standard. Furthermore, a noexcept declaration is to mean something somewhat similar to today's throw() declaration, although not fully equivalent (its scope is still being discussed, so I'd rather not write anything here that I'll later have to change). There was an interesting article about it published a few months ago in three parts (I, II and III).

Trivia: Why do try/catch blocks in C++ lack finally?

If you are familiar with Java or .NET, you may have already wondered why there's no finally block in C++. Can you guess why C++ doesn't need it, unlike managed languages?

Join the conversation

If you want to throw exceptions across DLL boundaries (which, admittedly, may be a bad idea), then you should -not- root your exceptions in a type provided by the standard library. That can cause ODR (One definition rule) issues.

For example, suppose MyException inherits from std::exception as found in Visual Studio 2010 (msvc10!std::exception). The client of my DLL is built with Visual Studio 2008 and tries to catch(std::exception &) (msvc9!std::exception). The client code won't catch MyException, because msvc10!std::exception and msvc9!std::exception are unrelated types.

Note that Apache's Xerces C++ XML Parser goes out of its way to provide its own exception hierarchy, presumably for this kind of reason.

'Because C++ supports an alternative that is almost always better: The "resource acquisition is initialization" technique (TC++PL3 section 14.4). The basic idea is to represent a resource by a local object, so that the local object's destructor will release the resource. That way, the programmer cannot forget to release the resource.'

I guess dealing with a compiler all day leaves you either pedantic about good grammar or dependent on the compiler to bring syntax errors to your attention. Either way, "projects meet together developers" is incorrect.

Thanks again, @Ben!! No, the fact is that I'm not a native English speaker, and despite having improved considerably compared with the Tarzanesque English I spoke when I first landed in the US, I still must polish my English, especially my grammar. No pedantry here.

What would have been a better way to express that? Please help me raise my bar.

@MF: unfortunately there was no prize for scoring, but yes!! RAII was the reason. Just to relaunch the challenge: why did managed languages implement the finally block instead of deferring to RAII?

Well, some managed languages implement finally since they do not implement destructors similar to C++. Garbage collection deals with releasing memory lazily, but to release other resources deterministically, the finally clause can be used.

Note that this isn't necessarily a managed vs. unmanaged issue: VC supports the __try/__finally keyword extensions in native code to support deterministic cleanup in the face of structured exceptions (which are different from C++ exceptions) for C and C++.

RAII is great and all but let's be realistic: Lots of APIs and frameworks we use in C++ do not support it. For example, all of Win32.

Should we be forced to wrap every single thing we use, or should the language provide the semantics we need to deal with those non-RAII things when it doesn't make sense to wrap them?

I do create RAII wrappers for things I use a lot but having to create them for everything in order to get exception safety is ridiculous.

IMO, "finally" is missing from C++ and there's no excuse for it that doesn't require pretending C++ is something it isn't. For a language like C++ that tries to handle every situation/domain and coding style it's a funny place to take a stand about how things must be done!

Saying that, I've never understood why everyone agrees that "goto" is bad but exceptions are good in C++; they both result in unpredictable control flow. Not that exceptions are always bad — I've worked on C++ projects where everything was RAII and exceptions worked great; ditto when you can depend on a garbage collector to tidy everything up — but I find it strange that the same people can be religiously against goto yet in favour of exceptions.

RAII is great, but it cannot always be used as a replacement for finally, because it is not always feasible to use local scope, e.g. if an object's data member must be cleaned up when an exception is thrown. The object's data member is not RAII; it is a member variable, not a local variable.

For example, without drastically rearchitecting how buf is used, how do you use RAII to get rid of finally here?

1) Throwing exceptions was never intended as a replacement for other error handling mechanisms. If a function can return a variety of different values, throwing exceptions that represent them all is cumbersome. You either a) embed a code or string inside the exception, or b) create a hierarchy to handle the different cases. The former is dangerous for strings (since std::string may throw std::bad_alloc) and redundant for codes (since returning them instead would have worked). The latter makes catching a mess that surpasses regular return value handling.

2) I find that using exceptions to distinguish between variants of the same error results in bad code at either the source of the problem or the handler, and sometimes both. Exceptions should represent general issues within a module (such as a network_exception) that can be dealt with in a similar fashion.

3) Embedding strings in exceptions is a problem with localization. Exceptions should not be used to convey meaningful information to users. If you need additional information at the source, log what happened. If you need better error messages to the user, use return values.

4) Deriving all exceptions from a common base class (such as std::exception) looks tempting, but does not solve any real problems. It would be unusual to handle both a network_exception and a file_not_found_exception the same way. In this case, whether exceptions are derived from std::exception does not matter: they need to be handled differently. If a handler needs to catch all exceptions, that's what catch(…) is for.

5) Distinguishing between SEH and C++ exceptions is, IMO, not important. In my experience, code that deals with a catch(…) is a last resort, such as a thread function, to avoid leaking exceptions to a module that does not expect them. In the catch(…) I write, I usually swallow the exception or throw it again.

6) I find that exceptions work well for _exceptional_ cases and not with regular functions that may return different values. As an example, have your socket::recv() return would_block or okay, but have it throw when the socket is down. This way, all socket calls can be wrapped in a try/catch to handle connection errors, but individual return values can still be handled.
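A sketch of that split, with an entirely hypothetical Socket class standing in for a real wrapper (the names RecvResult and connection_error are invented for illustration):

```cpp
#include <stdexcept>

// Routine outcomes are plain return values...
enum RecvResult { recv_ok, recv_would_block };

// ...while a dead connection is genuinely exceptional.
class connection_error : public std::runtime_error {
public:
    connection_error() : std::runtime_error("connection lost") {}
};

class Socket {
    bool down_;
public:
    explicit Socket(bool down) : down_(down) {}
    RecvResult recv() {
        if (down_) throw connection_error();  // handled once, around the whole session
        return recv_would_block;              // handled locally, call by call
    }
};
```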

7) As for the finally block, I never need it in C++. Wrapping simple resources is very easy with general purpose smart pointers such as boost::shared_ptr by giving them a specific deallocation function.

> catch ellipsis usage –catch (…)- as this last also intercepts non-C++ exceptions like Windows SEH

This happens when the compiler option /EHa is used. It does NOT happen when the compiler options /EHs or /EHsc are used.

I strongly recommend AGAINST using the /EHa compiler option. Either /EHs or /EHsc should ALWAYS be used, with /EHsc being preferable (it's faster because it assumes that extern "C" functions won't emit exceptions – while technically permitted by the Standard, sane code should never attempt to do such a thing, so giving up that ability is worth the performance gain).

> Furthermore, a noexcept declaration is to mean something similar to the throw() declaration today,

> but the difference is that noexcept is intended to be checked during compilation rather than just runtime.

That's not exactly correct. (This part of the Working Paper is still in flux, and I don't have a noexcept-aware compiler yet, but what follows is my current understanding.) noexcept does NOT provide compile-time enforcement, in the sense of triggering a compiler error when a noexcept-marked function calls a plain function. It provides runtime enforcement, with immediate termination, if a noexcept-marked function emits an exception. (Apparently, terminating immediately instead of going through std::unexpected() permits an efficient implementation, compared to the Standard semantics of the now-deprecated dynamic exception specification throw (), but this is deep magic to me.) It is also possible to query at compile-time whether an expression (including things like move constructors and move assignment operators) is entirely noexcept-marked. This is what solves the nasty problem of how to deal with throwing move constructors; they can now be detected, and moves can be downgraded to copies when necessary (it isn't often) to preserve correctness.
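As a concrete illustration of that compile-time query, here is a sketch using the type traits that eventually shipped in C++11 (not the draft wording under discussion above); SafeMove and RiskyMove are invented names:

```cpp
#include <type_traits>

struct SafeMove {
    SafeMove() {}
    SafeMove(SafeMove&&) noexcept {}  // promises not to throw: safe to move
};

struct RiskyMove {
    RiskyMove() {}
    RiskyMove(RiskyMove&&) {}         // no noexcept promise: fails the query
};

// The query itself: containers use exactly this kind of check to decide
// whether a reallocation may move elements or must fall back to copying.
static_assert(std::is_nothrow_move_constructible<SafeMove>::value,
              "SafeMove can be moved without risking an exception");
static_assert(!std::is_nothrow_move_constructible<RiskyMove>::value,
              "RiskyMove cannot be assumed safe to move");
```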

[Leo Davidson]

> Should we be forced to wrap every single thing we use

Yes. Yes, you should.

[David Ching]

> how do you use RAII to get rid of finally here?

Use vector (or string) instead of new[]/delete[]. Use shared_ptr or unique_ptr instead of new/delete. Parts 1 and 3 of my Video Introduction to the STL cover this.

If you don't like the STL, you can use other classes. The point, which is very deep but also very simple, is that resources should be managed by objects with destructors. Plain old X * (or FILE *, etc.) is unacceptable.

This key insight cost me a year and a half of my life to learn. I give it to you for free.

Here's an RAII pattern that you may find useful. I have little helper classes like this sprinkled in a lot of my code:

struct AutoSocket {
    SOCKET s;
    AutoSocket(SOCKET s_) : s(s_) {}
    ~AutoSocket() { closesocket(s); }
};

The data member is public to keep the boilerplate code to a minimum. I don't do anything to protect against copying or assignment, but normally these helper classes are shoved into a cpp file and not widely used.

"For example, without drastically rearchitecting how buf is used, how do you use RAII to get rid of finally here?"

RAII is infectious. If you are going to use RAII, *all* resources must be encapsulated in RAII objects. That means no naked pointers that are dynamically newed, whether local or as class members. That buffer should be a std::vector or a std::string, and code using the buffer should be changed accordingly.

@David Ching,

that's because you have written what is really a C example, not C++ :).

That is also the reason for the bug in your code: it should have been 'delete[] buf' and not 'delete buf'. operator delete() is not the same as the function free().

Since your buf is not really a member (except syntactically; semantically, it is a local variable: you assign it at the beginning of a function and destroy it at the end), you should implement it as a local variable and change bar2 to take it as a parameter. I understand that this is just a contrived example, but it is not consistent at all, and is actually going out of its way to show a scenario which is not realistic.

More important is that you can't expect RAII to fix what is primarily bad code and bad design in many ways. If bar2() really throws, buf will point to deallocated memory. Even if you take care to set buf to NULL after deleting it, every call to bar2() (e.g. from the foo() method) has to make sure that buf is properly initialized. Of course, you must make sure that foo() and any other method which uses buf implements the same try/catch/finally junk.

Now, the answer is to use either std::string or something like boost::shared_array or boost::scoped_array. If you can't afford to modify your code to 'if (strcmp(buf.get(), "xxx") == 0)' you can do this:

template<class T>
class SafeArray : public boost::scoped_array<T> {
public:
    explicit SafeArray(T* buf) : boost::scoped_array<T>(buf) {}
    SafeArray() : boost::scoped_array<T>() {}
    SafeArray& operator=(T* buf) { this->reset(buf); return *this; }
    operator T*() { return this->get(); }
};

and use it like this:

SafeArray<char> buf;
buf = new char[20];
if (strcmp(buf, "aa")) …

Note, however, that there is a very good reason why Boost and most other libraries forbid implicit conversions. If you want to write C, don't bother with the C++ overhead.

This is where generic programming & design patterns come in. Using Smart Pointers fixes your issue and requires no wrapping whatsoever. For objects on the stack, you can still use smart pointer semantics (through references) even though it wouldn't be a pointer.

The best reason I know not to allocate memory when throwing exceptions is to avoid a crash if the heap is corrupt, leading to a nasty debugging situation where you can't get information about what went wrong. The same thing goes for freeing memory. It also applies to operations that you might undertake to document what went wrong – for example dumping a file of information. All should be done without involving the heap.

Does anybody remember the trick with exceptions vs. error codes? I think it's a great example of how things should be written in "modern C++ style": use error codes (result codes) for operations whose results routinely vary (e.g. boost::system::error_code) and write exception-based wrappers for those users who don't need error codes. It's very simple (and mostly reasonable for libraries).

The problem I've had with the STL exceptions is that they store the message as a byte (non-Unicode) string (or at least, std::exception::what returns a byte string). So I create my own base exception class:

"…with /EHsc being preferable (it's faster because it assumes that extern "C" functions won't emit exceptions – while technically permitted by the Standard, sane code should never attempt to do such a thing…"

While it would certainly be preferable to avoid this, it sometimes happens. I'm maintaining a large (several million LOC), old (25+ years) application, written in C and partially converted to C++, where the call stack regularly passes back and forth between the two. It just wouldn't be practical to try to block exceptions from crossing the language boundary. We just spent a substantial amount of time tracking down a problem that turned out to be because we were using /EHsc. I would recommend that all mixed C and C++ applications use /EHs by default, and only switch to /EHsc if they're really certain that their extern "C" functions could never let an exception escape (presumably with a catch(…) clause). Otherwise the optimizer starts silently eliding catch clauses, resulting in Release-build only crashes that can be hard to track down.

Expecting every single development team to make their own bespoke wrappers seems backwards.

How about the people who make Win32, the main non-RAII API we are all subjected to, do that C++ RAII wrapper for us, well and in one place, make it cover the entire API, and continue to maintain it as part of the main SDK, not as an afterthought that is updated slowly and all but abandoned when some other framework becomes the current fad?

Aside from the consolidation of effort, it'd make code and programmers a lot more interchangeable between projects and teams.

If MS cannot provide that wrapper, is it reasonable to expect everyone else to provide their own?

Smart pointers and other generics are simply not enough to deal with all the weird things Win32 throws at us.

FWIW, I *do* make RAII wrappers for a lot of things (and not just resource clean-up; e.g. ensuring a threading event is set when a code block exits), and in my own code I almost never use exceptions, but I've still found myself in situations where I've sat wondering why C++ so stubbornly resists having finally blocks. It seems like a head-in-the-sand mentality to me, which is quite strange for a language that bends over backwards to support so many different programming styles (and as a result, supports Win32, which is not RAII).

I use scopeguard for simple code that might be handled by finally in Java. I prefer it to finally because (a) the release code only ever needs to be mentioned once and (b) the release code is written immediately after the acquisition code so it's easier to maintain.
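For readers who haven't seen the idiom, here is a minimal scope-guard sketch in the spirit of the comment above. The real ScopeGuard and Boost.ScopeExit implementations are more elaborate; this ScopeGuard class is a simplified stand-in:

```cpp
#include <functional>
#include <utility>

// The cleanup action is written right next to the acquisition and runs
// exactly once when the scope exits, normally or via an exception.
class ScopeGuard {
    std::function<void()> onExit_;
public:
    explicit ScopeGuard(std::function<void()> f) : onExit_(std::move(f)) {}
    ~ScopeGuard() { if (onExit_) onExit_(); }
    void dismiss() { onExit_ = nullptr; }  // cancel the cleanup on success paths
private:
    ScopeGuard(const ScopeGuard&);
    ScopeGuard& operator=(const ScopeGuard&);
};
```

Typical use: acquire a resource, immediately declare a guard whose lambda releases it, then write the risky code; the release needs to be mentioned only once, right next to the acquisition.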