I often come across heated blog posts where the author invokes "exceptions vs. explicit error checking" to advocate their preferred language over some other language. The general consensus seems to be that languages that use exceptions are inherently better and cleaner than languages which rely heavily on error checking through explicit function calls.

Is the use of exceptions considered better programming practice than explicit error checking, and if so, why?

In C, a failed allocation that goes unchecked will unexpectedly pass NULL into your other procedure and likely discard the errno which was carefully set. There is nothing visibly wrong with the code to indicate that this is possible.
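The hazard is not specific to C; any convention that signals failure through a sentinel return value has it. A minimal Python sketch (hypothetical names) of how an unchecked failure surfaces far from its cause:

```python
def load_config(path):
    """Return the file's contents, or None on failure (the C-style convention)."""
    try:
        with open(path) as f:
            return f.read()
    except OSError:
        return None  # the caller must remember to check for this

def start_app(path):
    config = load_config(path)   # nothing here forces a check...
    return len(config)           # ...so the failure surfaces later, as a TypeError

try:
    start_app("/no/such/file")
except TypeError:
    failure_mode = "TypeError far from the failed call"
```

Nothing at the call site hints that `config` might be a sentinel, which is exactly the complaint above.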

In a language that uses exceptions, instead you would write

```java
MyCoolObject obj = new MyCoolObject();
doSomethingWith(obj);
```

In this (Java) example, the new operator either returns a valid initialized object or throws OutOfMemoryError. If a programmer must handle this, they can catch it. In the usual (and conveniently, also the lazy) case where it is a fatal error, the exception propagation terminates the program in a relatively clean and explicit manner.

That is one reason why exceptions, when properly used, can make writing clear and safe code much easier. This pattern applies to many, many things which can go wrong, not just allocating memory.

Answer: Refactor & Avoid Boilerplate with Exceptions (43 Votes)

While Steven's answer provides a good explanation, there is another point which I find rather important. Sometimes when you check an error code, you cannot handle the failure case immediately; you have to propagate the error explicitly up the call stack. When you refactor a big function, you may have to add all the error-checking boilerplate code to your subfunctions.

With exceptions, you only have to take care of your main flow. If a piece of code throws some InvalidOperationError, you can still move that code to a sub function and the error management logic will be maintained.
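A small Python sketch of the point (hypothetical names): with error codes, every extracted helper has to plumb the code upward by hand, while the exception version refactors freely:

```python
# Error-code style: the refactored helper must propagate the code by hand.
ERR_BAD_INPUT = 1

def parse_code(text):
    if not text:
        return ERR_BAD_INPUT, None   # boilerplate repeated at every level
    return 0, int(text)

def handle_code(text):
    err, value = parse_code(text)
    if err:                          # ...and re-checked at every call site
        return err, None
    return 0, value * 2

# Exception style: moving the parse into a helper changes nothing upstream.
def parse_exc(text):
    if not text:
        raise ValueError("empty input")
    return int(text)

def handle_exc(text):
    return parse_exc(text) * 2       # errors propagate without extra code
```

Extracting `parse_exc` required no changes to its caller's error logic; extracting `parse_code` forced a new check-and-return at every level.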

Answer: Error Handling is a Matter of Security (6 Votes)

A point of view from a different angle: error handling is all about security. An unchecked error breaks all the assumptions and preconditions the following code was based on. This may open a lot of external attack vectors, from a simple DoS through unauthorized data access to data corruption and complete system infiltration.

Sure, it depends on the specific application, but when assumptions and preconditions are broken, all bets are off. Within a complex piece of software, you simply can't say with certainty anymore what is possible from then on, and what can and can't be reached from outside.

Given that, there is a fundamental observation: When security is treated as an optional add-on that can be attached later, then it fails most often. It works best when it was already considered in the very first basic design stages and built-in right from the start.

This is basically what you get with exceptions: an already built-in error handling infrastructure that is active even if you don't care. With explicit testing, you have to build it yourself, and you have to build it as the very first step. Just starting to write functions that return error codes, without thinking about the big picture until you're in the final stages of application development, is effectively add-on error handling, doomed to fail.

Not that a built-in system is a cure-all: most programmers don't know how to do error handling. It's just too complex most of the time.

There are so many errors that can happen, and each error requires its own handling and its own actions and reactions.

Even the same error can require different actions depending on context. Consider File Not Found or Out Of Memory:

FNF - A data file on a network share the user wants your app to open? Disappearing files can happen anytime, even in the microseconds between Exists() and Open(). It's not even an "Exception" in the literal meaning of the word.

OOM - For a 2GB allocation? No surprise.

OOM - For a 100 byte allocation? You're in serious trouble.
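The FNF case is a classic time-of-check-to-time-of-use race: the file can vanish in the window between the existence test and the open. A Python sketch of the two styles, assuming nothing beyond the standard library:

```python
import os

def read_lbyl(path):
    # Look Before You Leap: racy, because the file may
    # disappear in the microseconds after the check.
    if os.path.exists(path):
        with open(path) as f:
            return f.read()
    return None

def read_eafp(path):
    # Easier to Ask Forgiveness than Permission: no window
    # between the test and the use, since there is no test.
    try:
        with open(path) as f:
            return f.read()
    except FileNotFoundError:
        return None
```

Only the second version is immune to the race, which is why "it's not even an Exception in the literal meaning of the word" still ends up handled as one.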

Error handling leaks through all your carefully separated abstraction layers. An error in the lowest level may require notifying the user with a GUI message. It may require a decision from the user on what to do now. It may need logging. It may need recovery operations in another part, e.g. open database or network connections. Etc.

What about state? When you call a method on an object that invokes state changes and it throws an error:

Is the object in an inconsistent state and needs a rebuild?

Is the object state consistent but (partially) changed and needs an additional rollback or a rebuild?

Is a rollback done within the object and it remains unchanged to the caller?

Where is it even sensible to do what?

Just show me one learner's book where error handling is rigorously designed in from the start and used consistently through all the examples, without being left out for brevity and readability or as an exercise for the reader. Whether that is defensible from an educational POV is another question, but it's no surprise that error handling is often enough a second or third thought when it should be the very first.


49 Reader Comments

The linked to question ("Defensive Programming vs Exception Handling?") IMO does a much better job at addressing the heart of the question than the answers here. Exceptions work well for handling the exceptional cases (such as OOM errors), but they're absolutely horrible when people misuse them for control flow (in most languages, they have an absolutely massive performance overhead).

Given that, there is a fundamental observation: When security is treated as an optional add-on that can be attached later, then it fails most often. It works best when it was already considered in the very first basic design stages and built-in right from the start.

What languages are there that are designed with security in mind right from the start? One can say that Go and Erlang were built with concurrency right from the start. One could also argue that Erlang had security built in from the start as well. What else is there?

There are extreme errors that make it impossible for an application to continue in any meaningful way. For example failure to load a crucial library.

Then there are expected errors which the application can handle gracefully. You can survive the loss of an eye or a finger. Many applications can function well with some crippled parts. For example, one of the app dialogs fails to construct.

Then there are errors (important or insignificant) an application is designed to handle without users ever noticing.

Then there is a broad spectrum of errors or exceptional situations that fall somewhere between these categories.

Exception handling works great at handling what the name says: exceptional situations that the application designer could not have foreseen, or that economics dictated would be too expensive to prepare for. Like a division by zero, connection failure, etc.

Most non-exceptional conditions I believe should be handled near the place they occur by non-exceptional handling. It adds a few lines of code but it helps with the application's readability and maintenance. There's nothing less maintainable than hidden "go to's" scattered throughout an application.

While this article presents some interesting points of view on the question of exceptions vs. manual testing, it's a false dichotomy. Monadic error handling is another option, where (for those not familiar with monads) one treats the result of a possibly failed computation as a box that could contain a value or an error and instead of applying operations directly to the value in the box, one delegates to the box itself to apply the operation. A box containing a value applies the operation normally and returns a new box with the result (or error), but a box containing an error returns itself. In this model, obtaining the value in the box requires an explicit desire to do so, making it hard to accidentally ignore an error. And the language can warn you when taking a value out without handling the error case.

While exceptions are more convenient in C-style imperative languages, major advantages of the monadic approach are that it allows you to pass boxes around in arbitrary ways and that it doesn't assume when a function is actually applied, making it work smoothly in lazy languages and allowing more flexible exception handling.
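For readers who want the "box" made concrete, here is a minimal Python sketch of a hypothetical Result type (not any particular library's API):

```python
class Result:
    """A box holding either a value or an error; operations go through the box."""
    def __init__(self, value=None, error=None):
        self.value, self.error = value, error

    def map(self, fn):
        if self.error is not None:
            return self                     # a box with an error returns itself
        try:
            return Result(value=fn(self.value))
        except Exception as e:
            return Result(error=e)          # failures are captured, not raised

    def unwrap_or(self, default):
        # Taking the value out forces the caller to confront the error case.
        return self.value if self.error is None else default

ok = Result(value="21").map(int).map(lambda n: n * 2)
bad = Result(value="oops").map(int).map(lambda n: n * 2)  # int() fails; later maps skip
```

The failed chain never crashes mid-pipeline; the error simply rides along in the box until someone explicitly unwraps it.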

Exceptions are a dodge for not sweating errors. The language can't do your job for you. It can help you ignore doing your job.

The problem with error returns is needing to make provision for errors at each level of your call stack. The problem with exception handling is that if you don't actually make provision for exceptions at each level of your code, you will not do better than exit() anyway.

It is just like memory management. Programmers doing useful work know memory management is too important to leave to the language. Language designers know memory management is too important to leave to application programmers. Fight!

It is easy to write code which works well when everything goes as expected. That is about 20% of making something which is genuinely good.

I'm surprised Java is mentioned here with OutOfMemoryError. Most "Errors" in Java are unrecoverable, as opposed to Exceptions which are.

Of course, in Java, most exceptions are checked, meaning that the compiler refuses to compile your program if they're not handled. The exceptions to this are the easy-to-avoid ones, such as NullPointerException.

[exceptions are] absolutely horrible when people misuse them for control flow (in most languages, they have an absolutely massive performance overhead).

In Python, the overhead of a 'try:' statement is essentially zero. The overhead of an 'except x:' clause that catches an exception is generally 5-10 times the overhead of a corresponding 'if condition:' statement. I do not consider that 'absolutely massive'. In any case, if a block of code is executed repeatedly and an exception would be raised in fewer than 1 in 5 or 10 repetitions, then catching the occasional exception is *faster* than always testing with an *if* statement.

Consequently, use of exceptions for control flow is common and built into the design of the 'for item in iterable:' looping statement. Instead of asking an iterator each time 'do you have another item to yield?', or instead of asking each yielded item 'are you the special stop object?', the for statement says 'yield the next item'. It then handles the one exception that occurs when there are none left. In other words, one exception catch can replace an indefinitely large number of inquiries.
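What the for statement does under the hood can be spelled out by hand; a short Python sketch:

```python
items = iter([1, 2, 3])
total = 0
while True:
    try:
        item = next(items)     # 'yield the next item'
    except StopIteration:      # one catch replaces N 'are we done?' tests
        break
    total += item
# Equivalent to: for item in [1, 2, 3]: total += item
```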

> Monadic error handling is another option, where (for those not familiar with monads) one treats the result of a possibly failed computation as a box that could contain a value or an error and instead of applying operations directly to the value in the box, one delegates to the box itself to apply the operation.

This. The more and more I work on complex systems, the more I've come to lean towards this approach.

For any suitably complex API, using exceptions just means that you either (a) have to remember to wrap every damn use of the API in a try/catch block or (b) wrap entire modules of your program in said blocks. Either way, figuring out the control flow and error response is a mess.

With old-style C/C++ errors (where, for example, a request for a missing object returns NULL, which then crashes if you dereference it), you of course have to (a) wrap every API call with error handling or (b) build fancy higher-level wrappers. Either way, figuring out the control flow and error response is a mess.

Monadic errors make it trivial to write clear, natural code, and you can write error handling at the natural points to do so. Best of all, if you like either exceptions or traditional error handling, you can take a hybrid approach. For example, a graphics rendering API might provide a great number of methods one must call to setup the rendering, followed by a single Present() call to actually finish it all. You can use monadic error handling for all the API calls except Present() and then make that throw an exception or return a particular error code if it (or any previous API call) failed.
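The hybrid the comment describes might look roughly like this in Python; Renderer and its methods are hypothetical stand-ins for the graphics API:

```python
class Renderer:
    """Setup calls record the first failure; only present() surfaces it."""
    def __init__(self):
        self._error = None

    def set_shader(self, name):
        if self._error is None and not name:
            self._error = ValueError("empty shader name")
        return self                  # chaining continues even after a failure

    def draw(self, n):
        if self._error is None and n < 0:
            self._error = ValueError("negative vertex count")
        return self

    def present(self):
        if self._error is not None:
            raise self._error        # the single place errors escape
        return "frame presented"

good = Renderer().set_shader("flat").draw(3).present()
failed = Renderer().set_shader("").draw(3)   # error recorded, not yet raised
```

All the setup calls stay clean and chainable, and only the final `present()` has to be guarded.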

The linked to question ("Defensive Programming vs Exception Handling?") IMO does a much better job at addressing the heart of the question than the answers here. Exceptions work well for handling the exceptional cases (such as OOM errors), but they're absolutely horrible when people misuse them for control flow (in most languages, they have an absolutely massive performance overhead).

A lesser point first: I'll have to disagree that exceptions are "misused" as control flow; Python demonstrates quite nicely how they make effective control flow.

The larger issue is your assumption that checking for exceptions is "overhead."

You're speaking from the mindset of "I don't want to waste cycles checking for exceptions." And that's a reasonable POV, until your code is littered with ad hoc fixes and checks because your system was insecure.

I'd suggest you compare the extra microseconds a processor spends spinning vs. the hours you spent fixing bugs (which is just your experience of the time and money wasted by those bugs) that the language could have caught for you.

Exceptions are great. They help isolate the error recovery code from the main sequence so that it takes a lot longer to figure out what happens when the file is not found. I think we should use them for conditionals, rather than else clauses. Think of how much cleaner your code would be:

My feeling is that if you are supposed to handle the problem - for example, failed memory allocation, floating point overflow, file not found - then it should be done by explicit testing. If it is something that cannot be recovered from, then throw an exception, or just kill the task and inform the parent. (If someone is pointer swizzling, they can handle the exception and the entire memory space, but that's YHBW country.)

Your first example is more readable to me (and, I take it, to most people) than the second one. It reads "if test is true then do action 1, otherwise do action 2", as compared to your second one, which essentially reads "if test is true then do action 1, if not then throw an exception".

The higher complexity comes in when you include else-if statements or nested statements: should it throw the exception for each test that fails (and thus effectively always execute the catch statement) or only if all fail?

You're speaking from the mindset of "I don't want to waste cycles checking for exceptions." And that's a reasonable POV, until your code is littered with ad hoc fixes and checks because your system was insecure.

No, my mindset is that using exceptions to handle normal control flow in most languages is a horrible idea (the massive performance penalty in most languages from unwinding the stack was just an aside). Python might encourage that mindset, and it might even work there, but trying to thrust that mindset on other languages often creates clusterfucks, because it creates very alien patterns, and it becomes far harder to understand possible codepaths. Most developers also don't seem to appreciate exactly what state exceptions leave an application in (it likely varies by language), which means the state might as well be unknown.

And that will result in insecure, buggy code. Not all bugs are fatal; some might only result in behavior that's perfectly valid but not desired. Expecting exceptions to handle everything magically for you is definitely a bad mindset (although it would explain why I see so many unhandled exceptions in Python programs).

Exceptions are a great way to handle errors, for several reasons:

- When you can't be bothered to check every single error cause, an exception will be thrown at some point and caught somewhere in the call stack. That is, of course, better than a core dump.
- Exceptions can bubble through a long stack of boilerplate code and be managed where they make more sense, without adding complexity to said code.
- Exceptions are a standard and typed way of managing errors, compared to long lists of obscure error codes. They can carry references to the problematic objects and resources, or any other data structure that can help with handling the error.
- Being able to display the stack trace is a boon for quick debugging. Remember how you did it with C++ ten years ago? Either you had very good logs, or you had no info at all. Admittedly, a 1000-line stack trace can turn into a new kind of curse, too...

That being said, when dealing with modern and complex APIs relying on dependency injection, networking, data mapping, closable resources, and so on, the sources of errors are so complex and numerous that you literally may have to "wrap every damn use of the API in a try/multi-catch block". In these situations, exceptions become the new kind of errno: they creep everywhere, they hamper legibility, they are complex to write, manage, understand and read, and even the try-catch block doesn't feel much better than the old if (err) else... In a word: error handling remains hard and error-prone.

Maybe "monadic error handling" can be an appropriate response to complex error management, but I've never had the opportunity to test it.

For I/O-bound, evented, non-blocking, asynchronous applications, exceptions don't even make sense as a language construct, because the call stack of a response handler is completely different from the call stack of the request that initiated the response. (Anybody who has debugged complex JavaScript programs knows what I mean, I hope.)

Frankly I've never cared for exceptions. Though they were conceived to simplify coding (the try block just has simple code that assumes no errors), in practice they often lead to even more code complexity.

One nice thing about exceptions is that you can handle them at the appropriate level. Generally, down at the very low level I don't handle them at all. Occasionally you might want to with a resource error you can deal with. But generally I write a blob which does one thing, with many steps, and this blob does not need to worry about errors. The entire blob can be wrapped in a cleanup handler (whoops, temp file) if necessary, which then propagates the error. The code is just so much cleaner than explicitly checking every call for error (and invoking the same error code). If you're wrapping every single call in try/catch you're probably doing it wrong.

Then your higher level stuff just needs to wrap that one blob and it succeeds or fails. Or even just not worry about it and let even higher level handlers take care of it. Blobs in blobs. You just handle the errors that you can or should handle at each point. At the very highest level I just catch everything and show a (hopefully informative) message box to the user instead of just crashing. ProTip: Do not catch and discard all exceptions at the top level like Netscape Navigator did.
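That layering might be sketched in Python as follows (all names hypothetical): the low level does no handling, the middle wraps one blob in a cleanup handler, and the top catches everything:

```python
import os
import tempfile

def convert(src):
    # Low level: no error handling at all; any failure simply propagates.
    return src.upper()

def process_job(src):
    # Middle level: one cleanup wrapper around the whole blob, not per call.
    fd, tmp = tempfile.mkstemp()
    try:
        os.write(fd, convert(src).encode())
        return tmp
    finally:                    # whoops, temp file: always close the handle
        os.close(fd)

def main(src):
    # Top level: catch everything and report, instead of crashing.
    try:
        path = process_job(src)
        os.unlink(path)
        return "ok"
    except Exception as e:
        return f"job failed: {e}"
```

Nothing in `convert` or its callers checks individual calls; each layer only handles what it can actually do something about.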

Everyone is concerned with speed, but there are only a few cases in which speed is actually a concern - at the pixel level of image manipulation, for example. I can do 99% of things I need to do in Python and it's plenty fast. For the other 1% I can write just that bit in C.

Most people concerned about speed can't see their bad design for the trees. I saw this last week with a guy who didn't want to use exceptions for something that would be invoked once per TIFF file because 'exceptions are slow'. The cost of the try is infinitesimal compared to the processing you're doing on the TIFF. It's slightly more if an error is actually thrown, but at that point you don't care.

My own experience in error handling comes from game AI coding, which is a massive pain in the ass to get right. Can/should the AI "hear" the player? How close is the player to the AI? Are there any in-game obstacles that need to be considered? Is the player within the AI's cone of sight/range of hearing? Collision detection flaws, etc.....

Once had to deal with a situation where an AI, lovingly coded as a "shock and awe" spawn, would promptly hide itself in the level's geometry and pursue the player until they were dead as soon as it spawned. Massive pain.... turned out it was due to a single number being off.

Your first example is more readable to me (and, I take it, to most people) than the second one. It reads "if test is true then do action 1, otherwise do action 2", as compared to your second one, which essentially reads "if test is true then do action 1, if not then throw an exception".

The higher complexity comes in when you include else-if statements or nested statements: should it throw the exception for each test that fails (and thus effectively always execute the catch statement) or only if all fail?

I read this as sarcasm:

kaleberg wrote:

They help isolate the error recovery code from the main sequence so that it takes a lot longer to figure out what happens when the file is not found. I think we should use them for conditionals, rather than else clauses. Think of how much cleaner your code would be:

I think his deeper point was that anywhere that exceptions are used in code, you could simply use else clauses, which would also be beneficial because it keeps the error handling code with the code that caused the error condition.

Your first example is more readable to me (and, I take it, to most people) than the second one. It reads "if test is true then do action 1, otherwise do action 2", as compared to your second one, which essentially reads "if test is true then do action 1, if not then throw an exception".

The higher complexity comes in when you include else-if statements or nested statements: should it throw the exception for each test that fails (and thus effectively always execute the catch statement) or only if all fail?

I read this as sarcasm:

kaleberg wrote:

They help isolate the error recovery code from the main sequence so that it takes a lot longer to figure out what happens when the file is not found. I think we should use them for conditionals, rather than else clauses. Think of how much cleaner your code would be:

I think his deeper point was that anywhere that exceptions are used in code, you could simply use else clauses, which would also be beneficial because it keeps the error handling code with the code that caused the error condition.

I thought it was pretty funny.

Maybe so; I have a problem with written sarcasm. But then that touches on another point: not everything that generates an exception is tested with an if-statement, and not everything that would be within the else-clause would be an "error" in that regard. Sometimes it's better to just test directly whether the value is in error. Sometimes it's better to just run a bunch of statements inside a try-block instead of having the major overhead of an "extra" if-statement for every other statement.

There are good and bad practices for every form of error-handling (and every form of... well, effectively anything), the trick is to know when to use one form over another.

Exception handling works great at handling what the name says: exceptional situations that the application designer could not have foreseen, or that economics dictated would be too expensive to prepare for. Like a division by zero, connection failure, etc.

Most non-exceptional conditions I believe should be handled near the place they occur by non-exceptional handling. It adds a few lines of code but it helps with the application's readability and maintenance. There's nothing less maintainable than hidden "go to's" scattered throughout an application.

But that's easily doable by catching (and handling or silently ignoring) specific exceptions at the place where they occur, no?

Exceptions will always be with us. For example, a program writes to disk and gets "Disk full." No amount of pre-testing can prevent this, because between the test and the actual write another process may fill up the disk. Some exceptions cannot be tested for and must always be handled.

Forgive me if I'm repeating thoughts - I haven't read all the comments. I think this question and the answers to it highlight two related problems in computer science/programming:

1. Bottom-up and ad-hoc design (as formalized by agile design processes)
2. A lack of clear architectural patterns that provide guidelines for behavior under abnormal conditions.

The root of the problem (and the source of a lot of security vulnerabilities *at least*) is that any *project* needs to define a policy and a practice to enforce predictable behavior under abnormal conditions. But, how do you design that behavior and flow it down to all of your implementers if normal vs abnormal is not clearly defined, and guidelines for graceful degradation (not to mention paths to propagate an error to the appropriate handler) are not in place?

To that end, this discussion misses the forest for the trees. Both are clearly inadequate when you view the problem as a risk of bottom-up design. At the same time, either may be appropriate given the context and the nature of the problem at hand.

For I/O-bound, evented, non-blocking, asynchronous applications, exceptions don't even make sense as a language construct, because the call stack of a response handler is completely different from the call stack of the request that initiated the response. (Anybody who has debugged complex JavaScript programs knows what I mean, I hope.)

Frankly I've never cared for exceptions. Though they were conceived to simplify coding (the try block just has simple code that assumes no errors), in practice they often lead to even more code complexity.

Doesn't that depend on the platform? My understanding is that try/except does work with async code in something like C#.

Use explicit error checking with informative error messages, including the name of the function that is returning the error. That way, if the program crashes, you know exactly where it crashed. It takes a little more time during coding, but it saves a lot of time during debugging.

Exception handling works great at handling what the name says: exceptional situations that the application designer could not have foreseen, or that economics dictated would be too expensive to prepare for. Like a division by zero, connection failure, etc.

I think that's the wrong way of looking at it. It's a question of: is it this function's responsibility to handle that?

Division by zero, for instance, is always your algorithm's responsibility since you're doing the math. You handle it by asserting that your inputs meet preconditions, and then getting the damned math right.

A connection failing, OTOH, is something I can foresee, but it is not something my parsing object should care about. It should just pass that exception up the chain to whoever opened the connection and asked me to read it in. It can then clean up the connection and report an API failure to the object that called it, which might decide on a strategy of retrying or reporting it back to the user.

At each level, you have another layer of abstracting the error handling. Incidentally, if your language supports resource management (e.g. context managers in Python, try-with in Java, using in C#, RAII in C++), that ties in to exception management very nicely because you almost invariably want to release the resource when you leave the try block.
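A minimal Python sketch of that tie-in, with a hypothetical Connection standing in for any file, socket, or lock:

```python
class Connection:
    """Tracks whether it was released; a stand-in for a file, socket, or lock."""
    def __init__(self):
        self.closed = False

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        self.closed = True   # runs whether we leave normally or by exception
        return False         # don't swallow the exception; let it propagate

conn = Connection()
try:
    with conn:
        raise RuntimeError("mid-transfer failure")
except RuntimeError:
    pass
# conn.closed is now True: the resource was released despite the exception
```

The `with` block guarantees the release on every exit path, which is exactly why resource management and exception management compose so well.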

Quote:

Most non-exceptional conditions I believe should be handled near the place they occur by non-exceptional handling. It adds a few lines of code but it helps with the application's readability and maintenance. There's nothing less maintainable than hidden "go to's" scattered throughout an application.

Right, and the way to achieve this is (1) to keep your functions or methods short and sweet, and (2) to refactor until you have a proper separation of concerns. There is no notation that can make a 300-line function readable. Don't be afraid to break logic out into another function.

Further, if your classes are highly coupled, you're going to be dealing with so many possible failures and interactions that error handling is going to be a mess. Figure out what one subsystem does, break it out and make it independent.

(I say this because people seriously seem to be deathly afraid to create new classes or functions, and jam everything into these mega-functions. Pretty code is good code.)

For I/O-bound, evented, non-blocking, asynchronous applications, exceptions don't even make sense as a language construct, because the call stack of a response handler is completely different from the call stack of the request that initiated the response. (Anybody who has debugged complex JavaScript programs knows what I mean, I hope.)

Frankly I've never cared for exceptions. Though they were conceived to simplify coding (the try block just has simple code that assumes no errors), in practice they often lead to even more code complexity.

Doesn't that depend on the platform? My understanding is that try/except does work with async code in something like C#.

Not really. If there's some magic with that for C# it's likely part of a nice async library, not part of C# itself. If an exception is thrown on a thread that thread must handle that exception.

Use explicit error checking with informative error messages, including the name of the function that is returning the error. That way, if the program crashes, you know exactly where it crashed. It takes a little more time during coding, but it saves a lot of time during debugging.

Or include a backtrace in your top-level exception handler... because it accomplishes that automatically, without cluttering your code.
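In Python, for instance, the standard traceback module gives the top-level handler exactly that, at no cost to the rest of the code; a small sketch:

```python
import traceback

def buggy():
    return 1 / 0                 # the crash site

def main():
    try:
        buggy()
    except Exception:
        # One handler at the top yields the file, line, and function
        # of the failure, with no per-call-site bookkeeping.
        return traceback.format_exc()

trace = main()
```

The returned trace names both the exception type and every function on the path to the failure.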

While it is somewhat an aside from the topic at hand, I agree with Secure's statement that books and online examples most often leave out things like error handling for the sake of clarity.

In fact, they typically leave out so many things that very often it is hard to translate from the example code to real world production use. While I can understand that it is probably a monotonous task in many cases, I find it immensely helpful on the rare occasion that someone gives a complete and thorough demonstration of a particular code topic that attempts to cover most real world scenarios. It certainly is impossible to cover all bases, but so many books and websites seem to leave the reader at not much more than a "Hello World" type of demo which rarely ends up being very useful except as an introduction to the topic. The fact that languages and code libraries evolve over time makes things even more convoluted because a search for how to do something typically brings up the way to do it 6 years and 4 platform versions ago...

I think a more relevant problem is when to consume an exception and when to allow it to bubble up to the next layer. For example, do my task-specific file handling functions propagate lower-level library exceptions? Or do I catch them and throw my own context-specific exceptions?
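Python supports exactly that translation at a layer boundary with raise ... from, which keeps the low-level exception attached as the cause; a sketch with hypothetical names:

```python
class ConfigError(Exception):
    """Context-specific exception exposed by this layer."""

def load_settings(path):
    try:
        with open(path) as f:
            return f.read()
    except OSError as e:
        # Translate the low-level error into this layer's vocabulary,
        # keeping the original attached as __cause__ for debugging.
        raise ConfigError(f"cannot load settings from {path}") from e

try:
    load_settings("/no/such/settings.ini")
except ConfigError as e:
    cause = e.__cause__   # the original OSError is still reachable
```

Callers only need to know about `ConfigError`, yet the original library exception is preserved in the chained traceback.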

For the example used in this article, the malloc() failure case relies on a simplified model in which malloc() either allocates all of the memory for you or fails.

In fact, malloc() merely maps pages into the application's virtual address space; it fails only when that address space is exhausted. On a 64-bit system, that space can typically be 2^48 bytes.

As such, you can call malloc() and "get back" more RAM than actually exists on your system. Your application won't crash and burn until you have touched enough of that memory, forcing it to be mapped in, to exhaust your physical RAM.

So given this, while you should still check the malloc() result as a general rule of thumb (check every error code), the odds of malloc() returning NULL are infinitesimal: a NULL return only tells you that you are out of virtual memory to map, not that you are out of physical memory.

I think the answer is that you should always do a bit of both. After all, exceptions are really just an easier way to handle error checking, and an error condition will always surface somewhere in your code. You need to make sure you're handling that exception somewhere, otherwise it will simply lead to a failure, even for something relatively innocuous (such as a timed-out network connection).

While something like an out-of-memory exception is tricky to handle gracefully (the program simply needed more memory), there are plenty of exceptions that shouldn't be allowed to escape your call stack and cause a program failure, yet plenty of programmers forget (or don't bother) to check for them. This is how you end up with programs that are completely incapable of handling packet loss or other network-related trouble, because someone isn't checking for I/O exceptions, for example.

I think that in general it's a good idea to configure your IDE to provide warnings for unchecked exceptions (or ones your method doesn't declare in a throws or similar statement), so you can at least quickly see which things you might want to handle. While it can be messy looking, I even sometimes like to make sure I have the important exceptions listed in try/catch blocks, even if all I do is pass them up the chain anyway, as at least that way I'm explicitly doing it rather than potentially forgetting about it.

It's also worth remembering that while the error-testing approach involves a bit of extra boilerplate, you really do have to check for nulls wherever they can crop up in your program, as there's nothing worse than unexpectedly trying to do something with nothing (these can be some of the worst bugs to debug later, too). Passing errors up the chain by having all your functions return status codes is a bit unpleasant, but it's not really much harder than adding try/catch statements; the only real difference is that you're the one deciding when and how each error propagates up the chain, rather than letting built-in exception handling do it for you.

My main reason for preferring exceptions is that you can return actual results from your functions and methods, rather than returning only error codes and passing useful values back through pointers. I know that limits you in other ways, since out-pointers let you receive multiple return values without having to wrap them up somehow, but in the majority of cases it's a nicer way to work, at least for me.

Since you're not stuck returning status codes, you can instead throw any of a well-defined set of specific error types, rather than relying on a convention that may not represent all the possible errors you need to handle, or that causes confusion if different functions end up using different sets of error codes. For example, to handle different types of network error in a language like C, you need to be able to indicate whether an I/O error occurred, a timeout occurred, a connection couldn't be established, and so on. If you pass this back as a return value, it may clash with codes you use elsewhere; or you end up using a pointer to set a specific error code, in which case you may as well have just returned a suitable sentinel value in place of the normal return value.

Exceptions obviously have to do some of the same work behind the scenes, but at least you're not the one worrying about it, though you absolutely should still consider which things you need to catch and where. Just because a language feature makes something easier, that should mean you get more out of the time you put in, not that you can be lazy.

This discussion reminds me of the discussions we had in the early 1970s about array subscript range checking. At that time, Pascal was emerging as a competitor to common languages like C and Fortran. Many programmers objected to the overhead imposed because Pascal array references always performed range checking, whereas other languages left it to the programmer to explicitly do the range check where it was needed.

As has been mentioned, programmers are prone to forgetting to do those checks. For example, consider the billions of dollars of costs (both direct losses and remediation costs) from security breaches caused by "buffer overflows" in software written in C.

In my opinion, the use of modern exception handling mechanisms has contributed to improved software quality and to reduced software life-cycle costs.

For example, IP doesn't respond with an error message if you send data down a fibre channel link that eventually goes over a 2G mobile connection. It just throws away 99.99% of the data and silently sends the 0.01% it can.

No exception, no error response, it just drops most of the data.

On an iPhone, if an app uses too much RAM it doesn't return an error next time you allocate memory. It just kills the app, without even an error message to the user. If there's an issue with the kernel it doesn't show a kernel panic to the user, it just reboots instantly.

I don't see any users complain, even my mum has no problems with it - she just gets a bit annoyed and then opens the app again to continue working.

There is a place in programming for truly fatal errors. Sometimes it really is OK to just kill everything you're doing and drop everything and give up without any attempt to fix the problem. Just log the event and die.

In the APIs I have worked with, exceptions are intended to really be fatal. If you try to read item 42 in an array that contains 6 items, you are not supposed to catch the exception and then read item 6 instead of 42. If that's the behaviour you wanted, you should have counted the array *before* reading it; by the way, that is exactly the same amount of code as a try/catch block.

On the other hand, we do need to be able to catch exceptions, for example to write debug info to a log, or to close off something that really needs to be closed off (e.g. if you create a 2GB temporary file on disk, you really should wrap things in a try/catch block so you can erase the temp file no matter what).

If you want to test for an error, then an exception is the wrong way to do it! That is just as stupid as using a string containing "yeah!" when you should really be using bool/true.

Sure, there are times when try/catch is more convenient than other error testing, and there are also times when badly written code throws an exception when it shouldn't, but if you want that feature in your language then you should petition the language community to add it as a new feature that is called something other than an "exception".

My rule is to avoid using exceptions for controlling code flow. If there is any documentation describing which situations will trigger an exception, then I write my code in a way that makes sure I don't trigger that exception. Exceptions are for when you *have no fucking idea* what could possibly have triggered the error, or for when you simply screwed up.

I don't follow this rule religiously (rules are made to be broken), but for the most part I only use try/catch to temporarily postpone my code from asploding, and then throw another (or the same) exception inside my catch block.

If you want to test for an error, then an exception is the wrong way to do it!

There is no way to test for an I/O error before it happens. If the I/O was important, the app should die. If not, the app continues.

And here's a slope subroutine that uses exceptions for extreme cases. Note that it is impossible to test for the extreme cases without knowing the hardware the code is running on. Different hardware has different limits. Using exceptions means the code can run anywhere without changes.