Jeff LaMarche presents a rundown on whether Objective-C dealloc implementations should bother nil’ing out an instance variable after releasing it.

LaMarche offers a fairly balanced presentation of the two sides, but in my opinion he gives too much credibility to the argument that nil’ing is a good idea. He essentially embraces the argument that nil’ing the variables in production code is wise, because it might mask crashing bugs that would obviously vex the user:

Generally speaking, this is only going to happen if you’ve got a bug elsewhere in your code, but let’s face it, you may very well. Anyone who codes on the assumption that all of their code is perfect is begging for a smackdown.

And speaking specifically about the approach I prefer, leaving the instance variable alone and not bothering to nil it out:

On the other hand, that approach is really, really bad in a shipping application because you really don’t want to crash on your users if you can avoid it.

I disagree with the assertion that avoiding crashes at all costs in shipping code is the right priority. The biggest disservice you can do to your users is to allow them to run code that has unpredictable behavior. The whole point of programming is to make computers accomplish tasks in predictable ways. Sure, we all make mistakes as programmers, but giving in to the chaos and putting up safety nets “just in case” is not the right approach, especially considering it has unwanted consequences.

Consider that by masking the crash from occurring in the wild, you may be putting off detection and resolution of the underlying bug indefinitely. But if your application has a crash reporter, or if you take advantage of Apple’s iOS crash reports, you will learn quickly that there are issues needing to be resolved.

It is reasonable to take steps that mask mysterious crashes, but this should only be done after you know there are actually crashes to prevent. Masking the symptoms in a blanket manner is akin to cutting half the phone lines in the neighborhood and celebrating the reduced number of 911 calls.

It’s also worth noting that LaMarche’s defense of nil’ing out instance variables hinges on the presumption that messaging nil is safe. True, messaging nil will not cause a crash. But is that safe? Not if it changes the behavior of your code in unpredictable ways. Consider this incredibly contrived Cocoa-based nuclear arms controller:
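The code snippet itself is missing from this copy of the post; based on the identifiers quoted later in the thread (`didReturnGestureOfPeace`, `NuclearWarehouse`), a reconstruction might look something like this. The surrounding method names are guesses, not the original code:

```objc
- (void)dealloc
{
    [mDiplomat release];
    mDiplomat = nil;    // the nil'ing in question
    [super dealloc];
}

- (void)assessThreatLevel
{
    // After a use-after-dealloc bug, mDiplomat is nil here. Messaging
    // nil quietly returns NO, so the "no gesture of peace" branch runs.
    if ([mDiplomat didReturnGestureOfPeace] == NO)
    {
        [[NuclearWarehouse nuclearMissile] launch];
    }
}
```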

By LaMarche’s reasoning, it’s a good idea to nil out the mDiplomat variable on release, so that it won’t crash when you message it. But in this case, messaging nil for a BOOL result causes a logical flaw in the program, with obviously dire consequences. If the instance variable were not set to nil, it would probably crash before the launch message could ever be sent.

We should aim to comprehend what our software actually does. Deliberately decreasing the symptoms of problematic code won’t lead us any closer to that understanding. Ship the best code you have to your customers, and if it crashes, try to fix the root cause of the crash as quickly as possible. Don’t get in the habit of writing “hope this fixes it” code, and by all means don’t adopt that philosophy as a boilerplate coding habit.

This entry was posted to Cocoa, Programming, Rant.

44 Responses to “Don’t Coddle Your Code”

Your opposition to nilling out ivars hinges upon the assumption that the only reason to nil out an ivar is to avoid crashes if it somehow gets accessed again (e.g. in a multi-threading situation). I don’t think that’s the only reason to nil out an ivar. In fact, if that is your only reason for doing so, you should stop right now. That’s a terrible reason, as you have outlined in this post.

However, I think there are other reasons to nil out ivars. As I have said before, on rare occasions you end up with a situation where calling [super dealloc] ends up calling another method on your class, and if this other method ends up accessing one of your ivars, it will crash. You may think this is contrived, but I’ve seen it in the real world, and the only sane fix was to nil out my ivars in dealloc and then write the called method such that it behaves sanely when its ivar is nil. Based on that experience, and on the general principle that ivars should be nilled out the moment they’re released, regardless of location (though I admit this is far less of an issue if you use properties exclusively in your class), I personally nil out all my ivars after releasing them in dealloc. I consider it an issue of having clean code.
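A sketch of the hazard described here, with invented class and method names for illustration:

```objc
- (void)dealloc
{
    [mCache release];
    mCache = nil;     // without this, -flushCache below would message a freed object
    [super dealloc];  // superclass teardown (e.g. observer/KVO cleanup)
                      // can call back into -flushCache on this instance
}

- (void)flushCache
{
    [mCache removeAllObjects];  // harmless no-op once mCache is nil
}
```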

While I don’t disagree with your premise, if I relied on the iOS crash reports, I’d never find any bugs. I don’t know if it’s just me but it seems many (most?) people turn that reporting off when they’re asked by iTunes.

APPLE SOFTWARE AND SERVICES ARE NOT INTENDED OR SUITABLE FOR USE IN SITUATIONS OR ENVIRONMENTS WHERE…INCLUDING WITHOUT LIMITATION THE OPERATION OF NUCLEAR FACILITIES, AIRCRAFT NAVIGATION OR COMMUNICATION SYSTEMS, AIR TRAFFIC CONTROL, LIFE SUPPORT OR WEAPONS SYSTEMS…

Kevin- I agree that the strategic nil’ing of an instance variable in a situation like that might be a reasonable thing. I don’t like it as a habit, however, for the same reason: it masks, if not bugs, then bad code smells.

Calling [super dealloc] shouldn’t have side effects that reach code in my subclass that thinks it’s still OK to muck with instance variables. IMHO. Work around it on a case-by-case basis until you can fix the underlying problem.

Daniel- I was using the example I posted as just an example of where nilling out is actually helpful. My main motivation is one of code cleanliness. By always nilling out the ivar, I always know that if the ivar has been released, it will be nil, regardless of the situation where the ivar is nilled out, no exceptions, not even dealloc. Sure, it’s possible that I end up masking a bug this way, but in my experience, I’ve run into the situation I described more than once, and I can’t recall ever having a situation where nilling out the ivar masked a bug.

This age-old conversation hinges on philosophy and point of view. To turn Daniel’s argument on its head:

if ([mDiplomat didReturnGestureOfPeace]) NSLog(@"Hooray!");

In other words, say an otherwise benign operation causes a crash for any of the reasons outlined above (threading, dealloc subtlety, etc). Where the product would have been no worse off from a messaging-nil nop, you now have a serious customer service problem on your hands.

Of course the engineer in all of us would want to know about this problem ASAP, but I would draw your attention to the non-technical obstacle of iPhone app review. You now have a crash that didn’t need to happen, which your users could be waiting weeks for relief from.

Karl recently said “Yes, it has the potential to hide bugs but that’s programmer nonsense that prefers allowing your application to crash in the hands of users rather than fail gracefully as all consumer facing software should.” Daniel is arguing that the short-term pain of these subtle bugs will eventually lead to a better product. Both approaches potentially expose the user. Hence “philosophy.”

Crashing is failing gracefully — at least compared to continuing in an unpredictable state and wiping all the user’s data. The assumption that your program will continue safely and not go even further off the rails despite being in a completely unknown but obviously wrong state is pretty naive.

If your code goes completely off the rails when it encounters an unexpected nil in a fashion that causes no crashes but instead silently corrupts data or whatnot, then you have worse problems than whether or not to nil out your ivars in dealloc.

Matt – let’s just take it as a given that either uncertain route can cause dire consequences. Good point.

But the big advantage crashes have over other uncertain behaviors is that they are eminently machine-detectable and reportable.

The crash might cause data loss for 1, 10, 1000 customers. But hopefully that is informing the developer about the existence of the problem, so she can fix and prevent further harm. The uncertain (and even untraceable) bugs that might be getting masked could also be causing data loss to 1, 10, 1000 customers over years without any means of tracking it.

There is a logical extreme one could go to, which would be to put special objects in place of the nils, and have the objects themselves serve as little micro “incident reporters.” This feels like overkill to me, but maybe in a particularly buggy and crash-sensitive app, it would be a good idea.

Generally I like to do the easiest thing that produces the most detectability of faults. In this case the easiest thing is to leave the instance variable alone.

Indeed, Daniel, and most importantly I hope we can all agree that if these specific scenarios are a problem for you on a non-trivial scale, then what you need to review is your tests, not your deallocs.

After reading both blog posts and the comments posted here, I come to this conclusion: nil’ing in dealloc is not needed. Dealloc should ideally be atomic in a multi-threaded environment, and in any case instance variables should never be accessed after the object is meant to be deallocated.

This approach is cleaner than nil-messaging, isn’t it? It’s a basic “safety check” or “data integrity check”.
So the only advantage of nil’ing a variable is to state: “OK, I know this variable is nil, so either I never defined it or it’s no longer valid. Now take some explicit action.”
I never let my code logic be constrained by nil-messaging side effects (like the example above!)

If the idea is for crashes to be revealed early, then it is possible to do even better than *not* nil’ing out ivars: set them to -1 (or even better to values such as 0xdeadbeef). Thereafter, the crash is not only likely on access, it’s certain.

Jean-Denis makes an excellent point as well. Not only does it cause a crash, but when debugging or perhaps reviewing log output it’s very clear by the pointer value that the object has been released.
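In plain C, Jean-Denis’s suggestion might look like this. A minimal sketch: `POISON`, `FREE_AND_POISON`, and `is_poisoned` are invented names for illustration, not a real API:

```c
#include <assert.h>
#include <stdlib.h>

/* 0xDEADBEEF: a recognizable address that is essentially never mapped,
   so any dereference crashes at once, and the value itself flags
   "already released" in a debugger or crash log. */
#define POISON ((void *)0xDEADBEEF)

/* Hypothetical convenience macro: free the pointer, then poison it
   instead of nulling it out. */
#define FREE_AND_POISON(p) do { free(p); (p) = POISON; } while (0)

/* Nonzero if the pointer carries the poison value. */
static int is_poisoned(const void *p) { return p == POISON; }
```

Unlike a dangling pointer, which may silently alias a newly allocated object, the poisoned pointer is guaranteed to crash on any dereference, with an unmistakable value at the top of the report.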

As other replies have illustrated there are cases where you have to use a different approach (such as in Kevin’s [super dealloc] depending on self’s ivars), but the point is that not nilling should be your rule of thumb. There will always be cases where the rule is not appropriate. Nilling as a rule of thumb is simply more likely to mask crashers and cause data/state corruption.

(To avoid the same unsubstantiated sniping tone that your comment had, I’ll add that I don’t think it’s appropriate to characterize a pointer in an invalid, deallocated object, as “dangling.” Though I suppose you might be arguing about the time period between which the pointer is released, and when it is truly deallocated by super. My opinion is the releasing class is in charge of managing its access to the instance variable. Yes, after it releases it, it proclaims by contract that it won’t access the instance variable again.)

Matt’s point is interesting. As Mac developers who distribute via the internet, we can get a crash report (from a user, not from Apple, grr!), find the bug, fix the crash, and release a software update all within an hour. Thus, we tend to favor not ‘coddling’ our code, as Daniel puts it. iOS developers, on the other hand, are hostages to the app store approval process. Of course, if your iOS app is crashing in production, one wonders how it got through the approval process in the first place or what the point is of the process, but in any case, your coding precautions may be different for that case.

Having said all that, the vast majority of crashes occur after self has been deallocated, so the instance variable issue is not really that important. :-)

Your argument really lost credibility when it fell into the [[NuclearWarehouse nuclearMissile] launch] example. Perhaps you are making a valid point. The ridiculously wild example makes it hard to consider the rest of the argument.

Is there an equivalent of Godwin’s law for invoking nuclear war to make a point about code? Perhaps there should be.

Mark – I thought it would be best in this example to choose a purposefully ridiculous (and as I stated, contrived) example, to eliminate any bickering about whether the outcome of nil’ing *could* be bad, or not.

I may be dense but I honestly don’t see how the ridiculousness of the example takes away from its value as an exercise of the logical argument being made. Rather, it seems problematic that you would stretch the theme of my example and use it as an excuse to (presumably) discount my entire argument.

I think we need names for the two sides: For nil-in-dealloc folks, let’s call them the “Tealloc Party”. (That name is remarkably suitable in retrospect.) Let’s call the other side “libreleasians.” At least that can put everything in perspective; ever noticed how the things that cause the most controversy either don’t have a true or false answer or require an advanced knowledge in the subject?

By the way, loved your nuclear missile example, I thought it was hilarious and exemplified the point with a dash of humor that is missing from so many people’s lives. I’ve never understood why people get so upset about these things on the internet, which is why I love writing/reading posts like this. It’s like internet natural selection.

P.S. I don’t nil my ivars in dealloc, because I have more interesting things to do.

Jean-Denis – I actually don’t have a problem with your using that pattern; in fact, if you are using synthesized ivars, you don’t have any other choice. [UPDATE: Mike Ash pointed out to me that you can actually still access the instance variable directly now with synthesized ivars.]

I suppose I may not have been clear, but what I meant to express by this post was mainly that I think it’s a waste of effort to go out of one’s way to nil out the ivars if you don’t have to, and furthermore, that the argument for doing so in order to mask “mysterious bugs” is a bad reason to do it.

For the most part, I think this is a “it doesn’t matter” issue. But I reacted mainly because I don’t like the philosophy of teaching a way of doing something because of the mysterious bug-masking “benefit.”

Daniel – your original argument was that setting the ivar to nil would not be safe: “But is that safe? Not if it changes the behavior of your code in unpredictable ways.”

But it’s not unpredictable, we know the behavior of messages sent to nil objects.

The nuclearMissile controller argument creates a visceral reaction, clouding the mind. Your example demonstrates a bad design flaw in your controller (that allows it to so easily and inadvertently launch missiles). I think if you’re actually putting critical (and destructive) code in like that (without extra checks and protections), setting ivars to nil or not is the very least of your problems.

That said, a quick look at some recent code I’m working on makes me believe I tend to only explicitly nil out ivar references of objects that aren’t directly owned by the object.

That is what macros are for: to help in situations like this one. You get the best of both worlds. You create a macro that does not set to nil, to catch bugs while you test, but when you go to production, it nils your ivars.
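A plain-C sketch of that build-dependent trick, with `RELEASE_IVAR` as an invented name and `NDEBUG` standing in for a release build:

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical macro: in debug builds, leave the pointer dangling so
   any use-after-free crashes loudly during testing; in release builds
   (NDEBUG defined), null it out so a stray access degrades into a
   harmless no-op for users. */
#ifdef NDEBUG
#define RELEASE_IVAR(p) do { free(p); (p) = NULL; } while (0)
#else
#define RELEASE_IVAR(p) do { free(p); /* left dangling on purpose */ } while (0)
#endif
```

In Cocoa terms the macro would wrap `release` rather than `free`, with the same conditional nil’ing.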

Consumer perception of your app crashing or not crashing can be a big deal. If your car is not getting good gas mileage, it can be the side effect of various things going wrong under the hood, but at least it still runs and gets you to where you need to go.

If you are letting really nasty bugs out in your product, then you might need to work on your testing process a bit better.

Oh, also I forgot to mention: sending messages to dangling references is not a guaranteed crash. If in the meantime a new object was allocated and put in the place where your dangling diplomat points to, you will send messages to the new object.

If you’re unlucky, that new object understands those messages. So you could be commanding the enemy missiles to nuke yourself and wouldn’t even notice it till it’s too late.

If you want to feel the pain of messaging a nil pointer, try reading a float or double from it, or try getting a CGPoint or any other struct returned by value, or anything else over sizeof(void*) (currently 32 bits in iOS).
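Illustrative only (`lastKnownPosition` is a made-up method, and the exact behavior depends on the runtime and architecture):

```objc
// Suppose mDiplomat was released and nil'ed in dealloc.
CGPoint where = [mDiplomat lastKnownPosition];   // struct return from nil
// On the modern runtime the struct comes back zero-filled, but on older
// runtimes/architectures a struct-returning message to nil could leave
// the result uninitialized: garbage coordinates, no crash, no warning.
```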

And it can be equally painful, as was already pointed out, having code simply not make the call it was supposed to. You won’t see (and your testing team won’t see) that one call to the data layer failed because the DAO pointer was nil for a millisecond during deallocation after the thread erroneously referenced the still-valid-but-about-to-be-destroyed object, especially if you have save points elsewhere that get called successfully. Your program could run happily along for quite some time until someone quits out of the app right after that failed sequence point, and you’ll be scratching your head for months at this weird, intermittent data loss problem in a program that otherwise seemed to run just fine.

If the pointer were left dangling, you’d know about the bug after your first crash report (using your own custom crash reporter, not Apple’s thing), and more importantly your data integrity would be maintained.

To make matters worse, what if your multithreaded program were to access the pointer after deallocation, but before reassignment to nil? Now you’ve got the WORST of both worlds!

And remember that we’re talking about setting to nil in the dealloc method here. Do you really want your faulty multithreaded program to get garbage data back from a nil pointer that it doesn’t realize is bad? If so, I certainly would murder you in your sleep the day after I inherit your code.

Actually, while we’re at it, why not make integer division by 0 return 0 instead of crashing?

Or how about making uncaught exceptions not crash the app at all? Just restore the stack frame and go on our merry way as if nothing happened!

Or how about having index out of bounds just return a nil object rather than throwing an exception?

Hell, we could even make the Objective-C runtime smart enough to just treat an uninitialized or deallocated pointer as nil while we’re at it.

We could build an Objective-C that NEVER crashes! Forget what all those brainiac computer scientists were thinking when they designed the language and runtime system; it’s better to have bad data and/or incorrect operation than have the program actually shut down, right?

Karl – without an obvious target for your sarcasm, I think it loses its zing. Is it supposed to be a response to my post, or to one of the comments? If it’s a sincere response to my post, it sounds like you are misunderstanding my sentiment, which is evidently right in line with yours: let’s crash early and often, and not go out of our way to mask bugs.

Daniel, I am in complete agreement with you. The facetiousness was directed towards the “defensive programming” habit of setting deallocated pointers to nil “just in case”.
The most common argument you hear is that they’d rather the program diverge from its designed behavior than crash.