Why free memory on application quit?

OSC: The situation in question came up because, due to a design change by the client, we had to temporarily instantiate a good chunk of a game a second time. Now, great parts of this were doing a mish-mash of proper cleanup or not, or even double-deleting at times. It never used to matter because the game would just quit. But try to instantiate and eventually delete a second one, and all hell breaks loose. Had the game done proper cleanup, this would never have been an issue.

FreakSoftware Wrote: sohta: Did you know that every Mac OS X Cocoa application out there does not free all of its memory before it quits? After a certain point in the termination process is reached, the whole process just exits rather than release every single object first.

I am well aware of this. I would never dream of shipping a game that does proper cleanup on quit. However, it still does it during most of the development cycle.

Just curious: is OS X going to automatically deallocate FBOs, textures, etc. in VRAM on a video card if the application just quits? I suppose if the application exits and the context gets destroyed, that would somehow trigger the video card to clean up? Would it depend on the driver implementation, and could this be different on other platforms?

I try to deallocate memory. But having the OS do it for me on exit is nice.

sohta: So now you're doing more unnecessary work up-front because you were bitten once by an impossible request from a client? That's not a good way to build software. It's called Speculative Generality, and it is widely regarded as a Very Bad Idea™.

macnib: If there is any trace of your process left, anywhere, after it exits, that's an OS bug (certain bizarre POSIX semantics aside, perhaps).

To add to what Seth said: no application written in any garbage-collected language (Ruby, Python, Java, ObjC-GC, Haskell, Scheme, etc.) ever cleans up before quitting.

In all seriousness though, fighting back against Speculative Generality can be tough sometimes. Cutting things down to a bare-bones "less is more" approach can pay off big though. The fewer the features, the fewer opportunities there are for bugs. The fewer lines of code to look at, the easier it is to find mistakes. I know those seem like total "duh!" statements, but I've seen so much crappy, bloated code out there that apparently many developers don't think like that when they're coding [I'm often guilty of code bloat too].

I don't hate the language, but it's hard not to take a jab at it once in a while, when the opportunity presents itself.

The anal-retentive memory management thing certainly isn't exclusive to C++, although I do generally tend to speculate that C++ fans are often the biggest speculative generalists. [yes that was another cheap jab for the fun of it, maybe not remotely accurate, but fun for me all the same!]

It's not just finalizers, either. If you put logging statements in your garbage-collection C functions, the collector will call those too. I assume this is because, although highly discouraged, finalizers and destructors can be used to do more than free resources.

AnotherJake Wrote:That's weird. Like what else besides freeing resources would be appropriate in a destructor?

IO objects are a good example. They close the stream when they are collected. You don't want them to just leave it open because it may be a long time before the program exits.

Now consider that you've built some buffered IO object on top of that. If it was simply allowed to be GCed, it may still have data buffered that hasn't been written to the physical stream. If the stream was to be closed before the buffered data is written to it, it would corrupt the data.

OneSadCookie: Again, I assume it has to do with it being impossible to tell which destructor functions are safe to skip, and which ones aren't.

Back to the original topic. I once thought that it was good practice to free everything before the program quit, but have since come to agree with the consensus here. It doesn't actually benefit anybody, so why bother?

Right, I can see the technical reasons being sound here, but I would feel so dirty! I'm cleaning everything else up everywhere else, and it would seem a terrible oversight not to be thorough with it. :/

Cleaning up resources, such as sockets, is a good idea, but RAM is just reclaimed by the OS anyway. I think the anti-pattern here is that people rely on destructors to free up resources when quitting an application.

Skorche Wrote: IO objects are a good example. They close the stream when they are collected. You don't want them to just leave it open because it may be a long time before the program exits.

Now consider that you've built some buffered IO object on top of that. If it was simply allowed to be GCed, it may still have data buffered that hasn't been written to the physical stream. If the stream was to be closed before the buffered data is written to it, it would corrupt the data.

That's a pretty good example. I was thinking that an IO buffer should be strictly handled by the runtime environment when things shut down, but there doesn't seem to be anything inherently wrong with letting objects finish up to prevent data corruption.

If I understand correctly, Google Chrome (Chromium) treats each tab, window, and plug-in as its own separate process, partly so it can avoid piecemeal cleanup altogether -- instead, it simply kills the process when a tab is reloaded or closed, and the OS reclaims everything at once. You'd need a copy of Windows to see it in action, however, as I understand the Chrome developers have yet to release ports to any other system.