Concerning delete/delete[] at program exit

Originally Posted by brewbuck
So we should poison the operating system so it explodes if someone does something Elysia doesn't ...

It should not be safe. Or should not be possible, or something. Optimization or not, I do not like the idea of doing this at all, for all reasons mentioned above.

Most programs are liable to crash without cleaning up after themselves. Therefore the ability of an OS to safely clean up is a vital feature that can be relied upon. If someone has an OS that cannot properly handle an application crash, then I suggest they get a new one.

And how many programs can safely shut down and do not have some work to do when shutting down? Saving settings, perhaps? It's far better and safer to just shut down the normal way and let all destructors and cleanup code run.

You're missing my point. There are times when such a feature is a vital requirement -- an application must be able to safely and completely recover from a sudden system shutdown. Given such an application, one may as well take advantage of the feature to speed up the normal quitting process.

Most programs are liable to crash without cleaning up after themselves. Therefore the ability of an OS to safely clean up is a vital feature that can be relied upon. If someone has an OS that cannot properly handle an application crash, then I suggest they get a new one.

Of course. Due to all the poorly written applications out there, the OS just has to have a feature to clean up the mess programs leave behind. Sure. I can get that.

You're missing my point. There are times when such a feature is a vital requirement -- an application must be able to safely and completely recover from a sudden system shutdown. Given such an application, one may as well take advantage of the feature to speed up the normal quitting process.

This is a good thing, but it does not give an application the right to just exit or crash and force the OS to clean up its mess.
As has been mentioned before, shutdown should not take a long time, and if it does, then it's probably not memory-related but destructors that need to be optimized.
And even if it does take a long time, it's far safer to let everything clean up the normal way in a background thread.

Elysia, although I agree with you on this for 99.99% of the programs out there (they should always clean up their own mess), I think the point of this discussion is that this "technique" is an optimization. So, like any optimization, it should be used very carefully and only when the benefit outweighs the cost.

The stability of the OS should not depend on the behavior of user applications. That's the entire point of having an operating system. It's not a patch, it's a fundamental step forward.

I was going to argue this point. Then I realized it was just my opinion and it was irrelevant.

Originally Posted by Elysia

No, I disagree. Even if programs take time to close down, they can do so in the background.

Even shutdown time is something the user could spend doing something else.

Originally Posted by brewbuck

Calling the difference between DOS (a system without memory protection) and any modern OS a "patch" is freaking nuts. The only reason DOS itself didn't have memory protection is because the home computer hardware at the time didn't support it.

I didn't mean it was patching the OS, but rather the application.

Originally Posted by brewbuck

and for that matter how DOES a user application "free the stack" anyway?

Nothing suggested that you have to do that. And you didn't allocate it, so why should you have to free it? And what is a stack anyway?

Originally Posted by brewbuck

Many modern operating systems won't even ALLOW you to give memory back to the OS once you've acquired it. Your program's data size can grow but not shrink.

Really? Can you support this?

Originally Posted by King Mir

If one of the features of an application you are developing is the ability to safely suddenly terminate at any time, then it is safe to call exit() at any time to force such a termination. Since the feature is there, one may as well use it to quickly close the application.

Elysia, I don't think calling it a 'patch' is really correct. Everything since the 386 (using the Intel line as an example) has had protected mode built into the CPU, which allows 32-bit OSes to allocate a completely separate virtual memory space to each application. Unlike the Win 3.1 days, when a GPF could crash all running 16-bit programs, an IPF in 32-bit Windows will only crash the program that caused it. Since Intel provided Windows with a tool for protecting separate memory spaces, Microsoft decided to use it. This isn't a patch; it's a new feature that was virtually impossible on earlier CPUs.

I don't see what memory protection has to do with this.

Originally Posted by Elysia

And how many programs can safely shut down and do not have some work to do when shutting down?

Are you talking about from a library perspective or an application perspective?

That would probably be an awful idea since you have no idea where and why atexit was called.

But Elysia, I think you are taking it all too religiously. I guess very few people would disagree that proper code should delete all resources, and no one would disagree that you absolutely have to delete some resources.

But if you have something like a FLTK_Window which is guaranteed not to do anything (useful) in its destructor, heaven and earth won't collapse if you fail to delete it at program termination.

In the referenced thread someone also mentions that deletes may be omitted so as not to complicate simple examples with irrelevant details, just like error handling for input is omitted in most example programs.

If I tried to enlighten them, they'd probably sneeze at me, saying it's not necessary.
But I'd love to see their faces when something breaks and they have to go to great lengths to fix it simply because they're not deleting what they allocate.
I'd also love to see their faces if the OS didn't clean up their mess. Though I doubt that will ever happen.

Calling the difference between DOS (a system without memory protection) and any modern OS a "patch" is freaking nuts. The only reason DOS itself didn't have memory protection is because the home computer hardware at the time didn't support it.

As has been said, this ability has been in Intel processors for a very, very long time; Microsoft just decided to start using it very late in the game. I believe it all started with the 386, but I do remember some blurbs from way back about the 286 having some type of special 'mode' that was almost like protected mode. However, it was not used often because it was clunky and not very stable. Matsp would probably know more about this than I do.

To me, not cleaning up memory at program exit because the OS will do it for you is both lazy and a hack. To say it is an optimization is quite a stretch. In my 23 years of working with computers I can remember about 5 times I complained about a program shutting down too slowly. Usually programs start up and run too slowly instead. It's not that none of them were slow at shutdown; it's probably a matter of perception. At shutdown I'm done with the application and don't care what it does or how fast it is. At startup or during runtime I do. So optimize the startup and the runtime; I don't really care about shutdown.

I just cannot vote for the side that says not cleaning up memory at exit is a valid practice. We all agree it's not a good practice in an embedded environment, and I say it's equally bad in a non-embedded environment.

Yes, the OS will clean up for you as a courtesy to the other programs, to keep the system stable. It's not designed so that programmers can be lazy and thoughtless at shutdown and rely 100% on the OS to clean up. The end result here is not the issue, and neither is the fact that it will work. It's the fact that, regardless of the outcome, it's not a good practice.

I expected that more people would reply to my post. Oh well. Guess I'll wrap up my train of thought.

Well, I think it all comes down to whether the OS releasing memory is a documented feature of Windows or some API, so that it can be relied upon, rather than just an expectation that happens to hold.

That's all I have to say on the subject.

edit: except for this:

That would probably be an awful idea since you have no idea where and why atexit was called.

It's called by exit(). Well... the function passed to it is. That's what you meant, right?

edit2: and I don't think it should be done in C++, because there might be some sort of compiler-added work in the destructors, so the only way you could do it cleanly in C++ would be with malloc and OS extensions, or with compiler and OS extensions.

I'm with Bubba on this, and I doubt you'll find any API or documentation saying that it does this. Or if you do, I don't think you'll find it says "you can just avoid cleaning up so Windows can do it for you."

I think it's very bad practice too, but nonetheless, it should be safe when used VERY carefully on the right platform, and it should shut down a lot faster. That being said -- don't do it.