malloc/realloc ever returning 0 on modern systems?



As far as I know, modern operating systems from Win95 to Vista use swap files if the real memory is full. On Windows XP I tried to malloc a much bigger amount of memory than I have, and the system was still stable. It never returned NULL.

I am currently reading a C book, and in one chapter it says that the operating system may not be able to find a free block of that size, and that you should then request less memory, for example half as much. My problem is that this error checking adds loads of overhead to my code.

In which cases is this still needed these days? Programming embedded devices?

Your question falls into two parts.
Is it likely that malloc will fail in a small application? No. Is it possible that it can fail under certain circumstances? Most certainly, and here's why:

What if your swap file is also full, at its maximum size?

What if you try to allocate more memory than the processor can physically address, or than a single process can hold (2GB in a standard Windows setup)?

What if you ask for 512MB when there is no contiguous 512MB address range left in that process? [Imagine we have 2GB per process: you allocate 3 x 512MB, then release the "middle" 512MB, allocate another 64MB, then try to allocate 512MB again.]

There are multiple reasons why malloc/realloc could fail, and the above are just a few of those.

Edit: As to the overhead of checking for zero: it's very easy for the compiler to check for zero. Compared to the work needed to allocate memory inside malloc, the code to check for NULL is marginal. If it's a large portion of your code, then you are doing something wrong - perhaps you should write a "safemalloc" that checks for NULL and reports the problem, assuming you don't want to let the user try to recover from the situation. Also, if you are calling malloc in many different places, there may be better ways to write your code - but I haven't seen your code, so I can't really say.

There's generally no right or wrong way to deal with "not able to allocate". If you are allocating a buffer where a smaller size only hurts performance but otherwise makes no difference, then halving the size until you get something is better than crashing or erroring out because you can't allocate one large buffer. At the same time, it is very unlikely that 8K would ever fail to allocate, so if that fails, it will most likely also fail at 4K, 2K, 1K, 512B and 256B - because there simply isn't any more memory.

In other circumstances, you may need 8K, and allocating less simply won't do any good, since you need one contiguous section of 8K - that's it. 7.9K or nothing would be equally bad, because it's 8K of data you need to store! In that case, erroring out is just about the only solution (asking the user whether he can stop something else on the system is perhaps an option).

Note that fflush(stdin) is undefined behaviour, and most likely will not do anything useful. See the FAQ entries "Why shouldn't I use fflush(stdin)" and "How do I flush the input".

The most likely scenario for malloc failing is when the system is simply low on memory, and that can happen for just about any reason. Not having free space on the disk may also affect things: if you have an auto-grow swap file and no more space on disk, then the swap file may not be able to grow, even if you are below its limit.

I think checking if it's NULL is no overhead. In case it's NULL, I just save all data to disk, show an error message and terminate the program.

How are you going to save the data to disk? You are out of memory. You probably can't even open a file at that point, since fopen() needs to call malloc() to get a chunk of memory to create the FILE object.

What if displaying the error message requires memory to be allocated? Etc...

If you're trying to allocate, say, 20 megabytes and it fails, there might in theory be 19.99 megabytes available, but that's not something you can determine, or count on.

One strategy is to allocate a piece of "panic memory" at the very beginning of the program. If you run out of memory, you can free() this chunk, which hopefully will give you back enough memory to clean up and exit gracefully.


I agree that this is a reasonable solution, but if some other process is allocating memory like mad, then you may still not be able to allocate memory after freeing some up.

If the panic memory is on the same page as some other memory you hold (which is pretty likely), the page won't be freed and another program can't steal it.
Not that a memory page is very large ...

True indeed. But if your panic memory is "big enough to make a difference", then it's probably a fair bit over 4K (the page size on most processors - some have larger pages, but all x86 processors use at least 4KB pages) - though that depends on what the spare memory is used for.

Thinking more about it, it's unlikely that a small chunk of freed memory would actually be given back to the OS: since asking the OS to shrink or enlarge the heap is fairly expensive, the runtime library will hold on to freed memory and re-use it, rather than reduce its usage by calling the OS, unless there is a LARGE amount of free memory. This makes such a scheme more workable.

Unfortunately, if the freed memory area was never "committed", e.g. we just do

Code:

char *reserveMemory;
...
reserveMemory = malloc(100000);
...

without actually touching any of that 100K area of memory, the memory may well just be "reserved" for this process, and not actually backed by real memory. This would mean that if the swap file is full, we're no better off, because we haven't freed any space in the swap file (this memory was never in the swap file, so freeing it just marks a range that was never given any real memory as free).

Yeah, so you'd have to fill the memory right after allocating it, e.g. by using calloc.

And it seems that it would be best not to free the memory at all, but instead use it directly for cleanup tasks.

Yeah, that would work, except that if you call something like fopen(), it will call the standard malloc rather than use memory from our reserve. It would be fine if you only need memory for your own internal purposes (you could replace all your calls to malloc with your own "safeMallocWithReserve"). But that may not catch everything - in C++ you could replace operator new.
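A "safeMallocWithReserve" along those lines might be sketched as follows. Everything here is illustrative: a real version would need thread safety, and note that memory handed out from the reserve must never be passed to free().

```c
#include <stdlib.h>

#define RESERVE_SIZE (64 * 1024)

/* Static reserve used only after malloc starts failing.
   For the reasons discussed above, a real version would want
   to touch these pages at startup to make sure they are committed. */
static char   reserve[RESERVE_SIZE];
static size_t reserve_used;

/* Try the normal heap first; on failure, hand out a slice of the
   reserve so cleanup code can still run. Reserve slices are for
   cleanup only and cannot be free()d. */
static void *safeMallocWithReserve(size_t n)
{
    void *p = malloc(n);
    if (p != NULL)
        return p;

    n = (n + 15) & ~(size_t)15;   /* keep 16-byte alignment */
    if (reserve_used + n <= RESERVE_SIZE) {
        p = reserve + reserve_used;
        reserve_used += n;
        return p;
    }
    return NULL;                  /* reserve exhausted too */
}
```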