For an assignment, I need to allocate a dynamic buffer, using malloc() for the initial buffer and realloc() to expand that buffer if needed. Everywhere I use (re|m)alloc(), the code looks like the following:

Calling perror with no argument results in undefined behavior; a null pointer argument is valid, though. Next time include the right headers so the compiler will give you an error when you write broken code like this.
–
R.. Nov 29 '10 at 16:17

It's a bad idea in the form presented, because in anything other than a trivial program written for an assignment, you want to do something more useful/graceful than bailing out. So best not to get into bad habits. This isn't to say that wrappers around allocation are bad per se (centralizing error handling can be a good thing), just that a wrapper that you don't check the return value of (e.g., that doesn't return at all on failure) is a bad idea, barring your having provided some mechanism for allowing code to hook into the bail-out logic.

If you do want to do it in the form you've presented, I'd strongly recommend using a name that's more clearly distinct from malloc than Malloc is. Like malloc_or_die. :-)

In anything other than a trivial program, memory errors usually occur long after the OS has become crippled by paging activity, and even if you do get an out-of-memory error before that happens, there is usually not much you can do beyond crashing. The best strategy, particularly in non-trivial programs, is to avoid running out of memory in the first place.
–
Marcelo Cantos Nov 29 '10 at 10:25

What other sensible options do you really have when you're out of memory?
–
Simone Nov 29 '10 at 10:25

Yes and no... Yes because in general you might have other resources to clean up before the exit(); no because out of memory is not the kind of error that most programs can recover sensibly from, and the OS will clean up most resources (apart from possibly IPC stuff) automatically.
–
j_random_hacker Nov 29 '10 at 10:26

@Marcelo: Fair point, although of course it totally depends on what OS you're dealing with (embedded is different than desktop, for instance). @Simone: Releasing some memory. ;-)
–
T.J. Crowder Nov 29 '10 at 10:26

@T.J. Crowder: Agreed, it depends heavily on the platform/OS. Most of my work is in 64-bit environments with tons of virtual memory.
–
Marcelo Cantos Nov 29 '10 at 10:30

In most instances, the attempt to use a null pointer will crash the program soon enough anyway, and it will make it easier to debug, since you get a nice core dump to wallow around in, which you don't get if you call exit().

The only advice I'd give is to dereference the returned pointer as soon as possible after allocating it, even if only gratuitously, so that the core dump can lead you straight to the faulty malloc call.

There is rarely much you can do to recover from memory exhaustion, so exiting is usually the right thing to do. But do it in such a way as to make the post-mortem easier.

Just to be clear, though, memory exhaustion usually occurs long after the OS is crippled by page-swapping activity. This strategy is really only useful to catch ridiculous allocations, like trying to malloc(a_small_negative_number) due to a bug.

Then it's probably better to abort() the program, because if you don't use your pointer right away it may be a PITA to find out why it's null later.
–
Antoine Pelisse Nov 29 '10 at 10:32

@Antoine: Hence my advice to dereference the pointer ASAP. As discussed under @T.J. Crowder's answer, getting a non-NULL pointer may not guarantee that the pointer is usable. Nothing beats actually dereferencing the pointer.
–
Marcelo Cantos Nov 29 '10 at 10:43

@Marcelo, @Antoine: As someone noted in a comment on another answer, (un)fortunately on today's systems you will never see a NULL pointer, since virtual memory is almost unlimited. If you claim too much, your program will first slow down dramatically as the system swaps in and out, and only crash much, much later, once all your swap is used.
–
Jens Gustedt Nov 29 '10 at 10:43

@Marcelo: what do you mean by works :) It doesn't hurt much, but will not detect anything either...
–
Jens Gustedt Nov 29 '10 at 10:46

@Antoine: Agreed; ultimately, the only solution is to avoid running out of memory in the first place. Easier said than done, of course, but hey, that's what they pay us for isn't it?
–
Marcelo Cantos Nov 29 '10 at 10:48

The ideas that "checking for malloc for failure is useless because of overcommit" or that "the OS will already be crippled by the time malloc fails" are seriously outdated. Robust operating systems have never overcommitted memory, and historically-not-so-robust ones (like Linux) nowadays have easy ways to disable overcommit and protect against the OS becoming crippled due to memory exhaustion - as long as apps do their part not to crash and burn when malloc fails!

There are many reasons malloc can fail on a modern system:

Insufficient physical resources to instantiate the memory.

Virtual address space exhausted, even when there is plenty of physical memory free. This can happen easily on a 32-bit machine (or 32-bit userspace) with more than 4 GB of RAM+swap.

Memory fragmentation. If your allocation patterns are very bad, you could end up with 4 million 16-byte chunks spaced evenly 1000 bytes apart, and unable to satisfy a malloc(1024) call.

How you deal with memory exhaustion depends on the nature of your program.

Certainly from a standpoint of the system's health as a whole, it's nice for your program to die. That reduces resource starvation and may allow other apps to keep running. On the other hand, the user will be very upset if that means losing hours of work editing a video, typing a paper, drafting a blog post, coding, etc. Or they could be happy if their mp3 player suddenly dying with out-of-memory means their disk stops thrashing and they're able to get back to their word processor and click "save".

As for OP's original question, I would strongly advise against writing malloc wrappers that die on failure, or writing code that just assumes it will segfault immediately upon using the null pointer if malloc failed. It's an easy bad habit to get into, and once you've written code that's full of unchecked allocations, it will be impossible to later reuse that code in any program where robustness matters.

A much better solution is just to keep returning failure to the calling function, and let the calling function return failure to its calling function, etc., until you get all the way back to main or similar, where you can write if (failure) exit(1);. This way, the code is immediately reusable in other situations where you might actually want to check for errors and take some kind of recovery steps to free up memory, save/dump valuable data to disk, etc.

I think that it is a bad idea: first, because checking the return of malloc doesn't buy you much on modern systems, and second, because such a call gives you a false sense of security that all your allocations are fine.

(I am supposing you are writing for a hosted environment and not embedded, standalone.)

Modern systems with a large virtual address space will just never return (void*)0 from malloc or realloc, except perhaps when the arguments are bogus. You will encounter problems much, much later, when your system starts to swap like crazy or even runs out of swap.

So no, don't check the return of these functions; it doesn't make much sense. Instead, check the arguments with an assertion: for malloc, that the size is not 0, and for realloc, that the pointer and size are not both 0 simultaneously. If such an assertion fires, the problem is not inside malloc or realloc but in the way you are calling them.