That's because of over-committing memory. Basically, the system pretends to have more memory than it actually has, so that when a process asks for X bytes of memory it usually gets it.

This works reasonably well, since programmers are lazy and usually request much more memory than they actually need. However, when programs actually start to use all that memory, the system is screwed, because there is no way it can provide the necessary memory. It will then start killing processes to fix this problem.

Sounds like a bank to me... People have a certain amount of money in their account, and they usually get it when they ask for it... unless, of course, everyone wants their money at the same time, since the banks haven't enough money to fulfill all their obligations. Then they will start to kill themselves.

My worry is reserved for the OP, who appears to know enough to be dangerous (i.e., if you're digging around on a system enough to find those messages, the messages themselves shouldn't have elicited the response they did).

It's not my screenshot, I saw it online and found it amusing. That's why I'm more interested in the silly error messages than the connotations of what's happening.

I've been speaking to the guy that posted the screenshot. Here's the full story for anyone who wants a bit of extra context...

All my machines are set up with Hyper-V: a Windows 8 host (with Cygwin installed) and an Ubuntu VM. That's how I get to enjoy both worlds, a great Windows UI and an awesome Ubuntu command line.

I couldn't connect to my VM yesterday, so I checked the Hyper-V console and found this. This doesn't seem like something that should happen on a machine that rarely gets to 1% memory usage, and I have no idea how it could still be out of memory after killing all my processes.

Also, this is the first kernel panic I've seen on this setup, which is interesting because I've been using it for over a year and haven't seen the Windows 8 BSOD yet.

Right but other OSes like, say, Windows will (shockingly) just never over-commit in the first place. These OSes are known as "good OSes".

I thought the problem of allocating more memory than you actually have was solved more than half a century ago.

Swap space can also be full or not configured (Linux installers usually create a swap space automatically, just like Windows does). The difference between Windows and Linux is that Windows will never allocate a process more (virtual) memory than it has, while Linux will.

E.g., on Linux you can request 10TB of memory (on a machine with 8GB RAM and a 1TB disk)... and you'll get it. At least you think so, because you have a valid pointer. Once you actually start using that memory, you and/or other process(es) will get killed.
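To make that concrete, here's a minimal C sketch (my illustration, not anything from the screenshot). Whether the request succeeds depends on the vm.overcommit_memory sysctl: with 1 ("always overcommit") it will; the default heuristic mode may refuse a request this far beyond RAM + swap.

/* A sketch of the over-commit behaviour described above. Whether the
 * 10 TB request succeeds depends on /proc/sys/vm/overcommit_memory:
 * with 1 ("always overcommit") it will; the default heuristic (0)
 * may refuse a request this far beyond RAM + swap. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    size_t size = (size_t)10 << 40;          /* 10 TB */
    char *p = malloc(size);

    if (p == NULL) {
        puts("refused up front (strict or heuristic accounting)");
        return 1;
    }
    puts("got a 'valid' 10 TB pointer");     /* nothing committed yet */

    /* Touching the pages is what commits real memory; write enough
     * of them and the OOM killer starts picking victims instead of
     * this malloc() ever returning NULL. */
    free(p);
    return 0;
}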

This seems to me like a WTF on the part of Linux (and I'm not a hater; in fact, I've been a user and fan for mumble-mumble years). Is this true of other *NIX, too? What, pray tell, is the rationale behind it?

Um, I still don't get it. The only thing I see that seems somewhat relevant is "[t]he entire virtual address space of the parent is replicated in the child." I don't see a benefit from letting a process think it has more memory available than it could ever possibly use.

Because fork() duplicates the address space as copy-on-write, the total amount of committed memory doesn't increase initially. But as you start modifying the COW pages, committed memory increases, because instead of one shared page you get two non-shared pages. You cannot predict how much memory you'll need as a result. One approach is to reserve the maximum amount of memory you may ever need after fork(); this may be time-consuming.
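Here's a small sketch of that, if it helps (my example, not from the man pages): the fork() itself commits nothing extra, and each page is only duplicated when one side writes to it.

/* Sketch of why fork() motivates over-commit: the child gets a
 * copy-on-write view of the parent's pages, so no extra memory is
 * committed at fork() time; pages are only duplicated when written. */
#include <stdlib.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    size_t size = 512 * 1024 * 1024;        /* 512 MB heap */
    char *buf = malloc(size);
    if (buf == NULL)
        return 1;
    memset(buf, 'x', size);                 /* actually commit it */

    pid_t pid = fork();                     /* cheap: pages shared COW */
    if (pid == 0) {
        /* Each page the child writes is copied on demand; under
         * strict accounting the kernel would have had to reserve
         * all 512 MB up front, even for a child that just exec()s. */
        buf[0] = 'y';
        _exit(0);
    }
    waitpid(pid, NULL, 0);
    free(buf);
    return 0;
}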

Thus, the difference is that on Windows the process unlucky enough to ask for more memory just as the system runs out gets a failed allocation (and usually dies), while Linux will explicitly choose a victim to kill (or sacrifice one of its children).

Ah! It's more efficient to allocate a big chunk of memory the child may never need than to allocate it piecemeal when it does need it. And if it ever needs more than is really available, well, too bad for some unlucky process(es). One can argue about whether this is the optimum solution (obviously, since other OSs made other choices), but at least I now understand the reason. Thank you.

On Windows, short of preallocating memory so you can handle an OOM situation, how do you respond to one? You can't open a new window to inform the user, and you don't have any memory to translate data to on-disk formats. I don't think the C library or kernel will necessarily let you even open a file to dump data to without memory, and you have to make sure any libraries you use don't secretly allocate anything. If you're allocating a large chunk of memory, it's probably a good idea to handle a failure, but if you're making a routine allocation, the work required to handle it seems disproportionate to how effectively you can actually respond.
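For illustration (the xmalloc name is just a common convention, not a standard API), this is the pattern most C programs settle on, which is basically an admission that there's no good response to a failed routine allocation:

/* Illustration only: the abort-on-failure wrapper many C programs
 * use, since recovering from a failed routine allocation is rarely
 * practical. "xmalloc" is a common convention, not a standard API. */
#include <stdio.h>
#include <stdlib.h>

static void *xmalloc(size_t size)
{
    void *p = malloc(size);
    if (p == NULL) {
        /* fputs needs no allocation, so it's about all we can do here */
        fputs("out of memory\n", stderr);
        abort();
    }
    return p;
}

int main(void)
{
    char *buf = xmalloc(4096);   /* routine allocation, no per-call checks */
    free(buf);
    return 0;
}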

Try this answer: fork() followed immediately by exec(), which probably represents 99% of the uses of fork().
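For reference, a minimal sketch of that idiom: everything the child inherited via copy-on-write is thrown away the moment exec() succeeds, which is exactly why committing a full copy up front would be wasted effort.

/* Minimal fork()-then-exec() sketch: the child's COW copy of the
 * parent's address space is discarded the instant execlp() succeeds,
 * so reserving real memory for it at fork() time would be wasted. */
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        return 1;
    }
    if (pid == 0) {                       /* child */
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp");                 /* only reached if exec failed */
        _exit(127);
    }
    waitpid(pid, NULL, 0);                /* parent waits for the child */
    return 0;
}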

The real problem with over-commit is that you get (in practice) "any process can get killed in the middle of any operation and leave something in an inconsistent state" instead of the normal "any memory allocation can fail, as expected by good programmers".

So the difference is mostly theoretical since good programmers don't exist.

It seems that the first would be better in some situations. A program with a persistent memory leak on the order of GB per hour, whose response to a failed memory allocation is to wait and try again later, can kill or cripple any program that depends on forking or allocating memory under strict accounting; under over-commit, it will probably be the first thing targeted when the OOM killer runs. And if it never even bothers to initialize the leaked memory, under over-commit it may have no effect at all.

The reason POSIX/SuS doesn't specify a fork_and_exec() function is that it just duplicates two existing functions and is therefore redundant. If you don't like the fact that fork() clones the entire memory space, then good news: vfork() already exists.

POSIX does specify a fork_and_exec() function: it's called posix_spawn(), and nobody knows about it because fork()-then-exec() has been in UNIX since the before times, so that's what everyone uses.
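For anyone curious, a minimal posix_spawn() sketch doing the same "ls -l" as the fork/exec example above:

/* Minimal posix_spawn() sketch: spawn "ls -l" without an explicit
 * fork()/exec() pair. posix_spawnp() searches PATH like execlp(). */
#include <spawn.h>
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>

extern char **environ;

int main(void)
{
    pid_t pid;
    char *argv[] = { "ls", "-l", NULL };

    int err = posix_spawnp(&pid, "ls", NULL, NULL, argv, environ);
    if (err != 0) {
        fprintf(stderr, "posix_spawnp: %s\n", strerror(err));
        return 1;
    }
    waitpid(pid, NULL, 0);
    return 0;
}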