I tried adding the /3GB switch to BOOT.INI as a suggested fix for a game crashing. I also set the Large-Address-Aware flag bit in the game EXE, but it doesn't appear to do anything, for this or other programs. Task Manager still shows everything being loaded below the 2GB mark. Is there any measurable difference I should expect to see with /3GB and LAA enabled?

All 32-bit programs/processes in Windows (even when running on a 64-bit OS flavour) are limited to 2GB user address space. When a program reaches that limit, it gets terminated by the OS. The /3GB switch will allow a 32-bit process/program to potentially use 3GB before being terminated.

You won't notice any difference other than a process which is hitting that limit not getting terminated at 2GB... if that process uses more than 3GB, you will see the same thing happen when that new limit is reached even with the /3GB flag. It will just terminate. If you're lucky, you may get an "Out of Memory" message box popup, although in my experience this hasn't always been the case.

CAD may require this flag, since a lot of the time the size of the data used is not known in advance (e.g. an AutoCAD user could be a designer working on a small item, or an architect working on a massive building). A game should know in advance what its max usage should be in most cases, so the developers should have a rough idea of the footprint in the various states of the program, and so in most cases should not need the flag (unless the requirements say so). The exception is something customisable, or with, say, any number of 'enemies', which means there is no fixed max memory footprint: the game's memory requirements are fully dynamic and could hit that limit.

If your 32-bit game is reaching that limit, I would say that is bad design. Its footprint should be well defined and thus restricted so it doesn't get near that limit (and doesn't get prematurely terminated)... or it should be deployed as 64-bit binaries only.

What game?

[EDIT:] Just read that document. It says a 64-bit OS would allow a 32-bit process 4GB (the maximum addressable with 32 bits, 2^32), however in my experience it's still been 2GB (64-bit Windows 10 Pro), so you still need the /3GB flag in this scenario. I'll check this next time I'm working in 32-bit... certainly 32-bit debug builds do go higher than 2GB, but 32-bit release builds always seem to terminate when they hit 2GB. o.0 ... not that this matters in your case though 😀

All 32-bit programs/processes in Windows are limited to 2GB user address space.

In 32-bit versions of Windows, yes.
The reason for this is that 32-bit address space only gives you 4 GB in total. So the choice was to use 2 GB of address space that can be virtualized on a per-process basis, and the other 2 GB can be used for kernel and shared objects.

The /3GB switch pushes that boundary so that you can have 3 GB of address space per process, while the remaining kernel/shared objects have to be in the 1 GB of address space that is left (which may not actually be 1 GB, if PAE is enabled, but only the kernel would have to be aware of that... and afaik only server versions of Windows even go there, because of issues with drivers).

In a 64-bit OS, you're not limited to a 32-bit address space for kernel and shared objects at all, so there's no reason to keep the 2 GB limit anyway. There is no concept of the /3GB switch at all.
Each process can be in its own virtual 32-bit environment that can appear to have the full 4 GB of address space available to it (minus whatever shared objects and things need to be mapped in there, but the kernel itself is 64-bit, so it doesn't have to be in there, just some stubs to thunk between 32-bit and 64-bit mode).

All 32-bit programs/processes in Windows (even when running on a 64-bit OS flavour) are limited to 2GB user address space. When a program reaches that limit, it gets terminated by the OS.

There is no automatic termination by the OS when an application reaches some memory limit. All the OS does is refuse to give the application more memory if there is no free contiguous chunk of address space available of the requested size. The application keeps working and can potentially ask for a smaller chunk of memory, which may succeed depending on circumstances.
Now, the truth is that most applications are written in a way that leaves them poorly prepared for such memory allocation failures. Instead of displaying an "out of memory" message (which may also fail if we are really close to the limit) they just terminate themselves, or pretend that nothing happened and try to access that non-existent memory. Only then does the OS terminate the program.

There is no automatic termination by the OS when an application reaches some memory limit. All the OS does is refuse to give the application more memory if there is no free contiguous chunk of address space available of the requested size.

Yes, I suppose this is more the application not handling that exception and terminating through its own fault rather than the OS doing it.

Azarien wrote:

The application keeps working and can potentially ask for a smaller chunk of memory, which may succeed depending on circumstances.

Well yes, as long as it is still below that limit. I can't speak for others, but if I was near that limit when requesting contiguous memory chunks (as opposed to non-contiguous ones), I'd want it contiguous for a reason 😉. So it's game over in most cases anyhow without resorting to memory-mapped files or asking the user to use the /3GB flag. It then becomes a question of the elegance of the swan dive from that point...

Azarien wrote:

Now, the truth is that most applications are written in a way that makes them not well prepared for such memory allocation failures. Instead of displaying an "out of memory" message (which may also fail if we are really close to the limit)

Guilty as charged 🙁. I've seen that message displayed by some of my applications when reaching the limit, though, and I didn't write it. Pretty sure it's an OS or runtime message box which may or may not choose to display, afaik. Might be the C# runtime? The C++ runtime would crash out due to an unhandled std::bad_alloc exception rather than give a specific error message saying it.

Scali wrote:

Each process can be in its own virtual 32-bit environment that can appear to have the full 4 GB of address space available to it

Are you sure it would allow this by default for a 32-bit process on Windows 64-bit without some of the mentioned flags?

...goes just over 2GB, but compiling as x86 bombs out for me on Windows 10 Pro 64-bit (compiling as x64 is fine, of course). Am I missing a configuration or something with my OS (quite possible, since IT set up this machine)? I'm not sure the default allows a 32-bit process the full 4GB on 64-bit flavours?

[Edit:] ah I see, the /LARGEADDRESSAWARE flag allows a process over 2GB; on a 64-bit OS this gives the full 4GB, while on 32-bit you still need the /3GB switch.

Scali wrote:

The /LARGEADDRESSAWARE flag for the linker...

What's strange is that this is not enabled for debug configurations, yet according to Task Manager, my debug builds are going over 2GB? Release builds not so lucky... booo... meh... 64-bit ftw

Are you sure it would allow this by default for a 32-bit process on Windows 64-bit without some of the mentioned flags?

As far as I know, they do get their own 4 GB virtual environment; the flag mainly changes the behaviour of memory allocation functions. As in, do they cut off at the 2GB mark or not? In other words: will they ever return pointers that could be interpreted as negative values when using signed arithmetic?

spiroyster wrote:

What's strange is that this is not enabled for debug configurations, yet according to Task Manager, my debug builds are going over 2GB? Release builds not so lucky... booo... meh... 64-bit ftw

That may have other causes though?
There will be code in the debug runtime linked to your application, which itself is obviously LAA-enabled. Aside from that, there will be thunking to 64-bit, so there may be more than 2GB allocated to your debug build, even though it's not related to your own code, but just 'background' stuff from the debug layer and the OS it interfaces with?

The required combination for this to work:
- the software has to be large address aware (linked with /LARGEADDRESSAWARE)
- /3GB in boot.ini
- /PAE in boot.ini to enable the 36-bit physical address space - if you have more than 4GB of physical memory available.
I think you need Windows 2000 Pro and later versions for the flags to be considered.
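For reference, the boot.ini switches go on the OS entry line; something like this (the disk/partition path and the OS name are illustrative, /fastdetect, /3GB and /PAE are the real switch names):

```
[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows XP Professional" /fastdetect /3GB /PAE
```

The /LARGEADDRESSAWARE part, by contrast, lives in the EXE itself, so it's set at link time (or after the fact with editbin), not in boot.ini.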

- /PAE in boot.ini to enable the 36-bit physical address space - if you have more than 4GB of physical memory available.
I think you need Windows 2000 Pro and later versions for the flags to be considered.

As far as I remember, only Server/Workstation editions of 32-bit Windows ever supported more than 4GB of physical memory.
One reason why is that device drivers have to be aware of PAE. Server/Workstation versions of Windows were the first to enforce WHQL certification, which meant that only a subset of drivers could be used on Server/Workstation Windows, and Microsoft could filter out broken drivers, so they could ensure that /PAE works.
On consumer versions of Windows, this was not an option, so they just stuck to 4GB to make sure the drivers wouldn't break.

As far as I remember, only Server/Workstation editions of 32-bit Windows ever supported more than 4GB of physical memory.
One reason why is that device drivers have to be aware of PAE.

Technically, PAE mode is always enabled in 32-bit Windows since Vista, I think, but due to the faulty-driver issue the usable physical memory is artificially capped at 4GB on non-server Windows editions, which makes it essentially pointless. There are hacks/patches to remove this limit, however. They work for some people and don't work for others, depending on the installed drivers.

Technically, PAE mode is always enabled in 32-bit Windows since Vista, I think, but due to the faulty-driver issue the usable physical memory is artificially capped at 4GB on non-server Windows editions, which makes it essentially pointless.