Large vertex buffer and out-of-memory error

Can anyone confirm that the maximum vertex buffer size you can allocate is limited by the amount of memory available to the app (not GPU RAM)?

I have a 32-bit app that is close to its memory limit: I get an out-of-memory error when I try to allocate a buffer of 250,000,000 bytes (I know that's large) as temporary storage for vertices.

I modified the code to load the vertices in chunks, but I still create a vertex buffer of this size. When I call glBufferSubData or glMapBufferRange, I get an out-of-memory error from OpenGL.

I created a small 32-bit test app and could create and load a vertex buffer of 500,000,000 bytes, i.e. twice the size.

Also, if I change the app to 64-bit, will I be able to load larger vertex buffers?

(As an aside, I can create three 100,000,000-byte vertex buffers with no problem.)

Buffer objects don't have to live in actual GPU-only RAM. They can be transferred in and out at the driver's discretion, and may even have a backing copy in CPU memory.

Where they go, how they're allocated, and what GPU or CPU memory they use is ultimately up to the driver. It can differ between graphics cards, driver versions, or anything else.

In short, there's no single answer that applies to all, or even most, hardware.

That being said, mapping a buffer generally means mapping its memory into your address space. If you don't have that much contiguous virtual address space available, there's not much you can do. Using glBufferSubData can incur similar issues.

Thanks Alfonse. I was hoping glBufferSubData would only map the size of the sub-data, but this does not seem to be the case with the current NVIDIA driver. I am going to change my code to 64-bit to see if that changes anything. It looks like I am going to have to re-split the TINs, after they have been triangulated, into smaller chunks with repeated vertices.

Switching to 64-bit will alleviate the virtual address space issues, but you'll still be bound by the amount of physical RAM, of course. With 32-bit apps, it's very easy to run into a situation where there is no free block big enough to allocate a contiguous 250 MB chunk, especially if you're mixing allocations of large and small blocks of memory (fragmenting the address space).

Using multiple smaller buffers is a good way to avoid this if you're stuck with a 32-bit OS as a target.