
Free video memory efficiently

I have a multi-process application, and each process owns a very big 3D texture. Only one process is active at a time, while the other processes run in the background and don't need the graphics resources. Since video memory is very limited, I want to free the texture memory owned by the current process when switching to another process. The only way I know to free a texture is "glDeleteTextures", but that seems inefficient, as I switch processes very frequently. I also considered a global texture manager shared among all processes, but according to http://www.opengl.org/discussion_boa...ween-processes it's almost impossible. Is there any other way to do this?

There is the GL_ARB_invalidate_subdata extension that was introduced this past summer. As it's very new it will only be supported by newer drivers. You should be able to invalidate a 3D texture with that. However, you'd still likely have to upload the texture again when the context switches back to your app.

Out of curiosity, how big is "a very big 3D texture"? The GPU should swap out the other texture (or parts of it, perhaps) when it needs the space. Is there a specific reason you feel you need to manually intervene?

Using GL_ARB_invalidate_subdata doesn't necessarily free the memory used by the texture. It is only meant to invalidate its contents.

Disclaimer: This is my personal profile. Whatever I write here is my personal opinion and none of my statements or speculations are anyhow related to my employer and as such should not be treated as accurate or valid and in no case should those be considered to represent the opinions of my employer.
Technical Blog: http://www.rastergrid.com/blog/

My thought was that by invalidating the texture, it would be a more likely candidate to be thrown out of VRAM if the GPU ran into a memory shortage. Completely implementation dependent of course, but it seems like something reasonable to try.

Doesn't orphaning the texture with size == 0 also release whatever memory was previously allocated? At least I would expect implementations to behave like that, though I've never read anything like that in the spec.
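The zero-size orphaning idea being asked about would look something like this (`orphan_volume` is an illustrative name); as the next reply points out, the spec does not guarantee the driver actually releases the old storage:

```c
/* Sketch: re-specify the texture's level-0 image with zero extents, in the
 * hope that the implementation releases the previous storage.  This is a
 * driver-dependent behavior, not something the spec requires. */
#include <GL/gl.h>

void orphan_volume(GLuint texture)
{
    glBindTexture(GL_TEXTURE_3D, texture);
    glTexImage3D(GL_TEXTURE_3D, 0, GL_RGBA8, 0, 0, 0, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
}
```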

While implementations can potentially throw out the storage of a texture/buffer when the invalidating functions are called, or when "orphaning" them by passing a size of zero, the spec doesn't require it. Personally, as a developer, I would not use techniques that rely on potential, or even actual, behavior of implementations if it is not guaranteed to be the same on all implementations.

A perfect example is buffer orphaning (or renaming, or whatever you want to call it), which can easily be done explicitly in the application by using multiple buffer objects, or a single buffer object with unsynchronized maps and sync objects.

"very big" means hundreds of megabytes, if two processes exist simultaneously, video memory cannot afford.I don't know how openGL handle this. Does it has a cache or something? As the data is very big, CPU memory cannot afford many processes at the same time either, therefore I have a CPU file-mapping.When switching to other process, current process would delete the memory and keep it in mapping files.As I have my own memory management, I don't know whether opengl's cache is necessary and efficient enough.