
The scenario is the following.
I have a fairly old app using DX9 which allows plugin integration. Unfortunately, the plugin integration goes through GDI, so I receive a Paint callback with an HDC. What I do is draw into a Texture2D using DX11 in another process and share the texture with the GDI plugin. It works fine when I share the texture bits through the CPU. Now I want to stay GPU-only and use texture sharing between DX11 and GDI, so I tried to create an ID2D1DCRenderTarget, but I cannot link that to any shared surface...
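For reference, the working CPU path described above can be sketched roughly as follows (function and parameter names are my own, not from the original code): copy the DX11 texture into a STAGING texture, Map it, and push the bits into the plugin's HDC with plain GDI.

```cpp
// Hedged sketch of the CPU sharing path. Assumes a BGRA8 texture;
// all names here are hypothetical.
#include <d3d11.h>
#include <windows.h>

void BlitTextureToHdc(ID3D11Device* dev, ID3D11DeviceContext* ctx,
                      ID3D11Texture2D* src, HDC hdc, UINT w, UINT h)
{
    // Create a CPU-readable staging copy of the rendered texture.
    D3D11_TEXTURE2D_DESC desc = {};
    src->GetDesc(&desc);
    desc.Usage = D3D11_USAGE_STAGING;
    desc.BindFlags = 0;
    desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
    desc.MiscFlags = 0;

    ID3D11Texture2D* staging = nullptr;
    if (FAILED(dev->CreateTexture2D(&desc, nullptr, &staging))) return;

    ctx->CopyResource(staging, src);

    D3D11_MAPPED_SUBRESOURCE mapped = {};
    if (SUCCEEDED(ctx->Map(staging, 0, D3D11_MAP_READ, 0, &mapped)))
    {
        BITMAPINFO bmi = {};
        bmi.bmiHeader.biSize = sizeof(BITMAPINFOHEADER);
        bmi.bmiHeader.biWidth = (LONG)w;
        bmi.bmiHeader.biHeight = -(LONG)h; // negative = top-down DIB
        bmi.bmiHeader.biPlanes = 1;
        bmi.bmiHeader.biBitCount = 32;     // assumes DXGI_FORMAT_B8G8R8A8_UNORM
        bmi.bmiHeader.biCompression = BI_RGB;

        // Caveat: this assumes mapped.RowPitch == w * 4; a real
        // implementation must copy row by row when the pitch differs.
        SetDIBitsToDevice(hdc, 0, 0, w, h, 0, 0, 0, h,
                          mapped.pData, &bmi, DIB_RGB_COLORS);
        ctx->Unmap(staging, 0);
    }
    staging->Release();
}
```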

It looks like CreateDxgiSurfaceRenderTarget cannot be used with D2D1_RENDER_TARGET_USAGE_GDI_COMPATIBLE, and CreateSharedBitmap cannot be used with an ID2D1DCRenderTarget, as it must be used with a render target created by CreateDxgiSurfaceRenderTarget. So it looks like there is no straightforward way to do it. Any tricks?
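One trick that may sidestep ID2D1DCRenderTarget entirely: a D3D11 texture created with D3D11_RESOURCE_MISC_GDI_COMPATIBLE exposes an HDC directly through IDXGISurface1::GetDC, so GDI can draw straight into the GPU surface. A hedged, untested sketch (the interop flag requires DXGI_FORMAT_B8G8R8A8_UNORM and no MSAA; the dimensions are placeholders):

```cpp
// Sketch: obtain an HDC from a GDI-compatible D3D11 texture via
// IDXGISurface1::GetDC, avoiding any D2D render target.
#include <d3d11.h>
#include <dxgi.h>

HRESULT DrawWithGdi(ID3D11Device* dev)
{
    D3D11_TEXTURE2D_DESC desc = {};
    desc.Width = 256;                          // placeholder size
    desc.Height = 256;
    desc.MipLevels = 1;
    desc.ArraySize = 1;
    desc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;  // required for GDI interop
    desc.SampleDesc.Count = 1;                 // GDI interop forbids MSAA
    desc.Usage = D3D11_USAGE_DEFAULT;
    desc.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;
    desc.MiscFlags = D3D11_RESOURCE_MISC_GDI_COMPATIBLE;

    ID3D11Texture2D* tex = nullptr;
    HRESULT hr = dev->CreateTexture2D(&desc, nullptr, &tex);
    if (FAILED(hr)) return hr;

    IDXGISurface1* surface = nullptr;
    hr = tex->QueryInterface(__uuidof(IDXGISurface1), (void**)&surface);
    if (SUCCEEDED(hr))
    {
        HDC hdc = nullptr;
        hr = surface->GetDC(FALSE, &hdc);   // FALSE = keep existing contents
        if (SUCCEEDED(hr))
        {
            // ... ordinary GDI drawing into hdc goes here ...
            surface->ReleaseDC(nullptr);    // flush GDI work back to the texture
        }
        surface->Release();
    }
    tex->Release();
    return hr;
}
```

Whether this combines with resource sharing in your exact setup is something you would have to verify, but it is the documented DXGI route for mixing GDI with D3D-backed surfaces.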

The mystery is solved. The code itself is OK (option 1); however, I was trying to share the resource across two different adapters, which cannot be achieved using GetSharedHandle() and has to be done through the CPU.
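For anyone landing here later, the "option 1" sharing path mentioned above looks roughly like this (names are mine): the producer creates the texture with D3D11_RESOURCE_MISC_SHARED and extracts a handle via IDXGIResource::GetSharedHandle; the consumer opens it with ID3D11Device::OpenSharedResource. As discovered above, this only works when both devices live on the same adapter.

```cpp
// Hedged sketch of legacy (non-keyed-mutex) D3D11 resource sharing.
// Works only within a single adapter.
#include <d3d11.h>
#include <dxgi.h>

// Producer side: create a shareable texture and get its handle.
HANDLE ShareTexture(ID3D11Device* producerDev, UINT w, UINT h,
                    ID3D11Texture2D** outTex)
{
    D3D11_TEXTURE2D_DESC desc = {};
    desc.Width = w;
    desc.Height = h;
    desc.MipLevels = 1;
    desc.ArraySize = 1;
    desc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
    desc.SampleDesc.Count = 1;
    desc.Usage = D3D11_USAGE_DEFAULT;
    desc.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;
    desc.MiscFlags = D3D11_RESOURCE_MISC_SHARED;   // legacy sharing flag

    if (FAILED(producerDev->CreateTexture2D(&desc, nullptr, outTex)))
        return nullptr;

    IDXGIResource* res = nullptr;
    HANDLE shared = nullptr;
    if (SUCCEEDED((*outTex)->QueryInterface(__uuidof(IDXGIResource),
                                            (void**)&res)))
    {
        res->GetSharedHandle(&shared);   // NOT an NT handle; do not CloseHandle it
        res->Release();
    }
    return shared;
}

// Consumer side, on the second device (must be on the SAME adapter).
ID3D11Texture2D* OpenShared(ID3D11Device* consumerDev, HANDLE shared)
{
    ID3D11Texture2D* tex = nullptr;
    consumerDev->OpenSharedResource(shared, __uuidof(ID3D11Texture2D),
                                    (void**)&tex);
    return tex;
}
```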

OK, I will take a closer look at the CPU memory. I would expect a leak on the GPU, since I memcpy into GPU memory and then create the command list... I thought a command list was just a kind of handle to GPU resources, though I am still a newbie in DX11.

Hi,
I have the following scenario. I am integrating my DX11 plugin with a game application that provides an open interface for mods.
One thread generates dynamic textures, stored as a Texture2D behind a ShaderResourceView. Once the bitmap bits are prepared, on the deferred context I call Map(), then memcpy to GPU memory, then Unmap(), and finally FinishCommandList() on the deferred context.
In the main thread I have a Render callback that the 3D game application calls every visual frame, providing the target view my plugin is attached to. In this callback I call ExecuteCommandList with the latest command list generated by my worker thread. It works pretty well.
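The worker-thread step described above can be sketched like this (all names and the row-by-row copy are my assumptions, not the original code):

```cpp
// Hedged sketch: fill a DYNAMIC texture on a deferred context, then
// record the work into a command list for the render thread to execute.
#include <d3d11.h>
#include <string.h>

ID3D11CommandList* BuildFrame(ID3D11DeviceContext* deferred,
                              ID3D11Texture2D* dynTex,
                              const void* bits, size_t rowBytes, UINT rows)
{
    D3D11_MAPPED_SUBRESOURCE mapped = {};
    if (FAILED(deferred->Map(dynTex, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped)))
        return nullptr;

    // Copy row by row in case RowPitch differs from the source stride.
    const BYTE* srcRow = (const BYTE*)bits;
    BYTE* dstRow = (BYTE*)mapped.pData;
    for (UINT y = 0; y < rows; ++y)
    {
        memcpy(dstRow, srcRow, rowBytes);
        srcRow += rowBytes;
        dstRow += mapped.RowPitch;
    }
    deferred->Unmap(dynTex, 0);

    ID3D11CommandList* cmdList = nullptr;
    // FALSE: do not bother restoring the deferred-context state afterwards.
    deferred->FinishCommandList(FALSE, &cmdList);
    return cmdList;
}
```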
Now the problem statement is the following. As the threads are asynchronous, the worker thread from time to time prepares bitmaps more often than the main thread renders a frame. That means per frame I might have 2-3 command lists that are never executed, as the rendering thread only needs the last snapshot of the texture. In the main thread I only call Release() on the command list I executed and do nothing with the previous command lists.

This is probably not a good approach; however, I ran tests with the worker thread at 50 FPS and the main thread at roughly 25 FPS, which means every frame I created two command lists of which only one was executed and released. Yet I did not notice any increase in GPU memory, so there were no apparent leaks. Why is that? Could someone explain? I ran the test for 30 minutes with a video rendered on the worker thread and shown in the main scene.

Is there some "smart" memory management when the command lists are bound to the same texture? Should I change my algorithm and manage the unexecuted command lists? The thing is, when I tried to do that I got crashes from time to time and could not figure out why. On the other hand, not releasing these "idle" command lists caused no issues...
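One pattern that avoids both the pile-up of unexecuted lists and the crashes when releasing them: hand ownership of the single newest command list across the threads under a lock, and release a stale list only at the moment it is superseded, never while another thread might still touch it. A hedged sketch (the std::mutex handoff and all names are my assumptions):

```cpp
// Keep only the newest command list; release any stale one the moment
// it is replaced, so unexecuted lists cannot accumulate.
#include <d3d11.h>
#include <mutex>

static std::mutex g_lock;
static ID3D11CommandList* g_latest = nullptr;  // owned by whoever swaps it out

// Worker thread: publish a freshly recorded command list.
void Publish(ID3D11CommandList* fresh)
{
    std::lock_guard<std::mutex> guard(g_lock);
    if (g_latest)
        g_latest->Release();   // drop the snapshot nobody rendered
    g_latest = fresh;
}

// Main-thread Render callback: take ownership, execute, release.
void OnRender(ID3D11DeviceContext* immediate)
{
    ID3D11CommandList* toRun = nullptr;
    {
        std::lock_guard<std::mutex> guard(g_lock);
        toRun = g_latest;      // transfer ownership out of the shared slot
        g_latest = nullptr;
    }
    if (toRun)
    {
        immediate->ExecuteCommandList(toRun, FALSE);
        toRun->Release();
    }
}
```

Because Release() on the stale list only ever happens inside the lock, after ownership has been transferred, there is no window where one thread frees a list the other is about to execute, which is a plausible cause of the intermittent crashes described above.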