And this goes partly into the "OpenGL is faster than D3D" claims you've seen recently for draw-call-bound applications. D3D drivers in Windows run in an entirely separate process, your application has zero interaction with kernel drivers, and all of the dangerous operations are performed by an unprivileged process. The direct communication of buffers and such is done through an IPC channel to a much fatter, more intelligent driver layer (not in the kernel) that can do much more complete validation and verification, not to mention scheduling (not allowing one app to monopolize the GPU).

OpenGL on both Linux and Windows is typically just a shared library loaded into the application's address space, with direct kernel access through thin shims that talk directly to the hardware. Hence it is far, far easier to BSOD/oops your kernel with OpenGL, or to take advantage of security holes. The upside is less overhead per individual driver call, which is _critical_ for OpenGL since the API is chatty by design and requires dozens of driver calls to configure a single object (vs D3D, which relies on a single call with a descriptor struct, creating an immutable object).
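To make the contrast in API shape concrete, here's a rough sketch of creating one texture each way. This is illustrative only, not a complete program: it assumes a working GL context on one side, an `ID3D11Device* dev` on the other, and a `pixels` buffer of RGBA data already in hand.

```cpp
// OpenGL: configuring one texture is a series of separate driver calls,
// each crossing into the driver, and the object stays mutable afterward.
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 256, 256, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);

// Direct3D 11: fill out one descriptor struct, make one call, and get
// back an object that is immutable by construction.
D3D11_TEXTURE2D_DESC desc = {};
desc.Width = 256;
desc.Height = 256;
desc.MipLevels = 1;
desc.ArraySize = 1;
desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
desc.SampleDesc.Count = 1;
desc.Usage = D3D11_USAGE_IMMUTABLE;   // can never be reconfigured later
desc.BindFlags = D3D11_BIND_SHADER_RESOURCE;

D3D11_SUBRESOURCE_DATA init = { pixels, 256 * 4, 0 };
ID3D11Texture2D* d3dTex = nullptr;
dev->CreateTexture2D(&desc, &init, &d3dTex);
```

The descriptor style means the driver can validate everything once, up front, in one trip; the GL style forces it to revalidate state at draw time because any of those knobs could have changed since.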

Again, not just security, but also the scheduling. Not to mention debugging, system logging, etc. The WDDM architecture also makes it possible to restart the GPU and driver if it locks up or crashes (your screen turns black for a few seconds and then your whole desktop comes back, no apps are closed or crash... unless they aren't properly handling a lost-device signal). It's also possible to upgrade your GPU driver without so much as a reboot, since only a thin kernel shim is needed and the rest is just a userspace library loaded into the WDDM process.
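For what "properly handling a lost-device signal" looks like on the app side: under WDDM/DXGI the driver restart surfaces as a device-removed error from Present. A minimal sketch, assuming an existing `IDXGISwapChain* swapChain` and an application-defined `RecreateDeviceAndResources()` helper (the helper name is hypothetical):

```cpp
// Sketch only; assumes swapChain exists and RecreateDeviceAndResources()
// is the app's own routine for rebuilding all device-dependent objects.
HRESULT hr = swapChain->Present(1, 0);
if (hr == DXGI_ERROR_DEVICE_REMOVED || hr == DXGI_ERROR_DEVICE_RESET)
{
    // WDDM's timeout-detection-and-recovery restarted the driver under us.
    // Drop every device object and rebuild; apps that skip this step are
    // the ones that go down with the driver.
    RecreateDeviceAndResources();
}
```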

While I'm sure there are some good improvements that could be made to the current model, going down the route that WDDM took is going to give a lot more bang for the buck. It will help greatly with security while also allowing solutions to other problems (like scheduling) that the DRI model is pretty limited in handling. Time and energy spent on improving the stack should go down the proper road, rather than into a half-solution that only partially fixes one of the many problems that need to be solved.

Agreed. WDDM was one of the long overdue improvements that MSFT made in Vista. Fixes a ton of issues related to security and stability of the graphics stack. Problem is, because of OGL's design, I don't know if the WDDM approach could easily be replicated in Linux.