"AMD worldwide developer relations manager of graphics Richard Huddy has blamed Microsoft's DirectX and its APIs for limiting the potential of GPUs in PCs. 'We often have at least ten times as much horsepower as an Xbox 360 or a PS3 in a high-end graphics card, yet it's very clear that the games don't look ten times as good. To a significant extent, that's because... DirectX is getting in the way.'"

The article was not about Direct3D per se, but about a faction of development studios who would like to have lower-level access to the GPU, as they do on consoles.

It's incontrovertibly true that console games often look nearly as good, if not as good, as their PC counterparts -- I'm not talking about identical, but easily within the same magnitude, despite running on GPUs that now have an order of magnitude fewer shader resources and are roughly four hardware generations behind. The perfect example is the PS3, whose RSX is a modified GeForce 7800-series GPU. You would expect a PC, with so many more resources at its disposal, to blow the PS3 and 360 out of the water clearly and definitively, and yet it does not.

There are several reasons for this -- the rest of the architecture is built around game workloads (for example, large numbers of smallish DMAs), being able to design and profile against a single platform, removing unnecessary abstraction layers, and having more control over the GPU hardware (building command buffers manually, reserving GPU registers without the driver butting in, explicit cache controls). With clever programming, this kind of thing can win back that order of magnitude (or so) that the hardware itself lacks.
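To make "building command buffers manually" concrete, here is a minimal sketch of the console-style approach: writing command packets straight into a ring buffer that the GPU consumes, with no driver validation or translation in between. The packet encoding and opcodes here are invented for illustration -- every real GPU defines its own.

```c
#include <stdint.h>

/* Hypothetical packet format: 8-bit opcode in the top byte,
   payload word count below. Invented for illustration only. */
enum { OP_SET_REG = 0x01, OP_DRAW = 0x02 };

typedef struct {
    uint32_t *base;   /* start of the ring buffer                 */
    uint32_t  size;   /* capacity in 32-bit words                 */
    uint32_t  put;    /* write cursor; the GPU's read cursor
                         chases it from behind                    */
} cmd_ring;

static void cmd_emit(cmd_ring *r, uint32_t word) {
    r->base[r->put] = word;
    r->put = (r->put + 1) % r->size;   /* wrap, like real rings */
}

static void cmd_packet(cmd_ring *r, uint8_t op, uint32_t nwords) {
    cmd_emit(r, ((uint32_t)op << 24) | (nwords & 0xFFFFFF));
}

/* Poke a GPU register directly -- no driver in the way. */
static void cmd_set_reg(cmd_ring *r, uint32_t reg, uint32_t value) {
    cmd_packet(r, OP_SET_REG, 2);
    cmd_emit(r, reg);
    cmd_emit(r, value);
}

static void cmd_draw(cmd_ring *r, uint32_t vertex_count) {
    cmd_packet(r, OP_DRAW, 1);
    cmd_emit(r, vertex_count);
}
```

On the PC, the driver assembles (and validates) equivalent buffers on your behalf behind every Draw call; cutting out that layer is precisely the control these studios are asking for.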

Some are asking to get back to software rendering on something resembling the Cell processor's SPUs, or Intel's ill-fated Larrabee (Knights Ferry), and I think that moving in that direction will play a part. But a purely software solution cannot be the whole answer -- things like texture sampling simply cannot be done efficiently enough in software, and texture access has sufficiently different memory-access patterns that purpose-built caching hardware is a requirement.
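To see why, consider what a single bilinear fetch costs when done in software. A minimal sketch for an 8-bit single-channel texture (no mipmapping, no wrapping modes, no format decode -- all things a hardware sampler gives you essentially for free):

```c
#include <stdint.h>
#include <math.h>

/* One bilinear sample: four scattered reads plus three lerps,
   per pixel, per channel. The four texels are rarely contiguous
   in memory, which is why GPUs store textures in tiled/swizzled
   layouts and give the sampler its own purpose-built cache. */
static float sample_bilinear(const uint8_t *tex, int w, int h,
                             float u, float v) {
    float x = u * (float)(w - 1);
    float y = v * (float)(h - 1);
    int x0 = (int)x, y0 = (int)y;
    int x1 = x0 + 1 < w ? x0 + 1 : x0;   /* clamp at the edge */
    int y1 = y0 + 1 < h ? y0 + 1 : y0;
    float fx = x - (float)x0, fy = y - (float)y0;

    float t00 = tex[y0 * w + x0];
    float t10 = tex[y0 * w + x1];
    float t01 = tex[y1 * w + x0];
    float t11 = tex[y1 * w + x1];

    float top = t00 + (t10 - t00) * fx;
    float bot = t01 + (t11 - t01) * fx;
    return top + (bot - top) * fy;
}
```

Multiply this by millions of pixels per frame, several texture layers per pixel, and the mipmap/anisotropy logic omitted above, and the case for keeping dedicated sampler hardware makes itself.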

The PC ecosystem is simply too diverse to remove most of the abstraction layers we've invented through OSes, graphics drivers, and APIs like Direct3D and OpenGL. At any one time, a PC gaming title needs to support a minimum of 8 wholly different GPU platforms (four generations from two vendors) -- not to mention the various speed grades and shader counts within each distinct platform. On the console side you've got precisely 2 GPU platforms (ignoring the Wii, which doesn't compete with PC-style games), and you know exactly how fast they are, how many shaders they have, and their precise behavior all the way through the pipeline. You can actually spend the resources to optimize around those two platforms knowing exactly the size of the market they represent.

Now we can talk about "virtualizing" the GPU -- in the same sense that someone already mentioned an LLVM-like platform for GPUs, and plenty of parallels could be drawn to the x86 platform being CISC externally while translating on the fly to an internal RISC-like architecture. It's not an approach without merit, but a couple of points are worth making. First, this is somewhat like what the driver/GPU already does -- the shader opcodes don't necessarily have a 1:1 mapping to hardware instructions, and the GPU takes care of batching/threading work across whatever hardware resources are available. Second, the cost of the more flexible software model you just gained is a less flexible hardware model that the GPU vendors now have to design around. That hardware model may be better, at least for now, but it stands to impose similarly on innovation in the future -- as one might argue that the x86 model has stifled processor innovation to a degree (notwithstanding the fact that Intel has done a great job of teaching that old dog new tricks over the years).
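The first point -- that shader bytecode is already retranslated per GPU -- can be shown with a toy lowering pass: one portable virtual instruction becoming different native sequences depending on what the silicon offers. The opcode names and mnemonics here are invented, not any real ISA.

```c
#include <string.h>

/* Toy virtual ISA: a single portable multiply-add opcode. */
typedef enum { VOP_MAD } vop;

/* Toy hardware capability flags. */
typedef enum { HW_HAS_FUSED_MAD, HW_MUL_ADD_ONLY } hw_caps;

/* Lower one virtual op into native instructions, returning how
   many were emitted. The same bytecode costs one instruction on
   one chip and two on another -- exactly the variation that the
   driver's shader compiler hides behind a stable API. */
static int lower(vop op, hw_caps caps, const char *out[], int max) {
    if (max < 2)
        return 0;
    if (op == VOP_MAD && caps == HW_HAS_FUSED_MAD) {
        out[0] = "mad r0, r1, r2, r3";   /* one fused instruction */
        return 1;
    }
    if (op == VOP_MAD && caps == HW_MUL_ADD_ONLY) {
        out[0] = "mul t0, r1, r2";       /* split into two ops    */
        out[1] = "add r0, t0, r3";
        return 2;
    }
    return 0;
}
```

A "virtualized GPU" standard would push this translation boundary lower and make it official -- which is exactly where the hardware-model rigidity described above comes from.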

Greater programmability will play some role here, and possibly some amount of virtualizing the platform as well. Another factor is the consolidation and commoditization of the graphics and game-engine market. In some ways, Direct3D and OpenGL aren't a solution anyone is *really* looking for -- you have the people who want an even higher-level solution (scene graphs, graphics engines, game engines), who account for maybe 75% of the target audience, and you have the people building those scene graphs and game engines, who are a small part of the audience but who have the knowledge to put lower-level access to good use and, because they support the other 75% of the market, can make a living by providing good support for the 8 or so GPU platforms. Of course, the "problem" then becomes that *they* will be the only ones driving graphical innovation and research, while the rest of the market turns out products that can't go far beyond what they've been provided.

This is basically what the article said, in a roundabout way: some developers stand to benefit from lower-level, console-style access to the GPU, and Direct3D (as well as OpenGL) is in their way. What it failed to mention is that the other part of the market -- the other 75+ percent -- wants something that's *even higher level* than Direct3D or OpenGL! You know, the folks buying Unity, Unreal, id Tech, or Source.

Ultimately I think one of two things will happen -- either Direct3D and OpenGL will continue to evolve and become thinner as GPUs become more general (and perhaps virtualized), or an essentially new programming model and API will arrive that only graphics specialists and the strong of heart will be able to wield, and they will make their living selling solutions to the rest of the market. We'll still see DirectX/OpenGL implemented in terms of this new API, so it won't go away, but it will probably be relegated to serious hobbyists and small studios who need something the popular engines don't provide, and who need to support a wide range of hardware without an exponential increase in effort.