Tim Sweeney: The End of The GPU Roadmap

Epic Games founder and 3D engine architect Tim Sweeney has presented what he calls “The end of the GPU roadmap”, where he essentially says that GPUs as we know them are too limited, and predicts that by 2020 developers will switch to a more flexible massively parallel programming model where all fixed functionality (texture filtering, anti-aliasing, rasterization) has been replaced with a software implementation, backed by massive computing power. There’s no denying that such a perspective is exciting for software engineers, and Tim Sweeney commands a lot of respect: the Unreal engine was one of the best software 3D engines before we all jumped onto the GPU boat, and Unreal Engine 3 is used in about 150 games today.

To make a forecast that far into the future, Tim Sweeney makes bold assumptions about the evolution of computing power and about the quality of the compilers/tools that will be available by then. He also recognizes that development has to be economically viable and that there are many challenges ahead. Tim talks about “the end of the GPU roadmap” because at that point, there’s no evolution in “features” anymore, only in computing power — and roadmaps usually refer to upcoming features. This presentation was delivered in a context where Intel is about to introduce Larrabee, a graphics processor that gets rid of almost all fixed graphics functionality (except texture filtering). Check out his deck of slides (PDF).

Update: after I posted this, I started a few email conversations with people interested in the subject. So I thought I would share:

I personally see this as an interesting point of view about where computer graphics needs to go (or where Tim Sweeney wants it to go). He makes a very compelling argument, but for one, it is so far away in the future that anything can happen. Secondly, I’m pretty sure that this will happen in “steps”, as I suspect that the first switch to a completely flexible graphics architecture will induce many performance issues (bandwidth, threading, etc.).

Of course, there are tremendous benefits in being able to program *everything* oneself, but we need to take into consideration the cost of building everything (from scratch) in software. Today, there are hundreds of folks at Nvidia, AMD, or Intel writing texture filtering, rasterization, threading, or triangle setup code that’s optimized to death. That’s hundreds of thousands of man-hours of work that developers will have to redo themselves if they want to reap the benefits of a general-purpose architecture. Tim Sweeney might be OK with this, but most people can’t afford it or don’t have the know-how to build that stuff. They will end up using a third-party engine/middleware that might or might not suit their needs. Anyone who can build their own engine would rather do so.
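To make concrete what “writing texture filtering yourself” means, here is a minimal sketch of software bilinear filtering — one of the fixed-function tasks that would land on the programmer’s plate. This is purely illustrative (names and the grayscale-texture representation are my own, not from any real engine), and a production version would additionally need mip-mapping, anisotropy, wrap modes, and heavy vectorization:

```python
def bilinear_sample(texture, u, v):
    """Sample a 2D texture (a list of rows of grayscale floats) at
    normalized coordinates (u, v) in [0, 1] with bilinear filtering.
    Illustrative sketch only -- real filtering code handles mip levels,
    wrap/clamp modes, color channels, and is vectorized to death."""
    h = len(texture)
    w = len(texture[0])
    # Map normalized [0, 1] coordinates to texel space.
    x = u * (w - 1)
    y = v * (h - 1)
    x0, y0 = int(x), int(y)
    x1 = min(x0 + 1, w - 1)
    y1 = min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    # Blend the four nearest texels by their fractional distances.
    top = texture[y0][x0] * (1 - fx) + texture[y0][x1] * fx
    bottom = texture[y1][x0] * (1 - fx) + texture[y1][x1] * fx
    return top * (1 - fy) + bottom * fy
```

Even this toy version hints at the cost argument: sampling halfway between a 0.0 and a 1.0 texel blends them to 0.5, and a GPU does that (plus format conversion and caching) in dedicated silicon for every pixel, for free.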

The tradeoff between special-purpose computing and general-purpose computing is one of cost and convenience. Even seemingly easy tasks like network packet processing are better off being offloaded to special-purpose Ethernet hardware. Maybe texture filtering should be too — that’s debatable. It’s up to developers and the market to decide what they want, what they can handle, and what they can afford.

Current hardware-accelerated graphics is relatively efficient, as measured by computational power per dollar, per square millimeter, or per watt. That’s why GPUs with DirectX 9-class capabilities can now creep into mobile phones. Current GPUs can also continue to evolve and address specific needs for flexibility, without inducing a radical change.

In the end, Tim’s vision can be realized, and he is right to think that we’re headed towards greater flexibility, but my point is that it will happen in steps, and it won’t be as much of a “slam dunk” as some people think. As a software engineer, I too am excited about the prospect of controlling every aspect of a rendering engine, but what we like and what will happen are two very different things. It’s more likely that we’ll end up with a hybrid solution.