I'm mirv. I have been using GNU/Linux for quite a long time, and programming OpenGL since GL1.2. In my (limited) spare time, I help out on the xoreos project, and will one day hopefully release my very own game...maybe.

I'm writing this as a follow-up to the editorial by Liam here. The idea is to go into more detail about why performance is often worse for games ported to GNU/Linux, with an emphasis on graphics.

Firstly, I'll address the issue of porting. This is a big one, and the term "port" is really the important part: it implies the game was adapted to run on GNU/Linux after being built for another platform first, likely some variant of Windows. It's been optimised to run under Windows, and there's a good chance it wasn't using OpenGL. Leaving aside having to replace any proprietary middleware, and leaving aside any OS overhead from threading, process context switching, desktop environments and so on, graphics is where most of the difference will probably occur.

I should stress this point: the focus here is really on graphics APIs, but physics, networking, audio, etc, can all play their part - fortunately it's rare for any of those to be a real issue. Even on Windows, physics can cause a good deal of performance variation across systems, and most games that make heavy use of it will have fallback options, or the ability to scale back physics-based effects, to mitigate the problem.

I won't go into driver workarounds, but they can add a lot of performance to a game. And it's horrible to have to do that: it makes drivers more complex, which can lead to higher maintenance, more difficult testing, and strange issues cropping up at weird times. Vulkan should help overcome this particular aspect, and will close performance-related gaps here.

One more thing before getting to the meat of the subject: audio, networking, input handling, etc, can all take quite a bit of effort to port to different platforms, and can tie into a game engine's design. The more tightly coupled it is, the more difficult it can be to port across to a different system. In some cases, an exact port isn't even possible (e.g. networking in Company of Heroes 2 and Dawn of War 2).

To understand some of the graphics-based porting difficulties, it helps to understand where DirectX and OpenGL came from. OpenGL was never originally intended for games; it is still a 3D graphics rendering API, not a gaming API. DirectX has perhaps had more of a focus on entertainment, especially during its early years, so there's a difference in design there. Leaving aside the politics in the history of each, OpenGL had its roots in CAD (Computer Aided Design) and in the hardware of the time: the focus was on speed for very dynamic data sets - lots of triangles, changing very quickly. Originally OpenGL mapped very well to hardware architectures, hence the state-based, fixed-pipeline rendering with a dedicated rendering context and thread. Industrial software investment puts a lot of pressure on keeping backwards compatibility; OpenGL could never make a "clean-cut" new version, and that core design of a single context and thread has stuck with OpenGL since its creation.

Internally, of course, drivers can do a lot of threaded work, but it's always presented to the user/developer as a single-threaded queue of commands. Commands are pushed in and run in the order in which they're submitted (from the user's perspective). There are ways to cheat here: use different OpenGL contexts and share backend data, but that can have all kinds of issues - some drivers handle it better than others (coincidentally, this is what The Witcher 2 originally did, which resulted in a lot of performance problems). DirectX, which could change quite a bit between versions, eventually allowed for more thread-friendly command submission, meaning you can assemble graphical resources and get them ready in multiple threads (again, from the user's perspective). When that's an assumption made during engine design, it can be pretty difficult to turn it into a single-threaded design. Most will likely write some kind of thread-safe queue that is regularly processed in a "graphics thread" to mimic multi-threaded design when porting from DirectX to OpenGL. That helped VP with The Witcher 2 - they saw quite impressive performance gains when they did this - but it's still overhead on top of overhead compared to the original Windows version. So performance may end up a little lower.
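To make the idea concrete, here's a minimal sketch of such a thread-safe command queue. This is purely illustrative (the class and method names are my own invention, not any particular engine's): game threads push closures from anywhere, and only the one thread that owns the GL context drains the queue, preserving OpenGL's in-order, single-context submission model.

```cpp
#include <cassert>
#include <functional>
#include <mutex>
#include <queue>
#include <utility>

// Hypothetical sketch: worker threads enqueue "GL commands" as closures;
// only the dedicated graphics thread drains the queue, so the driver only
// ever sees one thread submitting, in submission order.
class CommandQueue {
public:
    void push(std::function<void()> cmd) {
        std::lock_guard<std::mutex> lock(mutex_);
        commands_.push(std::move(cmd));
    }

    // Called once per frame from the graphics thread that owns the GL context.
    void drain() {
        std::queue<std::function<void()>> local;
        {
            // Swap under the lock so producers are blocked only briefly.
            std::lock_guard<std::mutex> lock(mutex_);
            std::swap(local, commands_);
        }
        while (!local.empty()) {
            local.front()();  // this is where the real GL calls would be issued
            local.pop();
        }
    }

private:
    std::mutex mutex_;
    std::queue<std::function<void()>> commands_;
};
```

The swap-and-drain trick keeps the lock held only for the duration of a pointer swap, rather than for the whole frame's worth of GL work - but as the article says, it's still overhead the original DirectX version never paid.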

As mentioned before, OpenGL was originally state-based - basically one big state machine. Much of the recent work on it has been to remove that, or to otherwise present all state information up-front when describing data, so it's less of a problem with more recent OpenGL versions. The issue is really one of checking for correct state: lots of continuous overhead in making sure everything is OK for rendering, and of course it's all "single-threaded", so commands must wait their turn and everything must be confirmed as correctly set up before proceeding. As mentioned, this is less of a problem with recent versions, because they can align much more closely with how recent DirectX versions do things: the state is processed and validated when an object is created, so it doesn't need to be done continuously for every single command on every single frame. This all comes down to engine design again, though: if you need to change things, state has to be re-validated, and it's simply not feasible to rip apart an entire game engine and redesign it to minimise this kind of impact.
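The "validate at creation, not per draw" idea can be sketched in plain C++ without any real GL calls. The struct and checks below are hypothetical stand-ins for the kind of compatibility validation a driver performs; the point is only where the cost lands: once, in the constructor, instead of on every draw call.

```cpp
#include <cassert>
#include <stdexcept>

// Hypothetical description of a rendering configuration, standing in for
// the real state a driver would have to validate (shader stages, vertex
// layout, framebuffer formats, and so on).
struct PipelineDesc {
    int vertexAttribs;
    bool hasVertexShader;
    bool hasFragmentShader;
};

class PipelineState {
public:
    // All the expensive compatibility checking happens exactly once, here,
    // mirroring DX10+-style pipeline objects (and modern GL's up-front
    // object validation)...
    explicit PipelineState(const PipelineDesc& d) : desc_(d) {
        if (!d.hasVertexShader || !d.hasFragmentShader)
            throw std::runtime_error("incomplete shader stages");
        if (d.vertexAttribs <= 0)
            throw std::runtime_error("no vertex attributes");
    }

    // ...so the per-draw cost is just using the prevalidated object, rather
    // than re-checking the whole state machine every frame.
    void bindAndDraw() const { /* issue the draw using desc_ */ }

private:
    PipelineDesc desc_;
};
```

An engine that frequently mutates state mid-frame defeats this scheme, which is exactly the re-validation cost the paragraph above describes.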

When it comes to data handling, or rather data manipulation, different APIs do it in different ways. In one, you might simply be able to modify some memory and all is OK. In another, you might have to point to a copy and say "use that instead when you can, and free the original afterwards". This isn't a "one way is better than the other" discussion; what matters is only that they require different handling. In fact, OpenGL offers a lot of different methods, and knowing the "best" one for a particular scenario takes some experience to get right. When porting a game across, though, there may not be many options: the engine does things a certain way, so that way has to be faked if there's no exact translation. Guess what? That can affect OpenGL state and require re-validation of an entire rendering pipeline, stalling command submission to the GPU - a.k.a. less performance than the original game. Again, it's not really feasible to rip apart an entire game engine and redesign it just for that: take the performance hit and carry on.
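The "point to a copy and swap when safe" approach can be illustrated with a tiny double-buffer, again with no real GL in sight (the class is a made-up stand-in for patterns like GL buffer orphaning or round-robin buffers): the renderer keeps reading the front copy while the game writes the back copy, and the two swap once the reader is done.

```cpp
#include <array>
#include <cassert>
#include <vector>

// Hypothetical sketch of "modify a copy, not the data in flight": the
// "GPU" reads the front buffer while the CPU fills the back buffer, so
// neither side ever stalls waiting on the other.
class DoubleBuffer {
public:
    // CPU writes go here; the data in flight is never touched.
    std::vector<float>& back() { return bufs_[1 - front_]; }

    // The renderer ("GPU") only ever reads from here.
    const std::vector<float>& front() const { return bufs_[front_]; }

    // Called once the reader has finished with the old front copy.
    void swap() { front_ = 1 - front_; }

private:
    std::array<std::vector<float>, 2> bufs_{};
    int front_ = 0;
};
```

If the engine instead assumes it can scribble over the live data in place, this pattern has to be faked around it, and that faking is where the stalls and re-validation come from.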

Note that some decisions are based around _porting_ a game. If one could design from the ground up with OpenGL, then OpenGL would likely give better performance... but it might also be more difficult to develop and test for. So there's a trade-off there, and most developers are probably going to be concerned with getting a game running on Windows first and GNU/Linux second. This includes engine developers.

I haven't yet mentioned another problem that affects porting: shaders. GPUs are programmable these days, and shader code & compilers can make quite a difference. There are tools to automatically convert HLSL to GLSL (DirectX shaders to OpenGL shaders), but they suffer from similar problems to those above: they convert behaviour, which doesn't mean they produce the most optimal rendering path. That's before taking into account driver maturity in such matters (let's face it, Microsoft put a good deal of effort into that arena). Fortunately things are improving here, but it's still another area where ports will generally give less performance.

About Vulkan: no, it will not magically make games run better, and it cannot even magically make porting easier. I'm just going to post this link. While it's about implementing OpenGL on top of Vulkan, the same rules apply to DirectX. An engine design may simply not be compatible with efficient ways of using Vulkan. That's really up to the porting houses to look at and decide, but Vulkan does not necessarily mean better performance when porting a game. It likely will with recent games, where a lot of things can line up nicely, but older titles may be better off using OpenGL.

As a final note - and this last paragraph doesn't apply so much to smaller indie teams - modern graphics APIs are much more complex than they were a decade ago. Getting experience with multiple graphics APIs across multiple platforms, to say nothing of testing on each one, can be very difficult where large, complex game engines are concerned. While I believe Vulkan will help bring comparable performance across platforms, it's simply not feasible to put equal development time into every platform. It's often much more economically viable to hire people experienced in the matter - that's why porting companies exist.

Leopard

Vulkan will improve gaming on Linux, because as more engines come out with Vulkan support, there will be no need for porting stuff.

I wrote this because your article sounded like, with Vulkan, we will be going into another porting cycle.

I think it's more accurate to say Vulkan will help lower the barrier to in-house development. Graphics is only one part of porting too - file system access, audio, networking, etc., and all the associated platform testing remain - and that's assuming a dev doesn't want to tweak the engine to suit their own needs on the graphics side. DX12 also has a bit of a head start, so a lot of games might need DX12 -> Vulkan work.

So I think we'll see more games done in house, but I also think there will still be a need for porting.

With modern OpenGL you can do threading to build up your data, but you still have to submit on the current context.

Mesa doesn't have a "graphics server thread". Every call to an OpenGL function is followed through the entire driver, which also means that the GL thread is blocking as long as the driver is doing something (okay, there are threads for compiling shaders and things like that, but that's basically it).

There always seems to be this belief that error/state checking in OpenGL is really expensive. From what I can see in Mesa, it's really not much work at all. It would be interesting to actually measure it, but that might be more work than it's worth.

The tooling is something that most people don't even mention but it's incredibly important. OpenGL has terrible tooling which makes writing something that uses OpenGL really hard. Unfortunately it doesn't look like the tooling will get better, ever. Vulkan on the other hand… shiny.

While we're at Vulkan: HLSL shaders are everywhere, and if you want typically D3D-focused engines to get ported to Linux, you really want the engine developers to be able to keep all their shaders. Fortunately it's possible to compile HLSL source into HLSL IR and SPIR-V to feed into D3D and Vulkan respectively.

Basically... yes to everything.
Indeed, threading is one of those things about OpenGL that _can_ be done, but it's almost always error-prone - not necessarily in concept, just that it's easy to mess something up in practice.

A lot of error and state checking has been simplified in GL4.x (GL4.5 and DSA is really nice!), and even GL3.x for that matter. So it's less than it was - it's just that if you change state an awful lot, it adds up. I haven't even mentioned that, semi-related, switching out textures, shaders, etc., can be quite expensive - though DirectX made it less expensive to swap out pipeline objects, I believe. Things like that get baked into engine design and don't translate easily to OpenGL. In any case, the re-validation can interrupt command flow a lot if done at the wrong spot, which comes down to API experience. Even better is when something that should stall doesn't in one implementation but does in another... as you say, tooling. Microsoft put a lot of effort into driver validation, and the abstraction model of DirectX helped stabilise drivers a lot. OpenGL didn't benefit from that.

...and if I don't stop now, I'll write another novel.

-- edit: I tend to go into more technical detail than Liam, just because I have a passion for 3D graphics. It also means I focus more on one aspect; there's obviously a lot more to the state of the industry than what I wrote here. The original editorial covers more topics, so this is just supplementary to that.

My article was obviously only scratching the surface and touches on different things.

"It's the drivers! And look, we don't deserve good stuff anyway because we're so few people! And do you even notice a difference? With 2 Titan X you can barely notice a difference because our eyes can't see over 30fps anyway."

Whatever.

liamdawe

This article compliments what I already said remember.

Pretty sure it doesn't compliment you, or anything you said. Even saying it complemented your article would be a gross overestimation of what your article provides.

liamdawe

You just expected something entirely different from my article.

Now you can read minds, too? Oh well, not expecting a total disaster is probably too much to ask here.

My article was obviously only scratching the surface and touches on different things.

"It's the drivers! And look, we don't deserve good stuff anyway because we're so few people! And do you even notice a difference? With 2 Titan X you can barely notice a difference because our eyes can't see over 30fps anyway."

Whatever.

Do you know how you sound when you try to childishly belittle the people who make this site possible in the first place? Take your Steam Forum bile elsewhere, please. It doesn't belong here.

My article was obviously only scratching the surface and touches on different things.

"It's the drivers! And look, we don't deserve good stuff anyway because we're so few people! And do you even notice a difference? With 2 Titan X you can barely notice a difference because our eyes can't see over 30fps anyway."

Whatever.

None of that I actually said.

swick

liamdawe

This article compliments what I already said remember.

Pretty sure it doesn't compliment you, or anything you said. Even saying it complemented your article would be a gross overestimation of what your article provides.

My article directly inspired this one, and it goes into more detail on things I already mentioned. It backs up my note about OpenGL and multithreading, it backs me up on the business side of it with developers not putting as much time in as the original and so on. It's like you didn't even read this article or mine properly or in their entirety.

swick

liamdawe

You just expected something entirely different from my article.

Now you can read minds, too? Oh well, not expecting a total disaster is probably too much to ask here.
