Strictly speaking, this isn't specific to GLX. The same issues would apply to using a graphics card in a system whose CPU has a different byte order to the GPU.

Actually no. The OpenGL standard requires that, if the client writes a string of bytes as a "GLuint", then the server must interpret those bytes as a proper "GLuint". So whatever bit fiddling that the server needs to do must be built into whatever processes the server uses to read that memory.

FWIW, I have trouble understanding why there seems so little interest in exploiting one of the features which really sets OpenGL apart from DirectX.

Because:

1: It requires having more than one computer.

2: Doing so requires being Linux-only.

3: It relies on the asymmetric computing situation, where your local terminal is weak and a central server has all the processing power. This situation becomes less valid every day. Between GLES 3.0-capable smart phones and Intel's 4.1-class integrated GPUs, the chance of not being able to execute OpenGL code locally is very low.

It's very difficult to exploit this feature unless it's explicitly part of your application's design requirements. It may differentiate OpenGL from Direct3D, but it's such a niche thing that very few people ever have a bona fide need for it. It's nice for when you need to do it, but you can't say that it's a pressing need for most OpenGL users.

Actually no. The OpenGL standard requires that, if the client writes a string of bytes as a "GLuint", then the server must interpret those bytes as a proper "GLuint". So whatever bit fiddling that the server needs to do must be built into whatever processes the server uses to read that memory.

I don't really see your point. If the GPU can be made to use either byte order, then the X server can tell it to use the (X) client's byte order rather than the server's byte order. If the GPU's byte order is hard-coded, then a driver for a big-endian system with a little-endian GPU would need to twiddle the buffer contents based upon the commands which use the buffer.

Originally Posted by Alfonse Reinheart

1: It requires having more than one computer.

That's the case for practically anything beyond "home" use.

Originally Posted by Alfonse Reinheart

2: Doing so requires being Linux-only.

I regularly run an X server on Windows systems.

Originally Posted by Alfonse Reinheart

3: It relies on the asymmetric computing situation, where your local terminal is weak and a central server has all the processing power. This situation becomes less valid every day. Between GLES 3.0-capable smart phones and Intel's 4.1-class integrated GPUs, the chance of not being able to execute OpenGL code locally is very low.

The example of smart phones is one where it's useful. The local terminal has decent graphics capability (where the server may have none) but limited CPU, memory and storage capacity. That makes it a reasonable "terminal" for a back-end system but not so good as a stand-alone system.

Originally Posted by Alfonse Reinheart

It's very difficult to exploit this feature unless it's explicitly part of your application's design requirements.

It's trivial to exploit this feature. Every X11 GUI application automatically has the ability to be run remotely. Well, except for ones which rely upon OpenGL 3 support, although it's not just the lack of GLX wire protocol which makes such reliance problematic at present.

I don't really see your point. If the GPU can be made to use either byte order, then the X server can tell it to use the (X) client's byte order rather than the server's byte order. If the GPU's byte order is hard-coded, then a driver for a big-endian system with a little-endian GPU would need to twiddle the buffer contents based upon the commands which use the buffer.

Well, indirect GLX allows the sharing of buffer objects between different clients, so "the" client byte order may be ambiguous. The way GLX handles this is that it doesn't allow the creation of any GL context including buffer objects unless the client explicitly opts in to the different byte order semantics (via the GLX_CONTEXT_ALLOW_BUFFER_BYTE_ORDER_MISMATCH_ARB attribute), and then the client (i.e. your application) is responsible for filling the buffer in the server byte order.

That just feels wrong. Seriously though, you are probably among a select few there.

The example of smart phones is one where it's useful. The local terminal has decent graphics capability (where the server may have none) but limited CPU, memory and storage capacity. That makes it a reasonable "terminal" for a back-end system but not so good as a stand-alone system.

Am I getting this right? Are you suggesting that offloading rendering to your smart phone over the network is a reasonable use case for supporting such capabilities?

Every X11 GUI application automatically has the ability to be run remotely. Well, except for ones which rely upon OpenGL 3 support, although it's not just the lack of GLX wire protocol which makes such reliance problematic at present.

At least on Linux distributions that go down that path, as soon as X is dropped in favor of Wayland or Wayland-like architectures, remote rendering isn't available anymore. At least not with vanilla Wayland. You can layer stuff on top of Wayland but in general the capability is gone.

I'd like to make my (usual) case for why/how I think the entire remote rendering jazz of X is borderline useless. Here goes: in times past, the idea was that the terminal (the thing that did the displaying) had a very poor CPU and could only really be used for displaying stuff. This idea made perfect sense ages ago.

Then X came along, and now that terminal needs to run an X server. The powerful remote machine would then send the drawing commands over the wire for the terminal to display. To be honest, this sounds kind of neat, and in decades past it was not a bad idea.

Now enter OpenGL; that means the terminal needs a good GPU to render stuff at a reasonable speed. If a box has a good GPU, it likely has a reasonable CPU. I suppose there are corner cases where some super-hefty CPU box does lots of calculations, the terminal just visualizes the data, and the visualization does not require sending oodles of data. That seems like a rare corner case to me.

It gets worse: implementing a good X server driver system is pain, severe pain. OpenGL remote rendering is very touch-and-go anyway; it can be tricky to set up, and there are limits on what one can expect to work well... can you imagine how poorly something like glMapBuffer is going to work? It is hideous. X imposes a very severe implementation burden, and the benefits of that burden are rarely used; more often than not, when remote rendering really is used, bad things and bad surprises happen.

Even ignoring the OpenGL thing, most UI toolkits usually do NOT want to use X to draw. Qt prefers to draw everything itself (it does have an X backend, which is labeled as native, and it performs horribly when compared to the raster backend). Similar story with Cairo, GDK, and on and on.

When X dies, it will likely be a very, very good thing for the Linux desktop. To give an idea of how bad X really is, watch this talk, where the speaker, a major contributor to X, essentially concludes that X is not working:
http://www.youtube.com/watch?v=RIctzAQOe44

If you're going to use a smartphone or tablet as a terminal, using X avoids having to construct a separate client for each platform for each application.

Let's look at the evolution of, well, all computing.

In the earliest days, computers were gigantic. But they were kinda useful. So people found a way to make these large, centralized computers which could be used by multiple people. Thus, the smart server/dumb terminal paradigm came to be. Time passes and computers get a lot smaller. Personal computers made dumb terminals... effectively obsolete. They're still used in places, but it is exceedingly rare. Even when you're networking to a smart server, you're generally using it from a smart terminal.

In the earliest days, the web was very much server-only. The server coughed up HTML. Someone invented PHP scripts that allowed server-side mucking with the HTML. Again, you have smart server/dumb terminal, just with the web browser as the dumb one. Fast-forward to... today. Sure, PHP scripts still exist, but client-side scripting via JavaScript is all the rage. You can't effectively navigate half the web without JavaScript on.

In every case, we started with dumb terminals, then slowly traded them up for smart ones. That is the nature of computing: client-side wins in the long term. And the same is true for OpenGL: client-side won. There are numerous features of modern OpenGL that only improve performance if everything is running on the same machine. Mapping buffers for example would absolutely murder performance for a networked renderer compared to even a much slower client-side GPU.

That doesn't mean that some people can't find uses for it. But it's very much a niche application, so niche that the ARB is spending precious little time keeping the protocol up-to-date.

If you're going to use a smartphone or tablet as a terminal, using X avoids having to construct a separate client for each platform for each application.

Or you could make your application completely independent of a network, and therefore more usable and reliable. No network hiccup or going through a tunnel or whatever can interrupt your client-side application. Not to mention faster in many cases. Smart phones may not have the best GPUs, but they're reasonably serviceable for most needs.

Also, using X does nothing for being able to write a platform-independent client. Sure, your rendering code may be independent, but that would be just as true if you were using straight OpenGL ES. You still need the platform-specific setup work; even initializing an application that will use X differs between platforms. Not to mention processing input or any of the other tasks you need to do. Oh sure, minor quirks between implementations would not exist, but the majority of your porting work doesn't deal with them anyway.

Now enter OpenGL; that means the terminal needs a good GPU to render stuff at a reasonable speed. If a box has a good GPU, it likely has a reasonable CPU. I suppose there are corner cases where some super-hefty CPU box does lots of calculations, the terminal just visualizes the data, and the visualization does not require sending oodles of data. That seems like a rare corner case to me.

Not really. Dedicated server systems often don't have any kind of GPU. It's not that useful when the system is serving many users, none of whom are in physical proximity to the server.

Originally Posted by kRogue

It gets worse: implementing a good X server driver system is pain, severe pain. OpenGL remote rendering is very touch-and-go anyway; it can be tricky to set up,

It shouldn't require any setup, beyond what is required for X itself and the OpenGL driver. To the driver, the X server is just another client.

Originally Posted by kRogue

there are limits on what one can expect to work well...

To be honest, I don't expect OpenGL with direct rendering to work well on Linux. It isn't a high priority for the hardware vendors, the hardware is complex, and the hardware vendors historically haven't been particularly open with technical specifications.

Originally Posted by kRogue

can you imagine how poorly something like glMapBuffer is going to work?

That depends upon how badly it's misused. If you map an entire buffer but only read/write a portion of it, that's going to be inefficient. It will be far more inefficient with GLX, but it's significant in any case. Use of glMapBufferRange() with the invalidate/flush bits shouldn't be any worse than glBufferSubData() or glGetBufferSubData() (clearly, you can't avoid actually transferring data over the network).

Originally Posted by kRogue

It is hideous. X imposes a very severe implementation burden, and the benefits of that burden are rarely used; more often than not, when remote rendering really is used, bad things and bad surprises happen.

This isn't my experience.

Originally Posted by kRogue

Even ignoring the OpenGL thing, most UI toolkits usually do NOT want to use X to draw. Qt prefers to draw everything itself (it does have an X backend, which is labeled as native, and it performs horribly when compared to the raster backend). Similar story with Cairo, GDK, and on and on.

All of those use X. Maybe you're confusing "core X protocol" with XRender?

All of those use X. Maybe you're confusing "core X protocol" with XRender?

No; all of those use X to do exactly the following:

Create -one- window

Poll X for events

All the drawing is done to a -buffer- by the toolkit. The entire "remote" rendering thing is dead. For the program to run on one machine and display on another usually means that the buffer (the window contents) is sent over the wire. What you have now is essentially a really crappy per-window VNC. One can claim that if GL were network-happy on the X server, then the application would send the GL commands to the X server and all would be great; but it does not happen that way. Sorry.

It shouldn't require any setup, beyond what is required for X itself and the OpenGL driver. To the driver, the X server is just another client.

OpenGL resides on the X server. The OpenGL implementation is then required to be able to take commands from a remote device (the client). OpenGL itself, together with GLX, is often part of the X driver. Pretending that it will just work is putting one's head in the sand; it requires heroic effort to make a GL implementation take commands from a remote source. Compounding the pain is that many GL features do not even really make sense in this case; my favorite is glMapBuffer, but there are others.

To be honest, I don't expect OpenGL with direct rendering to work well on Linux. It isn't a high priority for the hardware vendors, the hardware is complex, and the hardware vendors historically haven't been particularly open with technical specifications.

Huh?! AMD has released the specs for its GPUs (outside of video decode); Intel's GL driver for Linux is entirely open source. Let's take a real look at why it is not there: the effort to make remote rendering just work is borderline heroic. The underlying framework (DRI2) does not work over a network.

Regardless, this proves my point: remote rendering is such a rarely used/wanted feature that it is not really implemented. If there were commercial demand, it would be. Therefore the only ones wanting it are, well, no offense, borderline Slashdot trolls.
Please, everyone who thinks X is network-transparent and great, take the hour to watch that video (or stop when he talks about how great Wayland is); it will wake you up to the reality: X should die.