But usually commands are transported asynchronously to the server, right? So you sacrifice a bit of debuggability for higher performance.
What I find to be the biggest problem is applications using Xlib as if it were a synchronous API, which it actually isn't.

Regards

From Daniel's comments in his video and Martin Graesslin's comments about kwin and migrating away from Xlib, I was under the impression that everything Xlib did with clients was completely synchronous. XCB is the replacement for Xlib that is asynchronous, though adoption is (relatively speaking, it's a big change) slow.

Well, for requests which depend on a response from the server (like XGetImage), Xlib is synchronous. The call blocks and nothing happens until the client has received the requested data. However, most X calls like XDrawSomething or even XCreatePixmap/XCreateWindow do not need any response (as the handle you are working with is actually a client-generated ID anyway), so the commands are buffered in a command stream.
Rendering commands don't need to wait for an "ok" from the server; if something goes wrong, the server notifies the client asynchronously. This is why Xlib/XCB have a synchronous mode for debugging.

The big "plus" XCB offers is that requests which wait for a response from the X server (XGetImage, ...) can be issued asynchronously. Therefore you can, e.g., continue rendering to other pixmaps while transporting the content of one to the client.
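The cookie/reply pattern described above can be sketched as a toy timing model (Python here purely for illustration, not real XCB bindings; the round-trip time and request count are assumed numbers, not measurements):

```python
# Toy timing model: Xlib-style blocking replies vs XCB-style pipelining.
# The latency and request count below are assumptions, not measurements.

RTT = 0.020        # assumed round-trip time to the X server, seconds
N_REQUESTS = 50    # requests that each need a reply (XGetImage-like)

def blocking_total(n, rtt):
    """Xlib style: each call blocks until its reply arrives, so the
    round trips are serialized one after another."""
    return n * rtt

def pipelined_total(n, rtt):
    """XCB style: all requests go out first (the caller keeps 'cookies'),
    replies are fetched afterwards, so the trips overlap. Idealized here
    as a single round trip for the whole batch."""
    return rtt

print(f"blocking:  {blocking_total(N_REQUESTS, RTT):.3f} s")
print(f"pipelined: {pipelined_total(N_REQUESTS, RTT):.3f} s")
```

Even this crude model shows why the cookie/reply split matters: serialized round trips grow linearly with the number of reply-requiring requests, while pipelined ones mostly pay the latency once.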

I ask because Wayland has hit [stable] in Arch, weston is in [testing], qt5 has hit [stable], gtk with the wayland backend is in [testing] or [stable] (not sure which, I'm a KDE guy not a Gnome guy lol), and I think mesa with all the required compile-time flags is in [testing], though it may have been moved to [stable] with the last mesa update. So you're about to get a distro full of potential bug testers that has latest-stable of everything Wayland needs (in theory) across the board, and that can download all the needed packages fairly easily (yay pacman).

KDE still relies on 4.8, which does not have full wayland support (KDM has no wayland code merged!).

KDM is basically dead, in maintenance mode. It's been called spaghetti code, unmaintained, poorly abstracted, among other things. Most likely Frameworks 5 will wind up with KDM being deprecated and an official LightDM greeter being supplied, with lightdm as the official backend. That's my guess.

Also, what are you talking about, KDE relying on 4.8? KDE is on 4.10. You're right that KDE doesn't have Wayland support yet; that will come with Frameworks 5 after the rebase onto Qt5, which brought Lighthouse, which gives seamless Wayland/X support.

Something that really bothers me about X12, from the very wiki page you linked, is that they are making network transparency mandatory... We've spent the last 5 years without true network transparency (if you use shared memory or DRI2 in your application, your application isn't truly network-transparent anymore). X11 is network-capable; it is not network-transparent unless you decide to go without any of the advancements made in the last 5-7 years. Which some do, their choice.

But I really don't see how X12 could do network transparency without basically recreating VNC. You can't do network transparency in the 'traditional X way' without breaking a lot of things and regressing in a lot of ways.

In case you are wondering, uid, Wayland does have networking built in, in theory. It's basically a better VNC, meaning the server creates the image, the server compresses the image, and the server sends it over the wire.

The 'traditional X way' is, I believe, sending the rendering commands over the wire and then having the local server do the rendering. Which is faster rendering in THEORY, but the problem is... core X is synchronous. Every command has a break where the client sends back an "okay, received and done," which the server waits to see before sending the next one. Waits, sends the next one, over and over.

As much as people yell "X11 is network transparent and it's X's biggest feature!" ...it really isn't. Because of the synchronous nature, network transparency is actually X11's biggest failure and THE worst-case scenario. Every command sent to the local server, and every confirmation back to the remote server, is affected by latency and network lag. So instead of one-way lag, where it's just the server sending commands, you have double lag: the latency of the packet carrying the rendering commands, and then the latency of the reply back for success/failure.
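The "double lag" argument above can be put into rough numbers with a toy comparison of per-command acknowledged streaming against a VNC-style image push (every figure below — latency, bandwidth, command count, frame size — is an assumption for illustration, not a measurement):

```python
# Toy model: per-command acknowledged round trips vs pushing one
# compressed frame. Latency, bandwidth, and sizes are assumed values.

ONE_WAY = 0.015        # assumed one-way network latency, seconds
BANDWIDTH = 10e6       # assumed link speed: 10 Mbit/s
COMMANDS = 200         # rendering commands for one screen update
FRAME_BYTES = 300_000  # one compressed full frame (assumed size)

def acked_commands(n, one_way):
    """Each command waits for a confirmation before the next is sent:
    two one-way trips per command, fully serialized."""
    return n * 2 * one_way

def image_push(frame_bytes, bandwidth_bps, one_way):
    """One-way push: latency plus the time to transfer the frame."""
    return one_way + frame_bytes * 8 / bandwidth_bps

print(f"acked commands: {acked_commands(COMMANDS, ONE_WAY):.2f} s")
print(f"image push:     {image_push(FRAME_BYTES, BANDWIDTH, ONE_WAY):.3f} s")
```

Under these made-up numbers the acknowledged command stream is dominated entirely by round trips, while the image push is dominated by transfer time, which bandwidth improvements keep shrinking.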

I have little doubt that the implementation of X network capability is "less than ideal", to put it diplomatically. But it seemed like the big brouhaha that arose with early Wayland talk regarding what we can loosely call "network transparency" was largely due to some poor communication (IMO). "Network [whatever-we-should-call-it]" is actually a pivotal aspect to some people that need two primary functions: 1) remote display, and 2) distributed load.

I remember at the time, people who were dismissing network transparency were saying that people should just use web apps or some kind of VNC-like pixel scraper. First, the web apps idea is ridiculous of course. Second, the mention of VNC in a distributed environment causes many of us to break out in hives and desire a quick death.

At my previous place of employment we relied on both of those aspects of X for both us developers and also our end users. (Now we're probably talking old X here -- Solaris 8/9 so maybe it was Xsun and also RHEL 4/5.) Let me admit first of all that we weren't doing anything impressive like 1080p video playback or gaming or something like that. We just had a Java application, but network lag was generally not an issue [on especially non-flattering hardware]. Granted the distance between boxes was not particularly impressive (about 5 miles one-way trip), so latency was probably not an issue. For our end users, there were times where they preferred to ssh into a main server, redirect their display to their local box and then launch the app from the main server (which I actually thought was headless but I could be wrong; not sure if that's possible). So you could have many users on a central server simultaneously and the local GPU would do the heavy work. Since our app was very data intensive (lots of data in, lots of data out), the network traffic from X was sometimes preferred to what you'd have to deal with through NFS.

Now modern X might be completely different and not even allow for the distributed load concept any more. And perhaps the VNC-like solution is the best going forward (or maybe something better can be created? something GPUDirect-like?), but I can understand the firestorm early on when the whole concept of networking was sort of dismissed as a sideshow that could be tacked on later if somebody ever got around to being interested. I think some of that controversy has died down as communication has gotten better.

But certainly for our UNIX and Linux systems X network capability was a critical component of our environment.

And actually, when written "right", the network model of X11 allows for way more responsive applications over the network than VNC-like snapshotting does (not to mention, VNC-like forwarding consumes quite a bit of server resources).
Well, however, as pointed out in the slides, most applications are written in a horrible style... like gedit with its 150 round-trips at startup...
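Those 150 round-trips are exactly where latency bites: if they are serialized, startup delay scales linearly with the round-trip time. A back-of-the-envelope sketch (the latency values are assumptions):

```python
# Cost of 150 serialized round-trips at startup, for assumed latencies.

ROUND_TRIPS = 150

def startup_delay(round_trips, rtt):
    """Serialized round-trips: the app waits out every one in turn."""
    return round_trips * rtt

for label, rtt in [("LAN", 0.0005), ("WAN", 0.020), ("intercontinental", 0.150)]:
    print(f"{label:16s} {startup_delay(ROUND_TRIPS, rtt):6.2f} s")
```

On a LAN the cost is invisible, which is why apps like this feel fine locally; at WAN latencies the same code spends seconds doing nothing but waiting.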

I haven't heard anything about distributed load under X11, but remote display under X, I thought, only worked the way it did because instead of sending images/data over the wire, they sent the actual rendering commands. (Hence the idea of "Mechanism, not policy." They didn't say "here's how you draw a rectangle," they said "draw a rectangle." Which is waaay too specific to work in the long run.)

As far as the dismissive attitude the wayland devs had about networking... that's probably actually a good idea. Wayland is meant to be as minimal as possible, and adding in mandatory networking support would just add in stuff that the protocol itself shouldn't have to worry about. Push it to the clients; the clients can change, the protocol can't. Bonus of the fact that it IS the clients' job and the clients' problem: networking under Wayland can be rooted (traditional VNC, full desktop) or rootless (traditional X, individual window); it's up to the user/clients. And as networking gets faster and faster, the fact that we're sending a compressed full image is going to be less of a problem.
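Whether "sending a compressed full image" over the network is viable is mostly bandwidth arithmetic. A rough sketch (the resolution, frame rate, and compression ratio below are assumptions, not figures from any real compositor):

```python
# Rough bandwidth for streaming a desktop as compressed frames.
# All parameters are assumed, illustrative values.

WIDTH, HEIGHT = 1920, 1080
BYTES_PER_PIXEL = 4
FPS = 30
COMPRESSION_RATIO = 50   # assumed ratio for a video codec on desktop content

raw_bps = WIDTH * HEIGHT * BYTES_PER_PIXEL * 8 * FPS
compressed_bps = raw_bps / COMPRESSION_RATIO

print(f"raw:        {raw_bps / 1e6:.0f} Mbit/s")
print(f"compressed: {compressed_bps / 1e6:.1f} Mbit/s")
```

Uncompressed the stream is hopeless on ordinary links, but with even a modest codec ratio it drops to tens of Mbit/s, which is exactly the "faster networks make this a non-problem" point.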

Honestly... I really think people are making a big deal out of a non-existent problem.

Daniel, I asked some people before: what are the causes of X server restarts, and will they be prevalent with Wayland too? Some said that the cause is the toolkit and the wm.

They're just bugs, either in the X server or in your session manager - if either crashes, your whole session will die. The reason your clients don't survive is that the toolkits lack support for resuming connections, which would be nice.

Originally Posted by Linuxhippy

But usually commands are transported asynchronously to the server, right? So you sacrifice a bit of debuggability for higher performance.
What I find to be the biggest problem is applications using Xlib as if it were a synchronous API, which it actually isn't.

Aye. Plus, as you allude to later, a lot of events are really just prompts for round trips. It's kinda possible to work around it, but really really really painful to get all the corner cases right.

Originally Posted by Ericg

Hey Daniel, a few questions about your X/Wayland talk that I've been meaning to ask.

1) What did you mean when you said "I don't have a slide saying it's not introspectable... but X isn't introspectable"? Were you referring to the horrendous error codes? Function names? Something completely different?

In this case, I meant using window properties as a means of IPC.

Originally Posted by Ericg

2) You briefly hit on the error code numbers at one point; should I take that to mean that X returns pure integer errors, no helpful hints associated with them? And in that same context... (disclaimer: I haven't looked at the code base, way above my level of coding at the moment) if they do return pure integers, why aren't the error codes enumerated? At least then you could return strings, such as "DEVICE_NOT_FOUND" for the example of someone unplugging the mouse. Enums were added in C89 and were a base feature of C++, so it's not a matter of language.

Oh, in this case I wasn't calling out the integers per se. What I was talking about was that objects in X11 are global, and typically don't have well-defined lifetimes. So, if you have the misfortune of making an XI request on your mouse just as your mouse gets unplugged, you'll get an error which is fatal by default because that object doesn't exist anymore. Same with embedding windows from other people: if that window goes away, you have to try really hard not to crash yourself. Wayland fixes this by having the clients in total control of the lifetime of their objects.

Originally Posted by Ericg

3) Personal opinion on when Wayland/Weston is "usable"? You said in the talk that you'd start using it when gnome-shell and the touchpad were brought up to date (which you were meant to do). krh was running it at XDC last year and it looked fairly good.

It really depends what you want from your session, I guess. If you're happy with a quite bare-bones kind of session a la XFCE, then it should do you pretty well today.

Originally Posted by Ericg

I ask because Wayland has hit [stable] in Arch, weston is in [testing], qt5 has hit [stable], gtk with the wayland backend is in [testing] or [stable] (not sure which, I'm a KDE guy not a Gnome guy lol), and I think mesa with all the required compile-time flags is in [testing], though it may have been moved to [stable] with the last mesa update. So you're about to get a distro full of potential bug testers that has latest-stable of everything Wayland needs (in theory) across the board, and that can download all the needed packages fairly easily (yay pacman).

Neat. What we really need at this point, though, is a lot more buy-in from the apps and the desktop environments to help push things forward and fill in the gaps in the protocol. The core protocol seems quite stable and well-tested, and what problems there are with it are already reasonably well known. But there's a lot of missing functionality we can't really fill in without an implementation.