It's like VNC in that we send the final composed images, rather than a series of rendering commands (gradient here, text here, etc.). This usually ends up being cheaper to transfer over the wire, as is true for most things today - even 3D scenes, which were once totally remotable since they were just a series of (not very many) polygons. But unlike VNC, it does smart damage tracking and compression.
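As a rough illustration of what damage tracking means (a toy sketch, not Wayland's or VNC's actual implementation), the idea is to diff successive frames and report only the regions that changed, so a remoting layer can compress and send just those tiles instead of the whole screen:

```python
# Toy damage-tracking sketch: diff two frames tile by tile and report
# only the tiles that changed. A remoting layer could then compress and
# transmit just those regions rather than the full frame.

TILE = 4  # tile size in pixels (tiny here for demonstration)

def damaged_tiles(prev, curr):
    """Return (x, y) tile origins where the two frames differ."""
    h, w = len(curr), len(curr[0])
    damage = []
    for ty in range(0, h, TILE):
        for tx in range(0, w, TILE):
            tile_prev = [row[tx:tx + TILE] for row in prev[ty:ty + TILE]]
            tile_curr = [row[tx:tx + TILE] for row in curr[ty:ty + TILE]]
            if tile_prev != tile_curr:
                damage.append((tx, ty))
    return damage

# Two 8x8 "frames": only one pixel changes, so only one tile is damaged.
frame_a = [[0] * 8 for _ in range(8)]
frame_b = [[0] * 8 for _ in range(8)]
frame_b[5][6] = 255

print(damaged_tiles(frame_a, frame_b))  # -> [(4, 4)]
```

A single changed pixel dirties only its 4x4 tile, which is why image-based remoting can stay cheap for mostly static desktops.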

Finally, a video that articulates my understanding of the X/Wayland situation. Sometimes while reading the discussion here at Phoronix I start to doubt myself, since so many people write utter nonsense with such certainty.

Good to see Wayland development is on the right track, and the people designing it seem to really know what they are doing.

How does Wayland handle multiple screens in "clone mode" with different subpixel geometries?

If the client is responsible for antialiasing and subpixel rendering, and you have different kinds of monitors connected to your graphics card, or some kind of transformation applied to one of them, the image will be messed up on one of them.

Rendering performed by clients should be abstracted from output devices (the way PostScript is for printers), and actual rendering should happen on the server.

There's a reason X11 is complex, and I'm growing less convinced that Wayland is a good solution for Linux graphics.

Messed up?

Originally Posted by newwen

How does Wayland handle multiple screens in "clone mode" with different subpixel geometries?

If the client is responsible for antialiasing and subpixel rendering, and you have different kinds of monitors connected to your graphics card, or some kind of transformation applied to one of them, the image will be messed up on one of them.

Rendering performed by clients should be abstracted from output devices (the way PostScript is for printers), and actual rendering should happen on the server.

There's a reason X11 is complex, and I'm growing less convinced that Wayland is a good solution for Linux graphics.

No, it won't be.
Context resolution mainly happens in the appropriate graphics drivers, which manage their own context (even across multiple screens and modes).
It is the compositor's task to tell the drivers what to do, so the client-side implementation makes sense. Nothing stops you from writing a library that makes this handling easy.
I am sure it would be simpler than the bloatware that the Xorg server has become in many cases.

My point is that clients cannot render subpixels correctly to buffers if they don't know what context they are rendering to. I don't know whether the X server actually takes that into account when rendering, but ideally, clients could give the server context-independent commands (as in PostScript), which are then transformed and rendered by the server. Of course, this is not as fast as direct rendering by the client.
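To make the concern concrete, here is a deliberately simplified, hypothetical sketch of why subpixel geometry matters: the same one-dimensional glyph coverage mapped onto RGB-stripe versus BGR-stripe panels produces different pixel data, so a buffer rendered for one monitor's geometry is wrong when displayed on the other (as happens in clone mode with mismatched panels):

```python
# Toy subpixel-rendering sketch (not any real rasterizer's algorithm):
# pack triples of horizontal coverage samples into (R, G, B) pixels,
# mirroring the samples when the panel's stripe order is BGR.

def render_subpixel(coverage, order):
    """Map subpixel coverage samples onto pixels for a given stripe order."""
    pixels = []
    for i in range(0, len(coverage), 3):
        samples = coverage[i:i + 3]    # three subpixel samples = one pixel
        if order == "BGR":
            samples = samples[::-1]    # same light pattern, mirrored stripes
        pixels.append(tuple(samples))
    return pixels

# An antialiased glyph edge, sampled at 3x horizontal resolution.
coverage = [255, 128, 0, 0, 128, 255]

rgb = render_subpixel(coverage, "RGB")
bgr = render_subpixel(coverage, "BGR")
print(rgb)  # -> [(255, 128, 0), (0, 128, 255)]
print(bgr)  # -> [(0, 128, 255), (255, 128, 0)]
```

The two buffers differ even though the glyph is identical, which is the heart of the objection: a client rendering into a single buffer must pick one geometry, and a clone-mode setup with differing panels will show color fringing on the mismatched one.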

Sub-Pixel-Rendering

Originally Posted by newwen

My point is that clients cannot render subpixels correctly to buffers if they don't know what context they are rendering to. I don't know whether the X server actually takes that into account when rendering, but ideally, clients could give the server context-independent commands (as in PostScript), which are then transformed and rendered by the server. Of course, this is not as fast as direct rendering by the client.

I am not completely familiar with the Wayland spec, but I am certain this is part of it. How did the devs put it? "Every frame is perfect." And judging from my tests with GL applications (like glxgears), this works well.