
Bytal writes "Seth Nickell, a GNOME hacker, has an extensive treatment of the next-generation Linux graphics technologies being worked on by Red Hat and others. For all those complaining about the current X-Windows/X.org server capabilities, things like 'Indiana Jones buttons that puff out smoothly animated clouds of smoke when you click on them,' 'Workspace switching effects so lavish they make Keynote jealous' and even the mundane 'Hardware accelerated PDF viewers' may be interesting."

Um, if it's hardware accelerated, it will be eating fewer of your CPU cycles

Not necessarily. That's only true if they use hardware acceleration for existing tasks that are currently done entirely in software. I mean, how many resources do xpdf et al. really use?

However, if they are introducing new whiz-bang eye-candy GUI magic, chances are that the hardware requirements (including CPU and RAM) will be much higher anyway -- even with suitable h/w-accel-compatible hardware. And of course, for those without the h/w-accel-compatible hardware, this would eat up even more CPU cycles for the rendering. I repeat: how many resources do xpdf et al. really use?

I don't know about 5+ seconds, maybe it is your box? However, I do agree. Just run top and move to another page in xpdf and watch the processor usage jump way up, 90%+ is not unusual (this is on a 3.04GHz HT P4 with 1GB memory). The official PDF viewer from Adobe is not much better, it sucks up a bunch of processor time between each page display. Once the page is displayed, my processor usage drops to about 0%-1%. Something isn't right with xpdf or the official Adobe PDF viewer under Linux.

My thoughts exactly. When was the last time you viewed a PDF while playing Quake? Even if only half of the new apps get hardware support, it's kind of a good thing: the GPU was MADE for this very reason...

I can already see the future spec for cards:
"Get 2356FPS for rfc2616.pdf! "

Hardware-accelerated PDF viewers, huh? Aqua already does that. The entire OpenGL-composited interface is described using PDF, which also makes it awesome for publishing, because what you see on screen is how it's going to look on paper (and you get a free "Save to PDF" in your print dialogs).

Not that it isn't cool to see the OSS desktop community finally looking ahead like this. It's something people have definitely been crying out for. But when I see the section titled "What It Might Look Like," I look over at my Mac and see what it already looks like. :)

Then again, I am quite happy to have people follow Apple's lead rather than Microsoft's. Please, no more taskbars, "start menus," integrated filesystem/net browsers, and whatever else is coming over from the Windows world and polluting desktop Linux. Though KDE is still cool, at least Gnome is willing to try some different directions in the name of usability (rather than familiarity...because from a usability standpoint, the Windows GUI sucks the most of all, and we should not be cloning it).

Keep in mind that taskbars and menus were heavily influenced by NeXTSTEP before MS added them to Windows. The same goes for using a graphical language like PostScript, and now PDF.

Also, GNOME is very Macintosh-like, and one of the early Macintosh developers wrote Nautilus, if I recall.

No one is stealing anything. Even the menu bars on the top of the screen came from Xerox before Apple used them.

What I like about KDE, and GNOME to some extent, is that they are highly customizable compared to either Mac or Windows. The problem is that the later versions of KDE look a little cluttered as a result, but you can make your desktop look like anything.

Also, you can have KDE put a menu at the top of the screen just like GNOME and Mac OS. I think you can add a taskbar to GNOME as well.

I think perhaps some new, innovative ideas are needed instead of just borrowing existing ones. Perhaps a way to handle many apps running at once without the desktop looking cluttered is next.

But I believe (could be wrong) that Window Maker, KDE, and GNOME all use Ghostscript, which is a PostScript clone. The original Mac OS and NeXTSTEP used it. Windows has an equivalent, but I do not remember the name since it's been a long time since I admined Windows boxes.

I don't really see your point -- the concept of virtual desktops or workspaces solves the problem of having many apps open at the same time. I currently have more than 15 windows open and none of them are minimized or behind another window, because I simply arranged them on 4 workspaces. Finding the right workspace isn't difficult either, because I arranged them by topic (shells are on workspace 2, for example; Firefox and Thunderbird are on workspace 3).

Well, my suggestion is to combine together multiple desktops with something like this [stuff.gen.nz], which allows you to group and control windows elegantly, and potentially in complex and useful ways. If groups could, for instance, hint to the taskbar to group their entries, and applications were capable of hinting to the WM whether to create a new group for its subwindows... well, then you'd have some very useful new window control/management tools available to you.

NeWS used its own interpreter, written at Sun. In many ways it was much better than DPS. Most important was that it defined operators to actually create and manage windows, while DPS required you to use X or whatever to create the window, and then you could use DPS to draw into it. NeWS also supported an object-oriented extension to the PostScript language that was used to create user interface objects. NeWS also had many other minor improvements over PostScript, such as allowing null to be a dictionar

True... but even System 6.0 was a paragon of usability compared to the state of Gnome and KDE today. Even the constant crashes gave you a dialog whose buttons were action verbs (not to mention a cute little bomb icon). Hard to hate something like that.

I'll beg to differ a bit. When I was at college, the opportunity to spend a couple of minutes at a Mac usually ended with me leaving, in less than a minute, due to a headache. Seriously, the thing seemed sluggish, the refresh was hypnotizing me, and two things got me: everything seemed counter-intuitive, and one mouse button?

I wouldn't call it a paragon of usability. It may just be me, but I always sought out the SPARCs where I could just log onto the Unix system. Could do almost as well logging in from home over the console, with a modem, on an 8088 with 640K RAM and a CGA screen; the console hasn't changed too much since then.

But don't get me wrong, I'm not bashing Macs. I honestly don't have enough experience with them to bash them. I just question the term "paragon of usability". OS X seems good; I played with it at CompUSA for like 2 or 3 minutes and thought, "cool", with no headache. :)

You have got to be kidding. It even got worse when System 8 would chastise you for "not shutting down properly" when you were forced to hard reset the locked-up bastard. Gah! Nothing like that smiling little MacOS face telling me I've been a bad boy and to be more careful next time. You! YOU! Be more careful! You don't overwrite other programs' memory space and trash my work! Oh yeah, and the crash dialog boxes may as well have been labeled "Fuck me" for all the good they did. Force quit? Yeah, that worked well. Should have been labeled "Finish Crashing".

I wrote a really long post correcting this widely and wrongly held opinion some weeks back. I don't feel like finding it, or being that verbose again. So, the short version:

No PDF, no OpenGL.

Quartz 2D is a display-list engine, but it is not a PDF interpreter. Rather, Apple wrote some very, very simple shims that quickly translate PDF files into Quartz 2D display lists and back. Nothing in Quartz 2D is represented in PDF format unless it's sitting in a file on the disk.

The windows are drawn on the screen by a piece of software called Quartz Compositor. A couple of years ago, Apple rewrote Quartz Compositor to take advantage of hardware acceleration. They did use OpenGL for this, but only in a very limited way. Each window is represented as a texture on a surface and fed to the graphics pipeline for compositing.
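The compositing step described here can be sketched in a few lines. This is a toy illustration of the general technique, not Apple's code: each window is a grid of RGBA pixels, and the compositor blends them back-to-front with the standard Porter-Duff "over" operator.

```python
# Toy sketch of what a compositor does: blend a window pixel "over" a
# desktop pixel. All components are floats in [0, 1].

def over(src, dst):
    """Porter-Duff 'over': src composited on top of dst."""
    sr, sg, sb, sa = src
    dr, dg, db, da = dst
    out_a = sa + da * (1.0 - sa)
    if out_a == 0.0:
        return (0.0, 0.0, 0.0, 0.0)
    blend = lambda s, d: (s * sa + d * da * (1.0 - sa)) / out_a
    return (blend(sr, dr), blend(sg, db and dg or dg), blend(sb, db), out_a)

# A fully opaque red "window" pixel over a blue desktop pixel: red wins.
print(over((1.0, 0.0, 0.0, 1.0), (0.0, 0.0, 1.0, 1.0)))
# A half-transparent red window over the same blue desktop: a purple mix.
print(over((1.0, 0.0, 0.0, 0.5), (0.0, 0.0, 1.0, 1.0)))
```

Hardware compositing does exactly this per pixel, except the GPU's blending units do it for free while texturing each window onto a quad.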

Quartz is amazing. Nothing else in the world comes anywhere close to it, despite what some very confused people seem to think. But you're really selling it short when you describe it as "PDF and OpenGL." Because it isn't.

"MacOS X is the first operating system on the market that actually uses PDF-technology within the operating system itself. Apple calls this technology 'Quartz'. Quartz is a layer of software that runs on top of Darwin, the core (or kernel) of the MacOS X operating system. It is responsible for the rendering of all 2D objects. Alongside Quartz, OpenGL takes care of handling 3D data (used in games like Quake or Unreal as well as professional 3D applications like Maya) and

There's a lot of misinformation in there. I have no idea where some of this stuff came from.

1. Dividing it up into Quartz and OpenGL is misleading. If you want to talk about it in terms of functional block diagrams, OpenGL and Quartz 2D (note: not just "Quartz") do the same job. They take instructions from a running program and turn them into patterns of pixels on the screen. But Quartz 2D is not responsible for all 2D drawing. QuickDraw can also be used to draw to the screen; pre-Tiger, QuickDraw is quite

What can I say? That site is just flat-out wrong. It's an ancient description of an equally ancient Quartz demo, and it gets the internals flat-out wrong.

It says, "Quartz does not use Postscript as its internal graphics representation language. Instead, it uses Adobe's Portable Document Format (PDF) standard which is a superset of Adobe Postscript."

That's just completely incorrect. Quartz 2D graphics are not represented internally as PDF. They just aren't. When a Quartz 2D graphics context is stored in memory, it's stored as a display list, very similar (conceptually) to the way OpenGL scenes are stored in memory. To convert the context to a pixel buffer for display on screen, Quartz Compositor (or Quartz Extreme, depending on hardware) renders and composites the graphics context, which results in a bitmap.

A Quartz 2D display list is very similar to PDF in the way regions are defined and paint applied to them; this makes it easy for PDF files to be converted into Quartz 2D display lists and vice versa. But it's equally true that the Open Inventor file format is similar to an OpenGL display list in the way that vertices and surfaces are defined. You would be wrong to say that OpenGL programs store scenes internally in Open Inventor format; you'd be equally wrong to say that Mac programs store their graphics internally in PDF format. It just ain't so.

Can an Open Inventor model be trivially read from disk and turned into an OpenGL display list? Sure. Can a PDF file be read and trivially turned into a Quartz 2D display list? Yes.
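As a toy illustration of why that conversion is trivial when two systems share an imaging model, here's a made-up in-memory display list re-spelled as PDF-style content-stream operators and parsed back. The `m`, `l`, `rg` and `f` operators are real PDF content-stream operators; the tuple format is entirely invented for this sketch, and Quartz's actual encoding is opaque.

```python
# An in-memory "display list": (operator, arguments) tuples.
display_list = [
    ("moveto", (2.1, 3.37)),
    ("lineto", (6.29, 5.3)),
    ("lineto", (7.889, 1.961)),
    ("fill", (1.0, 0.0, 0.0)),   # fill the current path with RGB red
]

def to_pdf_stream(ops):
    """Re-spell the display list as PDF-style content-stream operators."""
    pdf_op = {"moveto": "m", "lineto": "l"}
    out = []
    for op, args in ops:
        if op in pdf_op:
            out.append("%g %g %s" % (args[0], args[1], pdf_op[op]))
        elif op == "fill":
            out.append("%g %g %g rg f" % args)
    return "\n".join(out)

def from_pdf_stream(stream):
    """Parse the PDF-style stream back into display-list tuples."""
    ops = []
    for line in stream.splitlines():
        tok = line.split()
        if tok[-1] == "m":
            ops.append(("moveto", (float(tok[0]), float(tok[1]))))
        elif tok[-1] == "l":
            ops.append(("lineto", (float(tok[0]), float(tok[1]))))
        elif tok[-1] == "f":
            ops.append(("fill", (float(tok[0]), float(tok[1]), float(tok[2]))))
    return ops

# Because both sides describe paths and paint the same way, the round
# trip is lossless:
assert from_pdf_stream(to_pdf_stream(display_list)) == display_list
```

The point is that the conversion is a dumb syntax transformation, not a change of model -- which is why neither side needs to store the other's format internally.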

Well, an understanding of the topic we're discussing, for starters. I mean, I know what all the words mean, which is clearly something that you can't truthfully say. All you've done is pull quotes from marketing brochures! There's no evidence at all that you have even a passing familiarity with the basic concepts under discussion here.

you're going around claiming Quartz doesn't use PDF for imaging

Correct.

when every developer documentation from Apple directly states that Quartz uses the PDF imaging model

Also correct.

Gasp!

How can this be! How can Quartz 2D both be PDF and not PDF!? He's a witch!

Friend, in order to wrap your head around this topic, you're going to have to understand what the expression "imaging model" means. An imaging model is not a file format, and it's not an instruction set, and it's not an interpreter. It's not actually any type of computer software at all. Rather, it's a way of looking at things.

Back in the old days, we had QuickDraw. QuickDraw used a pixel-based imaging model. You drew to the screen by specifying coordinates in terms of pixels: integer coordinates, bottom-left origin, one pixel was exactly one seventy-second of an inch. Regions were translated literally by shifting bitmaps around in memory. That was the QuickDraw imaging model.

That worked great for drawing to the screen, but it didn't work at all for drawing to a laser printer. For drawing to a laser printer you needed a totally different imaging model. Which means you had to do one of three things in your program: Either you had to maintain an internal representation of whatever you were drawing in whatever form was appropriate for printing and then convert that to QuickDraw for on-screen display, or you had to maintain a QuickDraw representation and convert it at print time, or you had to do both.

But the advantage of QuickDraw was massive: You could draw right into video memory. Toggle a bit in memory and a pixel changed color on screen. Very efficient.
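That directness is easy to mimic: in a pixel-based model the framebuffer is just memory, so drawing one pixel is one bit operation. A toy 1-bit framebuffer (real QuickDraw's structures were more elaborate, but the appeal was the same):

```python
# Toy 1-bit framebuffer: WIDTH x HEIGHT pixels packed 8 per byte.
WIDTH, HEIGHT = 16, 8
framebuffer = bytearray(WIDTH * HEIGHT // 8)

def set_pixel(x, y):
    """Turn on pixel (x, y) by toggling one bit in memory."""
    bit = y * WIDTH + x
    framebuffer[bit // 8] |= 1 << (7 - bit % 8)

set_pixel(3, 2)
print(bin(framebuffer[(2 * WIDTH + 3) // 8]))  # → 0b10000
```

One write, one visible pixel -- no abstraction layer, no render pass. That efficiency is exactly what the float-coordinate model trades away.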

Quartz 2D is different. It uses an entirely different imaging model. Rather than representing on-screen graphics as bitmaps in memory, Quartz 2D creates a layer of mathematical abstraction. With Quartz 2D, you still have a bottom-left origin, but you're no longer on an integer plane. Coordinates are given as floating-point numbers. You don't deal in pixels, but rather in mathematically pure regions of the drawing plane.

You draw in Quartz 2D by defining regions. A region is a locus of floating-point coordinate pairs. For example, (2.1, 3.37), (6.29, 5.3), (7.889, 1.961) defines a triangle. You draw by telling Quartz 2D to fill that region with a certain color, defined by any of the supported color spaces. For instance, you might use RGBA, meaning you'd specify red, green and blue color components and a floating-point opacity value.

Sending these commands to Quartz 2D from within your program creates an in-memory data structure called a display list. This display list doesn't look like anything at all; it's just a sequence of bytes that are encoded to represent the scene you drew. The display list doesn't become anything until you send it to Quartz Compositor (or Quartz Extreme) to be rendered into pixels.
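Here's a from-scratch sketch of that two-stage model (nothing like Quartz 2D's real, undocumented encoding): drawing commands accumulate in a display list, and only a later render pass samples the mathematically pure float-coordinate region into integer pixels.

```python
# Half-plane sign test used to decide whether a point is inside a triangle.
def sign(p, a, b):
    return (p[0] - b[0]) * (a[1] - b[1]) - (a[0] - b[0]) * (p[1] - b[1])

def in_triangle(p, a, b, c):
    d1, d2, d3 = sign(p, a, b), sign(p, b, c), sign(p, c, a)
    has_neg = d1 < 0 or d2 < 0 or d3 < 0
    has_pos = d1 > 0 or d2 > 0 or d3 > 0
    return not (has_neg and has_pos)

class Context:
    def __init__(self):
        self.display_list = []          # recorded commands, not pixels

    def fill_triangle(self, a, b, c, rgba):
        self.display_list.append(("tri", a, b, c, rgba))

    def render(self, width, height):
        """Late rasterization: sample each pixel centre against the regions."""
        pixels = {}
        for _, a, b, c, rgba in self.display_list:
            for y in range(height):
                for x in range(width):
                    if in_triangle((x + 0.5, y + 0.5), a, b, c):
                        pixels[(x, y)] = rgba
        return pixels

ctx = Context()
ctx.fill_triangle((2.1, 3.37), (6.29, 5.3), (7.889, 1.961), (1, 0, 0, 1))
frame = ctx.render(10, 8)       # nothing became pixels until this call
print(len(frame), "pixels covered")
```

Until `render` runs, the "scene" is just a list of tuples -- which is the sense in which a display list "doesn't look like anything at all."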

The fundamental assumptions behind Quartz 2D drawing -- the coordinate system, the color spaces, all the low-level details -- are referred to collectively as the "imaging model."

PDF has an imaging model that is very similar to Quartz 2D's imaging model. Not identical, but very similar. That's because Apple's engineers were inspired by both PostScript and PDF when they created Quartz 2D.

Because Quartz 2D and PDF use the same imaging model -- the same set of fundamental assumptions -- it's very easy to convert a PDF file describing a scene to a Quartz 2D display list that describes that scene. Or you can go vice versa, starting with a Quartz 2

I think it's more that Quartz is another graphics API, where many of the rendering features ever-so-conveniently map on to PDF 1.4's rendering model.

In other words, it's easy to go from one to the other - it's trivial to convert a bunch of Quartz instructions to an equivalent PDF document and vice versa, even though the internal representations of the data are completely different.

Quartz isn't about applications sending actual PDF data across a pipe or socket into a renderer, it's a bit more sensible than

I don't know much about Quartz vs. PDF, but it is clear you are missing an important metaphysical point.

DATA != REPRESENTATION

Simple example: the digits "42" are not a number. They are a textual representation of a number, which is an abstract concept. A number has certain properties which the textual representation does not. I can add and subtract numbers, but I can't add and subtract text.
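In code, the distinction looks like this: the string "42" and the integer 42 are different objects supporting different operations, even though each converts trivially to the other.

```python
text = "42"
number = int(text)          # parse the representation into the abstraction

print(text + "1")           # string concatenation: "421"
print(number + 1)           # arithmetic: 43
print(str(number) == text)  # and you can round-trip back: True
```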

A "PDF" file is a representation of an image using various bytes, starting with "%PDF-1.3". Another representation

...because from a usability standpoint, the Windows GUI sucks the most of all, and we should not be cloning it

Hmm, I can't really agree there. There are lots of things wrong with Windows, but there are also a lot of things they have done right in the GUI.

It seems too many Linux devs detest Windows to the point where they don't allow themselves to see what it has done right.
We should be thinking "embrace and expand" whenever it's appropriate. We should look at what they have done right and benefit from it.

Here are some examples I find of things that just should not exist at this point. In these examples I'm talking about gnome.

1) Remembering window sizes/positions. This drives me nuts. I've read that the reasoning behind this is the most efficient use of the desktop (e.g. you launch a 2nd term and it positions itself beside the 1st term instead of overlapping). Sounds good, but in practice it makes me a less efficient user. Back in my Windows days, I liked that whenever I launched the file browser it was always in the same position where I left it. I could rely on this and be ready to click wherever I needed. Same with the file dialog, calculator, or whatever. I EXPECTED them to be in a certain position and thus I could work faster/more efficiently. I think maybe a compromise on this would be that the default should be that GNOME remembers size/position for all apps unless the developer of an app explicitly coded it not to follow this behaviour. So the WM is the default unless the app says otherwise. I can see the benefit of autopositioning maybe with terms, but for most other apps it just makes me slower and gets in the way. As it stands, I feel like I never know where an app will be when it launches.

2) Hot keys. For the love of god, can someone fix hotkeys in GNOME! OK, again, this is coming from a Windows background, but bear with me. I was used to the Alt key toggling the menu of whatever is the active app. Toggles are good; they are efficient and, I believe, intuitive -- just like play/pause on almost every player that exists. OK, so when I first used GNOME: no Alt hotkey toggle. Fair enough, I have to actually press Alt-F, but then I try Alt-F again to get out of the menu and nothing. I have to press Escape to get out of the menu. Ignoring that, once I'm in the menu the other hotkeys are rendered useless. Go ahead, try it: press Alt-F, and then press Alt-E to get to, say, Edit. Nothing. This is clunky. Once you are in the menu, only the arrow keys navigate the menus.

I work for a company testing applications, and a key thing we look at is the hotkey placement of apps: when employees are using apps every day, all day, you want those hotkeys to be laid out as efficiently as possible. So sometimes, once in a menu, it's quicker to just left-arrow over once, but sometimes it's fewer keystrokes to use the hotkey while in the menu.

I was going to go on about the menu functionality with gnome but I'm going on too much. You might say it sounds like I want kde but there are many more things about kde I don't like over gnome, and I appreciate the streamlined environment of gnome over kde.

Now you might say I was conditioned to the Windows way of things. But really look at what I said above about, say, the hotkeys. Which system is more efficient? I'm talking number of keystrokes here, and navigation.

It irks me when people just flat-out say the Windows GUI sucks. On my thought of embrace and expand: I think there should be a document really analysing what Windows has done right, and if they have done it right, why we would or would NOT implement it.

You're raising some interesting points here. Your Windows background does show, but you may be quite representative of what new Gnome users will stumble on their first time around.

I'll try to address these points, while avoiding being too technical (which is a pain sometimes :)

1) Remembering windows size/positions. This drives me nuts.

Okay, okay. You're probably right on this one. However, please consider that a Linux desktop is not used like a Windows one. Specifically:

- people generally use several workspaces and lay out their windows on multiple "screens", so to speak;
- you're not supposed to reboot an X terminal as often as a Windows workstation -- you just lock it and leave it as is (this comes from older times, but still shows);
- typically, people just arrange their windows *once* and leave them that way, for a very, very long time. When the time comes to reboot, they save their session, preserving their windows' positions (okay, this does not work all the time), then log back in again later.

Indeed, the X Window System was not supposed to be used like an MS Windows desktop, and the differences still bite us from time to time (why does Evolution remember the active pane but not its window position across sessions? WHY? Answer: because it's the window manager's business, not its own, and e.g. Metacity doesn't support this quite right yet).

2) Hot keys. For the love of god can someone fix hotkeys in gnome! I was used to the alt key toggling the menu of whatever is the active app.

The Alt key is a modifier. It is not a "real" key. It is meant to be used in combination with another "real" key, just like Shift, Control, Super, Hyper, Fn, Apple, etc. It is not cross-platform. It is not standard. It is usually mapped to the Meta key under Linux, which was once used to set the high bit on characters you typed on older terminals. You don't expect something to happen when you press the Control key alone, right? The same applies to the Alt key.

Use F10. One press of F10 activates the main menu, both on Linux and Windows. Another press dismisses the menu. I don't know about Macs (do they have an F10 key?), but a real (though nonstandard) key like F10 is much easier to code for than a modifier like Alt.

You can use other window managers and still have all your gnome applications, including the toolbar.

2). Hot keys.

You can use other window managers - enlightenment had these features well over five years ago, and most other window managers worked on since then (and probably some before then) have these features. Gnome is a big project, and parts of it just aren't being worked on anymore, but you don't have to use the window manager that comes with it - most other window

Actually, Aqua is currently probably the furthest along of all GUIs, but it also has problems. (Having a Mac myself, I know the problems.) Compositing needs lots of video RAM: open a handful of windows on Aqua (or X.org with XComposite on) and see the rendering speed go down significantly once the memory limit is reached and stuff has to be swapped over the AGP bus into main RAM.

The next problem Aqua has is that only a few functions really are hardware accelerated. Fonts, for instance, are still a problem with no

Seriously. Otherwise all the effort being put into X.Org's newest extensions is basically tied to the good will of card manufacturers when it comes to modern videocards.

Anyway, there's a lot of terrific work being done on X.Org -- Cairo, XComposite and Damage especially. When these extensions become supported by the GUI toolkits, we'll be in for a treat. It's a shame it took guys like Keith Packard so long to detach themselves from XFree86.

I second this. At the very least, we want the appropriate documentation for the cards. I can understand if they can't release their current drivers, but I don't understand withholding the info on how to interface with the card. What's it going to reveal? That they have some sort of super-secret magic instruction on the GPU?

Well, forgetting ATI, the nVidia drivers are solid, and I don't see why you're so adamant on them being open. I've created some great 3D renders in OGL code without needing to know the details of the drivers. Also, the Windows GUI was designed without anyone knowing exactly what code is in the drivers.

Good documentation is all that's needed, and if you are going to insist on something from the manufacturers being open, how about we get Open standards so the same calls work on all vid cards.

Perhaps the drivers are solid, but only if you are running the right Linux (not *BSD, ReactOS, or any of the other open source operating systems that would like good support for these cards) on 80386 (not PPC, SPARC, MIPS, or any of the other systems Linux and the others run on -- though admittedly not all of them have the right hardware to connect the card, but some do. I'm not sure about x86-64 either, though I suspect not).

In short, your stable drivers are useless to me because I'm an old BSD guy (complete with beard) and I'm convinced that the sysV style init that most of linux uses is evil and all that. I'm looking for drivers that are stable on my systems, not theoretically stable if I'm willing to run something I don't otherwise like.

Such "negotiation" would be largely a waste of time. You need to give graphics card manufacturers a market to care about and demand for their cards. Currently, usage of 3D on Linux is very limited: a few games, visualisers and niche apps.

If 3D is more widely used on the desktop, then more card makers will see Linux as a market for their cards, and more people will be using 3D and pressuring for better, more open drivers.

Foreword: For a drawn-out post on next-generation X rendering, this blog entry is really short on eye candy. I apologize, but I'm at home, separated from my beloved eye candy, and figured I should write this while I felt motivated. As a way of forcing my own hand, I'm making a link now to a blog entry I haven't yet written that will contain screenshots in the future. :-)

Next-Generation Rendering for the Free Desktop

For the past half year or so Red Hat's desktop team has had people working toward making accel

OS X, at least in its current incarnation, does X11 badly. Hopefully Jobs will find it not stylish enough and come up with a clever way to fully integrate it into Quartz. So they're basically cloning OS X. For example, run that Indiana Jones app and select to keep the icon in the dock. Quit the app, then drag the icon out from the dock. {POOF!} with a lovely cloud.

I want an Intel/AMD-based machine... and OS X doesn't work natively on my machine. Nor on all the other machines that run Linux.

Would buy OS X in a heartbeat if they made an Intel-based one... hell, would even buy two! Then get WINE to make XP system calls on it so I can play some of the XP-only games... yet run it on OS X on my Intel-based computer. Even get Apple to throw some dev money into WINE.

OS X and Quartz are no standard; they run on one architecture and one OS only. This is meant for all the other OSes that need good visuals. Apple puts its money on Quartz, all the other Unix companies on this. Let's see who wins, shall we?

OS X is wonderful, to be sure. But it is proprietary and only runs on Mac hardware. Xorg is open source and runs on many operating systems and architectures. Big difference. You will continue to see Linux improve in the coming years and there will be more and more Linux desktop deployments. That is the advantage of open source. The battle is far from won. You didn't hear it here first, but you did hear it here.

Every time ANY topic comes up, some OS X troll pipes in with "So, it's just like OS X!" Well, OS X isn't fully (or even mostly) open source, Quartz Extreme isn't an open standard, etc. OK? Do you understand that? Even if it was, we like our software to be GPL'd so that we don't just shift ourselves from being slaves to Bill Gates and Microsoft to being slaves of Steve Jobs and Apple. Maybe you don't care if you really 'own' your PC or not

You know, using something that's not GPL'd doesn't make you a "slave" to anything. Your emotive rants against people who (gasp!) enjoy their operating systems drown out any rational points you tried to make about open standards.

There are plenty of "just like Linux!" posts on Slashdot all the time too. Plus, someone could argue you're a slave if you use the GPL, since you're not 100% free like you are with a BSD license. See how easy it is to paint people with a broad brush.

I want to be able to open my case and see what sits inside it, and I don't want to have to use a fucking laptop harddrive in a non-portable computer!

When the "non-portable" computer is 6.5"x6.5"x2", it's not unreasonable to expect it to have a laptop harddrive... especially when it comes with a CD-ROM also. I mean, there's only so much physical space that they're working with, here.

If you want a normal-sized HD, you can buy the regular iMac or the G5.

You know, there's a reason that "but does it run Linux?" is a running gag around here. If you can't tolerate multiple OSes, you may find you're on the wrong site.

"Look you obnoxious pricks -- not everyone digs your fucking Macs."

Not everyone digs your fucking hatred either. Claiming friendship with the GPL and then lashing out against one of the companies that are starting to build more and more of their software based on open source techno

"Indiana Jones buttons that puff out smoothly animated clouds of smoke when you click on them"

This is kinda cool. I know it seems gimmicky and all, but I have to say there's something to be said for having a UI that subtly lets you know what you just clicked on.

I know a few people aren't keen on eye candy. They worry about slowing things down etc. But I have to say, in my own experience, the more visual feedback I get from my computer, the more attuned I get to using it. A lot of my actions become reflex instead of having to decipher what I should do next. For example, I use Opera. When a page is loading, a red X lights up. (Click on it and it stops the page from loading.) It's subtle, but I actually do react to that red icon there when it's on. Somewhere deep down, I have a sense of "This page is ready for you to browse". I find that sort of thing useful.

Of course, it can be done badly or absurdly, but eye candy like this can actually be really useful.

There's a difference between eye candy and visual cues. The genie effect on OS X looks cool and is fast because of the hardware compositing going on. But more importantly, it's a quick visual cue to show you that you have just minimized a window, and it travelled down to the second spot on the right of your dock, so you know where it is. You also get a scaled version of your window down there. When an icon bounces for your attention, it's a cute little effect, but it's also a visual cue to let you know the app is wanting your attention.

It goes beyond animation effects, too. People have commented on OS X's "gumdrop" window controls, which look cute and friendly, but few seem to notice they're arranged like a traffic light, which is intuitive for most people. Red, yellow, and green circles--red closes the window, yellow minimizes, and green zooms.

Note that I use OS X as an example simply because I think it's the undisputed king of GUI visual cues. I think Linux needs more creative taste and aesthetics in its interfaces. I'm willing to contribute.

People have commented on OS X's "gumdrop" window controls, which look cute and friendly, but few seem to notice they're arranged like a traffic light, which is intuitive for most people. Red, yellow, and green circles--red closes the window, yellow minimizes, and green zooms.

How is that intuitive? They are completely *UNINTUITIVE* because colors don't actually translate into physical cues.

Or are you suggesting that when I see a yellow light, it means I should minimize my car?

Traffic lights typically mean "go, prepare to stop, stop" - telling you what to do, rather than you telling them what to do. If people were to use them like traffic lights, they would only use the window when the green button was bright, then quickly prepare to stop (say, by saving their work) when the yellow button was bright, and not using the app when the red button was bright.

I believe you are missing the point. It's not that the buttons function exactly like traffic lights, but that they use a paradigm people are already somewhat familiar with to help people know what to do. True, it is not a one-to-one correlation, but the user can at least get the idea that red indicates you will stop using the program; yellow, that you will put the program on hold (by moving it out of the way); and green, that you will proceed with the program. It may not be a perfect system, but I do agree with the grandparent that it is at least a clever way to convey visual cues to users who may not be familiar with the interface.
Are traffic light colors universal? I know that those colors have that connotation here in the U.S., and I think I remember them being that way in Europe too (but I didn't drive there, so I didn't pay a lot of attention to them). I suppose that even if the connotation is not present in other countries, the colors shouldn't be detrimental to people's understanding.

"Can you give me an example of something that is eye candy without serving as a visual cue?"

I can. You know all those Gnome and KDE themes that try (badly) to imitate Aqua? They copy the eye candy without understanding the reasoning behind it, or the value of visual cues. Too often, the result is badly misapplied pinstripes, distracting transparencies, and so on.

These offer no visual cues other than "We're different!", when in fact the only difference in this respect is the way they usually fail to replicate all the Windows UI conventions (e.g. iTunes used to refuse to maximise when you double-clicked the title bar, and so on).

About the only things I can see an argument for with kewl skinz is apps that are trying to be small/compact - e.g. Winamp etc., where the standard controls don't work well that small.

I think that most people who say they dislike visual effects actually think about USELESS eye candy.

A visual effect is useful only if it conveys additional information. It must not be used simply because it's possible to do so. For background/low importance tasks, I'll take a subtle icon animation over a modal dialog box any day.

" Eye candy is there for nothing more than the purpose of looking cool.."

Well, I'm going to get needlessly nitpicky here: any time eye candy is triggered by something happening, it is visual feedback. One of the most useless bits of eye candy I can think of is the Windows Start Menu fading in. I have no use for it. It's pretty, but it interferes with my productivity. (Just as you mentioned...) You have to understand, though, that it is still providing visual feedback. It's giving you a moment to

It's an OpenGL-based X11 server, complete with some screenshots. Apparently, window dragging is very smooth (no repaint events are even given to the apps), and with Cairo and GTK, this really could be the future backend for Linux desktops.

IMHO, no app should be given information about its environment. The only reason expose events exist is that back in the day there wasn't enough memory to store a complete image of every window, so apps had to be asked to repaint parts of the display when they were exposed. Apps interacting with other apps without user intervention is definitely a no-no and is the source of some Windows security holes. I just hope that when (even IF) an app gets to screen capt
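The redirected-rendering idea the two posts above describe can be sketched in miniature. In a composited model, the server keeps an off-screen buffer per window and rebuilds the screen purely from those buffers, so an obscured app never receives an expose/repaint request when a window is dragged over it. A toy simulation (all class and method names here are invented for illustration, not any real X API):

```python
# Toy compositing model: every window owns an off-screen pixel buffer,
# and moving a window only changes where that buffer is composited.
# In legacy (non-composited) X, the window underneath would instead
# receive Expose events and be asked to repaint itself.

class Window:
    def __init__(self, name, x, y, w, h):
        self.name, self.x, self.y, self.w, self.h = name, x, y, w, h
        self.expose_events = 0  # a legacy server would increment this
        # Off-screen buffer: filled with the window's first letter.
        self.buffer = [[name[0]] * w for _ in range(h)]

class Compositor:
    def __init__(self, width, height):
        self.width, self.height = width, height
        self.windows = []  # bottom-to-top stacking order

    def composite(self):
        """Rebuild the screen from stored buffers alone --
        no window is ever asked to redraw itself."""
        screen = [['.'] * self.width for _ in range(self.height)]
        for win in self.windows:
            for row in range(win.h):
                for col in range(win.w):
                    sy, sx = win.y + row, win.x + col
                    if 0 <= sy < self.height and 0 <= sx < self.width:
                        screen[sy][sx] = win.buffer[row][col]
        return ["".join(r) for r in screen]

    def move(self, win, dx, dy):
        win.x += dx
        win.y += dy
        return self.composite()  # nothing underneath gets an expose

comp = Compositor(8, 4)
below = Window("below", 0, 0, 4, 2)
above = Window("above", 2, 0, 4, 2)  # partially covers "below"
comp.windows = [below, above]
comp.move(above, 2, 1)               # drag the top window away
print(below.expose_events)           # -> 0: the obscured app never redrew
```

The point of the sketch is the memory trade-off the parent mentions: the server pays for one full buffer per window, and in exchange the apps can be completely ignorant of what covers them.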

I didn't see any specific mention in here, but does this include having a fully 3D accelerated Framebuffer device across graphics cards? I've been missing this for a while in X, having just gotten triple-monitor across two Radeons working. It would be cool to be able to play any 3D game across three monitors.

I'm not losing any sleep over this, but it would be cool. I read on an X board that some people are looking at this, but it's obviously a big undertaking.

Xdmx works. It will build a single X server across multiple X servers (even on different machines) and efficiently pass the OpenGL through. There may be a more efficient method, but at least one method works.
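For reference, a minimal Xdmx invocation looks roughly like this (the host names are placeholders; check the Xdmx man page for the exact options your build supports):

```shell
# Start a proxy X server on display :1 that spans two back-end
# X servers, presenting them as one logical screen via Xinerama.
# "left-host:0" and "right-host:0" are placeholder display names.
Xdmx :1 +xinerama -display left-host:0 -display right-host:0
```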

Well, to be fair, compare the task of running Doom 3 to the task of being a pretty desktop UI. Cards will always get better. If the idea takes off, new cards will be tweaked to make the experience more interesting. (For this reason, it's a good thing for all of us that Microsoft is heading in this direction, too.)

This is particularly true with Tiger (what with the new Core Image technology): OS X really can push eye candy further than Windows (and Linux) for one main reason - the Mac development team has a limited number of graphics cards to develop for, and the drivers are pretty much rock solid.

I just don't see that happening in Linux / Windows - developers must write for as wide a range of hardware as possible. One would therefore imagine that such eye candy being talked about in Linux would be optional, and you'd only get the full benefit with the highest powered and most compatible graphics card - whereas in OS X, most users can get the eye candy without any problems. Of course, there are certain graphics cards on macs that don't support Core Image, Quartz Extreme etc, particularly on the older macs people are upgrading, but I'm willing to bet the majority of macs will be able to run Core Image etc. Whereas here, the minority of PCs will be able to run the Linux eye candy.

You're right. I mean, what we really need is some way of programming graphics stuff which didn't really care what card was doing the rendering. Some way of standardizing the interface and available functions. Maybe I'm crazy...I don't know, it seems like it might just work.

As for naming, well, it should be Direct. And modern sounding, like Xtreme or something. How about DirectXtreme? Bit long. "DirectX" - yeah - that's cool!

Synchronized smooth resizing so there's no disjunct between window borders moving and the contents redrawing (you should see the demos of this in luminocity... it really makes a difference in how real the interface feels, just as double-buffering did for stuff moving)

Indiana Jones buttons that puff out smoothly animated clouds of smoke when you click on them

Hundreds of spinning soft snowflakes floating over your screen.... without messing up nautilus

A photograph of a field of long dry savanna grass as your desktop background... where the grass is gently swooshed around by a breeze created by moving your mouse across the background

Windows that shrink scale and move all over the fucking place with cool animations

Vector icons with very occasional super subtle animations rendered in realtime...a tiny fly which buzzes around the trash every several minutes, etc... think mood animations as in Riven (which as a total random aside is still a shockingly beautiful and atmospheric game years after it came out, postage stamp sized multimedia videos notwithstanding)

Workspace switching effects so lavish they make Keynote jealous

Brush stroke / Sumi-e, tiger striped, and other dynamically rendered themes where every button, every line looks a little different (need to post shots / explanation of this stuff, but another day)

Progress bars made with tendrils of curves that smoothly twist and squirm like a bucket of snakes as the bar grows

Text transformed and twisted beyond recognition in a manner both unseemly and cruel

A 10% opaque giant floating head of tigert overlayed above all the windows and the desktop.

Now, these fancy effects are certainly kind of cool, and may look nice. (Though I can guarantee that when they're all in, I'll probably still be using Blackbox.) However, is that really all that the future holds? More special effects, without any substantial improvements in usability?

I know you are joking, but why not have animated vector fonts. Now I can really have that aqua font I always wanted.

Also imagine if the font specifies its own translucency and reflectivity with normal angles. Suddenly you can now have a terminal where the fonts look like they are made of glass or mirrors, and reflect other fonts and applications around you.

Of course -- taking this to the extreme, we should have ray traced desktops for that ultimate visual candy.

"...is that really all that the future holds? More special effects, without any substantial improvements in usability?"

Improvements such as.... (?)

Don't think I'm singling you out, as I'm not, but why is it whenever someone posts articles regarding improvements to X, and really to Linux in general, that everyone comes out of the woodwork to complain, without offering any positive comments, or conclusions?

Your post goes into detail about how you don't want these effects and still run Blackbox, but WTF do you want then? And I ask this of everyone when they complain about what developers are working on in Linux... Everyone can complain, but few are able to offer good input, let alone suggest how we get from point "a" to point "b".

Let's face it... Eye candy sells PCs (no... not to you Blackbox users... you guys are probably happy running on a PII still). You want to know why the Amiga still gets the nod a lot of the time? Because it did things with graphics (aka 'eye candy') which no one, on any platform, was doing at the time!

Yes, they had a multi-tasking environment and a lot of other unique things about them (as a former Amiga owner, I can tell you that the pros and cons were pretty equal in some ways... Lemme tell you that I don't miss the "black screen of death", with its esoteric Guru errors!), but the fact remains that the Amiga stood out from the pack due to its eye candy capabilities.

You know why a lot of people (again... not us typical Slashdotters, but the average Joe Computerguy) are drawn to the Mac? It's clean, well thought out, and it looks good on screen! You laugh at the puffs of Indiana Jones smoke comment, but one of the first things many people notice about my Mac is the "puff of smoke" that appears when you drag an icon off the Dock. Yeah, it's cheesy, and won't entertain anyone for too long, but it grabs the eye and sticks with you!

A lot of people in this thread, and elsewhere, point out how much they hate Windows and its GUI, but look at one of the faster-growing segments of consumer software: GUI mods and eye candy! People want a cool-looking computer, and have shown that they're willing to pay for this.

So when everyone's here knocking these guys for adding new and accelerated features to X, I applaud them! Will it win over new users? Very possibly, and even if it does not, it will show that Linux is capable of the same kind of CPU waste as Windows and OS X, which is important to a very large demographic of people.

And I hope that this also indicates that more hardware vendors will be jumping on board soon too! I still find it very frustrating that if I want accelerated graphics in Linux, I have to either run it on older hardware (my old ATI Pro Wonder and a CompUSA-branded S3 Virge, for instance, will run in accelerated modes) or purchase an Nvidia card. I personally like ATI cards, and have them in both my x86 boxes as well as my Mac, and they perform great! Until you add Linux into the mix...

Under X, my 9600 card still will not run in accelerated mode when driving dual monitors. My OSX box and Windows however will handle this just fine.

My point is: rather than berating people for developing something that you're not interested in (all the while alluding to the fact that they should be focused on something else, without quite saying what that something else is), why not focus on the potential increase in users of OSS software (Linux), and think about the hardware support and technology which will follow such an increase in usage? Or better yet, start learning how to code and prove to the world that you're right. All you're doing otherwise is whining, IMHO, and potentially driving developers over to other platforms.

Think about it... You're an OSS developer trying your best to ignore the financial gains of developing for Windows or OSX, in favor of developing something the whole world can enjoy for free, and all your target audience do

Dude, I hope you got a fucking F on that paper. "Display PDF?" Come on, man. Let's run this down, okay?

Display PostScript worked by embedding a PostScript interpreter right into the operating system. The system would run PostScript programs, rasterizing them to the screen, to produce screen output. The system would do exactly the same thing but route the output to an attached laser printer instead of the screen to produce printed output.
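To make that concrete, here is a tiny PostScript fragment (a hypothetical example, not from any actual DPS application): under Display PostScript the very same program could be interpreted to paint a window on screen or to produce the printed page.

```postscript
% Draw a stroked triangle. The interpreter rasterizes this identically
% whether the output device is a screen window or a laser printer.
newpath
100 100 moveto      % start the path at (100, 100)
300 100 lineto      % base of the triangle
200 250 lineto      % apex
closepath
2 setlinewidth
stroke              % rasterize the outline
showpage            % emit the page to whatever device is attached
```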

It's a nice idea, for sure. I just hope it fares a little bit better in reality than Seth Nickell's last grandiose idea [gnome.org]. I'd like to see some of these ideas implemented and not just discussed. Of course, I've contributed nothing to the success of these projects either -- and Seth's ideas are great. I'm not saying that I'm so much better than him, just that I hope some reality can emerge from this grandiose idea so that Linux doesn't develop the same reputation for vaporware as Duke Nukem.

From reading the article, it really just sounds like they are talking about ideas that Raster and co. have been long advocating (and developing) in Enlightenment DR17.

Granted, Enlightenment is a window manager that lives on top of the existing X protocol, but nearly every single piece of 'eye-candy' this guy mentions is already do-able in E17.

Since taking advantage of these new toys would require a new theme system, Havoc and I have been talking about how a very different theme / widget rendering system might work with this, one that allows for custom design of any window, widget, or anything in between. One of the things we designers have been experimenting with behind closed doors is what you can do with a window's design when it's not drawn out of a bunch of stock widgets and you have a freer hand.

Don't get me wrong, the things Seth describes sound cool, but the way he describes it makes it sound like they're the only ones with these ideas, when in fact Enlightenment 17 is already enabling most of what he mentions in this article. Sure, it's not a "production" release yet, but DR17 is certainly usable today, and has most of the features he mentions.

Heck, some things Seth talks about (live window thumbnails) have been available in Enlightenment for quite some time (I know DR16 has them, and maybe earlier versions did as well).

No, the sort of things Seth is talking about are not what E is about. Maybe in superficial stuff like pagers and advanced theme management, but Seth is speaking of a whole new framework.

Enlightenment is sort of hard to categorize. I believe they refer to the whole suite as a "Desktop Shell". That is, they offer more than a Window Manager (a suite of high-powered libraries, a launcher/panel, desktop effects, file browser, etc.) but less than Gnome or KDE in terms of a desktop environment (which include full cross-platform toolkits, application interoperability, central configuration, random daemons, etc.) The goal of E17 seems to be creating an amazing user desktop experience, but the goals of the next-gen rendering are mainly a superset.

What Seth is talking about is the fundamental application stack for rendering windows and widgets on the screen. Right now, the printing usability situation is really bad. This is one area where I think Windows really gets it right. Adding a printer is quite easy, and your document always looks like what those "print preview" pages allege it will. Currently, there's little guarantee that the printed output of your document will match what you expect it to be, because there's two different rendering pipelines for screen versus page. This is what Seth is talking about. Unless they want to get even more ambitious, the Enlightenment project has nothing to do with printing.
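The "one pipeline for screen and page" idea can be sketched abstractly. This toy Python sketch mimics the shape of the design (the class and method names are invented for illustration, not the real Cairo API): the application draws through a single code path, and the backend decides whether the result becomes screen pixels or page description, which is why print preview and printed output can finally agree.

```python
# Sketch of the "one pipeline, many backends" idea: drawing code targets
# an abstract surface; the same drawing calls produce screen output or
# printable output depending on which backend is plugged in.

class Surface:
    def move_to(self, x, y): raise NotImplementedError
    def line_to(self, x, y): raise NotImplementedError

class ScreenSurface(Surface):
    """Backend that would push pixels to a window (recorded as tuples)."""
    def __init__(self): self.ops = []
    def move_to(self, x, y): self.ops.append(("move", x, y))
    def line_to(self, x, y): self.ops.append(("line", x, y))

class PdfSurface(Surface):
    """Backend that would emit page-description operators instead."""
    def __init__(self): self.ops = []
    def move_to(self, x, y): self.ops.append(f"{x} {y} moveto")
    def line_to(self, x, y): self.ops.append(f"{x} {y} lineto")

def draw_document(surface):
    # The application draws exactly once; it neither knows nor cares
    # which backend it is talking to, so screen and print always match.
    surface.move_to(10, 10)
    surface.line_to(90, 10)
    surface.line_to(50, 80)

screen, pdf = ScreenSurface(), PdfSurface()
draw_document(screen)
draw_document(pdf)
print(len(screen.ops) == len(pdf.ops))  # -> True: identical drawing path
```

The real Cairo library works along these lines, with image, X, PDF, and PostScript surfaces behind one drawing API; the snippet above only illustrates the pattern.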

By itself, E17 may be able to give your windows shadows or fake transparency, but a full compositing manager + hardware-accelerated backend will allow true alpha blending, fast updates, and fun live animations like OS X's genie effect. Note that these extensions can easily be used by E17 as well, but such effects are really impossible without them.
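For the curious, "true alpha blending" comes down to the per-pixel "over" operation a compositing manager performs (in hardware, across every channel of every pixel). A minimal sketch, not tied to any real compositor's code:

```python
# Per-pixel "over" blend: out = src * alpha + dst * (1 - alpha),
# applied to each color channel. Channels are 0-255 integers, as in
# a typical framebuffer; alpha is the source window's opacity.

def blend_over(src, dst, alpha):
    """Blend one RGB pixel over another with the given opacity (0.0-1.0)."""
    return tuple(round(s * alpha + d * (1 - alpha)) for s, d in zip(src, dst))

red = (255, 0, 0)
blue = (0, 0, 255)
print(blend_over(red, blue, 0.5))   # -> (128, 0, 128): translucent purple
```

Fake transparency, by contrast, just copies a snapshot of the root background into the window, so nothing behind the window actually shows through when it changes.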

Finally, the toolkit integration is probably the most exciting. I know E17 sports a basic toolkit library itself, but that's probably because they want tight integration between native E17 apps and the WM. I personally think this is the wrong move, because they're probably not going to be able to create a fully-featured and cross-platform toolkit like GTK+. (Hence, not many application developers are going to use EWL.) A GTK (and eventually, Qt 4) application will be able to rely on a sophisticated drawing layer (Cairo, instead of Xlib), which will allow all of its applications to be rendered nicely, allowing blending and more free-form widgets. gDesklets and the like are just the beginning.

So are they putting any work into how these applications will work in the X client/server model, or are they just sweeping that under the rug (a la the DRI and SHM extensions)? I'd be thrilled if they were looking at how to add these as extensions that reduce the number of X calls that need to be sent across the wire, so you could use meaningful GUI applications over slow-to-moderate-speed network connections. Of course, it doesn't sound like it from any of the things he mentioned, and it seems that X development lately has taken a 'the thin client is dead, so who needs network transparency' route.

I don't know about this. Linux geeks really don't have the eye to make an appealing desktop. Microsoft's and Apple's (especially Apple's) UIs are the results of lots of studies, then professional cooperation between graphic artists, professional animators, and programmers. With an open source project like this, it tends to be a mish-mash of gaudy concept effects in odd places, stuck in by guys whose idea of a perfect GUI is a VT100 terminal. If they could all get together and hire some real graphics consultants, then maybe they could come up with something that is really appealing and easy to use. If you use a Mac, after the first minute or so you don't even notice the effects; they are just part of the experience (unless you are using an old G3). The same is true of Windows XP's much subtler alpha transparency effects.

I can't tell you how many potential Linux/UNIX users I know that have told me they're waiting for something like buttons that disappear in a puff of smoke and, until Linux has that, they'll stick with Windows.

It looks like they are moving in the direction of Mac OS X (OpenGL-accelerated, with a vector-graphics 2D backend). Why not put the effort into GNUstep and make it easier for Mac OS X app developers to bring their apps to Linux, extending Linux developers' access to the more commercial market of Mac OS X applications?

This isn't talking about an OS, just window rendering. Providing hardware acceleration won't force DE designers to use snazzy effects, but it will make it so any snazzy effects they do use will be able to take advantage of modern hardware to render things quickly and efficiently.

I was under the impression that most of the UIs for Linux already are functional.

With that said: visual feedback is part of being functional. Imagine if the cursor in the field you typed this into didn't blink. You could adjust to it, but admittedly this 'snazzy' feature is helping you.

'Snazzy' is more beneficial than most realize. Remember that we, as a species, are interactive creatures. Visual snazziness really isn't all that different from body language.