Posted
by
kdawson
on Sunday June 21, 2009 @05:34PM
from the billg-at-least-pretended-users-demanded-innovation dept.

jammag writes "The Linux desktop has seen major innovation of late, with KDE 4 launching new features, GNOME announcing a new desktop, and Ubuntu embarking on a redesign campaign. But Linux pundit Bruce Byfield asks, do average users really want any of these things? He points to instances of user backlash, and concludes 'Free software is still driven by developers working on what interests or concerns them. The problem is, the days when users of free software were also its developers are long gone, but the habits of those days remain. The result is that developers function far too much in isolation from their user base.' Byfield suggests that the answer could be more user testing."

I think your title is a bit misleading. When you say "Linux" I think Linux kernel. Like the Linux operating system itself. What the blogger goes on to talk about are just GPL software projects that are intimately tied to Linux. That said, I could install Slackware, Damn Small Linux, or any number of flavors of Linux that have none of the projects being discussed.

You can chat all you want about Gnome vs KDE and which one is bloated--trust me, that is not something I'm ever going to take a position on. I value my life too much.

I might have missed it, but I didn't see anything about developers wanting their changes to be seen. That's probably a big factor: you could spend days optimizing the kernel for a better experience, but the average user won't see anything. Or you could add some awesome UI functionality to a windowing framework (Compiz Fusion?) and suddenly everyone's seeing it. Pretty obvious what some people might aim for...

Lastly, I've noticed that some of the more mature products like to move in an even/odd fashion, where one release stabilizes things, the next adds new features, the next stabilizes, then new features... ad infinitum. Even kernel development is done this way, I believe. So you know people like Shuttleworth are trying hard to make this work. I think the last bit of criticism that's going to help them move forward is "You're innovating too much."

But that's why he didn't say "Linux". He said "Linux Desktop", which I take to mean the entire software ecosystem based on Linux on a user's desktop. It's an apt description.

It's not a misleading title, if you accept the premise that "over-innovation" is what is causing the disjoint between developers and users. I think it's just more likely that developers don't really understand the users, and for all the merits of free software, there are some things that centrally-managed, proprietary software does better, because the non-programmer professions involved in product development expect to be paid for their services, and most open source projects do not have a workable way to monetize the overall project to cover those costs.

But that's why he didn't say "Linux". He said "Linux Desktop", which I take to mean the entire software ecosystem based on Linux on a user's desktop. It's an apt description.

Right, because when you're running "Linux Desktop" you're running only KDE or Gnome and using Open Office. I'm certain Linux developers would quietly point you to the door if you told them that Linux Desktop means that. I also think the KDE, Gnome or OO.o devs would point you to the door if you told them that they are Linux Desktop. They work on other operating systems, you know.

They are the Linux Desktop though. If you look at X sessions, the tremendously overwhelming majority are KDE or Gnome sessions. Furthermore, the neatest, fanciest features of Gnome and KDE tend to come later to the BSDs and Unixes. Case in point: How many versions did it take before the gnome-volume-manager worked on FreeBSD to allow for automounting?

As someone who believes the average person needs to know more about computing, I think that the difference between micro and monolithic isn't on that list of things that are relevant. I agree with your overall point, though.

"I think it's just more likely that developers don't really understand the users"

What if what happens to developers is that they don't give a damn about what "the users" want or need?

There are developers that do care about your kind of "Joe Sixpack" users, be it because of personal inclination or because they are paid for it, and then there are developers that program for a myriad of other reasons, and that's perfectly OK. Unless you can point to and demonstrate developers that genuinely try to focus on Joe Sixpack kind of users and fail, there is no such "problem" at all.

"for all the merits of free software, there are some things that centrally-managed, proprietary software does better, because the non-programmer professions involved in product development expect to be paid for their services, and most open source projects do not have a workable way to monetize the overall project to cover those costs."

And now you are comparing apples to oranges. It is not "centrally-managed proprietary software" but "centrally-managed software", as long as its central management focuses on Joe Sixpack satisfaction. Can you demonstrate, even at the purely logical level, that a centrally managed open source project focused on Joe Sixpack satisfaction is worse suited to the challenge than centrally managed proprietary software focused on the same goal? I don't think so.

"What if what happens to developers is that they don't give a damn about what "the users" want or need?"

Precisely, and this I think is why we need to erase the developer/user distinction by creating languages and tools which allow USERS to create their own solutions. Then the users BECOME the developers, and make the system they want.

This doesn't happen at the moment because it's too hard - there's a big gulf between 'user' and 'developer' level tools. The conventional default way of creating GUI apps - C/C++ plus a toolkit - is far beyond what most users can approach.

Users can become developers... very easily, if they want to and have the time to learn things properly.

The languages themselves aren't hard, but explaining exactly what you want to do in a programming language, in an efficient manner, can be, depending on the task.

Also, this configurability you speak of, with many non-developers doing it, would more or less confuse people. We already have enough people bitching about "there is no standard blah" with only a few well-done options available that serve different needs.

For example, software like multitalk [ucam.org], server-less sharing of a work folder over the internet, or a multi-head view for pdf presentations may have innovative solutions available that are intended for the average office user, but they never reach end-users.

=> Polishing, publishing (as in getting it as a package to the end user), maintaining, and reacting to user requests is extra work that takes a lot of time and effort.

This is very true. Having worked at a large software company writing developer tools, we had HIE (Human Interface Engineering) people evaluate everything with a GUI that was shipped to customers. Mind you, this was software written by and for developers so the rules were a bit relaxed but, I have never been so close to committing homicide as I was when I would get e-mails like this in my inbox:

- The black line between widget foo and bar needs to be 1 pixel closer to widget foo.
- The black line between widget foo and bar needs to be color #111111 instead of #000000.
- The splitpane between widgets foo and bar should default to 437 pixels wide and not 450 pixels wide.
- The vertical scrollbar should scroll 5% slower.
- The hotkey for menu item foo should be Ctrl-baz and not Ctrl-bar.
Etc, etc, etc.

It took me slightly longer than normal to implement all these changes because I was distracted trying to decide on a fitting way to end the e-mail author's life, but in the end I implemented all their "suggestions". I'm ashamed to say that they were right. The product was far more polished after I did all those seemingly pointless things.

To summarize: Developers shouldn't be in charge of GUIs. Even if those GUIs are only intended for other developers.

Just out of curiosity did you notice that the product was lacking some polish before you made the changes?

No, the product seemed pleasant looking and very usable from my standpoint. After implementing the changes HIE suggested I was blown away at how great the shipping product was. In fact, that single experience probably changed the way I write GUI applications and, 10 years later, I think if I were to write a GUI application for the same company, HIE would be sending me far fewer e-mails about mundane details.

"Human Interface Engineer" sounds like a bullshit title but, if you get one that actually knows what they are talking about and you listen to them, it can drastically improve the quality of your software. I think the point of the GP was that open source software often doesn't have the level of strictness where a non-programmer can say, "No, it's not polished enough to ship". When you know that the final judge of whether your software will ship or not comes from someone that cares more about presentation/interface/usability than the technology behind it, you write your software differently.

Yes, what they do decide to work on is more important in some ways. But I daresay adding that little extra can be just as important if not more so in other ways.

Apple understands very well how the perception of "insanely great" can cover a multitude of problems under the hood.

There's a vast difference between the users perceiving your product as "oh well it works", "this is nice" and "hey this is sooooo coool! (must have ASAP!)".

Whereas KDE says:

"Kicker is currently unmaintained, you can look to your distribution for help, however."

Look to your distribution for help? A lot of people might just look to (or stay with) OSX/Windows for help instead. And tell the Linux Desktop Zealots who try to "convert them" that OSX "WORKSFORME", or Windows "WORKSFORME", and who the fuck cares that it's not OSS and it's an "evil proprietary OS".

As for innovating too much, a lot of what they do is not innovating at all. For example: "wobbly windows"?! How the heck does that help? If I want to play with stuff that wobbles, I load up World of Goo or something.

Without a good Human Interface Engineer, or someone with a lot of say who understands that stuff, they'll end up producing tons of "innovations" that are not actual innovations in UI. Stuff like initially attractive cutscenes in a game, which the users eventually try to skip because they end up being annoying or getting in the way.

While it's sad that the issues weren't addressed back when they were raised, closing them nowadays as unmaintained is the proper response to anything related to KDE 3.5. The KDE3 code is unmaintained, and they have stated multiple times that there will be no more releases of the KDE3 code base (from the KDE team at least; people are free to fork GPLed software...).

Honestly, I don't think that kind of UI design is all that critical. If it'd been a few steps higher up, like workflow design, then I'm all with you. Like if a user wants to do this, he should [click a button/use a menu/write a command line], after which he should get a [dialog/wizard/use defaults] which should contain [basic options/all options/preview]. Often it gets so complex because geeks design it with a million things to tweak along the way from A to B, when most people want the simplest, straightest route. In particular I've noticed that geeks are much better at visualizing certain kinds of results, so they understand what they're doing while others don't. Often what's needed are simple tools to show "where am I in the process?" or "what will the effect of this be?" to go from zero to hero.

That level of polish is critical for user acceptance. If you give a user an unthemed GTK desktop (which is hideous) they will blame any and all failings in their ability to use the software correctly on it being "primitive" just because it doesn't look flashy. For better or worse, compiz wobbly windows probably drove more users to linux than, say, the superior workflow paradigm of multiple workspaces.

Having said that, I agree that workflow design is also important. It was included in the e-mails I'm referring to but, to make my point about the culture clash between Humans and Nerds, I only included the most ludicrous examples of the types of things that proper HIE will make you do.

"Honestly, I don't think that kind of UI design is all that critical. If it'd been a few steps higher up like workflow design, then I'm all with you. "

And here's my question to you:

Why can't the user create their *own* workflow on a modern desktop?

Why aren't there tools available to allow the user to script and remix modes and functions of applications into their own new applications?

Why can't I take a GUI application that annoys me because the buttons are laid out wrong, and edit the window so the buttons are 35% bigger and slightly to the left, then post just that change somewhere safely on the Web so others can critique it and use it?

If I see a spelling error in a dialog box in a free application, why can't I *instantly* click somewhere, fix that error, and repost it, just like I can on Wikipedia?

How much do I need to know about thread-safe signal-handling GUI event loops in order to change a badly-drawn icon or resize a scroll bar?

We're not leveraging the full power of the Free Software mentality unless we can enable small, safe, incremental fixes like this, all across the user base.
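Some of this is already technically within reach: GtkBuilder/Glade ".ui" files are plain XML, so a tweak like "make this button 35% bigger" can in principle be a tiny script over the UI description rather than a code change. A sketch below (the .ui fragment is made up for illustration, not taken from any real application):

```python
import xml.etree.ElementTree as ET

# A made-up GtkBuilder-style fragment; real .ui files are larger but
# follow the same general shape.
UI = """<interface>
  <object class="GtkButton" id="ok_button">
    <property name="label">OK</property>
    <property name="width-request">80</property>
  </object>
</interface>"""

def scale_button_width(ui_xml, button_id, factor):
    """Return the UI XML with the named button's width-request scaled."""
    root = ET.fromstring(ui_xml)
    for obj in root.iter("object"):
        if obj.get("id") == button_id:
            for prop in obj.iter("property"):
                if prop.get("name") == "width-request":
                    prop.text = str(int(int(prop.text) * factor))
    return ET.tostring(root, encoding="unicode")

patched = scale_button_width(UI, "ok_button", 1.35)  # 35% bigger
print(patched)
```

Whether a change like this can then be shared and re-applied safely across versions is exactly the kind of infrastructure the parent is asking for; the script only shows that the edit itself is mechanical.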

Not to mention that as someone who worked in support once upon a time, that level of configurability would be a NIGHTMARE.

Rep: "Click on the balloon icon?"
Customer: "The what?"
Rep: "The icon next to the ruler."
Customer: "Oh. I made mine look like a pony. I love ponies."
Rep: "Whatever. On the window that pops up go to the formatting tab."
Customer: "I don't see it. What's it do?"
Rep: "It lets you change how your text looks."
Customer: "OH THAT. I renamed it to 'How stuff looks.' I drug it over to that 'About' window too, because how my file looks is kinda like what it's about, you know?"

Just look at what most users DO do with the customization options they're given. Mind you, not Slashdotters who want to "tweak for optimum performance" (which is only true for half of them - the other half will do things like transparent terminal windows that must get dragged into just the right positions for the most 1337 UI screenshot they can devise), I mean your standard old cubicle-bound office worker. They don't try to make anything more efficient or fluid. No, they have a pile of icons scattered around their desktop with kitten wallpapers, dinosaur cursors, yellow/pink/neon green color schemes, and cows mooing at them when they throw something in the trash.

Just as I wouldn't want to sell these people a TV that had an easy access panel and included a soldering iron and modding options, I wouldn't want to give them software that's TOO easily changed either.

That would explain a lot. At work our log program looks OK, but it requires an inexplicable intervention of the mouse when changing between two specific fields. Everywhere else on the form I can get to the next field by hitting tab, except for that one, which doesn't work right. And on top of that, the developers working on it decided that rather than being able to type 24 as 2 4, we have to do it as 2 2 2 2 2.

I can't pretend to understand what sort of brain-damaged logic resulted in that being signed off.

It took me slightly longer than normal to implement all these changes because I was distracted trying to decide on a fitting way to end the e-mail author's life, but in the end I implemented all their "suggestions". I'm ashamed to say that they were right. The product was far more polished after I did all those seemingly pointless things.

Don't feel ashamed.

It's been said time and time again, but it bears repeating - developers don't understand how important a GUI is to the end user. All those little things you mentioned were an annoyance to implement, and yet had a cumulative effect that even you could appreciate. The problem is that you had someone to kick your ass and tell you what was necessary to implement for the GUI, and since it was your job and you were being paid to do this, you obviously had to implement the additions. Developers for OSS unfortunately do not have such motivation and do not have an external force to push them into improving the GUI in such subtle ways, and this is why OSS tends to (but not always) have a far less slick interface than their closed-source counterparts.

The iPhone has a slick interface. This is noted by virtually anyone who uses it, but this interface wasn't an accident of design.

This is exactly why I switched from Linux back to Vista (I went from Vista to Linux initially because of hardware issues). I wanted to love Linux; I think at its core it is a fantastic OS, and I'm all about free.

But I'm not a programmer, I've got very, very mediocre artistic abilities, and my attempts to pretty up just my desktop experience didn't produce anything I could continue with. I couldn't find a "theme" I liked, and themes didn't fix a lot of the other UI issues, like not being able to change a number of settings via the GUI. In Windows and OSX, the CLI is reserved for when things have gone very wrong or you are trying to do very power-user-level stuff, not everyday things like adjusting sound settings or installing programs that aren't in your distro's repository.

Basically, if Linux on the desktop looked as nice as OSX and the user rarely needed to go to the CLI for anything it would be crushing OSX and be solidly competing with Windows. That's my estimation, anyway.

Then there are desktop GUIs like Enlightenment, where you effectively have two layers of developers. The graphical elements are implemented as themes over the top of the window manager. One of the problems with the 0.16 version is that while you don't actually have to be a software developer to write a theme, you do have to be a bit more than a layout artist or web page designer. A few years ago some very useful desktop GUIs were produced by a variety of people (including Rob Malda, the founder of this site).

To summarize: Developers shouldn't be in charge of GUIs. Even if those GUIs are only intended for other developers.

I would add: Developers shouldn't be in charge of platforms, especially if those platforms are intended for end-users.

One of the main problems with most FOSS projects trying to produce normal 'everyman' apps and OSes is that they are primarily coding (and trying to design) to show off to each other. They aren't connected with end-users' interests and expectations in meaningful ways, so even when going for maximum polish they end up with something impressive or passable mainly to very advanced users. These FOSS devs also tend to have poor software methodology, which further prevents them from cataloging and prioritizing users' wants and needs (requirements and use cases, anyone?).

As a KDE fan, I have to say a lot of KDE stuff falls into this category of the "candy-fied yet inaccessible". On sites like kde-apps they are very into showing off kool 'end-user' type stuff to each other without any thought as to offering solutions with feature stability.

Ah, feature stability. That's what the supposed "Desktop Linux" platform would have if it were a platform. But it's not. There is nothing that specifies a set of rich and modern features/behaviors that would cause either a budding application developer or a typical end-user to feel reassured and at home as they try to write for and use various Linuxes. Such a specification would entail making an "interface contract" with non-peers (non-system-coders, i.e. end users), when these coders are really only thinking about the reactions of their peers.

It's the applications that 'sell' the system. As young application developers cut their teeth, they are almost certain to start with and stick with a highly targetable (well-defined) platform. And they will first learn the suggested coding styles at the Apple Developer Connection or MSDN, starting with the default toolsets offered (Xcode, Visual Studio). At some point inspiration will strike them and (unless it's web-centric) they will try out their ideas in these nurturing environments first.

I think your title is a bit misleading. When you say "Linux" I think Linux kernel. *snip*

Yes, misleading, but rather typical of the general misunderstanding that is prevalent. But then again, what value is a kernel to the average joe? So it's just easier than trying to explain to a non-techie how it all fits together (kernel, X, desktop, etc.).

Even with BSD where it IS the sum of its ( official ) parts, the explanation still gets messy.

It's not necessarily a misunderstanding. I, and most of my friends and colleagues, use "Linux" as shorthand for "Linux-based operating system". We are well aware of what the Linux kernel is, and what the operating systems consist of. However, that usage is both concise (no, I will bloody well not say "GNU/Linux" every time, any more than I will say "Linux-based operating system") and understood to a sufficient extent by non-techies as well as IT people. By all means try and earn nerd-cred by complaining about it if you want, but I view that behaviour as pretty much on the same level as the grammar-nazis here on slashdot - they may be technically correct, but they are annoying and unproductive, and we could get by with a lot fewer of them.

...but I view that behaviour as pretty much on the same level as the grammar-nazis here on slashdot...

Technically "Slashdot" should start with a capital letter since it is a proper noun.
Yours sincerely, a spelling/grammar/punctuation-Nazi.
>_>
*ducks*
Disclaimer: I probably screwed something up there; feel free to call me out on it. ;)

But what else do you call it? To most folks, 'desktop' is the place where the computer sits. When you work as a computer technician like I do, you get into the habit of explaining things in fairly simple terms to people. A regular computer user is going to scratch their head if you say that you're talking about a KDE desktop environment, running on X.org, running on a GNU/Linux system. 'Linux Desktop' is as good an umbrella term as any.

Bear with me for a moment: where do elephants pack their clothes when they go on vacation? In their trunks, of course. Now, this is a joke that you or I get right away. There is a pun involved, i.e. the two meanings of the word "trunk," and the ambiguity of the context provided by the elephants. Now, tell this joke to the average 7 or 8 year old, and watch them as they repeat it to other children. It is quite likely that they will tell it incorrectly, leading to a joke that doesn't make sense (i.e. they might replace "trunk" with "suitcase," or forget that elephants are involved). They understand that the joke is supposed to be funny (people laughed when it was told, so it must be funny), but they don't really understand why it is funny, because they don't really get the pun.

I think that the same might be true of many developers. They see UI elements in software like Mac OS X, Vista, MS Office, or other programs, and understand that these elements must be important. However, they don't really get why they are important, so when they clone them into their own projects, they come out misshapen and not quite right. They clearly understand that the element is useful, and that people want it, but without understanding why it is useful, or why people want it, they end up with something that doesn't make sense.

It seems like the major Linux distributions have put effort into fancy eye candy for eye candy's sake, not for usability's sake. There are so many details that the Linux community has never really considered making a major part of the distributions. It has two main targets, the complete idiot user and the expert user. Between those there is really a big hole.

Just recently I needed to switch my network settings from DHCP to a static IP address. For Windows and a Mac that is a simple task: fill out the form and you're done.
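For comparison with the "fill out the form" step on other OSes: on Debian-style systems of that era, the equivalent static-IP configuration lived in /etc/network/interfaces as a short text stanza. A sketch of generating one (interface name and addresses below are example values, adapt to your network):

```python
def static_stanza(iface, address, netmask, gateway):
    """Render a Debian-style /etc/network/interfaces stanza for a
    static IP address. All values are caller-supplied examples."""
    return "\n".join([
        f"auto {iface}",
        f"iface {iface} inet static",
        f"    address {address}",
        f"    netmask {netmask}",
        f"    gateway {gateway}",
    ])

print(static_stanza("eth0", "192.168.1.50", "255.255.255.0", "192.168.1.1"))
```

The information content is the same few fields as the Windows/Mac dialog; the difference is purely that one platform presents them as a form and the other as a file to edit by hand.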

While I absolutely agree with the general idea you're referring to, NetworkManager does have a cute GUI that can very easily change, among other things, the configuration of a network interface from DHCP to static, much as one is accustomed to doing with other OSes. Granted, I believe this dialog is quite a recent addition to the project; I'm quite sure it wasn't there a couple of months ago.

On a related note, this particular problem is an excellent example of over-innovation on the part of Vista; am I the only one who thinks so?

If you are someone that's comfortable with ifconfig or /etc/network config files, would you even notice the GUI if it were there? How long would it take you to notice? How soon would you go out of your way to do things in a manner other than "how you've always done them" so that you would notice such changes?

The long-time expert Linux user is perhaps not in the best position to evaluate how good Linux is at accommodating newbies.

Granted, I believe this dialog is quite a recent addition to the project; I'm quite sure it wasn't there a couple of months ago.

I thought NM had had a dialog for that for a while; certainly, Ubuntu has had a GUI for changing settings such as DHCP/static IP for as long as I can remember. That the OP couldn't find the setting is, I guess, a problem, although it's not obvious to me where would be a better location than the "Network Connections" item on the "System" menu.

UI and workflow design and project management aren't glamorous or interesting so they don't get done.

I don't really think it works like that. It's 2009, and by now I'm sure everyone understands the value of good UI and workflow design, but it's quite difficult to do well, and I'd be surprised if either GNOME or KDE didn't often find themselves without the time or expertise needed to get usability up to the desired standard. Of course I would argue that there are several apps on the Linux desktop with great usability - I personally like Firefox, Dolphin and Okular, just to give a few examples. But I would agree that usability isn't as consistent across the platform as it is on, say, Windows.

Cowboy coding only gets you so far.

Oh, okay, so the basic gist of your comment is just that the free desktop coders are a gang of useless cowboys hacking together a bunch of buggy, improperly documented crap for the riches and renown which will obviously be forthcoming from such an endeavour. How about you go and read e.g. some blog posts by KDE or GNOME developers, because you will discover that a lot of the people working on such software are passionate and proud about what they do and put an awful lot of thought and effort into trying to do quality work. Granted there are some bad apples in the bunch as usual, but I think that the majority of problems these projects face are down to lack of resources, above anything else. But hey, why not throw around inflammatory, pejorative terms like that.

The basic gist of my comment was exactly what I wrote. Passion and pride are apparently not enough, you need to attract people who will do what's needed, and that isn't coding at the moment. I would suggest switching off of whatever medications you are currently using if you found what I said to be inflammatory or pejorative.

UI and workflow design and project management aren't glamorous or interesting so they don't get done.

You don't say anything about not attracting the right people; instead, your words suggest that the work that needs to be done is not actually considered important or worthwhile by the people who should be doing it. I can't see how anyone could get anything else from those words.

Wampus has probably already left this thread of discussion because you seem too keen on reading things into what he's writing.

All they want is something that will be stable and get the job done, in a consistent manner. Often the bells and whistles, added for the sake of having them, just get in the way and damage consistency, making things confusing when they don't need to be.

All the average user wants is to chat via Live Messenger, check their Hotmail account, look at Facebook, and check how badly their eBay listings are doing... they generally couldn't give any less of a toss about everything else that is going on.

Bad features die, good ones remain. The alternative is to shove crap down end users' throats, and when they don't like it, to continue shoving, the way Microsoft did with MS Bob and later Clippy in MS Office. The big difference here is that innovation does not occur without failure. Open source can afford to make mistakes. Closed-source companies have to keep useless and failed features in their products, because otherwise the time spent has been wasted and investors may sue the company.

User-centric design is the issue. When MS put Clippy in, I don't know how much of that was some developer or pinhead thinking it would be really cool, and how much of it was actually user-centric design. Same thing for putting the command to change the desktop resolution on the context menu. Sure, it was something easy to do, and it certainly showed those people who made fun of MS for being the only modern OS where one had to reboot to change resolution, but does it serve a rational purpose? One rational purpose it might serve is for those that occasionally need a lower resolution, but that problem has been better addressed through other means.

In the end, one has to have a system where best practices win over bloat, where things that aren't that useful are removed so they do not consume recurring resources at every release. For instance, I know that egos are tied up in the multiple *nix desktops, and all desktops have a right to exist, but significant progress could be made if the community could select one desktop to develop towards, even if it means that the solution is imperfect.

One thing that Microsoft has done well is to maintain continuity with the past. The desktop of Windows 95 is still available on all consumer versions of Windows up to Windows 7. In Windows 7, you can select the "classic" appearance for the desktop to get the Windows-95 look and feel.

Most people -- except tech geeks -- do not want to learn a new way of doing things once they learn a particular way that suits their needs.

Moreover, learning takes time and money. If your company has 100,000 employees, then training them to use a new desktop can cost millions of dollars.

If GNOME developers want Linux to take a significant share of the consumer market, then they must ensure continuity with the past. Before they embark on the next super-duper upgrade of the desktop, they should spend some time in asking their grandmothers what they want in the next super-duper GNOME desktop. Grandma's advice could help a lot.

I think simple desktop environment projects like LXDE [lxde.org] show how to do it right: focus on speed and responsiveness.

Don't try to be artificially different, don't confuse, do what users want but don't do more; keep dependencies as few as possible, and if it doesn't work as intended, throw the component away. Do one thing with one application. And most important of all: the desktop environment is not the application. It should be like a professional servant: you won't notice him, and you don't need to waste your time commanding him.

Funny - I've usually seen it's the geeks who take the trouble to turn on the 'classic' look and feel in Windows and get rid of all the cloying eye-candy. Meanwhile non-technical users just stick with the default.

That's probably because only geeks care about the extra desktop real estate gained by reducing the size of window decorations... Most people use programs at near full-screen size anyway. (Probably partly because the excessive bloat in window decorations and toolbars almost requires it to be usable at "normal" resolutions these days. Trying BeOS a few years ago gave me the feeling of almost doubling the resolution of my laptop, so effective/minimalistic were the windows. And that was compared to "classic" in XP!)

Funny - I've usually seen it's the geeks who take the trouble to turn on the 'classic' look and feel in Windows and get rid of all the cloying eye-candy. Meanwhile non-technical users just stick with the default.

I consider myself a geek but I like Compiz and Aero because not only are they more modern looking than the boring old grey themes of past desktop GUIs, but they also have the benefit of offloading the rendering of the GUI from the CPU and onto the GPU, which in most cases improves responsiveness.

Funny - I've usually seen it's the geeks who take the trouble to turn on the 'classic' look and feel in Windows and get rid of all the cloying eye-candy. Meanwhile non-technical users just stick with the default.

That's the power of the "default" which is a big deal as well. Most non-technical people don't even realize such options exist or that you do not have to use the default. To be fair though, to Microsoft's credit, often the default is good enough and many don't even care to change it because it will typically allow one to get the job done. Some might say this is NOT the case with some recent changes in Linux desktop environments.

Continuity is there in free desktops in the same way it exists in OS X and Windows... in parts. Gnome, KDE, MS, and Apple have all at some point had to accept that backwards compatibility has too high a price, then swallow hard and offer something that upsets a lot of people (even more than usual, ha ha). Anyway, there's plenty more to the free desktop than Gnome and KDE, so it's not even a notable issue for many.

Mostly the article is filler. Precis: is KDE lead developer pissing in the wind? Maybe. Shou

The essential problem with free software is that most of it is written to scratch someone's itch. Usually, the ones who start off coding to fix their problems are the developers. Over the last decade that I've used Linux (and other F/OSS) on my desktop, I've seen a radical shift in how developers are influenced to do what a user wants. More so, I've seen the system favour the ones who have user focus over those who dictate from their ivory towers and yell back "sure, send me a patch & we'll talk about it". You did your bit, and others built on your work to get where they wanted... and with the GPL in place they didn't really step on your toes.

Essentially, you didn't owe the user anything, really. The user paid in attention & respect. The developer did what the user wanted as long as he (or she) wanted that respect. Beyond that, it was just about fun, back in the Y2K days.

It'd be vastly different if someone paid me for it. Well, yes... someone does pay me to churn out F/OSS code, and I deal with that vastly differently from my other projects.

The essential problem with free software is that most of it is written to scratch someone's itch. Usually, the ones who start off coding to fix their problems are the developers. Over the last decade that I've used linux (and other f/oss) on my desktop, I've seen a radical shift in how the developers are influenced to do what a user wants. More so, I've seen the system favour the ones who have user focus rather than dictate from their ivory towers and yell back "sure, send me a patch & we'll talk about

If developers work on whatever they feel is important, Free Software eventually wins, just like the zombies. If people don't like new things, then they pay attention to old things, and work from the old version to fix things. Free Software can never get worse. Old versions never get discontinued. Free Software is an always expanding ecosystem, and it grows with every line of code that is shared with the public.

Oh, come on, fork KDE 3 and go on with its development. Or fork KDE 4 and bring it back to be more KDE 3-ish.

What you are missing is that a certain level of organization is required to manage projects as large as that. And if you don't like the direction some Free Software project is heading in, you cannot fork it without forking the entire organization behind it. And it's so much easier to just switch to something else.

That is a very valid viewpoint to hold. You can most certainly say "I don't owe the user shit." It is your software, and you are nice enough to let others use it. Thus they can use it on your terms.

HOWEVER, when you do that you lose any and all right to claim that your software is "better" for the user or "what they should use." If they have objections to the way it works, you need to be graceful and say "Ya, my software doesn't do that well, I don't care to fix it so if something else works for you, go to it."

The problem is that there seem to be a number of OSS types who want to have their cake and eat it too. They evangelize an "OSS for everyone" position: you should use all OSS all the time, it is the One True Way(tm), and it gives you better software because everyone collaborates. However, when a user then says "Hey, this doesn't work as well as my commercial app," they get angry and say "You didn't pay me, I'll do what I want, fix it yourself if you don't like it."

Sorry, can't have it both ways. If it is a situation where you think your way is the best way and you want everyone to use it, then you've got to work to accommodate users. You need to make it do what they need as well as or better than their old apps. On the flip side, if you want to offer it with no support, then you need to offer it as-is. Don't push it as being things it isn't and won't be.

This is a problem I've run into with people trying to convert me to Linux. I tell them the things I want to do, but can't seem to. They then give me things that aren't real alternatives. When I say "This doesn't do what I need," I get told that I either "shouldn't do that" or that I "should write it myself." Sorry, those aren't legit options. If you want me to use your stuff, it needs to work for me. If you don't want to make it work for me, then don't push it as a solution for me.

When I think of Free Software, I generally think of the community where the developers are the users and the users are the developers. "Open Source" still smacks of the buzzwordism of the late '90s: getting corps to invest in opening code under the assumption that they'll be able to get free work out of some sort of "community" while lowering their development costs.

What's wrong with the developers working on what the developers are interested in? If I (the royal 'I' here) am not being paid for my time or my code, then "users" should just be glad that 'I' have decided to make the fruits of my labor available to them, too. Perhaps I just don't get this mentality that it's some sort of competition between 'Linux' and Microsoft and Apple, and that we have to compete for desktop market share for some stupid-ass reason. I just don't really see it as that big of a deal. Maybe for a company like RedHat it is, but that's not me.

The concept that the developers are 'innovating too much' and 'alienating the user base' just seems akin to someone crashing a frat party and then complaining that all they're allowed to drink is the Beast.

It is a problem when the developers are trying to make a consumer desktop, however; last I checked, many big Linux-related projects (including both Gnome and KDE) are gunning just for that; so no, your statements are not valid here.

There are, of course, exceptions: but none of those are what this is talking about.

Yes, but /why/ are these projects "consumer desktops", or supposed to be? Back in the day, they were just doing their thing. KDE started because people thought it might be nice to have a desktop system for Linux, and CDE was expensive. GNOME started because KDE wasn't technically "free software" due to Qt licensing issues.

RedHat jumped on the Gnome bandwagon, started paying devs, and sort of took the lead. A similar situation occurred with KDE, IIRC. The way I see it, the community projects got hijacked by the corporate Linux pushers, and now people are complaining about the stuff that hobby hackers are putting into those projects.

If having some "consumer" desktop that gives warm fuzzies to people when they're looking at computers in Best Buy is so damned important, then maybe RedHat, Novell, and others ought to just pull an Open Group and write said desktop, rather than attempting to exercise overbearing authority over projects that were started without them.

But I am not now, nor have I ever been, an influential figure in F/OSS, and my contributions have mostly been fairly insignificant and flown under the radar unless you were specifically looking for them. However, if I ever get around to releasing something interesting that's worth being hijacked by IBM, who for some reason leaves me relatively in charge rather than forcing a coup, makes the project an international sensation, and then puts me in a position where people I've never heard of are making demands that I add features to support their "mission critical" b.s. or design it to look the way /they/ want, I'll tell you right now -- I'm going to be kind of pissed off.

To clarify, 'Beast' is slang for Milwaukee's Best, a particularly nasty so-called beer that costs about $7 USD for a 24-pack of cans. At most schools (at least mine, and any other where I ever went to a party), the fraternity houses stock up on it to provide for party guests who are not special enough to be entitled to the good stuff, or smart enough to know they should bring their own anyway.

Oh yes, another self-righteous rant attacking the directions of free software projects just because they have the audacity to venture far beyond where Windows stagnated a decade ago. The article's author doesn't say much besides criticizing projects such as KDE, GNOME, and even Ubuntu for their ideas regarding the desktop. And he does a bad job of it, to boot.
For example, the author criticizes KDE for the audacity of thinking about implementing social networking features into the desktop. Is that supposed to be a bad thing? I mean, what's the difference between having an application such as Windows Live Messenger constantly running and implementing some sort of widget that performs the exact same task? At least KDE's implementation follows open standards, and it doesn't force plenty of ads down our throats. What's wrong with that sort of innovation? Absolutely nothing.
And his criticism of GNOME is pathetic. I mean, he criticizes GNOME not for innovating but for rewriting it. He has absolutely no detail to grasp onto, and in fact the only thing he can muster about GNOME is "its final form at this stage is anybody's guess". Is that what the author perceives as innovation?
And more to the point, who exactly is the author to make authoritative judgments about what users want or don't want? Is he a psychic? In fact, where was the author during these past dozen years of the Windows desktop? After all these years, Windows is incapable of offering extremely basic stuff such as the ability to set any window the user wishes to be always on top. And what about the ability to scroll a window without changing the focus to it? And what about getting rid of that really annoying bug where, when a user launches an application, the focus stays on the former application while the newly launched app is placed on top of every window on the desktop? Would fixing those bugs also count as too much innovation?
The article isn't worth the read. Nothing to see here, move along.

"For example, the author criticizes KDE for the audacity of thinking about implementing social networking features into the desktop."

Actually, as far as I'm concerned, the absolute last thing I want anyone to be implementing in my desktop is "social networking". Social networking should be an application that people who want to use it run from the desktop or in a browser, but in no way, shape, or form should it be "integrated" into my desktop. That would be a case of a developer making

The best of the Apple experience is polished, user-oriented, and "insanely great" because it takes a Steve Jobs to set the vision and make every component answer to that design. That's hard to do in the FOSS world.

So I, for one, am glad Mark Shuttleworth is attempting to put some top-down focus on a user-oriented set of goals into the Ubuntu desktop. Linux has not lacked for technical innovations, it has lacked for a unified vision that elevates the end-user and a chief to get developers to sign on to that vision. Go Mark, go!

Unfortunately for Linux (which I'd love to see kick ass and take names), Shuttleworth is interested in consensus over quality too often. To do what you're saying, you need a hard-nosed, damn-near-messianic figure who is willing to fight tooth and nail to realize exactly what he wants. This is not really very compatible with open source to begin with, and Shuttleworth is not that guy anyway.

Yeah, hence "cat-herder" vs. dictator. I don't know anything about Shuttleworth's management effectiveness, but we agree that an actual Steve Jobs style could not work in FOSS.

But in FOSS-land, Shuttleworth seems to be in the best position to put out a distro unified behind making the end-user experience great, which is what Jobs clearly aims for in his products.

And personally I think Fedora is already shifting some of its focus towards more end-user happiness in response to Ubuntu, where Fedora developers once made manly sport of scoffing at end-user concerns. (Having said that, I'm obliged to point out that Fedora devs have made huge pre-Ubuntu contributions to stuff that "just works" for users, like NetworkManager. Ubuntu has a long way to catch up to contributing actual lines-of-code, but they are ahead in setting the direction and thus gaining users IMO.)

I, for one, am glad Mark Shuttleworth is attempting to put some top-down focus on a user-oriented set of goals into the Ubuntu desktop. Linux has not lacked for technical innovations, it has lacked for a unified vision that elevates the end-user and a chief to get developers to sign on to that vision. Go Mark, go!

BINGO!

You just nailed the flaw in the original article. The author seems to think that FOSS developers somehow need to remain responsive to anything beyond the particular itch they want to scratch. FOSS doesn't work that way. Developers do what they do. If their output is sufficiently interesting, distro-makers package, polish and bundle their work.

See what I did there? I allowed for diversity and division of labour in the FOSS ecosystem. Imagine that! Developers doing what they do best and distro-makers preparing that work for public consumption.

Do poorly-socialised package maintainers sometimes drive their users away? Damn straight. Are there flaws in Linux distros? You bet your boots. But if we're going to criticise them, couldn't we at least point our critiques in the right direction?

FOSS development, packaging and polishing is a decidedly human process, with all the inefficiencies, redundancies and illogical acts that all human processes entail. One can argue (though I never would) that commercial software designed and developed by customer-focused companies is inherently better. In my opinion it just trades one set of problems for another. (If I had to generalise, I'd say it's the difference between often useful but unpolished software and often useless but highly polished software. There are notable exceptions to each case, of course, but statistically, they are exceptions.)

At the end of the day, the FOSS ecosystem has differentiated roles and responsibilities, and the least we could do - if we really want things to improve - is to direct our criticisms to the right people. The folks at Ubuntu are devoted to the goal of making their distro 'Linux for human beings'. I know that when I bitched to them about certain shortcomings, I got a reasoned response from none other than the CTO himself. And given the improvements since that time, it's clear to me that they've taken such critiques to heart.

Linux distros are all decidedly imperfect. But they're a damn sight less imperfect than the alternatives.

I'm prefacing this with the fact that I ran Linux as my only OS for a year (SuSE 9), then switched back to Microsoft. Linux and GNU have a superior development process -- inclusive and plural -- but Microsoft right now has the superior ecosystem. When everyone uses it, everything gets written for it, especially entertainment-wise. How does Free software go about breaking this lock-in? I know that for me, if it weren't for entertainment software, I would be all over GNU. Wine steps in and fills that void somewhat but currently does not have enough compatibility to bring me over to the good side. I like Linux, I want to use it, but my games don't play in it, and that's the only thing that keeps a closed OS on my desktop.

Back in the mid-'80s a machine was introduced called the Commodore 128. It was the successor to the Commodore 64 and featured a full compatibility mode with the 64. The issue was that most 128 owners ran their machine in 64 mode, so the 128 never caught on, as no one would make software for it. I see Wine as having a flavor of that situation, but since it is contained within an open OS, other applications can run concurrently, so that pitfall is lessened. To me, Wine is the application that deserves focus in Linux development, because it has the potential to break the deadlock and provide the bridge from Pay to GNU.

When you break compatibility with everything that currently exists just for the sake of being new and different, that isn't innovation. Unfortunately many times when this happens it ends up getting called innovation because nobody has the guts to call it what it really is. Oh, but we have to scrap the old design because it wasn't forward thinking enough. Then in two years time, scrap the new one for the same reasons.

Thus "innovation" gets a bad name, particularly among those on the receiving end who never asked for it to begin with.

Then you get articles like this which assume that it is even possible to have too much innovation because of the false connection between innovation and breaking things.

Free software is still driven by developers working on what interests or concerns them.

If it is being developed in the developer's free time, then this should be expected. The software is effectively a hobby which the developer enjoys and users benefit from. Innovation is enjoyable, maintenance isn't, and users, if they aren't paying, should expect this. If they want reliable long-term maintenance (or any other "boring" work), they should consider paying for support, like in any normal business relationship.

There are critics and pundits on any side (Mac OS X, Linux and Windows) but of all of them, Linux has the lowest position and therefore has the shortest distance to fall. This gives Linux a unique "coming from behind" perspective and gives it a unique ability to fail without serious consequence. We all see what happens when Windows fails (Vista?) but what happens when Linux fails? Little to nothing really.

The reasons for this are various, but the fact itself is rather undeniable. So is all this innovation bad for Linux? Nope. If there is a failure, that portion is discarded and hopefully a lesson is learned. And the value of failure is tremendous when it comes to Linux: Linux gets the value of all the failures in all three OSes, if the developers involved are observant. And recovery time from failure? Almost zero in the case of Linux. People just keep on keepin' on.

When do you realize that a feature is a killer one -- one good enough that you can't figure out how you lived without it before? Think of simple things, like browser tabs, extensions, or things like that. And maybe more important, what is a "killer feature" for you might not be for someone else (i.e. for me it could be menussh and nagstamon under Gnome, or Firebug and some of the other extensions that depend on it for Firefox; as I said, it could depend a lot on what you do). But maybe more interesting could be thinking how

Aaron Seigo thinks he is embarking on a bold new vision of the desktop, but so far, he's produced only developments that inhibit productivity. Making everything into desktop widgets (including social networking fads like facebook) isn't a bold new vision of the desktop environment... it's glitzy eye-candy. Seigo peppers every idea he has with colorful language like "new paradigms" but his ideas so far are hardly innovative. Desktop widgets? Already done. Animations? Compiz did it. Creating folder con

Making everything into desktop widgets (including social networking fads like facebook) isn't a bold new vision of the desktop environment... it's glitzy eye-candy.

I'm not sure where the criticism is with this statement. Widgets are bad? Are you suggesting that the goal of making it easier to add features to a desktop is not worth pursuing? Because as far as I can tell, that's what the project is after.

Maybe they don't have widgets you like, but I'm not sure where you get off dumping on a project that cost you nothing. You know, there's a bug list among other ways to communicate with the developers.

Seigo peppers every idea he has with colorful language like "new paradigms" but his ideas so far are hardly innovative.

Uh huh. I see. Get back to me after coding something as big and complicated as a desktop that _actually_ works and attracts users/contributors. I'll be sure to criticize your efforts.

Last I checked, Linux desktops were loaded with exciting new innovative features but failing on extremely basic tasks.

Perhaps the community should be asking whether it's more important that we add a fun new Swirl effect to switch to another desktop or if people would rather have a sane and complete GL API. Do we need the entire desktop to be rethought or should we simply settle for having a sane and unified sound solution?

I would have to agree in saying that the desktop Linux community is getting way too far ahead of itself if they think they're innovating themselves away from the mainstream. Read the NYTimes article on Ubuntu Linux and tell me whether or not they even mention innovation -- they viewed it as a free but lower-quality alternative to commercial systems that was very attractive but failed during basic maintenance tasks.

Why create an Earth-shattering new desktop-web interaction paradigm when users would probably rather have sane and cohesive documentation?

Here are some no-brainers, if you want to see linux improve:

* Now that OSS 4.1 is open source, drop ALSA. It is a proven failure. PulseAudio obfuscated the problem to the point of ruining audio in linux, specifically when low latency is required.

* Support forward-thinking projects like Wayland instead of putting another car on the fail-train that is X. X is architecturally inferior to WindowServer and Windows' display layer for desktop-oriented tasks. A simplified windowing system that puts graphics first and drops the cruft would go a long way in making linux seem modern and easy to maintain.

* Write documentation sometimes. Format it well and ship it with your projects!

Or, if you're really clever:

* Realize that open source != Linux. Look at desktop-oriented free software systems like Haiku and imagine a world where Linux can be built into an excellent server (or mediocre workstation) and desktop users can have a system purpose-built for their priorities! There is no rule that says Linux needs to be the only free system. With the magic of things like POSIX, we can write software that runs on either!

X is architecturally inferior to WindowServer and Windows' display layer for desktop-oriented tasks. A simplified windowing system that puts graphics first and drops the cruft would go a long way in making linux seem modern and easy to maintain.

The graphics subsystem in Windows is a frame buffer graphics library poorly retrofitted for asynchronous calls. X was designed from the start for asynchronous client/server communications and operation in a separate "window server". X got it right 20 years ago. After two decades and several rewrites, both Microsoft and Apple have finally arrived at an X-like architecture.

There are some parts of X that aren't being used much and where desktops like Gnome have their own systems (e.g., Gnome configuration data and DBUS communication). The solutions adopted by the desktops are generally still inferior to the original X mechanisms.

If anything should change, it's that people should take a good hard look at Gnome and KDE and get rid of some of their windows-inspired cruft and replace it with better X-based solutions. This may involve an overhaul of some X mechanisms (X properties and events probably aren't up to the demands of a modern desktop, but that's fixable), but the principles and approaches embodied by X are superior to the "single user desktop PC" view of Windows and its clones.
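The asynchronous-versus-round-trip argument in the comment above can be illustrated with a toy latency model. All figures are assumptions for illustration, not measurements of X, Windows, or any real windowing system: a protocol that waits for an acknowledgement after every drawing request pays one network round trip per request, while an X-style asynchronous stream only blocks on the few requests that genuinely need a reply.

```python
# Toy latency model contrasting synchronous per-request round trips with
# X-style asynchronous request batching. All numbers are illustrative
# assumptions, not benchmarks of any real display protocol.

ROUND_TRIP_MS = 5.0   # assumed client-to-server round-trip time
REQUESTS = 1_000      # assumed drawing requests issued for one frame
REPLIES_NEEDED = 2    # assumed requests that truly need an answer (queries)

# Synchronous model: every request waits for an acknowledgement.
sync_ms = REQUESTS * ROUND_TRIP_MS

# Asynchronous model: requests stream one way; only real queries block.
async_ms = REPLIES_NEEDED * ROUND_TRIP_MS

print(f"synchronous: {sync_ms:.0f} ms, asynchronous: {async_ms:.0f} ms")
```

This is only a sketch of the protocol-design point being argued; real-world X performance also depends on local transports, shared memory, DRI, and driver quality.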

The graphics subsystem in Windows is a frame buffer graphics library poorly retrofitted for asynchronous calls. X was designed from the start for asynchronous client/server communications and operation in a separate "window server". X got it right 20 years ago. After two decades and several rewrites, both Microsoft and Apple have finally arrived at an X-like architecture.

...what? Microsoft put their networking on top of the display, not beneath it. Windows' display layer doesn't operate with a client/server framework as far as I understand... it's just simpler between the graphics card and display, where it really matters for desktop machines.

In fact, when has X ever surpassed Windows or Mac in the ability to actually draw windows and graphics... especially in the case of rich graphics? There's a good reason Flash will always run faster on Windows, and JavaFX came out on Windows and Mac long before anything X-based. Why, with the way modern X works with the DRI/Mesa GLX framework, they can never have a full GL stack, because the DRM's way of handling graphics memory is flawed. They would have to rewrite the server to do what is, and has been, fairly simple in Windows, Mac, or BeOS in terms of direct graphics access.

I am not sure what you're talking about when you say X is "superior", but I am talking about desktop use... read: GRAPHICS. Not being a client/server/unixy mess. The average desktop user needs a fast, accurate, and consistent interface to their graphics card, not endless possibilities of socket magic that they can vomit all over the network... it's just not practical or accessible to regular users on desktop systems.

I'll dive right in, because this story popped up right after I reinstalled my main console, and I had to reinstall exactly because my desktop got "innovated" so much it was crippled. Maybe all these complaints of mine have already been covered elsewhere. But Linux GUI desktop developers had better get their stuff together and start thinking about how to make the GUI desktop quickly navigable for the full range of everyday work. (Not just for simple tasks, and not the new interface idea the GUI developers invent each month after a round of 'shrooms.) Between the Gnome Project's obsession with castrating its core programs' options, and KDE's obsession with making a new KDE app for every single type of application while not being able to get its desktop and window decorations to be intuitive, I'm looking back at the svgalib days with fondness. Or maybe the Windows 3.1 days. Maybe I'm getting older. Maybe I used to have more time for this kind of involuntary "adventure" than I do now. Right-clicks and resizing task bars should not have to be treated as uncharted waters for a user at this point.

On my main console machine, I've had Kubuntu 8.10 for a few months, "upgraded" from 7.04. It was clear that 8.10 had damaged the configs unsalvageably -- it still refused to mount USB drives so that the normal user could read them. I always had to remount manually on the command line. Yesterday I just wiped the whole OS off my machine (except for moving my old home directory out of the way) and installed Kubuntu 9.04 clean. We'll see how it goes. If this doesn't behave like something other than a damaged system within the next couple of weeks, I'm switching to Xubuntu or something -- at least it resizes and moves almost anything when you click on the edge, instead of having windows do one thing, toolbars do another, and the "desktop" box another. I switched away from Ubuntu to Kubuntu because I couldn't stand Gnome apps censoring any option that didn't fit an 8-year-old kid's reading level. (Fortunately Gimp and Pidgin ignored the rules. They were hard to learn for their own reasons anyway, so what did they care? At least they could be learned, though -- Pidgin only played moving-target once, when it switched from Gaim.) Now I'm thinking of dumping Kubuntu because there are hundreds of options somewhere, but I can't find them. Xubuntu (what little I've used it) seems to behave very politely on my dual-boot laptop.

Kubuntu 8.10 should never have happened. KDE 4.0 should never have happened. KDE 4.1 shouldn't have happened either. Plasma (KDE's new desktop interface) is too clever by half. It is extremely non-intuitive. I've dealt with Apple II Plus system monitor prompts through ProDOS with AppleWorks, through years of custom BBS menus in ANSI, then Windows 3.1 through 95, 2000, XP, and Vista, with a liberal helping of full-screen DOS apps, OS/2, and old X display managers whose menus only appear when you hold down Ctrl or Alt. Yet I still can't figure out how to get the KDE 4 taskbar to form 2 rows of tasks instead of just growing enormous icons for no reason when I change the size.

Anything non-KDE inside KDE is, of course, not quite equal. Firefox has "nice" rounded GUI element emulation in Kubuntu 8.10 but hides things like its tabs under other things (like the web page) when I launch it directly from the menus -- yet it has simpler buttons and works fine when I run it from a shell prompt inside Konsole! How come Firefox has a different skin when launched from Konsole than when launched directly from the KDE menus?!

P.S. while I'm ranting: Why does the KDE "Utilities" menu have an icon that looks like a console prompt, then Konsole isn't in that menu?! Konsole is hiding in System, among the control panels. And how come KDE 4 sometimes does the same thing with right click as left click? If I right-click, it's because I didn't like what the left click did and I'm looking for some other option! Argh!

In many FOSS forums, especially on Slashdot, you see the Joe Sixpack strawman trotted out either to attack or to defend. There are far more classes of users than just witless Joe Sixpack and savvy Tom Developer. There are plenty of people who are highly adept at using a computer but can't and will never program. There are also a lot of users who are adept at what they do often but have little computer knowledge outside of that particular domain. Looking at these users as a Joe Sixpack who's never touched a computer before is shortsighted and counterproductive.

The article's bitching about social media widgets and whether or not people asked for them is inane. If some kid spends all their time on Facebook and Twitter and buys a netbook with Linux pre-installed, they'll be far less likely to go back to Windows if their new computer works out of the box with the services they already use. A Facebook widget isn't likely to sell a computer as part of the feature checklist on the box, but it's something that will help endear the OS (as they experience it) to the user.

Well, I, for one, migrated from KDE to Gnome precisely because of this "innovate at any cost" philosophy in KDE. KDE4 was introduced far too soon in the major distros and even promoted to the "default" Desktop Environment in some of them, while still being horribly buggy and crashing all the time. The haste to make the GNU/Linux desktop look cool just made it look bad.

If I could sort of understand this innovation hype while I was a Windows user (novelty sells), I really wish GNU/Linux developers would slow down.

"KDE4 was introduced far too soon in the major distros and even promoted to the 'default' Desktop Environment in some of them, while still being horribly buggy and crashing all the time. The haste to make the GNU/Linux desktop look cool just made it look bad."

In the case of Kubuntu 8.10 and KDE4, one problem is that Ubuntu sees the x.04 ("Long Term Support") releases as stable and the x.10 releases as developmental. Unfortunately, this distinction doesn't hold up in the minds of Ubuntu users, who expect every release to be equally stable.

"I, for one, find it puzzling why both Fedora and Ubuntu continue to put GNOME first with KDE as the also-ran."

Probably because Gnome works, and Redhat customers are paying for something that... works.

I tried the latest KDE on Ubuntu recently (not sure which version they're shipping), and while it looked somewhat pretty, it crashed fairly often, I found some of the features bizarre and annoying (e.g. the side-scrolling program menus), and I never found out how to stop it displaying windows the way it did across my 1920-wide screen.

For a long time, SUSE was KDE-centric, but after Novell took over, they started forcing Gnome onto their SLED (Enterprise Desktop).

And no sooner had they done this than the KDE team decided to trash everything and start from scratch, which set that desktop back three years in terms of functionality. They "pulled a Microsoft" and put look and feel years ahead of functionality.

Novell sent out this horribly broken version of KDE in their community opensuse product and destroyed their own credibility and that of KDE.

It is doubtful that Opensuse will ever regain the popularity it once had, even though it is technically superior to Ubuntu.

So at this juncture, NO DISTRO TRUSTS KDE anymore, as they have burned the distros so badly.

It will take KDE two more releases to get back to where they were with KDE 3.5, but no one will be waiting at the station by the time that happens.