
Ubuntu and GNOME jump the shark

I upgraded to Ubuntu 11.04 a week or so back in order to get a more recent version of SCons. 11.04 dropped me into the new “Unity” GNOME interface. There may be people in the world for whom Unity is a good idea, but none of them are me. The look is garish and ugly, and it takes twice as many clicks to get to an application through the supposedly “friendly” interface as it did in GNOME Classic. No, dammit, I do not want to text-search my applications to call one up!

But the real crash landing was when I found out that the Unity dock won’t let you manage two instances of the terminal emulator separately. Oh, you can click the terminal icon twice and get two instances, and even minimize them separately, but they’re tied to the same dock icon when minimized. If you click it to unminimize, both pop back up. That did it; clearly Unity is a toy, not intended for anybody doing serious work.

I was miserable until I found out how to fall back to GNOME Classic. But then a few days later I upgraded to 11.10 and my real troubles began.

Yes, there’s an 11.10 option that called itself “GNOME Classic”, but it’s a lie. What you get with it is a sort of half-hearted, crippled emulation of the 2.x look and feel with none of the actual Classic themes. So crippled, in fact, that you can’t even set up focus-follows-mouse properly; there’s an option for it, but you have to change that through an obscure utility (not installed by default) called “gnome-tweak-tool” – and there’s no autoraise option to go with it, so the option is effectively useless. Your focus changes but your window-stacking doesn’t!

It gets worse. While you can add applets to the fake GNOME panel, you cannot remove them or shuffle them around. Eventually, by making a fresh account, taking checksums of its dotfiles, adding an applet, and taking checksums again, I found out that the new panel configuration lives in a file called .config/dconf/user that is an opaque binary blob. There’s a resource editor for the blob, but I could find no way to edit the panel applet list in it. Eventually I was reduced to deleting the blob to return to a default configuration.
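For anyone who wants to replicate the forensics, the checksum trick is just a diff over dotfile hashes. A minimal sketch, with a throwaway directory standing in for a fresh account's $HOME (the paths and file contents are made up for illustration):

```shell
#!/bin/sh
# Snapshot checksums of every dotfile, make one change through the GUI,
# snapshot again, and diff: the file that changed holds the config.
HOME_SANDBOX=$(mktemp -d)
mkdir -p "$HOME_SANDBOX/.config/dconf"
printf 'default' > "$HOME_SANDBOX/.config/dconf/user"
printf 'theme=classic\n' > "$HOME_SANDBOX/.gtkrc"

snapshot() {
    find "$HOME_SANDBOX" -type f -exec md5sum {} + | sort -k2
}

snapshot > /tmp/before.md5
# Stand-in for "add an applet through the GUI":
printf 'default+applet' > "$HOME_SANDBOX/.config/dconf/user"
snapshot > /tmp/after.md5

# Only the file holding the panel configuration shows up.
diff /tmp/before.md5 /tmp/after.md5 | grep 'dconf/user'
```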

There are so many things wrong with the new GNOME that it’s hard to know where to begin. I’m going to pass swiftly over the evaluation that Unity looks like a candy-coated turd, because many people will dismiss that as a mere esthetic quibble. It would be petty of me, perhaps, to grouch about losing my astronomical wallpapers. But the whole direction of GNOME – emphasizing slick appearance over function, stripping control away from the user in the name of “simplification” – is perverse. They’ve now managed the worst of all worlds – crippled, ugly desktops that meet neither the needs of end-users nor of techies.

The worst, though, is that .config/dconf/user file. One can haggle back and forth about esthetics, and argue that my judgment about what end-users want may be faulty. But burying my configuration inside an opaque binary blob – that is unforgivably stupid and bad engineering. How did forty years of Unix heritage come to this? It’s worse than the Windows registry, and perpetrated by people who have absolutely no excuse for not knowing better.

I’ll spell it out explicitly because there are a few non-programmers in my audience. User configuration data goes in plain text files, not binary blobs. There are many reasons for this, and one is so they can be hand-edited when the shiny GUI configurators turn out to be buggy or misdesigned. No programmer who doesn’t grasp this bit of good practice has any business writing a window manager, especially not on a Unix-derived system. The fact that this botch shipped in GNOME 3 tells me the GNOME system architects are incompetents who I cannot trust with my future.

Me? I’ve bailed out to KDE. And I may be bailing out of Ubuntu. I want control of my desktop back. I want an applet panel or dock I can edit, I want my focus-follows-mouse with autoraise back, I want to be able to set my own wallpaper slideshow. Most of all what I want is a window manager that will add to my control of my desktop with each future release rather than subtracting from it. Suggestions, anyone?

UPDATE: XFCE looks like where I’m landing.


485 thoughts on “Ubuntu and GNOME jump the shark”

More than a few open source teams seem to be chasing grand visions rather than user empowerment. Firefox comes to mind, with its new release cycle. It makes you somewhat uncharitably wish that they’d stop channeling their inner Steve Jobs and make it easy for users to tweak their systems – and share the tweaks.

Or they should look at what Microsoft Office did, changing the GUI so that people had to figure out how to print a damn document.

I’ve always used Kubuntu. Not perfect, but it’s in the right vicinity. It has the goodness of debian/apt software management. But, because it’s still Ubuntu, it’s not always multiple versions behind, like debian stable. And, the Kubuntu team is content to ship a fairly stock KDE.

I’d been using gnome ever since kde switched to 4. Since both kde and gnome/unity are useless to me in their current form, I switched to xfce4.8, which as far as I’m concerned is just a stripped-down version of gnome2.

A recent botched ubuntu upgrade drove me to arch – I have no problem with botched upgrades, but I can’t stand botched upgrades caused by so many layers of complexity that I can’t find any decent way to recover. I’m on dwm now, and almost entirely happy.

Even if we want to be buzzword-compliant and use XML, at least you can hand-edit the damned thing in any text editor you want.

But the biggest benefits of plain text files come with the more traditional config file styles. My personal fave looks like this:

FOO="foo_value"
BAR="bar_value"
…

Not only can any human edit it with any text editor, but any semi-competent sysadmin can write a sed or perl script to automate editing $arbitrarily_large_number of these files. That this format is so easily and painlessly assimilated into a shell script via the “.” command is perhaps one of the reasons I love it so. The most common backup program I deal with on *nix machines at work uses the envar approach, so I wrote a shell script specifically to get/change certain values from these files (backing up the old version under a different name in the process). It makes it dead simple for me to quickly turn on a config feature the backup software vendor inexplicably started turning off by default.
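A sketch of that kind of automation, with made-up file and variable names; GNU sed's -i.orig flag keeps a backup of each original, and sourcing via "." turns the file straight into shell variables:

```shell
#!/bin/sh
# Three hypothetical env-var style config files for the demo.
mkdir -p /tmp/cfgdemo && cd /tmp/cfgdemo
for i in 1 2 3; do
    printf 'FOO="off"\nBAR="bar_value"\n' > "backup$i.conf"
done

# One sed invocation flips the setting in all of them, backing up
# each original under a .orig suffix first.
sed -i.orig 's/^FOO=.*/FOO="on"/' backup*.conf

# The same file doubles as shell code: source it and the values are
# just variables.
. ./backup1.conf
echo "$FOO"
```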

My second choice for config files is the passwd style of colon-delimited fields. Those are pretty damned easy to parse and edit with scripts as well.
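For instance (sample data in the /etc/passwd layout; the accounts are invented):

```shell
#!/bin/sh
# passwd-style records: name:passwd:uid:gid:gecos:home:shell
cat > /tmp/passwd.sample <<'EOF'
root:x:0:0:root:/root:/bin/bash
esr:x:1000:1000:Eric:/home/esr:/bin/bash
guest:x:1001:1001::/home/guest:/bin/false
EOF

# Query: login and shell of every non-system account.
awk -F: '$3 >= 1000 { print $1, $7 }' /tmp/passwd.sample

# Edit: changing one user's shell is still a one-line sed.
sed -i 's|^guest:\(.*\):/bin/false$|guest:\1:/bin/zsh|' /tmp/passwd.sample
grep '^guest' /tmp/passwd.sample
```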

I had a pretty similar reaction to Gnome-3 as featured in the current Fedora release.

A charitable explanation is that both Unity and G-3 are targeting smartphones, tablets, and other small-screen devices geared more toward touch-screen driven passive content consumption than toward the traditional large-screen, keyboard and mouse driven active text-oriented content creation functions at which Unix has excelled.

My solution was to take a quick detour through KDE followed by settling in with XFCE, which is easy enough to configure in a way that is pretty close to the classic Gnome-2 look and feel.

I’m staying on Ubuntu 10.04 as long as I can…and if the next LTS release doesn’t clear some of this up, I’ll change distros. Advantage to Linux: You never HAVE to put up with this bullshit if you don’t want to.

>I don’t even like the dock on those occasions when I need to use OS X. The growing insistence on copying the dock
>(both in Unity and in Windows) is something I despise quite terribly.

One problem that Windows and Linux both face, especially in light of attempts to emulate the OS X dock, is that they do not make a clear distinction between the windows and the applications themselves, and they cannot, due to the lack of a universal menubar. Consider esr’s problem, wherein one icon on the unity dock represents 3 different things: the application launcher, the first window, and the second window as well. Now consider the same use case in OS X. When I launch the terminal from my dock, I get a new window; when I minimize it, that window goes to its own place on the dock, separate from the launcher. If I make a second terminal window and minimize that, it too goes to its own place on the dock, rather than being compacted into the same location. 3 distinct things, 2 windows and a launcher, 3 distinct items in the dock. On Windows 7 and Unity, 1 item on the dock, 3 distinct things.

The other problem is that the dock in OS X is not designed to do the same thing that the Windows and Linux task bars have been designed to do. Never has the dock in OS X been about window management beyond being a location to store minimized windows. By comparison, every new window, regardless of its state, appears in the windows / linux task bars. You cannot have an OS X dock serve the same purpose that a task bar does; they are different concepts, and trying to merge them just creates frustration, as noted here.

All that said, Eric, if you tend to prefer the look and feel of Gnome, have you considered looking at Xubuntu, which uses XFCE? I switched to that after finding Gnome and Unity to be too resource hungry for my little Aspire netbook and it works fairly well for me. My only complaint would be that you can’t reorder windows on the task bar.

I was a happy GNOME 1.x user who gave 2.x about ten minutes, switched to KDE 3, and never looked back. I highly recommend it.

In my view, GNOME embodies every way that the Unix mentality can possibly go wrong: obliquely hidden functionality, jarringly inconsistent interfaces, and behavior that always seemed to be surprising in the foot-bazooka sense. Add to that the cluster that was GConf (which was at least text-backed), and I couldn’t understand why anyone liked it.

There’s much value in the paradigm of loosely-knit components working together to make a complex system, but when you’re dealing with a product on the level of a desktop environment, the components shouldn’t be exposed to the user at the granularity of entire separate applications a la command-line tools; the KIO system (at least through KDE 3; I’m still not certain about 4) did all of this the Right Way by making it possible to drag a track from ‘audiocd://Ogg Vorbis/track01.ogg’ to ‘sftp://mediabox/music/’. GNOME made sure you saw all the seams between applications.

I’m normally a very fond user of ‘awesome’ for a wm, but I’ve lately started using lxde, as it’s light, fast, and lets me use the same keyboard shortcuts I use on Windows. (Meta-R for a run dialog, Alt-space for window control, Alt-tab for switching between windows, Meta-E for a file browser, Meta-D to minimize everything.)

The panel behavior is very straightforward and configurable in LXDE, too.

I installed the 11.10 Desktop on my NC-10 yesterday. This was the _absolute_first_time_ a fresh install went without a hitch! Ever. Well, that is except for some of the Fn keys that didn’t work, but then I found the ‘ppa:voria/ppa’. Now all is well.

Unfortunately, other than on a netbook, Ubuntu’s Unity would just definitely SUCK. I’ll stick with 10.04.3 LTS on my desktops and server for now. In the future I’ll be looking elsewhere. Ubuntu has lost its way.

Add me to the long list of long-time Linux and Unix users who have switched to xfce now that GNOME, KDE, and Ubuntu have all followed Microsoft and Apple down the path of “let’s dumb down the computer desktop so it looks like a shiny smartphone” user interface.

We want our computer desktops to look and act like computer desktops, not smartphones!!

There’s a reason everyone eventually settled on the taskbar style UI until recently: it’s *good* and it *works*. And I think there are enough people frustrated with the new polished-turd desktops that xfce is probably getting an unprecedented number of new users right about now. And I’m sure that by next Spring, Ubu 12.04 will include it as a top-tier option (perhaps even replacing ‘unity 2D’) and Mark Shuttleworth will be telling us how wonderful he is because of that decision.

At least in Linux we still have a choice. We might even gain some Windows and Mac refugees who just can’t tolerate the polished-turd desktops anymore.

> But the whole direction of GNOME – emphasizing slick appearance over function, stripping control away from the user in the name of “simplification” – is perverse.

My theory is that the importance of a good wetware memory to hacking has led hackers to be infected by the meme “If you want to do more, then you have to be able to remember more.” The corollary, applied by hackers to UIs intended for non-hacker users is “If you can’t remember as much, then you can’t be allowed to do as much.”

But this is not the path to user friendliness, it’s a path to user patronization. It’s just another variant of the unixism that “users are lusers.”

> User configuration data goes in plain text files, not binary blobs.

Yes! I’m a non-hacker and a Unix-hater, and even I agree with you on this particular point. More than that, I’m a bit puzzled about what incentives programmers might be following to do otherwise.

I use Xubuntu, originally because my computer was too pathetic for the fancy graphics-card-requiring GUI functions that GNOME and standard Ubuntu deigned to grace me with when I installed, later out of habit. Just early this summer I switched to StumpWM and haven’t looked back. It’s…absolutely wonderful! And it contributes to my longstanding quest to use such nonstandard interfaces on all my technology that I can never lend it to anyone because they can’t figure out how to work it. (So far: RPN HP calculator, jailbroken heavily customized iPod, StumpWM. :P)

Hah. I’m still at 9.04 — saw this train wreck coming, just haven’t had any reason (yet) to explore what distro to switch to. Well, the 9.04 repos are gone, so I’m not getting security updates any longer, so I ought to get with it. Pondering aptosid; want to stay with .deb package management.

Eric, have you considered Fvwm? Your config is all in an honest-to-god .rc file (or multiple files, if you wish). It handles extended WM hints, has a panel (I use FvwmButtons for that, but there are other choices), and you can use stalonetray or something similar for “tray” apps. There are modules for various docking operations too. It also has the ability to do piping operations, so you can do fun things such as use a Perl script to generate Fvwm commands on the fly, then have Fvwm read them in and execute them. And it will do whatever sort of focus-follow-raise operations you want, at least as far as I can imagine.

I’m trying to understand why a new version of SCons required moving to 11.04.

I’m perfectly happy at 10.04LTS. When a new video card required a new driver, unsupported by 10.04 in any form I could find anywhere, but integrated right into the 11.04 kernel, the solution was this: add the Natty backport PPA, install the 2.6.38 kernel, and ditch auto updates. I know this puts me at risk here and there, but I don’t often play in traffic, so I’ll take that chance.

>I’m trying to understand why a new version of SCons required moving to 11.04.

I was happy with 10.04 too, but it seems the LTS package for SCons hasn’t updated since it was issued, it’s some version between 1.2.0 and 1.2.1, and I wanted to get my hands on 2.x.x. I upgraded to 10.10 for that.

>It reduces the variables of things you need to check on a support call.

Wrong approach. The right approach would be to make taking a diff between the user’s configuration and the factory defaults part of your diagnostic procedure…something that’s easy when the config is textual.
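Sketched out, with hypothetical paths and keys standing in for a real package's defaults:

```shell
#!/bin/sh
# Factory defaults vs. the user's config -- a diff shows exactly
# what the user changed, which is the only thing support cares about.
mkdir -p /tmp/defaults /tmp/userconf
printf 'theme=classic\nfocus=click\n' > /tmp/defaults/app.conf
printf 'theme=classic\nfocus=mouse\n' > /tmp/userconf/app.conf

# One deliberate change, a couple of lines of output. Try that with
# an opaque binary blob.
diff /tmp/defaults/app.conf /tmp/userconf/app.conf | grep 'focus'
```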

KDE went to 4, Gnome went to 3, Ubuntu went to Unity… and a LOT of folks went to Xfce or LXDE.

Right now I am running Xubuntu since I can get it in 64-bit. Unless something is grossly wrong with it, I plan to move (back) to PCLinuxOS when the 64-bit version (currently in testing) is truly released. My experience has been that PCLOS gets out of my way and does what I desire when *buntu claims to and only almost does.

I have recently started using an Arch derivative on a laptop of pretty much ancient vintage and been overall impressed, so I can see that Arch may well be worth a look.

Nice thing about Debian and its derivatives — they still support a wide range of alternatives. I’ve been using essentially the same configuration with ctwm for the last 20 years. I’ve been considering xfce and fvwm, but really… why?

It does, by default, but look at some of the other options.
You can get it with KDE or XFCE.

I was going to write a rant here a while back stating how underwhelmed I was with OSX when I bought my first mac last year.
The gist was that I was expecting this amazing UI experience. I thought I was going to be blown away by what I had been missing all these years (I’ve been running Redhat, and then Fedora with Gnome, for the last decade, both at home and at work). I was going to write about how lame and difficult it was to get into my normal workflow with OSX and how I gained a whole new respect for my old friend Gnome (2 at the time). Then I replaced my work machine and upgraded[?] to Fedora 15. I was shocked and amazed at how absurd Gnome 3 had become. It seems that they adopted everything that annoyed me about OSX (activity mode, app grouping instead of the traditional CTRL+Tab behavior) and ran even further with it (nothing allowed on the desktop, no dock).

I’m running KDE now but it seems heavy and bloated. I can hear my hard drive crunching on a fairly powerful brand-new laptop (something I didn’t hear with Gnome or XFCE). I stuck with Gnome 3 for a month to make sure that it wasn’t just me being too old and calcified to adapt to change. I’ll give KDE an honest shot as well, but I’ve already got XFCE installed and ready to go if that doesn’t work out.

In short, Gnome 3 took all the wind out of my sails when it came to the “OSX ain’t all that” argument. I don’t see myself sticking with Mac after this laptop and I’m certainly not going back to Windows after a decade of not dealing with viruses and other malware. I’m sure I’ll still be on Linux but I won’t be bragging about the interface (not for a while anyway).

Linus Torvalds has already called Gnome 3 an unholy mess. Maybe with ESR bugling the call as well some momentum will build to either fix Gnome or bring XFCE up to where a 21st century interface should be.

I too was sent into dysfunctional shock when I upgraded to the new Gnome/Unity. It was so disruptive to my normal workflow and usage patterns that I was tempted to just maximize an emacs window and live entirely within the “emacs OS” – something I’ve not had to do since the days of vt100 remote modem dial-in sessions.

I have been forcing myself to use Gnome 3 for several months now. I must say I don’t feel like I’m fighting the interface as much now, and there are even a few things which I hated at first that I now like a little. But it still has many shortcomings and painful missed expectations. It has to be the most egregious example I’ve seen of a project with such an enormous pretence of focus on ultimate usability that has resulted in something so utterly un-usable.

One thing that at least made it tolerable for me was to install Avant Window Navigator. It adds yet another dock-like interface, but it is generally much less mysterious and makes it easier (though not perfect) to manage multiple instances of applications. The most annoying problem I have with it, though, is that its auto-hide feature wakes up and un-hides itself periodically without reason.

Another awkward Gnome 3 behavior is that it is now very challenging to resize windows. Only the lower-right corner of a window is relatively easy to grab for resizing; you have to be pixel-perfect with your mouse to grab any other window edge for resizing.

I do agree, though, that the now-binary blob configuration is an abhorrent regression to an anti-user philosophy. There is no excuse for it, as there are plenty of text formats, even XML or JSON. Even something like a SQLite configuration wouldn’t be nearly as bad as the “no-user-serviceable-parts” blob.

I think the issue with Unity is the problem they were trying to solve. It looks like it was developed for and aimed at the sort of small displays you get on netbooks. It’s actually not a bad choice if you have limited screen real estate, but it falls down badly if you have a large display.

I switched to XFCE4, which I also use on my old notebook that can’t run Unity, but I have half a dozen other WMs installed on the desktop to play with as time permits. With XFCE4, I have a taskbar, icons on the desktop, and an uncluttered desktop because I can stash lots of stuff on panels that auto-hide till mouse over.

I like LXDE too, but it has a few quirks that will take changes in the file manager to resolve.

Agreed on the virtues of text files for configuration, but I’ve seen at least one Linux distro that wants to do everything in Python, with configs stored in blobs the Python programs create and manipulate. I am so not thrilled by this, but I suspect it’s the tip of an iceberg.

I don’t think that linux folks understand user friendliness at all. They know too much about how their computers work, and they know the work-arounds. The point of user friendliness is to make things hassle-free and require less effort. Linux UIs don’t really do that (although I haven’t used a tiling window manager yet).

Gnome and Unity are unusable for anyone trying to do actual work. I’m an opinionated bastard, and I expect my technology to work how I want it to work. If I wanted to do things in the one true way, I’d get a Mac. I contemplated trying out XFCE, but I invested way too much time on figuring out how to configure KDE to work exactly the way I want. Sure, it’s a bloated pig, but it’s very configurable. Time to give OpenSUSE another go.

BTW, .config/dconf/user is extra-chromosome retarded. WTF are they smoking?

I wonder if we should just petition Google to make a desktop environment for Linux.

And I hear the harmony of their spring click keyboards
echoing in my dreams with the, ‘\n’, attached.

Friday, I was one click away from upgrading to 11.10. I still don’t know what
prevented me???

Oh, wait–a load of movies I encoded to .avi files via mencoder.
Yes, after I put up a wallpaper of the FBI WARNING, I grabbed
some popcorn, to enjoy a double pass encoded version of my favorite
unnamed flick.

I couldn’t find my sound or video using the command line after I compiled a
new snapshot of mplayer. The gui’s worked, but I usually use the command line–
which gave me all kinds of hell. Usually it’s the other way around.

I wanted to understand and fix that problem first, figured I would grab 11.10
next week.

But now…

Time to wait, find a new target down the road, and then I’ll pull the trigger.

As far as not being able to move/remove panel applets in “Gnome Classic” mode, are you sure you just aren’t forgetting their braindead “Because somebody might accidentally right-click the panel, we’re going to make it so that you have to *alt*-right-click to change anything on the panel” idea?

I’m able to move/remove panel applets on “Gnome Classic” 11.10, but having to alt-right-click to do it had me ready to throw the mouse agai^H^H^H^H through the wall. Fortunately, that was in a VM on somebody else’s machine, so I don’t have to deal with it every day.

On my own laptop, I plan on riding Lucid and Gnome 2 (which, though you wouldn’t know from its descendants, is about the best GUI ever) straight to EOL (if the hardware lasts that long), at which point I’ll probably go for XFCE if the Gnome team hasn’t woken up (assuming that XFCE doesn’t jump into the “let’s design broken UI’s” lemming herd too).

>are you sure you just aren’t forgetting their braindead “Because somebody might accidentally right-click the panel, we’re going to make it so that you have to *alt*-right-click to change anything on the panel” idea?

Next time I’m in GNOME Classic – if there is a next time – I’ll try that. It’s not, like, documented or anything anywhere I’ve seen, but if you say you’ve seen it work I believe you. It would be consistent, anyway.

I use Linux because I want a system that I control, not one that controls me. I use Ubuntu because I want a system that “just works”, where I only have to control the stuff that is unique to me. I can be a sysadmin, but the less time I spend administering my system, the more time I have to tinker with code or whatever I actually want to do.

That said, I’m frustrated with Gnome 3. It’s not the user interface, really. That’s rather forgivable, for me. (I’ve been making fun of my Mac friends who don’t like the “command line” because it’s confusing, but happily type the name of their program into an auto-completing text-box.) Also, I like eye-candy. It’s not the most important thing, but little graphical cues can help reduce the time I spend thinking about managing my windows. (Though a system with limited graphics doesn’t run Gnome; it runs XFCE or RatPoison and I live with the missing/poor system tools.)

It’s the fact that they are actively hiding my data from me. More and more I feel like the Gnome team wants me to stop tinkering with /their/ system. More and more I keep finding “How-To”s that say things like “Click here” and “Click there” instead of “Run this script:” or “Type these commands”. If I can’t edit my configs with sed, vi and friends, they have failed. If I can’t use diff to figure out what is different between my wife’s config and mine, they have failed. (“Binary files a and b differ” doesn’t count.)

I don’t know that I’ll leave the system just yet, since I am rather attached to the “system tools” that sit in my dock and generally just work: wireless, bluetooth, cpufreq, et cetera. If I had trivial replacements for those (that work as well (alas, XFCE has failed me there a few times in the past)) switching would be an easier decision.

The problem of creating a usable Unix GUI that appeals to newbies and techies alike has been solved. It’s called Mac OS X. It supports the vast majority of open source software out there.

Limiting yourself to an open-source GUI stack is unnecessarily ruling out the easy and obvious solution.

That said, for Linux I run Arch because simple to me has an entirely different meaning than it does to the Average User. I want a desktop with as few moving parts as possible. That rules out GNOME, utterly. I use Awesome as my WM but when it comes to normal “desktoppy” window management it is still to this day hard to beat Window Maker. What a lovely piece of software.

Deep Lurker asked (effectively): Yes! I’m a non-hacker and a Unix-hater, and even I agree with you on this particular point. More than that, I’m a bit puzzled about what incentives programmers might be following to do otherwise.

Laziness, combined with APIs that make it easy, I’m guessing. (There might be some utterly psychotic reason to deliberately do that, but I can’t imagine what it is.)

I’ve never had to cope with config file parsing in Unix, but I’ve had to write arbitrary data out and read it back in Windows under .NET, and I’ve done config parsing in more than one way.

I suspect the binary blob file ESR found is the product of whatever-language-and-API the developers used doing binary serialization of whatever object they use for configuration encoding. This has several advantages in certain contexts; it’s small and fast, mainly. Useful for sending data over wires in large amounts. And in some cases it’s probably just the default option.

For a damned desktop utility config file, however, it’s a disastrous choice (as esr noted), because nothing can read it except for a tool designed to do specifically that, and you can just forget about a human being reading or editing it.

Decent, modern platforms should all have an XML serialization solution these days, either built-in or easily added (or, if you’re masochistic enough to roll your own C, there’s probably something you can make work, God help you).

JSON’s better than binary, too, and very widely available.

And one of those is exactly what one should use if one has an in-RAM object tree representing the configuration data.

(Parsers for VAR=value style configs are very old-school. And they work, if the data is appropriate, and if you either find a good parsing library or don’t screw up your own. But why re-invent the wheel?

Especially if you’re not writing for a 30MHz machine with 2 megs of RAM and 30ms seek times? Anyone telling me XML or JSON have too much overhead for a config file will be mocked soundly.)
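To make the point concrete: even a serialized object tree stays greppable and scriptable if it lands on disk as JSON. A hypothetical panel config (the keys are invented), inspected with grep and round-tripped with nothing but python3's stdlib json module:

```shell
#!/bin/sh
# A made-up panel configuration, as JSON rather than a binary blob.
cat > /tmp/panel.json <<'EOF'
{"applets": ["clock", "weather"], "autohide": false}
EOF

# Ordinary tools can at least see into it...
grep -c 'autohide' /tmp/panel.json

# ...and the object tree round-trips through a few lines of script.
python3 - <<'EOF'
import json
with open("/tmp/panel.json") as f:
    cfg = json.load(f)
cfg["autohide"] = True          # the same edit a GUI tool would make
with open("/tmp/panel.json", "w") as f:
    json.dump(cfg, f, indent=2)
EOF

cat /tmp/panel.json
```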

I’ve just updated to Xubuntu as well, primarily because neither Unity nor Gnome Shell will let me get rid of the top panel or move it to the right edge. The systems I use are all either widescreen or multi-monitor, so it makes no sense to use any of the vertical screen real estate on the GUI. Gnome 2 let me do that, as does XFCE. The people behind Unity and Gnome Shell know better than allowing such heresy.

Xfce rocks. Seems to me that with each and every upgrade, be it Windows, Mac or Linux, the race is on to see who can reach stupid the quickest. Ubuntu is now second, having been overtaken by Apple’s Lion, and rounding the corner in the final stretch, Windows 8 just might pull out a win. ;-)

I felt this pain recently as well. I just couldn’t make myself like Unity after an upgrade. Initially, it impressed me from an aesthetic sense, but I found it rather awkward to use. I might have gotten used to it if I spent more time with it, but I don’t want to spend my time getting used to something that doesn’t quite work for me. Maybe someone who wasn’t used to the old behavior would appreciate it more. Maybe it’s great for people who spend most of their time on social networks. I’m not one of those people.

XFCE is where I ended up. LXDE was also a nice (and as expected, basic) experience. I still use KDE from time to time since I end up installing the libraries anyway to make certain tools available (valgrind’s massif visualizer, for instance). Plasma isn’t my favorite environment, but I can use it without much fuss.

I’m not going to bail from Ubuntu — I do like what they package. I learned to do some serious homework before upgrading in the future. I have one Karmic box left that I’m probably just going to end up wiping.

It’s kind of funny how these time sinks creep in. All you really wanted was a newer version of SCons without bypassing your package manager.

Quite a few people I know who do *nix for a living have Macs at home. The reason appears to be that if your daytime job is troubleshooting systems, you want something that “just works” at home. And most unix engineers (at least around here) are well paid, so they can afford the Apple premium.
I have a mac for exactly the same reason. I don’t want to be my own sysadmin.

The only issue I have with Xubuntu (which I just painlessly upgraded from 11.04 to 11.10) is that I no longer seem to be able to easily get a second monitor to show up adjacent to my laptop. This may be due to the accursed AMD / ATI driver, which sucks enormously, or it may be due to the way that xorg.conf seems to have fallen by the wayside, or some combination. Either way, sometimes you just have to reboot 10 times a la Redmond and sometimes it just locks up when you remove the monitor.

However I’ve been using Xubuntu since 9.04 if not 8.x (when did Asus release the original netbook? anyway it was 6-12 months before that) and it does almost everything I want.

In VMs I’m using lubuntu a lot and I recently started looking at Debian Mint though that’s been a bit hit and miss (i.e. right on the bleeding edge) so I’m mostly not using that.

Oh and in relation to the monitor thing. If you have an encrypted user directory be VERY VERY careful about powering off using the switch because you can end up corrupting an encrypted file. If it was a critical config file you just corrupted then restart is a bitch and you can’t always just copy a backed up copy over. Seems like you have to explicitly delete it and then copy the file across.

ESR, I totally agree with you, but beware.
I’m using Debian with fluxbox, and I was using Thunar (the one from XFCE) as my file manager. A friend of mine is using Debian with XFCE, and when I switched to PCManFM (which is *waaaay* more responsive than Thunar) he wanted to follow in my steps but… we found out that XFCE has a dependency on Thunar *hardcoded in the source* (i.e. it’s impossible to totally switch to a different file manager without rewriting XFCE almost from scratch).

I was surprised to see how long it took for someone to mention lubuntu. I loved xubuntu but LXDE is even more lightweight than XFCE. I’m pretty sure my next distro isn’t even out yet and I don’t know what more I could ask from it.

Whenever I get irritated I launch “tint2” and get a simplified taskbar to quickly switch between windows (not apps).

You may also be interested in:
gnome-shell-extension-apps-menu.noarch : Application menu for GNOME Shell
gnome-shell-extension-places-menu.noarch : Places menu indicator in the system status area
gnome-shell-extension-alternative-status-menu.noarch : For those who want a power off item visible at all the time

Also, you will probably get irritated by the faulty network applet in gnome-shell. Disable it (it has been broken for several gnome-shell iterations) and get the traditional nm-applet back:

# mv /usr/share/gnome-shell/js/ui/status/network.js /usr/share/gnome-shell/js/ui/status/network.disabled
then ALT+F2, ‘r’ (this won’t work on Ubuntu, so either re-login or first run $ gsettings set org.gnome.shell development-tools true )

I was pretty pissed off with Ubuntu when they ditched KDE3 for the unworkable KDE4. If I have to give up Gnome 2 for something crap, I will simply stop using Ubuntu. I use the LTS version, so I don’t have to make that decision yet; why Ubuntu and other distros think it’s right to bring out new versions twice a year is beyond me — Linux is quite mature now, so unless you want to be on the bleeding edge, twice a decade would be fine.

If it is control you seek then perhaps you should move away from distros that have binary blobs entirely! Enter Gentoo!!! Yes, Gentoo can come with binary blobs, but you can avoid those by adding the types of licenses you want to allow in /etc/make.conf (e.g. @FREE and @GPL-COMPATIBLE), and don’t forget to add the “-deblob” USE flag to grab a kernel that doesn’t include binary blobs. Yes, the install from start to finish can take an entire afternoon (or longer), but IMHO it’s well worth it.

Don’t give up on KDE4 – yes, it has a lot of unnecessary eye-candy, which can thankfully be turned off, and it has taken a long time to get to being usable. It was more than two years from the release of 4.0 until I felt it was stable enough to use as a day-to-day desktop, and another year after that until its mail application was reliable enough.

I’ve never just accepted a distro installation of KDE, though. Building it myself means that it is possible to cut certain bits out, such as the dependency on Akonadi (PIM data management system) which permanently adds to the basic desktop ~10 processes and a running MySQL instance just to display some information in the calendar. The much-vaunted desktop search sounds great at first glance, but it is pointless due to the absence of a usable search client, so I’ve turned that off too. And I’ve made some patches that the KDE developers are reluctant to accept because they don’t fit their personal grand vision. So the annoying but absolutely vital (or so I’m told) cashew is now gone :-)

After all that, KDE4 is now back to a reasonable upgrade from KDE3, which is what I wanted all the time. Maybe Akonadi, Nepomuk, Strigi, QML, Plasma Active and all the other clever bits in KDE4 will have their day, but not yet.

The best solution for software bloat, though, is more memory – having recently upgraded to 8G (but still running 32-bit), the disc thrashing has all but stopped.

I’ve been a fan of XFCE on Xubuntu for some little while now. All I want is a workspace that gets out of the way and lets me *work*. Anything that hinders me in getting what I want done in the easiest, most direct way possible needs to go. KDE was too bloated and bug prone for me, and I liked Gnome 2 well enough with some minor caveats. Unity/Gnome 3 are both disasters for me as an end user though. Adding to my workload, no matter how nice it looks or revolutionary the underlying paradigm, is not going to endear anything to me.

Why not give Openbox a try? You can configure everything from text files, and along with the assortment of helper desktop apps (panels, docks) built with the same philosophy, you get a totally text-configurable desktop environment.
The Arch Linux wiki is a great reference as far as the configuration of these apps is concerned.
On the down side, there is a steep configuration curve, meaning one has to work hard to get all these apps to function as desired when freshly installed.

I’ve been using Linux since 1994, moved from Slackware to Redhat/Fedora, and in 1997, to Ubuntu. I’ve looked at the post-10.04 versions (10.10/11.04) and I wholeheartedly agree with the OP: Canonical/Ubuntu/Gnome HAVE lost their f’ing minds. Since I have a policy of only running LTS versions, I guess I have until 12.04 to decide where to go (Debian, likely). Canonical/Ubuntu/Gnome: you HAD a good thing, and you blew it…

In agreement with the sentiments expressed here; I’m hanging onto Lucid atm, but holding out hope for the elementary team. They seem to understand what is required; however, their release will be based on 12.04, so I need to wait a bit longer.

Can anyone tell me where I can find LXDE packages for Fedora (or rather Scientific Linux 6.0)?

I currently use Window Maker, but all the dock apps that I use got lost in the depths of time (and I didn’t have the foreknowledge to save the _source_ RPMs), so there is less incentive for me to continue using it after an upgrade.

I agree. I happily ignore all Linux desktop issues, since my ctwm has looked the same since it was twm. The only major changes to my .ctwmrc are that I put the WorkSpaceManager vertically and to the left, to fit with the new lowscreen format, and that I had to add a “kill” button since nowadays many windows don’t have their own close button.

I understand (very well) that it can feel bloated at times, but as others have mentioned, KDE is a good option these days. It’s fantastically configurable when you need it, and well engineered. The Kubuntu team have released a “low fat” KDE option with 11.10 — I have not tried it, but it looks like it does a lot of the things that I (and many people) already do manually… turn superfluous stuff off.

I like LXDE and XFCE, and use them now and then. But honestly, while they are good and fast, they feel to me like stepping back in time… not as far back as FVWM, and not as minimal as fluxbox (which I also like)… but still, they feel antiquated to me. Of course, that may in fact be part of their appeal to some people, and it’s not necessarily a bad thing… but it should be recognized.

Had this issue, Gnome3 shell wasn’t supported by FGLRX. I’ve managed to find fallback mode, which is just more or less gnome-panel with a few changes to how you interact with it. Did you know you can recover full control of your applets by hitting alt+rightclick? I tried XFCE, but it didn’t play nice with my trackpad (couldn’t handle two finger right-clicking), and generally looked and felt shabbily put together.

At the moment I’m in Gnome3 fallback mode, with which the only gripe I can determine is that I can’t change the colour of the fonts. As a long time gnome2 user, I’m more or less happy.

I’m confused though. With so many people pissed off about the changes, why doesn’t somebody just fork gnome2 and continue maintaining it?

@ Chris Lemmons: “I use Linux because I want a system that I control, not one that controls me. I use Ubuntu because I want a system that “just works”, where I only have to control the stuff that is unique to me. I can be a sysadmin, but the less time I spend administering my system, the more time I have to tinker with code or whatever I actually want to do.”

That’s exactly right, Chris. Remember esr’s parable about user types? After all these years, hackers using Linux want the luxury of, at least sometimes, being able to act like Penelopes. Sadly, “modern” Linux UI designs are moving AWAY from that direction.

The computer revolution has presented us all with the unique opportunity to compete with our ideas. You don’t like the default window manager choice made in Ubuntu? Canonical will succeed or fail based on the ideas they express in this new Unity interface. That is the beauty of this ‘Idea Economy’. It is, we keep saying, all about choice. You of all people should realize that (Ubuntu == Linux) == False. Use another distro. Or change the window manager: download Xubuntu or Lubuntu. Stop kvetching, find one of the plethora of choices, and then write a post extolling its virtues. Even better: don’t like the direction of Unity and/or Gnome? Start a new project and show us your better idea. Fork the damn code of Gnome 2.x and pursue a different path. That is, after all, the point of the OS in FOSS, yes? I bet you have enough followers that you wouldn’t have to actually do much coding and, who knows, you may invent the newest better mousetrap and the world will beat a path to your collective door.

Finally, and I think most importantly: while I agree with you on how wrong it is to find a blob of binary configuration data, I find your discussion of the topic condescending. I used to program for a living, back when GUI interfaces existed only on machines at Xerox PARC, and I find your “making it simple enough for you non-programmer types” tone (i.e., we’re just too simple-minded to understand) insulting. Even the average Linux ‘power user’ who has never written a line of code has more than likely run a terminal session and edited a .cfg or similar file. The implication that they are incapable of understanding this without a ‘children’s corner’ is NOT the way to win people to your cause.

Unbelievable! As for me, KDE went in the wrong direction too! All the major DEs went nuts and turned into something utterly improper!

Meanwhile, Mac OS X Lion offers EXACTLY the sort of functionality I thought UNIX DEs would eventually embrace:
– it remembers the states of apps across even reboots, not just exits
– underlying version control combined with autosaves
– more UNIX DE-like concept of unlimited workspaces
– unifying integration of mobile iOS and desktop Mac OS X

Now they’re talking! It so saddens me to see where the FOSS community is taking the UNIX desktop; it’s just THE wrong way, and I couldn’t agree more with you there!

Honestly, I’d rather jump to Mac OS X for my desktop experience. Currently the Apple guys look more sensible to me than anyone else. They’re making steps in the right direction, the direction in which all DEs should be moving, imo.

>Why didn’t you just compile SCons yourself and install it in /usr/local or backport the package?

Because I use the package manager to audit my system state. For this to work, the number of installs I do outside the package system needs to be limited. Generally I don’t do it unless I need to run a version that hasn’t been packaged yet.
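A toy model of what that audit buys you — with an invented file list standing in for dpkg’s real database, and made-up paths throughout:

```shell
#!/bin/sh
# Toy model of package-manager auditing. pkgdb.list plays the role of
# the package database; everything here is invented for illustration.
mkdir -p /tmp/auditdemo/bin && cd /tmp/auditdemo

# The "package manager" knows about exactly one file:
printf '/tmp/auditdemo/bin/scons\n' > pkgdb.list
touch bin/scons bin/scons-local     # scons-local was installed by hand

# Audit: flag every file the package database cannot account for.
for f in bin/*; do
    grep -qx "/tmp/auditdemo/$f" pkgdb.list || echo "unaudited: $f"
done
```

Every out-of-band install adds a file the audit can only shrug at, which is why keeping them rare matters.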

Eric: I’ve been running Xmonad under Gnome (and lately, Xfce) for the past couple years with good results. Knowing how you like to have control over your software tools, I’d suggest you give it a tinker.

@ Bruce Barr:
> Fork the damn code of Gnome 2.x and pursue a different path.

Surely you must see, though, that major forks in FOSS projects don’t happen very often – there is really only one* fork of a big project that persists to this day. That’s because forking would achieve very little: it splits the development effort and the user base, and there is only a limited supply of both. Not to mention that the developers of the new fork would inherit the responsibility of fixing all of the bugs still open at that point, with no help from the developers of the original.

[*] Or maybe two, if you consider Gnome as a fork of KDE originally made for ideological reasons.

Clearly you don’t get it. This is all the late Chairman Steve’s idea/wish/control. Computers are for users, and users are not developers. Pretty UI, touch interfaces, icons and apps. This is where it’s at.

I hate it, but clearly Apple’s success has caused all of the PHBs to decide that they are right, and that allowing users to control their “PERSONAL computer” is wrong.

I strongly suggest that ESR stay away from OS-X and do not try to develop for IOS.

You know, I think all this anger at the influence of Apple on UI design is misplaced. The problem to me appears to be that OSS developers have no clue how to design a configuration interface. I have never found myself lamenting a lack of configuration capability on OS X. The settings I need to configure often are always available via config software, and the rarer ones are available by text file. In contrast, my experience with Ubuntu has consistently been that configuration utilities either provide every damn option in a cluttered mess, or just whatever random configuration options the software designer was using at the time, but not the ones most people would want to use. It is indeed possible to make software both user-friendly and configurable, and for the most part OS X achieves that. If the attempts of Ubuntu and other OSS developers are horrible, I can only assume it’s because they don’t have the years of experience in it that Apple has. What most people here seem to be missing is that text-only configuration lacks discoverability without huge amounts of documentation surrounding the config file. Which, incidentally, also answers the query of a post further up over why Mac users might deride the command line but type their app names into Spotlight: it’s discoverability.

However, the text editor always works for experts and for those who get advice from experts. If some wannabe UI designer who thinks three decades in a back office qualifies him to design desktops gets ahold of your window manager, the text editor still works. If some wannabe UI designer fresh out of school gets ahold of your window manager, the text editor still works.

The only time the text editor stops working is if the flat configuration file doesn’t exist in the first place.

I think that XFCE will be a good choice. It makes it easy to attach a hot-key to everything. I enjoyed never having to reach for my mouse anymore (except to navigate the web) after I switched to it. The only thing missing from basic XFCE is good multiple monitor support, which I have remedied by replacing xfwm4 with xmonad.

Desktop environments will always be a matter of choice and personal requirement, but as Linux people, we usually opt for choice and freedom over “simplicity”, which XFCE seems to have grasped wonderfully.

“Linux Mint 12 “Lisa” will be released in November this year with continued support for Gnome 2 but also with the introduction of Gnome 3. The radical changes introduced by the Gnome project split the community. At the time of releasing Linux Mint 11 we decided it was too early to adopt Gnome 3. This time around, the decision isn’t as simple. Gnome 3.2 is more mature and we can see the potential of this new desktop and use it to implement something that can look and behave better than anything based on Gnome 2. Of course, we’re starting from scratch and this process will take time and span across multiple releases. Until then, it’s important we continue to support the traditional Gnome 2 desktop. We’re likely to release two separate editions, one for Gnome 2.32 and one for Gnome 3.2. We’re also working in cooperation with the MATE project (which is a fork of Gnome 2) at the moment to see if we can make both desktops compatible in an effort to let you run both Gnome 2 (or MATE) and Gnome 3 on the same system, either in Linux Mint 12, or for the future. “

I’ve had a similar experience upgrading my 10.10 to 11.04. After that I tried Linux Mint Debian Edition with Xfce, which is based on the unstable Debian repo and, in general, has the latest versions of binutils and the compilers (this was my main reason to upgrade). I also have a different machine with 11.10 for testing purposes, but I don’t use it directly for real work; when I need something compiled for Ubuntu I typically ssh in from a different machine.

Ironically I think Mac OSX plus iTerm2 is now a “better” Unix than Ubuntu with Unity :).

Wow! What a rush. It’s always interesting to see when people hit the wall – and don’t even understand that it’s the wall they are hitting.

It’s easy to see that ESR and all the people here think they are right, but the facts don’t support their opinions. I think this article (which talks about the fact that the world was quite vocal about Steve Jobs but mostly silent about Dennis Ritchie) explains it best: https://plus.google.com/112218872649456413744/posts/dfydM2Cnepe

The times… they are changing.

Take this self-righteous rant of ESR’s: Wrong approach. The right approach would be making it part of your diagnostic procedure to take a diff between the user’s configuration and the factory defaults… something that’s easy when the config is textual.

Sounds quite correct? Unix way and all that. Well… the facts don’t support this opinion. On the server side… it works. On the client… it does not. A lot of guys tried to use textual, easily editable configs – and failed. Lots of other guys violated this rule – and some of them succeeded. Not all of them, by any means, but some.

What does it mean? That all the guys who used text config files are idiots? Hard to believe.

No, it just means that the tools for “normal” users are fundamentally different. Textual configs are good on a server because it’s easier to fix them. But on the client… you have no one who can fix them. The user? Certainly not. The technician on the other end of a support call? Not anymore. “The gap” is so great today that most systems never reach anyone who could ever hope to understand a simple text config file.

This simple fact changes everything: you don’t care about “easy to fix” anymore. It does not matter. If it’s seriously broken it’ll be thrown away; no one will ever seriously try to fix it. If not the whole device, then at least the configuration files. And yes, I know of a lot of cases where old computers were replaced by new ones because a virus infected them and made Windows unstable – it was cheaper than paying someone to fix them.

And this is where the “simple truth” becomes a lie. Sigivald Says: For a damned desktop utility config file, however, it’s a disastrous choice (as esr noted), because nothing can read it except for a tool designed to do specifically that, and you can just forget about a human being reading or editing it.

Yes, you can “forget about a human being reading or editing it” – but this is not a loss in an environment where there are no human beings who will ever even attempt to read it. And you can add sanity checks (CRC sums, etc.) – and you can guarantee that your config will be in a sane state, that it will support whatever invariants you put there, no matter what. Someone may still attempt to fix the file using a third-party tool… but in that case you can redirect the support call to the manufacturer of said third-party tool.

Basically, we are finally at the stage in software where the old approach (where a new TV set came with full schematics for anyone who might try to fix it) is replaced with a new one (where you have a few pre-canned solutions for the most typical problems – and if those don’t work, you just replace the damn thing as a whole).

But this approach only works if you remove most of the configuration options. The first TV sets had four distinct knobs dedicated to sound – and a dozen or so other fine-tuning controls. Today… you have nothing. There are a few controls available to “certified technicians” – but not a lot of them. Either the damn thing works or it does not – and that’s it.

Inkstain Says: I wonder if we should just petition Google to make a desktop environment for Linux.

It’s not clear if Google will succeed or not, but they understand the first rule of the consumer desktop very well, indeed. They do use text files for their configs – but it’s JSON, which is basically a serialization of some structure from memory; there is no way to change it (ChromeOS does not give you the ability to touch random files), and I’ll not be surprised if at some point they replace it with a binary blob for performance’s sake. Some files are already binary blobs (Cookies, History, etc.).

As for XFCE… well, niche for “opinionated bastards who expect their technology to work how they want it to work” will probably always be there – but it’s about time for the major Linux desktops to leave this niche behind. I’m not saying the fact that they left it behind guarantees success on consumer market: it’s tough competition out there, but the fact that they are finally trying to move in this direction is something to applaud, not something to laugh at.

I’m a user of Linux Mint, Debian Edition. It’s a rolling release with Mint’s tools mastered off of Debian Testing rather than Ubuntu. Unfortunately the rolling release only appears to get remastered about once a year, which means new installations toward the end of that period have a rather long post-install update. The good news is it was just remastered in September. Also, they periodically have repository announcements for updated sets of repositories. I’m however tracking Debian a little more closely, which is a nice option to have with LMDE but isn’t necessary. It’s just a repository change away.

As I understand it, though, even the Mint that’s based on Ubuntu is run by folks who refuse to go to Unity or a broken GNOME version. Mint 11 has GNOME 2.32.

They also have an XFCE edition of LMDE and the main Mint edition is available with LXDE.

After all these years, hackers using Linux want the luxury of, at least sometimes, being able to act like Penelopes.

Penelope bought a MacBook Pro round about 2006 when they first came out. Since then she says that her computer has been much less of a hassle; she can focus on her work, Skype with her friends, and watch lolcat videos on YouTube without worrying about if her sound driver is going to work or having to hand-tweak over 9000 different config files.

Also she met a nice fellow who deejays Friday and Saturday nights at a local club — also on his MacBook.

That is precisely why I said the Unix community should kill their Buddhas.

The “Unix Way” got us far when experienced technicians were the only ones touching a computer. But its day has passed. Linux and open source will only survive if they adopt the relentless, single-minded focus on the end user that Apple did. And that means an end to the religious devotion to separation of concerns and “mechanism, not policy” that has defined the Unix mindset. And even then, it’ll be many years too late.

Meanwhile, hackers and other skilled knowledge workers are buying Macs.

I’m considering a move to XFCE, but this project looks interesting: https://github.com/Perberos/Mate-Desktop-Environment
“MATE Desktop Environment, a non-intuitive and unattractive desktop for users, using traditional computing desktop metaphor. Also known as the GNOME2 fork.”

Looking at the commit history, it appears that this project may actually be serious. I sure hope so, as I prefer Gnome 2 over XFCE and everything else I’ve tried so far.

That’s what I just did, around 5 minutes after having finished upgrading to 11.10 and trying gnome-session-fallback (which didn’t even want to autohide the panel bars… what the fuck). This machine is a netbook, and there’s no way I’m letting 5% of my screen get eaten up by a sordid static panel bar, be it Unity’s or Gnome’s.

Ladies and gentlemen, this is the reason why we have yet to see the year of the Linux desktop. You may dislike the path that Gnome/Unity are taking, but until the OSS community as a whole realizes that visual aesthetics and ease of use do not preclude configurability and control, the choices will forever be between pretty-but-crappy and unfriendly-but-functional.

I’ve yet to be convinced that the new efforts (GNOME Shell and Unity) are any better from a user-friendliness standpoint. Might as well stick with something that works for *someone* as opposed to *no one*.

I wiped 11.04 and reinstalled 10.10. But I recommend a version called Bodhi. It’s based on 10.04, but stripped down, way down, and running kernel 3 and the Enlightenment desktop… which is like a mash-up between Unity and Classic, without the annoying bits of either and all the pretty.

I think you have a point that most computer users don’t want, and shouldn’t need, to know about system administration or systems troubleshooting. And it’s that type of user that Ubuntu (and Gnome, …) are targeting. For that type of user — someone for whom a desktop’s handful of buttons for their handful of tasks either works or it doesn’t, because users are not going to try and fix anything — something like Unity is probably a step in the right direction. Or at least a good attempt.

It’s probably not the type of desktop that suits power users or sysadmins or hackers – but they’re not the intended audience so rants by linus or esr or whoever are irrelevant.

otoh, I don’t see why this needs to be implemented with “binary blob” configs rather than human-editable text files, unless there’s some technical reason I’m not aware of (bad design decisions don’t count as technical reason).
I can think of at least one use case: a sysadmin who needs to roll out 200 identical yet slightly customized desktops and needs/wants an easy, reproducible, preferably scripted way to do that.

and the added bonus of text config would be that said power users or sysadmins or hackers could customize or fix those desktops if they wanted to, even if no normal end user should ever know they exist.
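For instance, a minimal sketch of that scripted rollout, assuming a made-up key=value config format (the hostnames and paths are illustrative, not any real desktop’s):

```shell
#!/bin/sh
# Sketch of the 200-desktop rollout case. The config format, keys,
# hostnames, and paths are all invented for illustration.
mkdir -p /tmp/rollout && cd /tmp/rollout

cat > desktop.conf.template <<'EOF'
hostname=@HOST@
wallpaper=/usr/share/backgrounds/corporate.png
panel_position=bottom
EOF

# One reviewable, reproducible loop, possible only because the
# config is text that sed can transform.
for host in ws001 ws002 ws003; do
    sed "s/@HOST@/$host/" desktop.conf.template > "desktop.conf.$host"
done

grep '^hostname=' desktop.conf.ws002
```

With a binary blob, each of those 200 customizations would need a dedicated tool instead of three lines of shell.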

>You can not advance without being willing to make changes. You may disagree with some or all of the changes, but sticking with the past
>will not bring you to the future.
I absolutely agree, but there is no reason for Unity or GNOME Shell to be anything but experimental beta packages until they’re ready.

>I absolutely agree, but there is no reason for Unity or GNOME Shell to be anything but experimental beta packages until they’re ready.

There is one perfectly good reason. Most people, and especially the type of people who think aesthetics and ease of use preclude usability and functionality, are not likely to ever switch until they are made to. It’s one of the ways Apple has succeeded as well as it has: the decision to eventually cut away the old cruft. The switch to USB? Drop all legacy ports other than USB, even at a time when there weren’t a lot of USB accessories. The switch to OS X? Degrade the classic OS to a second-class citizen. Now admittedly, they did that more slowly, first with Classic mode, then Rosetta, and now neither, but still. The switch to Cocoa only? Kill Carbon. The switch to Intel? Kill PPC. Sometimes you just have to jump in completely; if you don’t, you’ll never make the leap.

> Then again, isn’t releasing “not ready” software to get early feedback from users one of the cornerstones of the open source development model ?
Sure, but it’s usually clearly marked as beta, and stable branches are maintained as needed.

> Sure, but it’s usually clearly marked as beta, and stable branches are maintained as needed.
> The difference here is that x86, USB, etc. actually worked.

Is there a stable, working, better branch of Unity somewhere out there for Ubuntu to use?
or is Ubuntu using whatever’s available right now, and exposing that to the pressure of real life usage to force it to get better, fast ?

To me, it looks like the latter; it wouldn’t be the first time Ubuntu has used its 6-monthly releases to try out something new, get some real-life experience with it, and get it (more or less) right for the next LTS.

1) Your first and largest Gross Conceptual Error. A computer is not an appliance. It is a multi-purpose tool, and must therefore support far greater flexibility than an appliance. Fools who try to turn a multi-purpose implement into an appliance are Doing It Wrong.

2) Do not confuse the config interface and the back end. The purpose of the interface is to abstract away the complexity of the back end. Nevertheless, said back end must be text. Because we have learned over and over again that it is a matter of when, not if, your back end will get fucked up, and you must have a straightforward way of unfucking it. You can’t diff a blob, you can’t grep through a blob, you can’t open a blob in vi, and a blob will confuse the hell out of your version control system. If you are storing your configs as a blob, You Are Doing It Wrong, and your users will pay the price. By all means, GUIs and well-designed wizards are a wonderful thing, but the back end of said UIs must be text of some kind: properties, XML, JSON, YAML, whatever floats your boat.

3) I don’t know about your TV, but mine supports all various kinds of configuration changes. Switch inputs, color calibration, etc. My receiver has configuration and customization options that will blow your mind: volume control per speaker, balance, equalizer, signal delays PER SPEAKER, etc. Same goes for my DVD player. Those options are there, because the manufacturer can’t possibly predict my exact setup, and therefore has left them in there. Most people probably don’t need them, but others, like myself, do. Therefore, I do not buy devices which have been dumbed down to the point of being unusable.
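To make point 2 concrete, a throwaway sketch — the file names and keys are invented, not any real application’s:

```shell
#!/bin/sh
# Throwaway demo: the everyday tools that work on a text back end
# (and are useless against an opaque blob). All files are invented.
mkdir -p /tmp/blobdemo && cd /tmp/blobdemo

printf 'theme=light\nautohide=false\n' > defaults.conf
printf 'theme=dark\nautohide=false\n'  > user.conf

grep theme user.conf                  # readable at a glance
diff defaults.conf user.conf || true  # shows exactly what the user changed
# (|| true: diff exits nonzero when files differ, which is the point)
```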

> or is Ubuntu using whatever’s available right now, and exposing that to the pressure of real life usage to force it to get better, fast ?
No, the reason Unity made it into Ubuntu is because Ubuntu’s release process is a clusterfuck. [1]

> to me, it looks like the latter; it’s wouldn’t be the first time ubuntu uses its 6-monthly releases to try out something new, getting some real life
> experience with it and get it (more or less) right for the next LTS.
You mean like they did with PulseAudio? Yeah, because PA sure was/is a triumph in every way.

Last fall, when I still ran a MacBook, I installed an Ubuntu VM and booted up the xmonad tiling window manager just for fun. I wasn’t seriously intending to use it; I just wanted to try out something half-baked and geeky, so that I didn’t totally forget my old days of kludging Linux into submission for fun.

After a couple of days, I called up ZaReason and ordered a brand new, preloaded Linux laptop. It turned out that using a tiling window manager made me _massively_ more productive (about 30%, I’d guess). No longer did I spend my time re-arranging terminals and Emacs frames—instead, I wrote a hundred lines of Haskell and my windows now choose an optimal layout automatically. I also ordered an Intel SSD, which is the single best hardware upgrade I’ve ever paid for.

If you edit text in Emacs or vim (instead of an IDE), you might also want to experiment with a tiling window manager. Choose one that you can script in Python. It’s not for everyone, but for some programmers, it’s a huge win.

@khim
So you’re saying the right attitude is to so utterly despise your users that the solution to any problem is to throw the whole computer out and buy a new one? Preferably a new one from your own company, such that you make money from the transaction? That is some seriously evil stuff you’re espousing.

re 1) most users I know treat their computer as an appliance. A distro that caters to that target group had better behave like an appliance. If that’s ‘too dumbed down’ for you, you need a different distro.

re 2) no argument from me

re 3) My parents have a TV just like that. Occasionally my father feels the urge to “configure” it and ends up worse off, without knowing how to get it back. Resetting it to factory defaults usually gives them a better experience than anything else he tried. They’d probably be better off with a TV where you can’t change any settings (other than choosing a channel and turning the volume up or down).
(They also have a VCR. That’s even worse)

For all of you knocking textual config files, I don’t think esr ever said that configuration should be text *only*. He just said that configuration should be stored as text. It is perfectly possible (and, in fact, ideal) to have a graphical configurator edit a textual config file, so that the user never needs to know, as long as everything is going well, whether the config file is textual or binary. When something goes wrong, though, and the graphical configurator won’t start or isn’t doing its job, it sure is preferable to have a config file that a human can read. If I have to go groveling through a config file, I’d sure rather do it with gedit or vim than with hte.
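That split is cheap to implement; here’s a minimal sketch of the idea using Python’s stdlib configparser (the file name, section, and keys are all invented for illustration), where a hypothetical configurator persists its state as plain INI that stays hand-editable:

```python
# Sketch: a GUI configurator that stores its settings as plain text.
# The same file round-trips cleanly whether the GUI or a human in
# vim/gedit last touched it. Names here are invented.
import configparser

def save_settings(path, settings):
    """Write settings (a dict of section -> {key: value}) as INI text."""
    config = configparser.ConfigParser()
    config.read_dict(settings)
    with open(path, "w") as f:
        config.write(f)

def load_settings(path):
    """Read the same file back into plain dicts."""
    config = configparser.ConfigParser()
    config.read(path)
    return {s: dict(config[s]) for s in config.sections()}

settings = {"panel": {"applets": "clock,volume", "position": "top"}}
save_settings("panel.ini", settings)
assert load_settings("panel.ini") == settings  # lossless round-trip
```

Whether the GUI is working or not, `panel.ini` remains readable and fixable by hand.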

When I first started out with Linux, I went DE shopping to figure out what I liked best, and rejected XFCE because its graphical configurators lacked options that were quick to find and easy to edit in the GNOME 2 configurators. I didn’t even bother to find out that there were text config files where I could edit those options.

Now, with GNOME 3 and Unity having removed most configurability from their graphical configurators, and having left, AFAIK, *no* configuration options exposed in textual config files, XFCE’s textual configurability is starting to look really good.

(Fortunately, I have until Lucid EOLs in 2013 to worry about switching to XFCE.)

I used XFCE for its reduced resource use, and for the focus follows mouse issues that have recently cropped up in Ubuntu (grrr).

However, XFCE seems to be getting as piggish as Gnome. I notice it because I use a gutless laptop.

You could look into Lubuntu (LXDE on Ubuntu, now an officially supported Ubuntu variant).

Or, you could try LXDE and OpenBox on top of whatever (similar to XFCE vs Xubuntu). Or you could try just OpenBox on top of whatever. Any of those combinations allow you to pick and choose your panel and other creature comforts.

Lubuntu is a little rough around the edges, but I’m *much* happier with it than I was with Gnome-based Ubuntu, or Xubuntu, or XFCE on Ubuntu.

> or is Ubuntu using whatever’s available right now, and exposing that to the pressure of real life usage to force it to get better, fast ?
>>No, the reason Unity made it into Ubuntu is because Ubuntu’s release process is a clusterfuck. [1]

The article looks interesting; I’ll read it when I have more time (almost bed time here)

> to me, it looks like the latter; it wouldn’t be the first time ubuntu uses its 6-monthly releases to try out something new, getting some real life
> experience with it and get it (more or less) right for the next LTS.
You mean like they did with PulseAudio? Yeah, because PA sure was/is a triumph in every way.

I was mainly thinking of upstart, and some minor desktop stuff such as their notifications.
PulseAudio would be the example of “it doesn’t always work out” – did that ever make it to an LTS? (I don’t remember exactly, and on mine, sound works, so I never really looked into it. Yeah, I know, WorksForMe is a lame excuse)

Linux Mint, Debian edition. It’s still a little buggy (the updater crashes, but that can be worked around by using apt). It uses all the Debian libraries, and mostly everything comes configured: a GNOME 2 desktop, thank god, and a great development and power-user environment, set up in a 30-minute install. Need I say more?

Upstart worked out well because they maintained compatibility with old SysV init scripts. In fact, they continued using them for most of their boot process at least through 8.04, and through to 8.10 if my memory serves.

I was crazy happy with Kubuntu 8.04. The best experience of my desktop life, but you know what happened with KDE.

The worst part for me was that the applications were all rewritten: kpdf -> okular (gag), gwenview: terrific -> vile, etc.

Mint XFCE and Xubuntu were fine, but both managed to destroy my user accounts (twice, on two different systems!), so I went to Ubuntu 10.04, which has been great, but you know what happened to Gnome.

After getting a netbook, which will be my only available machine for months while I relocate, and after some dithering, I’m now trying Mint LXDE. Pretty much OK, but some things are still completely opaque, like how to edit the menu.

I’m also keeping an eye on Bodhi, waiting for another look after version 2 unless I have some severe problems with Mint LXDE.

For those thinking that something like LXDE is old-looking, take a look at the tok-tok icon set (I discovered this while playing with Puppy). Throw on a flat gray wallpaper, and the silvery tok-tok icons look fantastic and elegant. (To me!)

Here’s the problem: For those users who need an appliance, there’s Apple, which makes amazing appliances. Canonical will not out-appliance Apple, they will simply infuriate their users (like me).

As for TVs: any well-designed system will have the option of restoring defaults. A really well-designed system will show you the difference between your setup and the defaults. The Unix way makes that easy, BTW.
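That “show me how my setup differs from the defaults” feature really is nearly free once configuration is plain text; a toy illustration (the settings themselves are made up) using Python’s difflib:

```python
# With text configs, "diff my setup against the defaults" is a
# one-liner. The setting names and values here are invented.
import difflib

defaults = ["volume=50\n", "balance=0\n", "equalizer=flat\n"]
current  = ["volume=80\n", "balance=0\n", "equalizer=rock\n"]

report = "".join(difflib.unified_diff(defaults, current,
                                      fromfile="defaults",
                                      tofile="current"))
print(report)  # shows -volume=50 / +volume=80 and the equalizer change
```

A binary blob gives you none of this without a dedicated tool.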

In retrospect perhaps, but at the time, all of these decisions were controversial and criticized: Apple pushing people forward without concern for whether everything worked, or for supporting the old. Remember, when the iMac was released with USB only and no floppy drive, you could still count on one hand the number of companies making USB devices, and Zip drives were still supposed to be the way of the future.

When OS X was released, the first version was not anywhere close to the polished experience we have today. And it changed many aspects of the way that OS 9 worked. It was also slower than OS 9 and had very little application support.

Admittedly the switch to x86 was much smoother, but Apple had experience doing this before. That said, there’s been plenty of wailing and gnashing of teeth over the decision to end PPC / Rosetta support with Lion.

Right-click on an OS X application dock icon, get a list of minimized windows, choose the one you want. Or left-click and just hold the click for half a second. Or cast your attention to the right side of the OS X dock, where the individual minimized window icons live, one icon per window.

> User configuration data goes in plain text files, not binary blobs. There are many reasons for this, and one is so they can be hand-edited when the shiny GUI configurators turn out to be buggy or misdesigned. No programmer who doesn’t grasp this bit of good practice has any business writing a window manager, especially not on a Unix-derived system.

GUI configurators should not be buggy or misdesigned. That was the bug. Hand-editing configuration files (and the Windows registry editor) is a workaround; the real bug was that you needed to edit the configuration files at all. The thing that causes Linux and open source to be hated – the number one grief-inducing thing about Linux, the number one thing that causes your Linux system to die and not come back to life – is that you are always editing configuration files. See xkcd on the topic of configuration files: http://xkcd.com/963/

The number one superiority of Windows over Linux, the number one superiority of commercial software over free open source software, is that one seldom has to edit the configuration files directly.

> I’m a bit puzzled about what incentives programmers might be following to do otherwise.

Frequently there are possible states of the configuration file that will cause something horrid to happen, as most linux users have discovered the hard way. To stop people, or malicious software, from editing the configuration file, make it binary.

The correct solution, of course, is to always do a sanity and consistency check on the configuration file, so that bad configuration files are either auto-fixed or reset to defaults.
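A minimal sketch of that validate-or-reset approach, assuming a JSON config (the keys and their valid ranges are invented for illustration):

```python
# Sketch of "sanity-check the config, auto-reset on bad state":
# a malformed or out-of-range config yields known-good defaults
# instead of crashing the desktop. Keys and limits are invented.
import json

DEFAULTS = {"focus_follows_mouse": True, "autoraise_delay_ms": 500}

def load_config(text):
    """Parse a JSON config; fall back to DEFAULTS on any bad state."""
    try:
        cfg = json.loads(text)
        if not isinstance(cfg, dict):
            raise ValueError("top level must be an object")
        ffm = cfg.get("focus_follows_mouse", DEFAULTS["focus_follows_mouse"])
        if not isinstance(ffm, bool):
            raise ValueError("focus_follows_mouse must be a bool")
        delay = cfg.get("autoraise_delay_ms", DEFAULTS["autoraise_delay_ms"])
        if isinstance(delay, bool) or not isinstance(delay, int) \
                or not 0 <= delay <= 10000:
            raise ValueError("autoraise_delay_ms out of range")
    except ValueError:  # json.JSONDecodeError is a ValueError subclass
        return dict(DEFAULTS)  # never die: reset to a known-good state
    return {"focus_follows_mouse": ffm, "autoraise_delay_ms": delay}

assert load_config("not even json") == DEFAULTS
```

Nothing about this requires a binary format; the text file stays editable and the program stays robust.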

@James A. Donald
> GUI Configurators should not be buggy or misdesigned. That was the bug.

Sure, and humans should not seek to further their own personal interests, and if we can cure that we will ensure economic equality for all. Nice idea, but the real world doesn’t work that way.

Human nature is what it is, and software quality is what it is. There’s some variation, but it is inevitable that a certain percentage of graphical configurators will turn out to be unusable for a certain percentage of tasks. Failure is inevitable, and failure to plan for the failure of your own software constitutes hubris of the highest order.

I went from KDE 4.x to Gnome and now I am back with KDE 4.x. Though KDE 4 sucks in comparison to KDE 3 (and took away a whole bunch of useful applications that have not been ported as yet to use the 4.x libraries) I think Gnome got worse with Unity.

I always found GNOME fascinating and user-friendly, and it’s sad that I no longer have an alternative to KDE. However, removing all desktop special FX in KDE does make it tolerable.

Yes, XFCE is decent, but the majority of applications I use are either Qt-based or KDE-based, with some GTK apps thrown in.

I’m sure that ESR could beat this distro into submission if he felt like putting the time in, but the very fact that such remedies are needed at all is why Linux is not suitable for users who aren’t computer experts.

Will it ever be? Not unless someone like Steve Jobs with a company like Apple behind him comes along and creates a coherent product on top of it, the way Apple did with Mach/BSD. How many projects so far have stated their intention to do so and failed?

Actually they jumped the shark in the 9th or 10th release, when they said they wouldn’t support old computers. As far as I know that amounts to saying “we are turning into windows, if you keep using a 486 don’t come to us! Oh and hey, it is linux for human beings WITH fast enough computers”.

That was enough to make me disgusted. The developments after that, do not surprise me.

Agreed. Our posse pretty much loathes the Unity/Gnome3 cluster*. Even our teenagers can see that the emperor’s groin is feeling chilly. Many of them will be unaffected by this ‘New GUI’ stuff since they are on cheap low-spec machines that don’t like Gnome 2 that much, so we’ve found them alternatives.
XFCE and Xubuntu have been pretty good to us over the years – done dozens of installations on ageing hardware for our friends and rellies. Some are now on Lubuntu, which is significantly lighter, especially with low RAM. Not as feature-complete as Xubuntu, but Lubuntu is a fine little distro.
Probably our single least loved feature in XFCE is the menu config, which is pretty clunky. Do like their lightweight compositor – if you have the RAM/CPU available it works pretty well even on integrated Intel graphics.
A recent addition to our armoury of light reliable distros is Bodhi. They really do seem to have achieved the stability with E17 that I have long sought and failed to find. Well, it works on our hardware – even the Compaq Presario C300 (Celeron 420, GMA graphics) has found a sprightliness only previously matched by my own hand-crufted Mint Fluxbox installations… I do love Fluxbox, but recognise that it’s not entirely a rational choice – not that suitable for the n00bs in our lot anyway.
With reported RAM usage of 85~90MB Bodhi’s pretty darn light (and fast too) at the same time delivering desktop bling that has our teens and their grandpas impressed. Pretty configurable, too.
Works well on all of our varied hardware to date, but your mileage may vary.

This post talks about Unity and Gnome3 as though they’re two different names for the same thing. But I thought the deal was that Ubuntu came out with its own user interface (“Unity”) that is completely different from the latest release of the default Gnome shell (that is, the user interface of “Gnome 3”).

So color me confused. It *sounds* like the OP is displeased with the Unity shell, and hasn’t yet experienced the Gnome 3 shell.

Ah, just now I see Jeff Waugh has already made this point, in his comment above.

>This post talks about Unity and Gnome3 as though they’re two different names for the same thing. But I thought the deal was that Ubuntu came out with its own user interface (“Unity”) that is completely different from the latest release of the default Gnome shell (that is, the user interface of “Gnome 3”).

I’m aware of this, and didn’t mean to suggest they’re identical. I tried both, and IMO they both suck, though GNOME 3 slightly less so.

Heh — Eric, hi, I didn’t realize you were the OP, because your name doesn’t appear on the page and I didn’t pay any attention to the first component of the FQDN. Sorry, wouldn’t have referred to you all stranger-like like that if I’d known.

> The number one superiority of windows over linux, the number one superiority of commercial software over free open source source software, is that one seldom has to edit the configuration files directly.

Have you done much Windows support (that means detailed configuration and fault finding, not just advising someone to restart their computer)? I’m sure that an appropriately constructed Google query could count the articles in the official Microsoft knowledge base that include the instruction “Open the Registry Editor and navigate to the HKEY_… key”, or words to that effect. The equivalent of hand-editing a configuration file.

> Will it ever be? Not unless someone like Steve Jobs with a company like Apple behind him comes along and creates a coherent product on top of it, the way Apple did with Mach/BSD. How many projects so far have stated their intention to do so and failed?

Maybe we aren’t really interested in that kind of thing with Linux.

And part of the Gnome problem is precisely the attempt to turn a Linux desktop environment into a “user-friendly” one. It is counter-intuitive to say so, but too much user-friendliness is what gets in our way, and I am somebody who can never use a Mac OS productively.

My requirements for a WM/DM are simple.
It must not take longer to load than my OS.
That’s 14 seconds to runlevel 3 on my Arch, so KDE and Gnome are out.
It should look fairly good without tweaking and adding too much of 3rd party apps.
So down to XFCE, LXDE and Enlightenment.
I love the speed of the lightweight DMs (*box, ratpoison etc.) but I also love not having to edit config files and manually mount removable devices. Of course there are apps to make configuration easier, but I found that LXDE suited my needs quite nicely. XFCE was my previous WM and it also suits me, but LXDE is marginally faster.

I felt exactly the same when I upgraded. I don’t need a damn finger-friendly interface. I want my taskbar and a way to see all the apps I have installed without touching the keyboard every time I want to launch one.
xubuntu-desktop solved it for me as well; it took about an hour of tweaking to bring back sanity to my desktop. Half of that time went to realising that GTK apps were ugly because some of the listed XFCE themes do not work with GTK3 apps – Adwaita and some others are fine.

The simplest solution if you want binary config files is to have a utility that converts binary configurations to and from XML. This is what OS X does with its plutil.

The advantages of binary config files have been covered. They’re faster to load than XML and non-expert users are less likely to muck about with them. TextWrangler can natively open up plist files.

Typically this is preferred for non-expert users or even expert users prone to typos. :)
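For what it’s worth, the same round-trip that plutil performs can be demonstrated with Python’s stdlib plistlib (the preference keys below are made up):

```python
# The plutil-style conversion: one data structure, two encodings.
# The binary plist is compact and opaque; the XML plist is readable
# and hand-editable; they convert losslessly. Keys are invented.
import plistlib

prefs = {"autohide": True, "tilesize": 48, "orientation": "bottom"}

binary = plistlib.dumps(prefs, fmt=plistlib.FMT_BINARY)
xml = plistlib.dumps(prefs, fmt=plistlib.FMT_XML)

# plistlib.loads() autodetects the format, like `plutil -convert`.
assert plistlib.loads(binary) == plistlib.loads(xml) == prefs
assert xml.startswith(b"<?xml")       # human-readable form
assert binary.startswith(b"bplist")   # opaque binary form
```

So a binary-on-disk format need not cost you hand-editability, as long as a lossless text conversion is a command away.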

“40 years of Unix heritage” has led to a desktop Unix system that is usable by non-techies: OSX. That Ubuntu has the same goal is laudable. Execution could be better…a key aspect of appealing to non-techies is natively including stuff they want and not borking their system on an update. Even Apple gets this wrong from time to time but they work to resolve it…and they haven’t had anything on the same scale as the PulseAudio fiasco. Stability is a key factor, and Ubuntu really hasn’t been stable over the years when it comes to upgrades. App support (MS Office, Photoshop, etc.) is also another key area where Ubuntu will always be behind OSX.

Eh, the problem for Ubuntu is that OSX IS so good. Perhaps targeting netbooks was the right answer until the iPad came along.

Also, on Gnome 3, the screensaver has been totally erased/killed. gnome-screensaver is just a screen blanker now. No options at all. You’ll have to go back to xscreensaver if you want anything usable.

The `get a Mac’ comments are hilarious; I just spent a week or so trying to use Mac OS X, and I just kept thinking `this is such a cheap knock-off of a Linux desktop’–and most of those issues were in the GUI.

Just thought I’d add my $0.02 as a non-programmer and Ubuntu user who recently made the switch to Ubuntu 11.04….I have nothing against editing text files and indeed thought that the ability to do so was one of Ubuntu’s strong points (it’s the only Linux distro that I’ve used regularly). Probably what was most frustrating for me about the switch was the lack of documentation regarding how to do in Unity what I used to be able to do in GNOME. I spent my first 2hrs using Unity cussing like a sailor and very frustrated. If it wasn’t for Ubuntu blogs explaining how to do a bunch of stuff and the new way to configure things, I also would have probably switched to another distro. Sometimes, I think Canonical is too dependent on Ubuntu’s large help forums and bloggers to clean up their messes…..Which is really a shame because there are a couple of things I really like about Unity (launcher lists), but there are a LOT of incompletely implemented aspects as well (anything having to do with the Applications launcher is a total disaster). I’ve grown to like it as I’ve used it for the last month or so, but I’m waiting to see how Canonical rounds out the Unity desktop for the first LTS release with it. That to me will be the final judge of things.

You may not be, but the community as a whole better be if they truly want to advance OSS. If OSS gives users an experience that is (to them) worse than Windows, then it doesn’t matter how many config files they can edit, or how open their documents are, they won’t use the software in the first place.

> it is counter-intuitive to say so, but too much user-friendliness is what gets in our way and I
>am somebody who can never use a Mac OS productively.

and

>The `get a Mac’ comments are hilarious; I just spent a week or so trying to use Mac OS X, and I
>just kept thinking `this is such a cheap knock-off of a Linux desktop’–and most of those issues
>were in the GUI.

As good as user forums and blogs might be, they also have a huge problem, one that becomes more serious as Ubuntu grows in popularity. The problem: solutions are for different versions – workarounds for specific problems that occurred at some point in the past, but which might not be suitable for recent versions because of underlying architectural changes in the system. Like the replacement of the login screen (gdm to lightdm), the replacement of the boot loader (grub to grub2), changes made to X.org (xorg.conf to no conf file, CTRL-ALT-BCKSP removed, etc.). If somebody not familiar with these changes (a normal user) goes out and finds an outdated forum thread or blog post about fixing an issue they are having, they can do more harm to their system than good, and then decide that Ubuntu is crap: a user lost for good.

I don’t use Apple’s products, but I certainly can see that when Apple releases something that thing is finished – as in super-polished. Maybe the only thing that Canonical should do is not to release half-baked products, certainly not as the default option (PulseAudio, Unity, etc.). I don’t have a problem with something that has a different vision about how things should work. But for the love of : make it _good_ before you make it default.

OSX traces its heritage back to NeXTStep…which launched in 1989. Two years before Linux.

The dock, ObjC, deprecating X (in favor of Display Postscript instead of Quartz/PDF) are all there.

There’s little in Linux UX that’s “original”. Not even virtual desktops, which folks say OSX “copied” from Linux: the first implementation was on the Amiga, and many Unix desktops implemented virtual desktops before Linux, including VUE/CDE. Apollo/VUE was a contemporary of NeXTStep. Nor is compiz “original”. OSX had a compositing window manager (Quartz Compositor) from the get-go, and it was GPU accelerated starting in 2002 with 10.2.

There’s a reason why OSX is so refined. NeXT/Apple has been working on Unix desktops since 1989. So I cut Ubuntu a lot of slack…but even so, they aren’t nearly as far along as they could be. A lot of that is IMHO letting ideology get in the way of making the best system possible.

most users (99 out of 100 ubuntu users ?) don’t read documentation and wouldn’t recognize release notes if they hit them in the face.
they do, however, know where ubuntuforums is and most of them are capable of understanding instructions as long as they are tailored to their specific questions and appear to address them personally (even if the information in it is no different from what “documentation” would tell them)

Of course, the first rule of UI design these days is : if your users need a manual (even in the form of custom forum posts), you’ve failed. So yeah …

most people don’t distinguish between unix and linux – linux is (technically) just another unix reincarnation

> So I cut Ubuntu a lot of slack…but even so, they aren’t nearly as far along as they could be. A lot of that is IMHO letting ideology get in the way of making the best system possible.

Their goal isn’t so much “making the best linux|unix system possible”, but to make a marketable product out of Free Software — although they seem pretty pragmatic about it when it comes to non-free software that is required for an acceptable user experience.
Anyway, from their perspective, it’s not a matter of ideology getting in the way, the ideology is the reason they started the thing to begin with.

I think all of the arguments above are completely valid reasons to switch from Unity to another desktop environment. Who are we to say that you should or shouldn’t use your desktop the way you want to. I think if Gnome 2 had an upstream maintainer, Ubuntu would at this point include it. But it does not, so Ubuntu really must move forward with Gnome 3 and all of its weirdness. Unity is just another shell on top of that, that I think makes it a lot less weird.

Unity’s keyboard shortcuts are intensely valuable and though I am a former OS X and XFCE user and was fairly happy with Gnome 2, I find Unity to be far more efficient. That said, I don’t understand *at all* the focus-follows-mouse idea, as I see touching my mouse/touchpad as a huge failure of the UI to expose the proper keyboard shortcuts to me.

It seems a bit disingenuous to criticize the Ubuntu project for “chasing grand visions” when the project has a business goal, which is to achieve sustainability. As somebody who gets paid to maintain Ubuntu, I have to agree with the idea that growth is the only path toward sustainability when you are the major underdog OS. Who is going to pay Canonical for support for a platform that has single digit growth and may even have less than 1% of the total world desktop/laptop/etc. market? In order to achieve sustainability, it needs to be far more ubiquitous than “on a lot of developer desktops”. In order to do that, one needs to think more broadly, and Unity is an attempt at an interface that provides a very natural interface to a more broad base of users.

@kn the point is that linux wasn’t even the first “unix-like” OS to have virtual desktops. The other minor point is that OSX doesn’t even have to caveat that they are “unix-like” as it actually IS unix.

As far as your goal statement, I would say that what you wrote is more for RedHat and they are far more successful at it than Canonical.

The technical questions are all well and good, but the thing I can’t quite figure out is WHY DOES THIS KEEP HAPPENING?

Every time a major UNIX desktop environment gets stabilized, the development group apparently feels compelled to blow it up and do a total rewrite in answer to a question that AFAICT nobody is asking. This has been going on for at least a decade now (see JWZ’s fine “CADT” rant about the Gnome 1->2 transition). My current half-developed theory is that it’s something to do with the fact that the (hypothetical? desired?) users don’t/can’t participate in the usual open-source dogfood process; it’s all run by foundations that *are not the users*, and thus we end up with something that combines all the problems of proprietary development with a total lack of market discipline.

Yeah, but… the fact that they keep doing this every 5 years, and were about due, makes me think that this is actually a rationalization, not a reason. Total rewrites based on new use-cases are almost always a bad idea…

To kn, I was not referring to release notes in regards to documentation. I rarely read those myself. I was actually referring to the poorly written “Help” for the Unity switch that appears with the new desktop. What’s so frustrating about it is that Ubuntu used to have great help documentation for people switching from Windows. It was hugely helpful to me when I switched, not just in using Ubuntu but in appreciating and being aware of what was different (& better) about it. So, I know Ubuntu is capable of creating something which explains a desktop clearly in a way that will engage the nontechnical user. Why they didn’t do something similar for the GNOME / Unity switch is beyond me. A “help” file that throws around a bunch of Unity-specific terms and shows me how to create new launchers (b/c drag’n’drop is soooooo beyond the average Ubuntu user) is just insultingly frustrating.

> Every time a major UNIX desktop environment gets stabilized, the development group apparently feels compelled to blow it up and do a total rewrite in answer to a question that AFAIKT nobody is asking.

I don’t know how the money flows about the Gnome community, but maybe people are getting paid to rewrite.

Also, it seems to me that it’s one of the goals of Gnome, KDE, and whoever else to compete with the commercial UI’s. Part of that game is to make the system eternally novel. These projects are competing for users: “GNOME just evolved again.”, on KDE’s site it’s “Experience Freedom!”

And yet that testing leaves me baffled. Mac users liked it the best, but surely they would prefer Macs? Windows users found it confusing at times because it was more strongly based on physical manipulation of GUI objects rather than text. It’s totally unsurprising, then, that the hardcore UNIX crew would hate it.

Which immediately calls to mind:

“In 1979, when I was working at IBM, I wrote an internal memo lambasting the Apple Lisa, which was Apple’s first attempt to adapt Xerox PARC technology, the graphical user interface, into a desktop PC. I was then working on the development of APL2, a nested array, algorithmic, symbolic language, and I was committed to the idea that what we were doing with computers was making languages that were better than natural languages for procedural thought. The idea was to do for whole ranges of human thinking what mathematics has been doing for thousands of years in the quantitative arrangement of knowledge, and to help people think in more precise and clear ways. What I saw in the Xerox PARC technology was the caveman interface, you point and you grunt. A massive winding down, regressing away from language, in order to address the technological nervousness of the user. Users wanted to be infantilized, to return to a pre-linguistic condition in the using of computers, and the Xerox PARC technology’s primary advantage was that it allowed users to address computers in a pre-linguistic way. This was to my mind a terribly socially retrograde thing to do, and I have not changed my mind about that.”

@esr it’s only “completely botched” if it fails in the marketplace. (Arguably, it’s never succeeded in the marketplace but that’s being snippy).

It’s also questionable if you are the target user and there are positive reviews of 11.10/Unity.

I also don’t get arguments of sustainability either. Unlike RedHat, Ubuntu/Canonical doesn’t have a viable business model. Nobody is ever going to pay Canonical for support over RHEL. I say that even after paying Canonical for support on a project that unwisely went Ubuntu over RHEL. That project is going to suffer for choosing LTS over RHEL but at least I got them to use Ubuntu rather than Gentoo.

Normal users aren’t going to pay you Cliff just like they don’t pay Apple or MS. Businesses aren’t going to pay you either. They’re better off paying IBM or RedHat. The only way Canonical can make money via support is to get OEMs to pay you to provide support for their customers. That window of opportunity was largest in 2007 with Dell…and the Vista debacle.

Today? Meh. Android is likely the face of linux on the desktop (despite whatever Google plans for Chrome). The Asus Transformer tablet/laptop strategy is IMHO the most promising. It seems like 1 product rev (and a $100 price drop for the whole package) away from really kicking ass.

> You may not be, but the community as a whole better be if they truly want to advance OSS. If OSS gives users an experience that is (to them) worse than Windows, then it doesn’t matter how many config files they can edit, or how open their documents are, they won’t use the software in the first place.

If advancement means dumbing down interfaces to an insane degree that we lose configurability/features in the process, then I don’t want to be part of that advancement.

(
tl;dr -> use the right WM for the task; ‘user friendly’ really just means stable, sane defaults, ability to customize as needed. OSX gets 1 and 2, Windows only gets a half-point for all three. For friendliness, Consumer systems place 2 at expense of 3. Producer systems need 2 AND 3 with more weight on 3.
)

What I’ve distilled from this discussion comes out similar to previous discussions on the use cases of tablets vs. general-purpose machines. Namely, there are two core types: production and consumption. DE/WMs like Unity, Windows, and OSX tend to side more towards consumption with side-allowances for production. Everything else (sans the totally unusable stuff) is built for production, and consumption becomes secondary but part of the productive flow.

Everyone has their own way of working, so the systems built for production first have to support more options and configurability and have a set of defaults that present as sane for the greatest number of users. Consumption systems aim for general sanity without proper configuration ability, which can leave many from the class of production users in the rain.

Beyond ensuring stability, the real point of ‘user friendliness’ is finding the largest intersect of sane defaults (OSX gets this in spades; Windows doesn’t; Unity falls on its face) and providing an ability to change them as needed (Windows doesn’t make it easy, and OSX fails although for arguably-practical reasons). _Everything_ else is window dressing.

As for me, I can be productive in Windows out of habit but it seems the worst of all worlds, and I prefer OSX over it. But no focus-follows-mouse or windowshading (why did they take that out of OSX? It was my favorite feature of Classic), too much reliance on the dock, too much friction on cursor movement, the universal menu bar (I understand the logic but it is not for me), and too many other annoyances make my jaw hurt. I only use it for working in Photoshop on my Cintiq, because the wacom support there is boss.

Windows is for games, OSX is for general media and visual production, and I run Slackware for getting actual work done. I’ve tried just about every WM under the sun; I liked KDE3, dropped it for XFCE, stuck with e16 for a while, then ran through a gamut of others before settling down. My laptop, being my dual-purpose producer-consumer machine, runs a heavily-customized Fluxbox with DWM-style keybindings, dmenu, and Thunar for when I need a FM or to have an external drive auto-mounted. My workstations at home and work run DWM, dmenu, and a heaping score of urxvt terms. I use a very modified KDE4 for my HTPC, and run XFCE on all my remote desktop sessions.

A window manager/desktop environment is a tool, and I use the right tool for the way I want to get each job done.

>I was then working on the development of APL2, a nested array, algorithmic, symbolic language, and
>I was committed to the idea that what we were doing with computers was making languages that were
>better than natural languages for procedural thought. The idea was to do for whole ranges of human
>thinking what mathematics has been doing for thousands of years in the quantitative arrangement
>of knowledge, and to help people think in more precise and clear ways.

Wow… unless I’m completely missing the point (which I admit is possible), that seems like such a misunderstanding of how most people interact with the world around them. People operate in a visual world, where manipulations of objects and gestures convey far more information, more quickly, than any mathematical formula. People are not magicians who speak magic words into the air to make things happen; they move and interact with the devices around them. It has nothing to do with infantile desires and everything to do with the fact that an appropriate gesture can convey more information faster than any written or even spoken language can. For example, if I want to copy all of “these” files, where “these” is some subset of the contents of a folder that are not commonly named, it’s much faster for me to lasso some files and drag them to their new location than it is to type “cp filex otherfile thisfile mydoc qdoc otherdoc megafile destination/”. That isn’t to say that a specific language can’t be helpful: for example, if I want to copy all of “these” files except the ones that are MP3s, it’s handy to be able to gesture for the file group I’m interested in and then convey through language “NOT MP3s” (this could be accomplished by another gesture, but that likely starts consuming more time if you’re not trying to discover your options). I guess what I’m saying is that writing off gestures and “point-and-grunt” operation is silly. It has its place, and should be supplemented with a precise language, not supplanted.

It seems to me the *only* problem with Unity is that it isn’t finished yet. It feels like a project with lots of potential that will be terrific in a year or two when it is feature-complete. My criticism is that Canonical is using it now instead of then.

And for all you fanboys out there: at this instant in time, the best and most usable DEs stack up like this:
1. Win7 (rationale: works; excels at nothing, but isn’t broke either)
2. Unity (rationale: most potential, but doesn’t really work now)
3. OS X (rationale: broken and unusable in ways that aren’t really fixable other than a complete re-think, but at least they *attempted* to put a modern GUI on top of a Unix base and had the good sense to ditch X)

@Clint Byrum: “That said, I don’t understand *at all* the focus-follows-mouse idea, as I see touching my mouse/touchpad as a huge failure of the UI to expose the proper keyboard shortcuts to me.”

For me that feature is indispensable because, although most of the time I’ll just cycle to the window I want with alt-tab or whatever (the more a UI requires the mouse the more I hate that UI), every once in a while I’ll want to type in one window (emacs say) based on something I’m looking at in another window (terminal or pdf viewer or something). Specifically, the thing I’m looking at will often take up more screen real estate than the chunk of stuff that I’m actually typing: i.e., I want to be able to type into a window that’s _under_ the window I’m “looking at” (the one that’s on top).

Alt-tab is no good for this because usually the window you select for focus gets raised as a result. (Of course MS Win / OSX type systems where you _have_ to click on a window to give it focus, and where this necessarily entails the focused window going to the top, are even worse from my point of view.) And I’ll concede that the use case I describe (focus-follows-mouse without autoraise) is the one esr called “useless” in the OP; but under enlightenment, the wm I use, these were orthogonal options, I mean choosing one didn’t require or preclude the other.

The growth in users for the next decade will be in mobile touch devices. To get a part of this growth and stay relevant, GUIs have to go mobile. That is why they are all going down that road.

My desktop GUI doesn’t need to “go mobile”. I am perfectly content using one kind of GUI for my desktop things, and another for my mobile things. There is no need to MOBILIFY ALL THE THINGS.

Apple has done a good job of feeding back their mobile innovations (such as touch gestures) into a desktop-friendly context with Lion. GNOME and Microsoft? Not so much. Basically they gave desktop machines a shitty tablet interface which is even clunkier and more difficult to use with a keyboard and mouse than it was on the tablet for which it was designed.

Any of you guys have much experience with the new Enlightenment? I used to use it back in the day, but haven’t touched it in a good 5 years (since I went back to Mac from Linux), and it’s had quite a few complete rewrites since then.

@Doc Merlin
> Any of you guys have much experience with the new Enlightenment? I used to use it back in the day, but haven’t
> touched it in a good 5 years (since I went back to Mac from Linux), and it’s had quite a few complete rewrites since then.

Disclaimer: I haven’t really used E17 in about 3 years. That said, it’s pretty nice. Classic Enlightenment style eye-candy with a minimalist UI and sickeningly customizable. When I last used it, I found the configuration options to be a little painfully obtuse and clunky, but I can only hope it has since improved.

Thanks for the quote, Mike E. That is definitely a better name for GUIs, “the point-and-grunt interface.”

Pretty sure that’s a Jargon File entry as well.

I’m also reminded of a quip from the late Dennis Ritchie in his anti-foreword to The UNIX-HATERS Handbook, that GUIs would condemn us to “a future whose intellectual tone and interaction style is set by Sonic the Hedgehog”.

It’s wrong, of course. GUIs don’t infantilize users if well-designed; rather they expand users’ capabilities by allowing them to convey more information with less cognitive overhead.

As for the accusation that GUIs are prelinguistic, that may have been necessary because computers didn’t understand our language well enough. Now that Siri is here, we can talk to our computers and give them instructions in human language, and become H. sapiens again. :)

> OSX traces its heritage back to NeXTStep…which launched in 1989. Two years before Linux.
[…]
> There’s little in Linux UX that’s “original”. Not even virtual desktops that folks say that OSX “copied” from Linux.

Sorry, I meant “this feels like such a cheap knock-off…”.

I wasn’t talking about lineage, I was talking about refinement: it doesn’t matter whether Mac OS X actually *is* a copy of Linux by lineage; it still *feels* like a *cheap* one. Pedigree is no excuse for a poor showing; actually, it just makes it all the more damning….

I bailed a few years back to Fvwm. It gives me control… well, over everything, really; it can be configured to be plenty slick, and it plays well with everything as well.

Whereas at the time, well, Gnome 2 was/is a drunk turd that won’t let you have control of keyboard bindings, and KDE also limits bindings -and- had stupid infinite loop CPU eating bugs.

The “desktop managers” have their place — but their place is to let me configure launch widgets and provide an automatically maintained tree of installed applications (what in the Windows world is called a “start menu”). I gain -no- utility by also using them as a window manager.

Fortunately I still -can- (although I have to wonder if my “put the gnome panel inside my FVWM startup” will still work with Ubuntu 11.10; haven’t upgraded yet) just run the shiny kde and gnome things, as needed, inside the latest version of the first Window manager I actually liked.

@Clint:
>But it does not, so Ubuntu really must move forward with Gnome 3 and all of its weirdness. Unity is just another shell on top of that, that I think makes it a lot less weird.

Not really. It makes the same mistakes. When I saw where GNOME Shell was going, I was delighted to hear that Canonical was starting with their own UI project. But after I actually saw Unity, the fork looked rather petty: A whole lot of duplicated effort that solved none of the problems of GNOME Shell.

The big mistake I see WRT the development of GNOME 3 was that the development of the new GUI was done simultaneously with the development of the back-end infrastructure. As I see it, the goal ought to have been to build the new infrastructure and port the functionality of the old interface over to it, *then* to build a new interface. The GNOME devs did give us the Classic/Fallback mode, but it’s missing much of the functionality of GNOME 2.

>Unity’s keyboard shortcuts are intensely valuable and though I am a former OS X and XFCE user and was fairly happy with Gnome 2, I find Unity to be far more efficient. That said, I don’t understand *at all* the focus-follows-mouse idea, as I see touching my mouse/touchpad as a huge failure of the UI to expose the proper keyboard shortcuts to me.

I, on the other hand, am an incredibly mouse-centric user, and see having to touch my keyboard in interacting with the OS as a huge failure on the part of the UI to expose proper mouse-click regions for me. (That said, Unity may have advantages when the only pointing device available to the user is a touchpad, but touchpads suck so badly that I *always* carry a proper optical mouse with my laptop). The docks in Unity and GNOME 3 fail horribly at providing proper mouse-based window management.

I will agree with you, though, that I don’t really get focus-follows-mouse. I like focus to stay where it is until I explicitly change it.

>It seems a bit disingenuous to criticize the Ubuntu project for “chasing grand visions” when the project has a business goal, which is to achieve sustainability.

I’m not sure that exchanging one userbase for another really helps achieve sustainability. Sure: if the new userbase is significantly larger, this may be the case, but it will take time to capture that userbase, and meanwhile, a significant chunk of your existing userbase (the people who were attracted to GNOME 2 as the best mouse-based interface on the planet) is feeling utterly abandoned and will not be very inclined to recommend you to the new userbase.

And apparently learned a really bizarre lesson: “Accordingly, Windows users will need to be encouraged to manipulate icons and to develop a more physical relationship with Unity than the more text-heavy relationship they have with Windows.”

Interaction with Unity is a lot more text-heavy than for any version of pre-Win7 Windows. Text searching for applications, needing to use keyboard shortcuts because mouse-based window management is utterly broken, etc.

@esr:

>Fine. What in the bleeding hell is a completely botched window manager redesign going to do for “sustainability”?
>
>This is absolutely the worst blunder in Ubuntu’s history. It won’t do fuck-all for end-user appeal and it’s alienating your existing base.

I think there is a group of end-users that it does appeal to, otherwise the *whole frakkin’ industry* wouldn’t be moving to similar interfaces (OS X has always had such an interface, though implemented a bit better, and Win7 defaults to such an interface, but can be configured to something at least half-sane for those of us that hate the defaults). But there’s at least a sizable minority of users like you and me for whom such interfaces are horribly broken. Microsoft has avoided angering such users by providing configurability back to an interface we can work with, the GNOME project and Canonical have not. (Apple didn’t really cater to us in the first place).

> True, they should not be. But they will be. Systems architects who live in reality plan to cope with highly-probable failure modes and minimize the costs they incur.

I’d say the bug is that using a text editor is seen as an acceptable or even preferable means of editing configuration files, rather than as a sub-optimal work-around that points up the need to debug and improve the GUI Configurator – or even to build a GUI Configurator in the first place.

I don’t see GUIs as a “caveman interface” but rather as a “foreign market where you don’t speak the language” interface. In theory, you could learn the language but this is simply not a practical option for most people. So instead, both the user and the computer learn a few words of pidgin, the user points at things on the table, the computer picks them up to display them, and both sides use the few words of pidgin they know to haggle. It isn’t as elegant as having the user become a hacker and learning to speak fluently in the computer’s language. However, it is far more efficient, once you consider the very high cost to the user of learning to speak fluent Computerese rather than just a bit of GUI Pidgin.

Even for us hacker-types, it’s useful to have GUIs, because in some instances they’re faster, and in other instances, they provide superior information density. This is not universally true, but it is in many cases. I use a graphical file manager for some things, and a command line for others. It’s a combination of my comfort level/expertise, the task at hand, and whether it’s easier to hack together something using sed/awk/shell/perl/whatever than it is to click and drag things in the GUI.

Plus, properly designed GUIs are discoverable in ways text-based interfaces aren’t. For tasks that I do only on occasion, and therefore never really develop the expertise to do the “right way”, I prefer using a GUI when available. My time is limited, and no one can be an expert at every facet of configuring/administering software, so it’s often better to use a GUI for things you aren’t going to master rather than Google the task, look for the command-line tools to do it, read the man page, type in the command, figure out what you did wrong, etc., when all you wanted to do is log onto your dad’s WiFi.

Your description of the “foreign market” could double as one for a “caveman interface.” But more importantly, the original post by esr was about his experience, and he hardly needs to restrict himself to a pidgin interface. And this has always boggled my mind somewhat. I don’t know what esr’s computer habits are each day, but surely many stable UIs exist in maintenance mode that could meet those needs, given an afternoon of reading the docs and scripting the interface in a way that puts the issue to rest for the next decade.

> I’d say the bug is that using a text editor is seen as an acceptable or even preferable means of editing configuration files, rather than as a sub-optimal work-around that points up the need to debug and improve the GUI Configurator – or even to build a GUI Configurator in the first place.
After all, “GUI Configurators are for users. And users are lusers.”

I’m not sure what you’re arguing against. I don’t see anyone claiming that there is anything wrong with GUI configurators. Obviously there are cases where they are the only way to do any meaningful configuration (sliders for hue, brightness, contrast etc.). The point is WHAT they are editing. A GUI screen that edits a text file is a fine thing. What people here have a problem with is a configurator that stores the settings in a BLOB. There are plenty of reasons, besides wanting to edit them in vi, for storing configuration settings in text files.

Try backing up a blob in a version-control repository like CVS, Subversion, or Git, and then looking at the incremental changes.
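The practical difference is easy to demonstrate. Here is a minimal Python sketch (the file name and keys are invented for illustration) showing why a text config yields a readable incremental diff where an opaque blob cannot:

```python
import difflib

# Two revisions of a hypothetical plain-text panel config.
old = "panel_position=top\nfont_size=10\n".splitlines(keepends=True)
new = "panel_position=bottom\nfont_size=10\n".splitlines(keepends=True)

# unified_diff yields exactly the kind of incremental change a
# version-control system can show you for a text file.
diff = "".join(difflib.unified_diff(old, new, "a/panel.conf", "b/panel.conf"))
print(diff)

# An opaque binary blob has no line structure to diff against;
# tools like `git diff` just report "Binary files differ".
```

The same single-line change buried in something like .config/dconf/user would show up only as an undecipherable byte-level difference.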

Thomas Padron McCarthy said: I agree. I happily ignore all Linux desktop issues, since my ctwm has looked the same since it was twm. The only major changes to my .ctwmrc are that I put the WorkSpaceManager vertically and to the left, to fit with the new lowscreen format, and that I had to add a “kill” button since nowadays many windows don’t have their own close button.

I still have a great fondness for fvwm, when I’m forced to run X. (Or olwm, to take me back to 1991 and CS classes before I switched majors…)

(As an aside, this made me look up fvwm on Wikipedia, which led me to XFCE, which is said to have an fvwm-based WM.

This led me to: “The name ‘Xfce’ originally stood for ‘XForms Common Environment’, but since that time Xfce has been rewritten twice and no longer uses the XForms toolkit.”)

@ Jeff Read: “Penelope bought a MacBook Pro round about 2006 when they first came out.”

If you believe that, you don’t understand what I mean by a “Penelope.” A Penelope, for purposes of this discussion, is any intelligent non-programmer who wants to have a computer that is configured, or easily configurable, to get real work done without having to learn programming or deal with the OS. If every Penelope had bought a Mac back in 2006, Apple would have a much higher share of the personal computer market than it does.

“…technology’s primary advantage was that it allowed users to address computers in a pre-linguistic way. This was to my mind a terribly socially retrograde thing to do, and I have not changed my mind about that.”

–Eben Moglen

Yes, early man was drawing beautiful pictures on his cave walls long before writing was invented. I take this as evidence of the importance of graphic communications over linguistic ones.

I also had issues with the new desktop environments (actually, because of issues with the NVIDIA Optimus card driver, Unity won’t start) and Gnome 3 is unusable, but I find Unity 2D usable. Individual terminal windows can be switched with alt-tab. The lack of an indicator on the top bar, as well as the window buttons sitting at the top left when a window is maximized and on the right when minimized, is puzzling…

Good. If hackers don’t like something, it means users will love it. Seriously now, Ubuntu has something unique that other distros don’t have: a Software Center with paid apps inside. This means Ubuntu may have started to care about preserving binary compatibility and making life easier for software devs (because every other distro certainly doesn’t care). This could make Ubuntu the first distro suitable for public consumption. Unless they break binary compatibility again, in which case major lulz will ensue, with all those users asking for refunds for apps that don’t work anymore.

@catherine I believe that he means that the Penelopes of the unix crowd have moved to OSX. There are many unix folks who have moved to OSX because a) it’s unix and b) it requires no hacking to make it work. If I want a shell, it’s right there. But I sure don’t need one to get useful work done. I can even get nicely written git clients for OSX like gitx, gity, gitnub and Git Tower. Really tricky stuff I might shell for, but for day-to-day activity I can code without hitting the command line.

For the real Penelopes, there is now the iPad. I dunno why you think Ubuntu is moving away from that direction, as it most certainly is being influenced by both Mac OSX and iOS, which is what folks are decrying as “dumbing down”.

What most folks fail to understand is that “dumbing down” is actually very hard to do well and takes both commitment and practice. This is partly why Apple is so successful and others less so.

The other thing I like about OSX vs linux is that the apps (open source and proprietary) tend to have, IMHO, more attention to detail, because the native Apple apps set a fairly high bar. User expectation is much higher, and the dev tools do seem to make it easier.

Extrapolating from the portrait of Penelope that Eric drew in TAOUP, I assumed you meant technical users like scientists who just weren’t computer techies. These people are increasingly turning to the Macintosh. If you are a computer techie, or even just a writer (William Gibson is devoted entirely to Apple kit), chances are you’re taking a look at the Mac too. Mac desktop share is as high as 25% depending on who you ask; even at 15% that’s enough to account for everyone at least as smart as Penelope and then some.

Where Linux wins on the desktop, it wins due to ideology. Where Windows wins, it wins due to inertia. Only Apple is winning on its merits.

> I’m sure that ESR could beat this distro into submission if he felt like putting the time in, but the very fact that such remedies are needed at all is why Linux is not suitable for users who aren’t computer experts.

And esr declares the reasons why Linux users must endlessly struggle with manually editing configuration files to be off topic and out of scope.

If you want open source software to succeed, it needs to be on topic and in scope somewhere.

>And esr declares that the reasons why linux users must endlessly struggle with manually editing configuration files to be off topic and out of scope.

Yes, it’s off-topic in a discussion of whether config files should be textual, because that discussion assumes that you’ve identified a requirement for a config file. Whether you have correctly identified such a requirement is a different question.

Look, *I* don’t need any persuading that Unix has too many config files — I built GPSD, precisely the sort of tool that traditionally has a huge nasty one, to completely autoconfigure itself instead. All you accomplish by trying to drag the conversation onto that issue is to look stupid and obstreperous.

If advancement means dumbing down interfaces to an insane degree, such that we lose configurability and features in the process, then I don’t want to be part of that advancement.

Ever get out a set of plastic screwdrivers and adjust every pot in a CRT? Did you enjoy setting the pincushion, keystone, hsize, vsize, gain, white level, black level, vhold, hhold, and all the other things you could change?

I have LCDs now, and I don’t have to fuck with that anymore. I don’t miss it one bit.

You are taking things too literally. Obviously a degree of ‘dumbing down’ is necessary, I mean who wants to edit config files just to change the default text size? However you shouldn’t have to install a special program just to change the bloody screensaver.

I think that if config files are necessary, they MUST be text files. In case something goes wrong, you can manually edit one if you have the skill to follow instructions. I can’t think of a case where I fixed ANYTHING by manual configuration, but I CAN think of a time when I COULD HAVE.

Overclocking my GPU made Windows 7 crash. Instinctively I went to Safe Mode to fix it, but the overclocking software would not load. It also stored its settings in a binary blob. If it had instead used a plain text file, I could easily have changed the GPU memory clock setting… but I couldn’t. So I had to reinstall the whole OS.
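To make the point concrete (the file format and key names below are invented, not the actual tool’s): had the settings been a key=value text file, the fix would have been one line in any editor, or even a trivial script run from Safe Mode:

```python
# Hypothetical key=value settings, as the commenter wishes the
# overclocking tool had stored them. Keys and values are invented.
settings = "gpu_core_clock=900\ngpu_memory_clock=1400\n"

# Dial the memory clock back to a safe value, leaving everything else alone.
fixed = "".join(
    "gpu_memory_clock=1000\n" if line.startswith("gpu_memory_clock=") else line + "\n"
    for line in settings.splitlines()
)
print(fixed)
```

With a binary blob, the same one-value fix requires the very tool that refuses to load.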

By the way I’ve stopped using linux since ubuntu 11.04. Can anybody recommend a distro which I can dual boot with Windows 7, and has a strong support base and documentation similar to ubuntu?

> You are taking things too literally. Obviously a degree of ‘dumbing down’ is necessary, I mean who wants to edit config files just to change the default text size? However you shouldn’t have to install a special program just to change the bloody screensaver.

This was my point too. I never said dumbing down is bad. I clearly stated that an insane level of dumbing down is bad.

You make a good point about text configuration files. They are a great fallback in situations where there is no interface at all to work with.

“There *are* no factory defaults, each setup is a wonderfully unique snowflake. We agree to anything to get the sale.”

That’s the perfect recipe for disaster. Been through that as an employee of several companies that don’t exist any longer, because of exactly that mindset. There always comes the day where you are supporting yourself to death instead of developing something new.

I just run Gentoo for these sorts of reasons. It is pretty agnostic to any of the big component choices, and has decent support for most of the major ones. It also tends to stay close to upstream.

Since it is source-based, library dependencies are pretty flexible. That means you can run stable on all your packages but build them all against the git version of libjpeg or whatever. This gives you the ability to keep older versions of things like KDE around while still updating everything else. Now, you can’t do this indefinitely, as true compatibility issues will crop up at some point and the distro won’t help you out. However, it does give you the ability to buy yourself time without giving up firefox security updates just because you want to stick with Gnome2 a little longer.

Gentoo definitely forces you to get your hands a little dirtier, but for the most part it automates most of the mundane, and has been getting better about announcing changes that require manual intervention (like the recent libpng upgrade). The real power in Gentoo is letting you pick and choose what aspects of the system you want more or less control over, and providing reasonable defaults most of the time.

LXDE is awful. I mean, it’s fast, which is good, but it uses Openbox as its window manager, and integration with said wm leaves much to be desired. PCManFM, the schizophrenic file manager on which the whole mess is based, can’t decide whether it wants to be slim and lightweight or a bloated mess built with support for the GNOME Virtual Filesystem (gvfs). Worse, PCManFM claims to support XDG/Freedesktop.org standards, but then does so only very poorly. There hasn’t been a release since 2009, and the current “stable” version has so many bugs it makes Windows look stable.

LXDE is, IMHO, a solution in search of a problem. The solution space LXDE is looking to fill was filled a decade ago by XFCE.

In days past … I have used IceWM (and a straight Debian Testing release)… is that out of fashion entirely now, or was it ever in? I found it better than the blockier ones, and faster than Gnome and KDE.

I installed Ubuntu 10.04 LTS on my new laptop yesterday, and I refuse to go to 11.x until our sysadmin stops swearing about those laptops in the office that have 11.x.
I hope this pans out in the end and does not just get progressively worse.

So, Nigel, are you suggesting that every single bit of software that’s configurable should come with its own config file unpacker to handle the unique config file resulting from dozens of different programmers writing different programs, or are you suggesting a system-wide unified binary database like the Windows Registry?

@steven or you could do what OSX does with its plutil and provide a common configuration format for all apps. I can unpack any binary plist file by typing:

plutil -convert xml1 /path/filename.plist

I can also open plists in many editors and there’s a script for vim that converts binary plists into text (piping to and from plutil)

No insanity and it seems to work well.
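Python’s standard library can do the same round trip, which illustrates the point: a binary format with one well-known common parser stays tractable. A sketch (the preference keys below are made up):

```python
import plistlib

# A made-up preferences dict, standing in for an app's defaults.
prefs = {"FocusFollowsMouse": False, "Dock": {"autohide": True, "size": 48}}

# Serialize to the binary plist format (what plutil calls binary1)...
blob = plistlib.dumps(prefs, fmt=plistlib.FMT_BINARY)
assert blob.startswith(b"bplist00")  # binary plist magic number

# ...then recover the structure and re-emit it as human-readable XML,
# the same conversion `plutil -convert xml1` performs on a file.
roundtrip = plistlib.loads(blob)
print(plistlib.dumps(roundtrip, fmt=plistlib.FMT_XML).decode())
```

The format is binary on disk, but because the parser is universal and documented, nothing is lost going to and from readable text.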

@darrin in the case of OSX, many apps will regen a default plist file for you if you delete it. With Time Machine I recover reasonably easily from many screwups so recovering a mangled plist file easier than in most systems.

Jeff, I thought Apple went with binary formats to improve parsing times, so I assumed plists were just serialized configuration objects. How would compressing configuration files improve efficiency (except by improving storage space and I/O access at the cost of CPU and RAM usage)?

Since XML files, however, are not the most space-efficient means of storage, Mac OS X 10.2 introduced a new format where property list files are stored as binary files. Starting with Mac OS X 10.4, this is the default format for preference files.

Nigel, a universal binary format for compacting what’s already in a text file exists. It’s called gzip.
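A sketch of that idea in Python’s stdlib: gzip gives you the compactness of a binary blob, while one standard decompression step gets the plain, editable text back. The config line is invented for illustration:

```python
import gzip

# A repetitive, hypothetical text config: the kind gzip shrinks well.
config_text = ('Option "AutoRaise" "off"\n' * 50).encode("utf-8")

blob = gzip.compress(config_text)
print(len(config_text), "bytes of text ->", len(blob), "bytes compressed")

# One well-known, universal step recovers the text exactly.
assert gzip.decompress(blob) == config_text
```

Unlike an ad-hoc blob, nothing about the content’s structure is hidden; the compression layer is orthogonal to the format.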

Plists are one area where Apple went backwards instead of innovating. Especially since we have this wonderful thing called JSON and guess what it looks exactly like.

Compression isn’t the only use of a binary format; configuration files are usually so small that saving an extra 4K is irrelevant on modern systems anyway.

Text files are definitely the best choice on existing (especially Unixy) systems, but a platform with a universally consistent data model all the way down can use a binary backend without incurring the headaches that accompany all the special-purpose formats out there. Hierarchical configuration is particularly irritating to express in text; OSGi’s Configuration service (which backs configuration on standard Java objects) is much simpler to use than XML.

>a platform with a universally consistent data model all the way down can use a binary backend without incurring the headaches that accompany all the special-purpose formats out there.

This sounds nice in theory. It never works in practice. The reason? Undocumented extensions!

These always arise, and their complexity impact on a supposedly universally-consistent binary format is very different from their impact on a textual one. Because a binary format cannot be eyeballed, it is far more likely that an undocumented extension will break the format in some subtle but difficult way even if the originator was trying to conform.

esr Says:
> This sounds nice in theory. It never works in practice. The reason? Undocumented extensions!

Although I mostly agree with you, I think it must be said that it is the culture, the gestalt, of Unix that keeps those text files clean. It is perfectly possible to make a dog’s dinner out of a text file format too, and include unpredictable, difficult-to-handle undocumented extensions. If you doubt me, try parsing an RTF file. Heck, you can save a Microsoft Word document as XML.

The alternative — tyrannical central control — might work too, as might be the case with Apple files. The Borg follow where the Borg Queen leads.

> This sounds nice in theory. It never works in practice. The reason? Undocumented extensions!
>
> These always arise, and their complexity impact on a supposedly universally-consistent binary format is very different from their impact on a textual one. Because a binary format cannot be eyeballed, it is far more likely that an undocumented extension will break the format in some subtle but difficult way even if the originator was trying to conform.

Extensions to what? A self-describing metamodel (e.g., serialized Java classes or Hessian data) can be, and routinely is, stable enough that undocumented-extension issues have been avoided for (in the case of Java) at least 14 years.

In fact, the dichotomy between “text” and “binary” files is an arbitrary one—a “text file” is generally assumed to be ASCII, but some are UTF-8, and even today Unix and Windows can’t agree on a line delimiter. The “text” is simply a well-understood interpretation of binary data according to some encoding. And that’s not even addressing the fact that the “text file” is an abstraction of bytes placed on some physical medium, often broken up into discontiguous blocks, kept track of by one of dozens of binary databases called filesystems.
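The point that “text” is just bytes under an interpretation is easy to demonstrate. A quick sketch (Python; the string and filename-free setup are made up for illustration) showing the same text serializing to different byte sequences depending on encoding and line delimiter:

```python
# The same "text" becomes different bytes depending on encoding and
# line delimiter -- there is no single canonical "plain text" on disk.
s = "naïve config\n"

utf8 = s.encode("utf-8")                            # 'ï' takes two bytes
latin1 = s.encode("latin-1")                        # 'ï' takes one byte
crlf = s.replace("\n", "\r\n").encode("utf-8")      # Windows line ending

print(len(utf8), len(latin1), len(crlf))  # 14 13 15

# The pure-ASCII characters are byte-identical across these encodings;
# the file as a whole is not.
assert utf8 != latin1 and utf8 != crlf
```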

A system with a cleanly- (and simply-) specified syntax for “binary” files with a common parser is not any more difficult to work with than plain-text files, and can prevent the sort of type mismatch that routinely trips up hand text editing.

Since I want a system that is user-friendly and JUST WORKS, I have gone radical and replaced my GNOME DE with Openbox and FBPanel. Of course, they took me some thirty minutes to configure. But now they JUST WORK. With the right choice of apps (Sylpheed instead of Evolution, because I don’t like HTML mail, for instance) my system is usable beyond belief. Eye-candy? No, thanks. Candy gives you cavities and costs you money. You should have it in moderation, and carefully, strictly for pleasure.

> In fact, the dichotomy between “text” and “binary” files is an arbitrary one—a “text file” is generally assumed to be ASCII, but some are UTF-8, and even today Unix and Windows can’t agree on a line delimiter. The “text” is simply a well-understood interpretation of binary data according to some encoding

That is not the case at all. What matters is the abstraction with which we “humans” perceive the data, not the actual representation of the data itself on the disk.

> A system with a cleanly- (and simply-) specified syntax for “binary” files with a common parser is not any more difficult to work with than plain-text files

It is more difficult to work with. You need one specialized added layer (some specialized viewer/editor, like the Registry Editor in Windows) which takes the binary structures and ultimately puts them into a form that we can interact with through a GUI. Ultimately, what we humans enter into the GUI widgets to edit the binary values is gonna be text, so why not just stick with text?

Building parsers for text config files is easy. Whether you build the parsers in C, or in a much better suited dynamic language (e.g. a Lisp) that compiles into intermediate C code (e.g. Chicken Scheme), it is ultimately a straightforward job. There is simply no excuse for not using text.
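For what it’s worth, here is roughly how little code such a parser takes. A sketch (in Python rather than C or Scheme; `parse_config` and the key=value-with-`#`-comments grammar are an invented example, not any particular tool’s format):

```python
# Minimal line-oriented config parser: "key = value" lines, '#' comments.
# Everything here is an illustrative sketch, not a real tool's grammar.
def parse_config(text):
    config = {}
    for raw in text.splitlines():
        line = raw.split("#", 1)[0].strip()  # drop comments and whitespace
        if not line:
            continue                         # skip blank/comment-only lines
        key, _, value = line.partition("=")
        config[key.strip()] = value.strip()
    return config

sample = """
# server settings
maxsockets = 512
hostname = example.org  # inline comment
"""
print(parse_config(sample))
# {'maxsockets': '512', 'hostname': 'example.org'}
```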

You’re missing the fundamental difference. Text can be edited by hand and eyeball; binary cannot be.

There isn’t a fundamental difference; it’s one of degree. “Text” is simply the interpretation of binary data as one long string. I doubt you edit your config files with a hex editor or (as Wally claims) with hand magnets; instead, you use a text editor, which is essentially an editor for one specific binary format.

While I very much appreciate the value, 40 years ago, of Unix’s everything-is-a-bag-of-bytes model, we reached the duct-tape-and-baling-wire point ten years ago. Reiser4 had some very interesting Plan9-ish ideas of exposing a file’s contents as a sort of pseudo-filesystem, but no production Unix has anything comparable. This means, among other things, that editing a tinydns ‘data’ file is drastically different from editing httpd.conf, and I, the user, am responsible for understanding and complying with the requirements of the format, while the byte stream I’m working with is internally unstructured.

> It is more difficult to work with. You need one specialized added layer (some specialized viewer/editor, like the Registry Editor in Windows) which takes the binary structures and ultimately puts them into a form that we can interact with through a GUI. Ultimately, what we humans enter into the GUI widgets to edit the binary values is gonna be text, so why not just stick with text?

For starters, because it’s easy to type ‘maxsockets=ohlookastring’ into a text config file. As much as I loathe the Registry, a typed system could throw back in my face the fact that that wasn’t an integer—without requiring a hack like running ‘apachectl -t’.
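A sketch of the kind of type check being asked for here (Python; `SCHEMA` and `validate` are hypothetical names invented for this example, not any real system’s API):

```python
# A typed config schema can reject 'maxsockets=ohlookastring' at load
# time instead of at runtime. SCHEMA and validate() are invented names.
SCHEMA = {"maxsockets": int, "hostname": str}

def validate(settings):
    errors = []
    for key, value in settings.items():
        if SCHEMA.get(key) is int:
            try:
                int(value)               # would a typed store accept this?
            except ValueError:
                errors.append(f"{key}: expected integer, got {value!r}")
    return errors

print(validate({"maxsockets": "ohlookastring"}))
# ["maxsockets: expected integer, got 'ohlookastring'"]
print(validate({"maxsockets": "512"}))
# []
```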

> Building parsers for text config files is easy. Whether you build the parsers in C, or in a much better suited dynamic language (e.g. a Lisp) that compiles into intermediate C code (e.g. Chicken Scheme), it is ultimately a straightforward job. There is simply no excuse for not using text.

No question that yacc was a major step forward in dealing with text input. But the mere fact of having to do this every time you want to define a configuration language is wheel-reinventing. At least an XML DTD or schema can define what types and attributes are allowable where, and a number of editors, such as Eclipse’s generalized XML editor, will validate your document for you. My claim is that a parser for a well-defined typed limited binary format is simpler than an XML parser; Hessian is an excellent example.

> That is not the case at all. What matters is the abstraction with which we “humans” perceive the data, not the actual representation of the data itself on the disk.

Forgot to mention: I absolutely agree with this statement. My claim, though, is that even a “plain text file” has at least two nasty gotchas (encoding and line delimiter); Moses didn’t bring a particular text-file format down from the mountain. I believe the tradeoff for adopting a uniform typed storage system outweighs the slightly-more-complex-but-generalized tools that would be needed to edit objects.

>I believe the tradeoff for adopting a uniform typed storage system outweighs the slightly-more-complex-but-generalized tools that would be needed to edit objects.

35 years of field experience tells me this is exactly dead wrong. The tools for any “uniform typed storage system” are brittle and the knowledge about how to interpret those opaque blobs easily lost on 5- to 10-year timescales. Been there, done that, scars still itch.

> 35 years of field experience tells me this is exactly dead wrong. The tools for any “uniform typed storage system” are brittle and the knowledge about how to interpret those opaque blobs easily lost on 5- to 10-year timescales. Been there, done that, scars still itch.

Where do you draw the line between the uses for text and binary formats? Although I don’t think they’re particularly elegant, both ASN.1 and XDR sit at about the level of abstraction where you would rather switch over to text, and they don’t seem to have bit-rotted. I’m also of the impression that you think binary data structures are justified for filesystem databases and for network packet headers, even though these are usually one-shot binary formats.

>Where do you draw the line between the uses for text and binary formats?

When the data objects are so large that compression actually helps measurably – that is, gives you significant reductions in network latency or startup time, or storage economies you can actually cash out.

As hardware resources get cheaper this threshold rises. And in many of these cases compressing a textual format is a better idea than using a binary one.
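That threshold is easy to probe empirically. A sketch using the Python standard-library `gzip` module (the payloads are synthetic, chosen only to show the two regimes):

```python
# Compression only pays off above a certain data size: a tiny config
# file actually *grows* under gzip (header/trailer overhead), while a
# large repetitive payload shrinks dramatically.
import gzip

small = b"maxsockets = 512\n"          # a one-line config
large = b"record = value\n" * 10000    # a big, repetitive payload

for payload in (small, large):
    packed = gzip.compress(payload)
    print(len(payload), "->", len(packed))
```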

>a “text file” is generally assumed to be ASCII, but some are UTF-8, and even today Unix and Windows can’t agree on a line delimiter.

UTF-8 is designed so that ASCII is a subset of it: the bit pattern for a given ASCII character is identical to that of the corresponding Unicode character. Thus ASCII is completely forward-compatible with UTF-8, and UTF-8 is backward-compatible with ASCII when only ASCII characters are used.

You would have done better to say “some are EBCDIC” or “some are Windows-1252”. Of course, EBCDIC is vanishingly rare, and Windows-1252 is (like UTF-8) a superset of ASCII, so a text file that only uses those first 128 characters will be readable whether the system uses ASCII, UTF-8, or Windows-1252.
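Both claims can be checked mechanically. A quick sketch in Python (`cp1252` is the standard-library codec name for Windows-1252):

```python
# For the 128 ASCII code points, UTF-8 and Windows-1252 encode
# byte-for-byte identically to ASCII; they only diverge above 0x7F.
ascii_text = "".join(chr(i) for i in range(128))

assert ascii_text.encode("ascii") == ascii_text.encode("utf-8")
assert ascii_text.encode("ascii") == ascii_text.encode("cp1252")

print("ASCII range is byte-identical in UTF-8 and Windows-1252")
```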

> Moses didn’t bring a particular text-file format down from the mountain.

Nevertheless:

1) This discussion is centered around system configuration files. In designing an operating system, it is pretty simple to ensure that the text editor that ships with it defaults to the same encoding as is used for the system config files. Encoding and line break issues will be a bit more of an issue when you’re trying to develop a cross-platform application, but…

2) Unless you’re working on an IBM mainframe, you’re overwhelmingly likely to be using an encoding for your config files that is based on ASCII.

The other day I had to edit the registry of my work-provided Windows XP machine. I needed to swap the Control and Caps Lock keys (which requires a registry edit in Windows!). After doing the edit, I wanted to add a comment that I edited so-and-so on such-and-such a date. Guess what: you cannot add comments. This is but one example of how binary fails.

If you go to a parsed binary stream (as opposed to some kind of indexed binary database format where the record format is predefined) and decide to incorporate streams of text (comments) inside the stream of binary, then all of a sudden your “faster parse times” will become slower and slower, because even in binary parsing you’d have to cycle through every single character.

Weird. SQL databases support comments to their columns and tables without a problem. It’s not that hard to put a fixed length pointer or reference to the comment instead of the text string next to the data, if you plan for it.

Otherwise, you just said “text is already slow, with or without comments, so it’s better”.
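The fixed-length-pointer idea mentioned above can be sketched concretely. A toy layout using Python’s `struct` module (the record format here is invented for illustration, not any real registry or database format): comments live in a side “heap”, so the hot parse path only ever reads fixed-width fields.

```python
# Toy binary record: a 4-byte value plus a 4-byte offset into a comment
# heap. Parsing records never scans comment text. Invented layout.
import struct

RECORD = struct.Struct("<iI")  # value (int32), comment offset (uint32)

def pack(value, comment, heap):
    offset = len(heap)
    heap.extend(comment.encode("utf-8") + b"\x00")  # NUL-terminated
    return RECORD.pack(value, offset)

heap = bytearray()
rec = pack(512, "edited by uma, swapped ctrl/capslock", heap)

value, offset = RECORD.unpack(rec)                  # fast path: fixed size
comment = heap[offset:heap.index(0, offset)].decode("utf-8")
print(value, "--", comment)
# 512 -- edited by uma, swapped ctrl/capslock
```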

@ Morgan Greywolf
> LXDE is awful. I mean, it’s fast, which is good, but it uses Openbox as its window manager and integration with said wm leaves much to be desired.

That’s one of the reasons I like CrunchBang. The OpenBox implementation is pretty shmick and it uses Thunar which I’ve liked from XFCE. The default theme integrates nicely with GTK apps (which I favor due to historical use of Gnome rather than KDE).

It also ships with sensible default keybindings for shortcuts (meta-w for browser, meta-t for an xterm) and is very kind to low-RAM machines.

The current release seems to play nicely with Debian Squeeze (and backports) which is handy too for apt-goodness.

> Weird. SQL databases support comments to their columns and tables without a problem.

Sure they do. I guess MS did not plan for it or did not think it was important. Even if they did, I could not figure out how to do it inside the specialized registry editor, regedit. Would you not want to support comments in your binary-blob config file? Would you not want your comment to appear in a nice/different color in your specialized binary-blob editor, if you ever have to go back to view that same entry? Text editors have been doing that for quite a while.

> Otherwise, you just said “text is already slow, with or without comments, so it’s better”.

Slow and fast are irrelevant here. Not in this day and age of unlimited computing resources. The advantages, durability, and power of text far outweigh the overhead incurred by parsing.

@uma
“Would you not want to support comments in your binary blob config file?”
You are complaining about a specific fault of the Windows registry format as if it were a general disadvantage of binary formats. I guess that it is, in the sense of “if you didn’t plan them in advance, you can’t have them”. To which I’d say “Bull. Versioned binary formats do exist”. That’s what I’m pointing out.

“Slow and fast are irrelevant here. Not in this day and age of unlimited computing resources. The advantages, durability, and power of text far outweigh the overhead incurred by parsing.”

If they’re irrelevant for text, why mention them as a disadvantage for binary formats?

> “Slow and fast are irrelevant here. Not in this day and age of unlimited computing resources. The advantages, durability, and power of text far outweigh the overhead incurred by parsing.”

> If they’re irrelevant for text, why mention them as a disadvantage for binary formats?

Go back and read what I said. You misunderstood. There are two cases in binary:

case 1: Parsing a binary stream and inferring the data from it
case 2: binary database

Examples of “1” are packets of data that are routed over the Internet and parsed as they arrive. An example of “2” is the Windows registry. A database does not require parsing. The fields (and the types of those fields) are all preset up front.

@jeff so you’d rather incur the overhead of unzipping then parsing vs just binary deserialization into a usable format?

Eh. Arguments that machines are so fast today that this is not a penalty simply make for systems that are slow despite the huge number of cycles we enjoy. My belief is that Apple went this route because OSX is the foundation on which iOS is built. Smaller file reads and faster processing all add up on a handheld device. It also allows desktop OSX to be a little snappier.

Folks that believe that we have unlimited computing resources are likely the same ones bitching about how slow computers are from all the bloat.

I know. You said “If you go to a parsed binary stream (as opposed to some kind of indexed binary database format where the record format is predefined) and decide to incorporate streams of text (comments) inside the stream of binary, then all of a sudden your ‘faster parse times’ will become slower and slower, because even in binary parsing you’d have to cycle through every single character.”

I thought “You want to have your cake and eat it”. You want a fast, stable, streamed binary format with sudden arbitrary blocks of text in it. You might as well ask the TCP authors to consider that, not comment here. Otherwise, why would a binary configuration file have this problem? Wouldn’t having a streamed configuration binary file format fall into the “utterly braindamaged” bin by default?

It’s been mentioned a few times, but it bears repeating: Xmonad. Once you have it set up and working how you want it, there’s no going back. (Admittedly, getting to the ‘just how I like it’ stage can take a while.) If you want to get stuff done, then get a tiling window manager.

awesome wm is good as well, and its config is Lua-based rather than xmonad’s Haskell config.

I’m saying: why not go to the classic plist format, no gzipping or anything? It was good enough for a 25 MHz 68k-based NeXTstation. Parsing text is cheap — in practice no more expensive than parsing a rich, hierarchical binary format. The speed gains of a binary format over text for something like config files are so dwarfed by the readability and ease-of-implementation advantages of text as to be virtually negligible.

XML, yes — as I said, XML is a clusterfuck and it’s understandable why you would want to streamline your format when there are complex parsing, validation, and DOM generation steps involved just from reading a config file. But old-school plists are literally so simple that it lends credence to the hypothesis that Apple switched to binary plists just to be dicks.
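To back up the “old-school plists are simple” point: a toy parser for a flat OpenStep-style `{ key = value; }` subset (Python; `parse_flat_plist` is a throwaway sketch handling only one flat dictionary, nowhere near the full plist grammar):

```python
# Toy parser for a *flat* OpenStep-style plist subset: one dictionary of
# "key = value;" entries, values optionally double-quoted. Sketch only.
import re

ENTRY = re.compile(r'(\w+)\s*=\s*"?([^";]+)"?\s*;')

def parse_flat_plist(text):
    body = text.strip()
    if not (body.startswith("{") and body.endswith("}")):
        raise ValueError("expected a { ... } dictionary")
    return {m.group(1): m.group(2) for m in ENTRY.finditer(body)}

print(parse_flat_plist('{ Shell = "/bin/zsh"; Columns = 80; }'))
# {'Shell': '/bin/zsh', 'Columns': '80'}
```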

@fake account
Thanks for the info…
I am sure I could find that if I looked (I assume by ‘loudly’ you mean ‘publicly’) – only your response lacks two important things as far as answering my question goes:
1- you are not Eric ;)
2- it doesn’t answer “why Ubuntu?” — it answers “what did you used to run before Ubuntu?”

So… First, let me start with a quick history lesson. The genesis
of “BSD for PCs” (before which, unless you had a VAX or one of the
many BSD-derived 680x0 workstations, meant BSD Unix was effectively
out of reach of the common man) was actually not 386BSD or BSD/386,
as many believe, but the UX single-server which ran on top of MK
(Mach 3.0), the combination being referred to as MK+UX or “LITES”.
This work was spearheaded by Johannes Helander, a European University
student at the time and now at Microsoft Research (poor bastard),
and it was essentially the very first incarnation of BSD Unix on
the i386 architecture if you don’t count some early prototypes being
hacked on at the time by BSDi (who, of course, also had a proprietary
business model and were lumped in more with the SVR4 clones than
BSD, at least from an ideological perspective).

The rest of the world, as most of you will recall, was still chasing
the illusory notion of unification at that time by backing AT&T
System V, on which a number of entirely proprietary i386 Unixen
were already based (SCO, Interactive, Unixware, etc). Lots of folks
were obviously highly allergic to the notion of running a proprietary
(and not inexpensive) Unix also shepherded by an entity as utterly
clueless about what to do with it as AT&T, and those of us already
connected to the European computing scene (as I was, living in
Munich), became familiar with Johannes’ work and, how shall I put
it, “bent” our research licensing agreements with AT&T just a bit
by spreading MK+UX a little more widely than it was perhaps permitted
to spread, leading to the formation of a small user’s group who
quickly got X11 running on it (I think this was around 1989-1990)
and had a serviceable workstation running on the PC well before
Linux was even a gleam in Linus Torvalds’ eye. Yay us, right?

Well, like I said, we were still operating under the dubious umbrella
of a Unix research license (even though MK+UX had no discernible
AT&T code in it, Mach displacing a lot of the traditional layers)
and, as such, were essentially forced into exchanging distribution
floppies in dark alleys and otherwise hiding in the shadows. Hiding
in the shadows does not a viable developer community make, and it
was only once the irascible Bill Jolitz and his terrifying wife,
Lynn, suddenly appeared in the pages of Dr Dobb’s Journal with what
purported to be an entirely free and open BSD for i386, namely
386BSD, that we had the courage to stop hiding and see what might
be accomplished in public. This coincided with “the Rise Of The
Internet”, of course, and gave us a platform for reaching a lot
more developers than we might have been able to. The fact that
Bill Jolitz was essentially the spiritual predecessor of Theo de
Raadt, but without the charm and subtlety, became another impediment
to forming a community, so we were essentially on our second strike
even before Linus famously uploaded his “Linux 0.1”, which was
essentially two processes sending messages to one another using a
primitive kernel and runtime.

Today, of course, I think that pretty much everyone in the “Free
Unix” community (and I include Linux in that category, since it’s
more than Unixy enough for everyone but The Open Group) is pretty
much missing their boats and likely have less life ahead of them
than they have behind them. I’m not saying that there won’t always
be a place for a bunch of OS nerds to hang out on github / source
forge / wherever and create Unix clones, I’m just saying that it’s
becoming increasingly pointless to do so for any of the “traditional”
desktop / server markets and, unless they can really come at those
markets in some fundamentally different ways or discover/invent
some entirely new markets, what they’ve done to date is essentially
already “good enough” such that it’s just down to rock polishing
now.

Some of the “fundamentally different ways” in which they might
approach the technology and evolving needs of the market are certainly
worthy of discussion, but I’m not sure if anyone is really having
such discussions right now. How does the rise of virtual machine
technology affect how we define an “OS”, for example? Is it truly
a monolithic beast whose outlines are “the distro”, to use Linux
terminology, or is it a set of lego blocks which simply haven’t
been adequately lego-ified yet? How about the desktop and embedded
worlds, where most of the user-visible value is actually in the
user interface and the applications, and the actual OS, per se, is
an implementation detail of interest to an increasingly small
percentage of folks? Those are the areas in which OSS development
generally falls short, since it takes a charismatic leader with big
balls and a lot of time to devote to really tackling such issues
until the proposed solutions gain enough critical mass and momentum
to become “obvious” to everyone else. Maybe when I retire.. ;-)

Volunteer-driven, commodity OS creation has no future. To be sure,
anyone who gets paid to maintain or grind out subtle variations on
the existing BSD/Linux themes will still continue to do so because,
well, it’s a job. That means the Red Hats and the Oracles and the
Junipers of the world will still have some incentive to drive their
own variants, but the OSS world proper (and by “proper” I mean
“those idiots crazy enough to do this kind of stuff for free and
form active communities around it”) is only going to get bored with
doing it as soon as there are no clear and obvious worlds left to
conquer and it’s become manifestly clear to even the most rabid,
penguin-T-shirt wearing fan that the last few years of their lives
have been devoted to doing maintenance to an increasingly unimpressed
(or worse, 100% corporate) user base. Right now there’s still a
substantial amount of momentum, and the launch vehicle is still
traveling upwards even though its primary stage may have exhausted
its fuel and gone out (it’s hard to say), but I do know that a huge
part of the attraction to doing *BSD and Linux lies in the sense
of shared potential – the notion that you might be able to take
over the world, or at least some notable corner of it, through sheer
dint of innovation and hard work. So, that said, what world are
we talking about?

Embedded? Linux and *BSD are already doing that better than anyone
expected (I never expected to ever be able to ssh into my phone and
do shell programming on it, for example) and I’m not sure what’s
left to do purely at the level of “the OS” there. Maybe a period
of power reform, where everyone starts measuring joules of energy
consumed per unit of work, for arbitrary values of work, and tries
to get those numbers down in a game of power limbo dancing, but
that’s fairly boring (to all but those who get paid to do it) and
probably good for only 3-5 years worth of incentive before it’s
“good enough” and/or everyone gets terminally bored with it. Server?
Totally a solved problem, unless somebody can figure out what the
next step for Virtual Machine technology is (and apropos of nothing,
VMWare has a chance to become the HAL for any and all future hardware
worth mentioning, but I doubt they have the balls to broaden their
strategy enough to pull that off to the point where they’re the
de-facto “Windows” for all Server hardware shipped, so maybe that
could become an OSS problem too). Desktop? I think that’s already
failed and I don’t see anyone in OSS having the vision to bend that
paradigm enough (or, for that matter, enough actual interest in
what “real, commodity desktop users” want) to pull victory out of
the jaws of defeat. Did I miss a category? Does anyone see
motivation coming from some left-field direction I haven’t thought
of? This is definitely one of those categories in which I would
be happy to be proven wrong, but I think the future of OSS lies
more in the next layer up, and I’m following projects like “Bullet
Physics” with definite interest.

All of these Unity and Gnome 3 complaints have really caused me to start looking differently at the Linux community. I thought the community was full of power users, and most fast-working power users that I’ve watched don’t touch their mouse. In fact, I knew one dev who, if he ever had to come and help you, would unplug your mouse from your computer. But all these complaints I’m seeing are about how the task bar works when you click something.

Now, coming from a culture of folks that love the CLI, I’m really dumbfounded that all of these folks all of a sudden need to click. I personally was learning to get away from using the task bar as a switcher because it’s slower. When I watch people work with blazing speed, they alt-tab. And if anything, Unity and Gnome 3 have led me to alt-tab quite a bit more. The “taskbar” has become a quick launcher, or an aid if you just happen to be able to go to a particular app quickly. These power users also pull up a terminal and sudo apt-get install before they start clicking around the GUI. Many loved Gnome Do to start an app by typing, or at least used Alt-F2. But now that this type of feature is baked into the OS, all of a sudden power users use menus. And honestly, I find typing to find the app faster than mousing through menus.

So honestly, I just don’t get it. Gnome 3 and Unity seem to be right up the power user’s alley, from my observation of what I’d call a power user. One of my only two problems with Unity was fixed in this release: the poor alt-tab implementation. So it seems to me that what’s really happening is that the moves by Gnome and Canonical are showing that the Linux community may not be full of the efficient power users that one might think. The “power users” are still slowly clicking around their desktops just like the rest of us. In that case, I can understand all of the outrage. But it’s not necessarily because the changes are bad; it’s because people are set in ways that could actually be inefficient.

The situation discussed by fake account is even more general than that. Even the for-profit software producers have to face it. Microsoft has to compete with itself constantly, desperately trying to get users to ‘upgrade’ from stuff that works well enough already. The hardware is already fast enough to do the vast majority of things that most people want to do. Basic computing is a cheap commodity now.

The future lies with applications, not system software. Computer geeks tend to gravitate to systems work because they’d rather work with machines than people, and it shows off their abilities to master intricate details, but that stuff is ‘going down’. You have to look outward. Work on things like gpsd (it’s a fine example of a useful contribution to the world).

I have been a happy Xfce user for some years now. Glad to hear you are considering it. I am especially happy that my old laptop’s fan stays absolutely quiet during regular use. Plus, it starts up when I ask it to, not a couple of days after I ask it to ;-)

Actually, fake account, the first Unix I ran on a 386 PC was a System V port distributed by Everex. Which would have been around the 1988 timeframe plus or minus. My memory is no more precise than that.

esr wrote:
> All you accomplish by trying to drag the conversation onto that issue is to look stupid and obstreperous

Why the personal attacks, esr? Are you terrified to have a commenter on your blog who is two percent more right-wing than you are? Are you terrified that the wrath of the establishment might indiscriminately land on both of us?

@wlad It was largely a true statement and it was largely a given to most folks that it would be Windows.

OSX might have gained a little share as a by-product of the Vista debacle, but not nearly as much as the iPhone/iPad halo is currently providing it. And Windows 7 is a nice OS, despite what detractors say.

Linux? Not so much. As much crap as Eric got for the Linspire fiasco, it was the right move if there was ever to be a year of the Linux desktop. But one smallish company wasn’t going to get that job done, so he burned bridges, said really unpopular things AND was unsuccessful. Ooops.

Which is why Eric is so fixated on Android dominance, not just victory. It’s vindication that open source + proprietary is the decisive combo. it’s not just enough for Android to win, it needs to crush all its opponents, see them driven before it and hear the lamentations of their users.

> Which is why Eric is so fixated on Android dominance, not just victory. It’s vindication that open source + proprietary is the decisive combo. it’s not just enough for Android to win, it needs to crush all its opponents, see them driven before it and hear the lamentations of their users.

Wait, did you really just say that Eric is fixated on Android ‘winning’ because he’s backed losers to this point, or because his fundamental thesis of open source “winning because it’s better” (as stated in CATB), is still unproven nearly 15 years later?

If you have spent your life creating and advocating OSS, this is a very fascinating development indeed. One that is much more important for everyone than getting some company another billion dollars for their bank vault.

And, yes, Eric made mistakes in his life. People who work make mistakes, people who work a lot make a lot of mistakes. I know people who hardly make a mistake.

It’s better than that. The desktop is almost the only arena where open source hasn’t already won. But it was already clear that everywhere-except-desktop win was going to happen by about 2003, to anyone watching the trend curves.

@Life The CatB thesis is already debunked with OSX, iOS, Windows…and Android. Android is cathedral developed as can be seen with the code release cycle. CatB isn’t open vs closed sales models but centralized vs decentralized development models.

Android source code is kept to the inner circle until released. It’s not developed in the view of the public. Also, open source still hasn’t beaten closed source in many arenas.

@winter The more balanced view is that open source is a powerful tool and will have large shares in some areas and small in others. There is no need for OSX to “lose” for Android to “win”.

The need for Android to dominate drives the faulty analysis of iOS’s future US market share and its susceptibility to “catastrophic disruption from below”.

Frankly, as long as Duarte stays, I think Android will evolve into something really nice. ICS is a huge improvement design-wise from the mess it was before. Whether it looks better underneath the covers remains to be seen… I’ll have to restart my Android development and see. I need to get a decent ICS-based Android tablet, but in a few months a few things might line up and make an app I’ve wanted to build possible.

Open source has won the “behind the scenes” war. But the cathedral looks to have won the user interface war. Maybe this will change in the future, definitely the really innovative stuff in UI is open source, but right now the UI is dominated by the Cathedral.

I don’t hate Unity as much, but I am still not happy… I just don’t get what they were trying to do. It does take longer to multitask (more clicks), and for some reason the stupid visual effects (which I cannot get rid of) make my brand-new quad-core 4 GB RAM machine sluggish…

I put up with it in 11.04 and had hopes for 11.10… I think they have improved Unity but it is still not ready and the new bugs I experienced are more annoying than the ones they fixed (again the sluggishness)…

One thing though… I have a very old netbook for which I tried Lubuntu… it is rudimentary in many “advanced” configs, but it is all text-configurable and so far EVERYTHING WORKS for me…

I know Lubuntu is designed for old, low-end PCs, but I am trying it on my big desktop and loving every minute of it… if you are distro hopping you should give it a try

Yes, somehow the Android market reminds me of the days when I used Windows and found only low quality shareware and ad-based “freeware” programs. You have to really go through too much of this before discovering some decent apps without advertising. This is not a complaint though. I fully recognize that developers want to be paid for their efforts, but quality of software is a concern with Android after my experience with it.

Add to that other issues like being too heavyweight and slightly unstable, no or poor support for local languages in the system, and not all apps supporting screen orientations/autorotation properly, and I am really not enamoured of Android.

Not that I am a fan of Apple either. Generally “smart” devices annoy me. I am happy with so-called dumb phones doing what they do best: making calls and sending messages.

Routers, security cameras, and digital picture frames are much more likely to run something like VxWorks.

As to the rest of it, open source folks know how to scratch developers’ itches. What they’re crap at is scratching anybody else’s. Until they learn how, this is Apple’s industry and everybody else just plays in it.

I’m currently running the 10.04 LTS with Gnome 2. I did, however, this morning install the KDE and XFCE environments on my 10.04 laptop and desktop. I’m trying to decide between Xubuntu and Kubuntu now, since vanilla Ubuntu isn’t an option at 12.04 LTS with Unity, which I named Pungent Polecat myself. ;)

In all seriousness, I’m really going to miss Gnome 2. I left KDE for it, and now KDE is looking better than Gnome 3 and Unity.

The nail-biting transitions to Unity and Gnome 3 are behind us, so this cycle is an opportunity to put perfection front and center. We have a gorgeous typeface that was designed for readability, which is now available in Light and Medium as well as Regular and Bold, and has a Mono variant as well. That’s an opportunity to work through the whole desktop interface and make sure we’re using exactly the right weight in each place, bringing the work we’ve been doing for several cycles fully into focus.

We also need to do justice to the fact that 12.04 LTS will be the preferred desktop for many of the world’s biggest Linux desktop deployments, in some cases exceeding half a million desktops in a single institution. So 12.04 is also an opportunity to ensure that our desktop is manageable at scale, that it can be locked down in the ways institutions need, and that it can be upgraded from 10.04 LTS smoothly as promised. Support for multiple monitors will improve, since that’s a common workplace requirement.

During UDS we’ll build out the list of areas for refinement, polish and ‘precisioneering’, but the theme for all of this work is one of continuous improvement; no new major infrastructure, no work on pieces which are not design-complete at the conclusion of the summit.

While there are some remaining areas where we’d like to tweak the user experience, they will probably be put on hold so we can focus on polish, performance and predictability. I’d like to improve the user experience around Workspaces for power users, and we’ll publish our design work for that, but I think it would be wisest for us to defer it unless we get an early and effective contribution of that code.

What Google is doing with Android goes against the principle of open source. The principle is that by releasing code to the public you will get contributions from others (an extra set of eyes) that will make the project better overall.

But to accept this principle you have to also accept that some people will take your code and use it in ways that you don’t approve of. The hope/belief is that the good will float to the top. If you don’t accept this premise then you don’t really have an open-source project.

I can’t figure out why Android open sources its code, ever. They don’t seem to care about contributions from the community, and they obviously don’t want people using their code in ways they disapprove of. It seems that the major reason for open sourcing their code must be something else. What I keep coming back to is that it somehow protects them (in most cases) from patent violations.

What are Andy Rubin’s open-source credentials? What open-source projects has he been involved in during his long and successful career?

@Christopher Smith, excellent point that text files should be compiled in order to enforce some universal typing. Thus configuration files should be computer-language files. They can then be JIT-compiled, or precompiled. Agreed, we are in the process of moving past the duct-tape form of extension.

@Esr, so which language best handles extension and is, or can be, ubiquitous (and open source)?

> Routers, security cameras, and digital picture frames are much more likely to run something like VxWorks.

Or eCos, but “yes”.

The open source crowd won the battle but lost the war with the WRT54G, creating a huge support burden when users bricked and returned the units, and enriching the FSF and SFLC via a huge lawsuit settlement. Afterwards, Cisco released a version with VxWorks and 2MB of flash (insufficient to load embedded Linux), and then rereleased the original WRT54G as the WRT54GL, with a 20% price increase.

@Marco> We also need to do justice to the fact that 12.04 LTS will be the preferred desktop for many of the world’s biggest Linux desktop deployments, in some cases exceeding half a million desktops in a single institution.

Meanwhile Apple sells 4.9 million Macs per quarter.

> So 12.04 is also an opportunity to ensure that our desktop is manageable at scale, that it can be locked down.

Oh, that sounds *lovely*. The iPhone/iPad is closed, and you say that’s bad, but when Ubuntu does the same thing, it’s good.

I tend to agree. The question is why Eric is cheerleading a cathedral model effort.

We cannot all be androids (no pun intended) like Stallman, for whom it is always 100% communitarian, share-alike-licensed open source or nothing. For Raymond, an open source bazaar beats an open source cathedral which in turn beats a proprietary and closed cathedral. For the pragmatic case of building an app and developer ecosystem of critical mass, an open source cathedral (Android) with massive vendor backing and commitments from major handset vendors beats an open-source bazaar (MeeGo) doomed to obscurity with confused, schizophrenic vendor backing. This despite the fact that he has said many times that MeeGo wins on technical merits, due in part to leveraging the standard, well-established GNU/Linux userland.

> Oh, that sounds *lovely*. The iPhone/iPad is closed, and you say that’s bad, but when Ubuntu does the same thing, it’s good.

Free software proponents support the notion that the owner of a system should do what they want with it. If the owner of a system is a corporation, then that means they too should be able to do what they want with it — including control the access privileges of its employees.

“Locking down” an Ubuntu desktop means the system’s owners control what may run on it. The iPad and iPhone are locked down in a way such that Apple controls what may run on them. Which raises the difficult question of whether the buyer of an iPhone or iPad really owns the device.

“I think people had very clear and concrete visions about Android and its strategy, but from a holistic design perspective — not just the look and feel — what does it mean in your life? Why are we doing the things that we’re trying to do. That was the question I wanted to ask.”

This question sparked deep user studies at Google on mobile phone use, what Matias described as “Serious baseline ethnographic research which hadn’t happened before.” He tells me that the company spent a great deal of time and effort watching how and why regular people used their smartphones. Not just Android phones, but all smartphones. The company even had employees “shadow” users, visiting them at their homes and workplaces to watch how they interacted with their devices. Matias wouldn’t share numbers, but intimated that the study was a significant undertaking.

“With Android, people were not responding emotionally, they weren’t forming emotional relationships with the product. They needed it, but they didn’t necessarily love it.”

@fake I think he did a good job in starting to clean up the Android UX mess. While some folks are criticizing ICS’s Roboto as a frankenfont, his other changes are getting praised by the design community.

In a rev or two, assuming he has free rein, Android will have a UX on par with iOS 5. There’s still a bit of work to do.

It should be interesting to see if Siri is skating to where the puck will be. If it is, Apple has another head start on UX again.

I find some of the various Ubuntu derivatives to be more valuable to me than the stock releases. My current desktop distro is Pinguy OS; the latest version is based on Ubuntu 11.04 with a customized GNOME interface that’s pretty useful. My current “pocket” distro (the bootable disc that I carry around with me) is WattOS because it’s fully-featured on a CD (Java, Flash, and codecs) and comes with a usefully-configured LXDE interface.

For the same reasons, I upgraded my Ubuntu 8.10 to Sabayon 6 (now 7). I’m using LXDE, but XFCE was a close 2nd for me. The beauty of Sabayon is that it’s a rolling distro, and in theory (I haven’t been with it long enough to confirm) I won’t have to do any more installs on this machine.

Heh. I’ve been saying Ubuntu has jumped the shark for over a year now. I still use 10.04LTS, chiefly through inertia. I’ve been looking at Debian unstable as a possible next step, and I should probably try Xfce at some point. However, my long-term goal is to create a usable, useful Linux system *without X*. It just feels silly that I’m running X and gdm and gnome and a wm and a decorator, just to get a desktop that consists mainly of terminals. Unfortunately, gnu screen is still unusable, I haven’t found a text-mode browser or MUA I can really get on with, and I occasionally want to watch videos or play Wesnoth.

When I look at Metro, I see gaudy colors, boxy designs, applications that can either run as a small tile or as full screen with no way to re-size or move windows. Where have I seen this before… Wait, I know! Windows 1.0!

No, I’m not kidding. Let’s take a look at Windows 1.0:

…..

Twenty-five plus years of user-interface development and this, this, is what we get!? Scary isn’t it?

If you want an interesting take on a universal interface, take a look at Ubuntu’s Unity desktop. Metro? It’s klutzy and even people who love Windows admit that “the jury’s still out on the touch/no touch question.”

@fake account
> I can’t figure out why Android open sources its code, ever. They don’t seem to care about contributions from
> the community, and they obviously don’t want people using their code in ways they disapprove of.

Hm, interesting. I seem to be very happily running CyanogenMod on my phone, a distribution that Google has accepted parts of back into their code base, as I understand. And whose team was a pretty big presence at the Android BBQ event here in Austin a few weeks ago. And Cyanogen himself has been hired to do Android work at Samsung, one of the major partners.

And, gosh, I’m running it on my Nook Color, too! With the full Google software suite, which it did not have beforehand. Sounds like open source is a positive for Google, to me. And the ICS source has been confirmed as coming very soon. I’m excited.

@Nigel
> given I can do whatever I want to my iPhone I sure do own it.

And, you do understand that Apple actually fought to prevent that, until the courts told them no?

@jsk Apple never did anything about jailbreaking even when it was “illegal”. All they did was object when the EFF asked for an official exemption for jailbreaking under DMCA. As is, there’s no requirement that jailbreaking be allowed. Just that hacking the security controls isn’t a DMCA violation.

It would be easy for Apple to make jailbreaking much harder. As is, they only close security holes that might affect normal users. Likewise for jailbreaking Apple TV and for making Hackintoshes. As long as you aren’t trying to sell the OS they don’t bother the community. They don’t even bother Cydia much, if at all.

@fake account
> Who else remembers that Honeycomb source code was promised once upon a time?

It’s important to understand the actual words of what was said there. To wit: “As I write this the Android team is still hard at work to bring all the new Honeycomb features to phones. As soon as this work is completed, we’ll publish the code.”

Bringing ‘all the new Honeycomb features to phones’ is what Ice Cream Sandwich is. So long as they do release the source to ICS in a timely manner, then the ‘promise’ from April still holds.

I use Xfce everyday and it is fine and highly configurable.
Thanks a lot for adding to what I noticed with Unity of Ubuntu and Gnome fallback.
And indeed it is not serious and it is less than it ought to be for day to day work.
I stay with Mint.

As Nick Burns, Your Company’s Computer Guy might say: Ell oh ell, semicolon right parenthesis! That UI lag and jitter that I and others discussed — and that Eric saw as only imaginary — is one of the key reasons why Android feels wrong according to this ZDNet guy (hardly an Apple fanboi).

Now say it with me: DIRECT MANIPULATION. UI gestures aren’t metaphors. They don’t merely symbolize the moving and manipulation of objects. The things on the display are objects, and they should behave as such. Think of “haptics” from Rainbows End, in which enormous mechanical and computational complexity is devoted to making the user believe he/she is thumbing books or swinging swords or petting exotic creatures in a realm purely of imagination. That’s the goal. Anything less breaks the illusion and frustrates the user. Apple is literally leveraging millions of years of human evolution in order to bring computing within reach — and to the benefit — of ordinary people. That’s why since the very first release of iOS in 2007, 3D compositing has been used to draw UI elements to afford the user a seamless experience. What’s Android doing? They fall into the classical Unix-nerd practice of “well if the end goal is the same the specifics don’t matter.” No. Specifics are EVERYTHING.

Android is nothing more than a coalition of crap phone manufacturers who knew they’d have to band together in order to survive in the future that Apple had created. Matter of fact, it was precisely Android’s dedication to crap phones that motivated the decision to not use 3D hardware compositing and live with shitty, laggy UI response and screen scrolling! Lawl, indeed!

@Jeff
> Android is nothing more than a coalition of crap phone manufacturers who knew they’d have to band together
> in order to survive in the future that Apple had created. Matter of fact, it was precisely Android’s dedication to
> crap phones that motivated the decision to not use 3D hardware compositing and live with shitty, laggy UI response
> and screen scrolling! Lawl, indeed!

And so thank goodness that ICS will require a phone with hardware acceleration for 2D. The overall UI of Android has been enough of a win for me to overlook the graphical stutters and low framerate, but I don’t quite understand anyone who says they can’t see it. I don’t expect ICS to be the Second Coming, but if they just fix a few of the minor problems then a whole class of complaints can potentially vanish.

Plus it ships with a very competitive feature set. One hopes that this time we won’t end up in the mire of phones that were stuck on 2.2 long after 2.3 shipped; that was an unfortunate and very stupid blow to too many phones, and definitely a case of ‘what were the manufacturers THINKING?’

> What’s Android doing? They fall into the classical Unix-nerd practice of “well if the end goal is the same the specifics don’t matter.” No. Specifics are EVERYTHING.

It’s X11’s “mechanism, not policy” philosophy, all over again.

The X mantra of “mechanism, not policy” resulted in a prolific, but anarchic collection of APIs, window managers, and look and feel decisions. Like Unix (or even Linux), X11 has evolved into a “sea” of window systems, each with its own vagaries.

Apple’s frameworks may be appallingly complex, but they are internally consistent, logical, and produce consistent results on both the desktop and iOS. Where Microsoft Windows is simple (nay, simplistic), consistent, and ugly, X11 is powerful, inconsistent, and ugly.

This inconsistency, appearing at all levels of X11, means that users cannot build reliable habits. Apple’s developer community has spent a great deal of time and effort on making sure that OSX apps act in a consistent manner; the X11 developer community has not, and it shows.

So? Eric himself has stated that not ALL projects are best done fully bazaar-style open source, on a case-by-case basis. Just that it can provide some very nice economic advantages at scale, with a slick little bonus to the end-users as additional payoff. Google has a reason for developing the way they do, and if it works for them, great! And look, I’ve still got an open source phone. Cool beans.

> > And so thank goodness that ICS will require a phone with hardware acceleration for 2D.
> This is so very “fixed in the next release”.

It’s also ‘continual improvement.’ For me, and for everyone I know with an Android phone (who, incidentally, are not all hacker types, for you repetitive pedants out there), the UI framerate/jitter hasn’t been enough of a deal breaker. There was a reason for the previous behavior, and I’m glad for them finally addressing it properly now.

Beginning with Android 4.0, hardware acceleration for all windows is enabled by default if your application has set either targetSdkVersion or minSdkVersion to “14” or higher. Hardware acceleration generally results in smoother animations, smoother scrolling, and overall better performance and response to user interaction.

If necessary, you can manually disable hardware acceleration with the hardwareAccelerated attribute for individual <activity> elements or the <application> element. You can alternatively disable hardware acceleration for individual views by calling setLayerType(View.LAYER_TYPE_SOFTWARE, null).

For more information about hardware acceleration, including a list of unsupported drawing operations, see the Hardware Acceleration document.
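To make the quoted mechanism concrete, here is a minimal manifest sketch. The package and activity names are invented for illustration; the android:hardwareAccelerated attribute and the uses-sdk versions are the real knobs the Android 4.0 documentation describes.

```xml
<!-- Hypothetical AndroidManifest.xml fragment. With targetSdkVersion 14+,
     hardware acceleration is on by default; it can still be overridden
     per <application> or per <activity>. -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
          package="com.example.demo">
    <uses-sdk android:minSdkVersion="14" android:targetSdkVersion="14" />
    <application android:hardwareAccelerated="true">
        <!-- Opt one activity back out, e.g. if it relies on Canvas
             operations the hardware pipeline doesn't support. -->
        <activity android:name=".LegacyCanvasActivity"
                  android:hardwareAccelerated="false" />
    </application>
</manifest>
```

For a single view rather than a whole activity, the equivalent fallback is the call the quote mentions: view.setLayerType(View.LAYER_TYPE_SOFTWARE, null).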

Starting with Gingerbread, Google enabled the first kind. Now the OS uses the GPU’s ability to do things like transitions and probably some font anti-aliasing. This is about what Windows XP had.

Some of the programs (notably Opera Mobile and Samsung’s Touchwiz browser) enable the second kind. What they basically do is act like video games on the phone- they render their interface with the GPU. This is not blessed by the OS other than it gets out of the way, and you don’t see this sort of thing usually on desktop OSes because it requires full screen rendering (not a big deal on mobile devices) and it eats more memory than proper composite. The only desktop OS I have ever seen with something like this is the mid-00’s Linux desktop with Xglx.

The third kind is an outright composite-based OS where the OS takes over and renders everything offscreen on the GPU. This is what iOS does today, and I think it is also what WM7 does. Windows Vista brought this to Windows desktops; OS X had it from day one. This is why those OSes seem so “smooth,” as this is considered the modern way to do things. I know from messing with it that Honeycomb is not composite-based, or its task-switcher would show live previews of windows instead of screenshots, à la OSX’s Mission Control.

Nowhere have I seen anything that implies that Google is moving to a composite-based OS. Nor do I blame them for not targeting same. It is a terrible unavoidable transition going to a composite OS when it didn’t start that way.

Only Apple has a composite-based OS on their phones because millions of early OS X users (like me) suffered through countless composite bugs all the way through OS X’s first four versions. Apple took that knowledge and applied it to the phones, which is only possible because they support such a limited phone hardware platform and because Apple has the world’s only decent software compositor (again thanks to guinea pigs like me).

Google lacks this advantage. In fact, Google finds itself in the same shoes as Microsoft in the early 2000s: its platform is fragmented with trillions of different hardware combinations, and most of the available GPUs can’t handle full composite. You can’t just force a composite-based OS in this situation, because then you end up with Windows Vista.

Despite all the hearsay, the real problem with Vista is that it forced a composite interface before the applications and the hardware were ready. Applications not made for composite had bugs aplenty, and only the highest-end hardware at release could actually handle the composite interface. That is probably why MS demands such a strict, higher-end baseline for WM7: they learned their lesson. Google also probably learned from Vista that if Android is EVER going to be composite-based, it won’t be till years from now, when 90% of hardware sold can handle it.

Google’s attitude is, “Why do we need a composite interface for Android? Doing most GUI calculations on the CPU is compatible with every Android phone out there, and next year when quad-core phones hit there will be enough extra CPU power that brute force will fix Android’s smoothness issue. All a move to composite would do is make millions of current Android devices (that have weak GPUs) obsolete, it would royally screw up the app market until developers could redo their programs for composite, and at least one version of Android would be trash as they went through the composite growing pains that EVERY composite OS (Windows, OSX, Linux, etc.) has dealt with.

Google lacks the advantage that Microsoft and Apple possess, having worked out those bugs on their primary OSes, so why even bother?”

The problem is that at virtually every phase of the personal computer industry, Apple and only Apple did the hard work of figuring out what the consumer wanted and what it would take, engineering-wise, to get there, such that it was virtually impossible to deliver a consumer-appealing product without appropriating in some form or fashion an Apple technology.

Hence, the current patent lawsuits. Apple is well within their rights to sue. And that’s going to be a problem for the Linux desktop in the future because Apple will literally hold the IP rights to the Right Thing. Need proof? Fonts were shite in Linux for so long because non-shite font rendering was patented Apple IP.

In conclusion, you want a good, usable Unix workstation where everything works as it should? Cowboy up and buy a Mac.

@Jeff
> In conclusion, you want a good, usable Unix workstation where everything works as it should? Cowboy up and buy a Mac.

Unless the Mac doesn’t work the way you want it to. Following your logic backwards, it means that if you want something good but not the way Apple does it, then TOO BAD because Apple has a patent on good. Pretty frustrating.

I’m curious. There is a lot of vitriol in your posts, in your word choices, your phrasing. What is your actual stance? That everyone should own a Mac? That Linux/Android/Etc is awful and everyone who likes it is an idiot? That’s the vibe I get from you. Not sure what you’re actually going for. I can understand attacking esr for his opinions; he invites it. But you jump on everyone. Just trolling, or what?

> crap phones that motivated the decision to not use 3D hardware compositing and live with shitty, laggy UI response

80/20 rule (Pareto principle) wins.

Anything necessary for the 80% eventually gets incorporated, but later, when it is cheap (economy of scale, mature technology) to do. Billions of people are buying dumb phones and now upgrading to Android, with no iPhones even for sale in the thousands of dumb-phone retail outlets spread over the developing world. Dozens of iPhone stores in China are insignificant.

> Thanks for the clarification without actually answering the question.

To be clear, up until about 2001 I was responsible for one of the largest deployments of Linux (certainly one of the most widespread) in the world. I’ve written articles (cover articles, even) for Linux Journal.

I’m probably in the top 10 for FreeBSD shipments (on hardware)/month these days.

I mostly run on Macs these days, though I do run FreeBSD in a vm for development. I no longer have any linux boxes in co-location.

No, I don’t think that everyone needs to own a Mac. I do think they’re superior to Windows or Linux as a desktop for many, if not most people.

I own three Android phones: a G1 and two Nexus Ones (one for AT&T, the other for T-Mobile). Both Nexus phones were given to me by Google.

Fake Account is actually fairly typical for the average hacker, I’d say. Most hackers these days use Macs as personal boxes, particularly laptops, because the hardware from anyone else is shit and the software… well, in the time it’d take you to strip off Windows, install some flavor of Linux, and then iron out all the remaining annoying little issues and bugs, you’d be off to the races with a Mac, and even if you did all that you would still not have as nice an OS to use. Plus you can always run Linux in a VM.

And Macs work the way you want, for almost all values of “you”. Seriously, try one. Odds are it is easier, and makes you more productive, than whatever you think is your ideal WM setup. (Remember that your brain can trick you! Usability research has found that mousing is always faster than keyboarding, even when you think the keyboard is faster. Guess who discovered that.)

One more thing: It is enormously difficult to get even a Linux box to work the way you want, even if you like tiling window managers that are activated with meta-key chords. The reason is simply that hardware support on Linux is spotty, and getting your 3D video card, sound card, wireless network, Bluetooth, etc. going can be a phenomenal pain in the ass. On a Mac, it all Just Works.

> And Macs work the way you want, for almost all values of “you”. Seriously, try one. Odds are it is easier, and
> makes you more productive, than whatever you think is your ideal WM setup.

I use MacOS. On my main workstation, when I’m doing graphics or video work. It does not make me more productive; a lot of the time it does a good job of annoying me. I get bad repetitive-strain pain when using a mouse, and it seems worse in OSX because of how the cursor acceleration works. And, for whatever reason, I am awful at staying organized with a graphical file manager. I like OSX, though, and never said I didn’t. But not for getting Real Work (read: programming) done. I know it’s popular in the pro hacker crowd; I follow Plan 9 development, and a lot of those guys are Mac users.

> One more thing: It is enormously difficult to get even a Linux box to work the way you want, even if you
> like tiling window managers that are activated with meta-key chords. The reason is simply because
> hardware support on Linux is spotty, and getting your 3D video card, sound card, wireless network, Bluetooth,
> etc. going can be a phenomenal pain in the ass. On a Mac, it all Just Works.

It can be. It also gets better every year. Nowadays it takes me all of an hour (including install time) to go from bare hardware to my complete setup, in Slackware. Most of that is improvements in the back-end, with driver support and whatnot. And I haven’t had to touch xorg.conf in years. Meanwhile, the last time I installed MacOS (on a non-Apple x86 about two years ago, I admit), I spent two weeks trying kext combinations and it still doesn’t work quite right. I’m not yet willing to shell out the $$ for a full-on Mac just for the few things I like using it for.

Why is it that, when various Linux components actually show improvement, they are derided by detractors as ‘well you shouldn’t have needed to improve that in the first place?’ It’s blind goalpost moving and pretty frustrating in discussions.

Here, let me try:
Statement: iOS now has a thriving App Store that people can submit things to!
Response: Whatever, they shouldn’t have had to ADD that availability in after the fact.

“Jobs at first quashed the discussion, partly because he felt his team did not have the bandwidth to figure out all the complexities that would be involved in policing third-party app developers.”

So they didn’t have to; they changed their minds once they found a way to police the app store. What has changed is Apple’s treatment of app store developers, but that doesn’t really fit in with your little missive, now does it?

Even Eric agrees with “THERE SHOULDN’T BE ANY GODDAMN HIGH PLACES!!!” He’s expended major efforts to make gpsd self-configuring.

We understand it takes time and effort to drive a software architecture without sharp edges. What we don’t understand is why the linux crowd is so damned slow at it, but then we remember that most linux developers still live with their mom, and fap a bit too much for their own good.

@esr
> fake account is a haterboy of the most unusual and interesting type, the Embittered Old Fart.

Sounds about right. I hate to killfile, as there does seem to be some interesting content. But, just not worth my own eyeball time.

Incidentally, I’m planning to clean up my killfile script, add a couple of minor features over the weekend, and make it available if anyone is interested. It’s a Chrome extension, and it’s only meant to work here. Dunno if it will work on other WordPress installs.

“OEMs have the ability to customize their firmware to meet the needs of their customers by customizing the level of certificate and policy management on their platform.”

But just wait until Microsoft starts paying OEMs (as part of its branding program) to only allow Windows 8 to boot on your new machine. That’s right, no linux, no freebsd, and not even so much as an upgrade to a future version of Windows. Just buy a new machine, dear.

> fake account is a haterboy of the most unusual and interesting type, the Embittered Old Fart.

I prefer the “drooling idiot” interpretation. Just look at this:

> But just wait until Microsoft starts paying OEMs (as part of its branding program) to only allow Windows 8 to boot on your new machine. That’s right, no linux, no freebsd, and not even so much as an upgrade to a future version of Windows. Just buy a new machine, dear.

On what planet is Microsoft going to exclude themselves from the possibility of additional revenue from upgrades? Does he actually think people are completely innumerate and can’t count the difference between the cost of an OS upgrade and the cost of a new PC?

That may actually happen with Windows 8 tablets in particular. Wouldn’t surprise me a bit.

With peecees it would cause more of a backlash. But we are living in the post-peecee era. Going forward, only office drones, hackers, and gamers are likely to own one. (And gamers will suck hind tit compared to consoles.)

> Usability research has found that mousing is always faster than keyboarding, even when you think the keyboard is faster.

Um, yeah. That’s why I always type with my mouse.

Seriously, it’s not “always”. Even in the article that tmoney cited, there is this:

Not that any of the above True Facts will stop the religious wars. And, in fact, I find myself on the opposite side in at least one instance, namely editing. By using Command X, C, and V, the user can select with one hand and act with the other. Two-handed input. Two-handed input can result in solid productivity gains (Buxton 1986).

and this:

I don’t think most folks can touch-type 75-100 command-keys per minute, particularly with the weird layout of the keyboard we’ve adopted, which often requires the user to curl the thumb underneath the other keys.

In any case, I would love to see the details of this UI research. Was the UI research done with that keyboard with the weird thumb curl requirement? Did Apple ask people to do certain things? Or did Apple ask people to go about their day, and they try to pigeonhole the things the people did into developers vs. users, for example, and then say that mousing developers were faster than keyboarding ones? Did they measure how tired people were at the end of the day? Did they take into account that different people might have different work modalities, just as they have different learning modalities? Did they look at people working on PCs vs. Macs? If they didn’t specify the tasks, how did they quantify the amount of work done? If they did specify the tasks, what kind of hidden, inherent bias was in the specification?

> On what planet is Microsoft going to exclude themselves from the possibility of additional revenue from upgrades? Does he actually think people are completely innumerate and can’t count the difference between the cost of an OS upgrade and the cost of a new PC?

I’ve seen invertebrates display better reasoning.

Most, practically all, people I know never upgrade Windows. They use the OS their PC came with. They get a new version of Windows whenever they buy a new PC (or laptop, whatever).
The difference between the cost of an OS upgrade and the cost of a new PC is irrelevant: they’re not buying an OS or an OS upgrade, they’re buying a PC.

I’m sure Microsoft is aware of this, so having no option to disable ‘secure boot’ on most PCs wouldn’t harm them, and would hamper competition.
Sounds like a plan – if Microsoft were the kind of company that would use that sort of tactic.

>Most, practically all, people I know never upgrade Windows. They use the OS their PC came with.
>They get a new version of Windows whenever they buy a new PC (or laptop, whatever).

This has been true for most computer users. Heck, other than Linux users, Mac users are probably the next most likely to upgrade their OS: they did it all through the 90’s when updates were $99, and they did it from 10.0 to 10.5 when they were all $129. Yet Apple now sells OS upgrades at $29. If Apple doesn’t see money in upgrades, it’s doubtful Microsoft will either. I think the days of paying more than a token amount for major OS updates are behind us.

I laughed out loud at that, and at the guy who still uses 9.04 as well. I was using 9.04 for a very long time myself but finally decided to get with the times. Tried Unity; I am using Kubuntu now. Seems that many people here like XFCE so I might try that. Unity was fucking horrible. I could not get 11.10 to go back to gnome2 (not even the version esr was talking about). I felt like an idiot upgrading from 11.04, because at least I had gotten gnome2 working on that one.

Switching from GNOME to KDE or XFCE sends a message to GNOME, and maybe some of the developers will take Torvalds’ suggestion and fork GNOME.

> If Apple doesn’t see money in upgrades, it’s doubtful Microsoft will either. I think the days of paying more than a token amount for major OS updates are behind us.

There are accounting rules that force these companies to charge something.

There are also strategic reasons to reduce the barrier to upgrades.

Apple’s iCloud service requires Lion on the desktop, and iOS 5 on the iPhone / iPad / iPod Touch / Apple TV.
Getting the installed base to iOS 5 is strategic, as it increases the friction of moving away from iOS to Android. (Apple doesn’t want to see 100M iPhones that are ready to upgrade move to Android.)

So the Lion price is as low as possible. A few years back, Apple changed the way it accounts for revenue on non-iPhone devices so that it could give free iOS updates.

If Apple can get most of its installed base on iCloud, then it only needs to worry about fighting Android for new sales. The $0 iPhone 3GS and $99 iPhone 4 (8GB, not the 4S) are part of the war plan for new customer acquisition. Again, once acquired, and on iCloud, the customer is very unlikely to leave, due to increased switching costs.

Simultaneously Microsoft has increased licensing costs for Android. Few seem to remember that one of the first things Jobs did when he returned to Apple was to cross license patents with Microsoft, and take a $100M investment from them. Apple and Microsoft are unlikely to sue each other. Of the major Android OEMs, only Motorola isn’t paying Microsoft to use its patents.

Motorola, of course, is now in the process of being acquired by Google.

Meanwhile, Jobs’ good buddy Larry Ellison acquired Sun, and turned around to sue Google over Java. I understand that Eric and other Android supporters dismiss the lawsuit as baseless, but this conclusion is not supported by subsequent events. Should Oracle prevail, Android will get very expensive until Google can either change Dalvik to not infringe, or eliminate Java from Android.

Meanwhile, Apple sues Samsung to keep them from building a phone that looks like an Apple phone.

The net-net could be that Google ends up building its own phones via MMI, as Motorola’s patents probably match Microsoft’s in throw weight. Add in a painful transition to eliminate Java, and you’ve crippled Android.

@tmoney Apple sells hardware. MS sells software. Unbundled OSX, if there were ever such a thing, would cost a lot more than $29.

@inkstain Your assumptions are completely wrong. The hardware will allow the install of any correctly signed OS, which means Windows 9 would install just fine. OSX and the BSDs won’t. RHEL will, given that RH will likely pay for valid certs with the major vendors: Dell, HP (whomever), Acer, etc. Assuming Dell still sells pre-installed Ubuntu machines, these will have valid certs as well.

Now, if MS makes it a stipulation that OEMs only get ad $$$ if Secure Boot can’t be disabled AND only carries MS certs, then yeah, these machines would be locked down. Of course, they’re also likely to get slapped silly by the EU for pulling that kind of stunt.

The office is the only market where MS still has solid control. If an enterprise with 10K users suddenly discovers that they can’t upgrade to Windows 9 but need to buy new PCs, they will simply upgrade much less often.

@nigel

Not my assumption. Read fake account above: “But just wait until Microsoft starts paying OEMs (as part of its branding program) to only allow Windows 8 to boot on your new machine. That’s right, no linux, no freebsd, and not even so much as an upgrade to a future version of Windows. Just buy a new machine, dear.” I’m debunking it.

Folks, you need to remember that Microsoft is a software company. Apple sells integrated products. MS makes money on every Windows license sold. Apple, on every Mac or iPhone sold.

No, not even there. The commercial world is tired of continuous upgrades for no good purpose. Microsoft tried to kill off XP and there was such a howl they had to back off. (Of course, Vista’s problems didn’t help.) Still, most executives are well aware of problems caused by upgrades, and don’t want to mess with something that’s already working.

@fake account
Because the EU believes markets are not perfectly efficient and market parties are able to wield power to reduce competition. This is seen as severely damaging to consumers. There is ample evidence of such damage in Europe.

Since in the long run we will all be dead, the authorities do not wait for the invisible hand to work; they act in the short term.

Sorry dude, only the most deluded think that markets are perfectly efficient. Believing they are not is an indicator that the individual has actually exited dreamland. Perfectly efficient markets are not a requirement for increasing competition.

You might be interested to know that there is a proof that if markets *are* perfectly efficient, then P == NP, but try getting the consequences of that through to the many sophomoric “economists” here and elsewhere.

FOSS hobbyists are free to acquire computer hardware that doesn’t have a locked-up boot loader, of course. They represent a market, do they not? Surely someone will see the obvious niche in the market, and act to supply them with PCs that don’t have locked-up boot loaders. VA Systems rides again! (Now where did they put that corporate conscience?)

Of course, it would be much more European to simply tax all PCs, and use (part of) the revenue to supply FOSS proponents with government-supplied computers and salaries. Stallman recently proposed taxing all Internet connectivity to pay artists for their work. He’s proposed a “Software Tax” for nearly 20 years. Software Socialism, anyone?

Suppose everyone who buys a computer has to pay x percent of the price as a software tax. The government gives this to an agency like the NSF to spend on software development.

Several governments (including various state governments in the US) already tax all electronics in order to pay for their eventual disposal.

If you’re going to buy a PC with unlocked bootloaders, expect it to cost maybe 1.5 to 2 times the cost of a Windows-locked PC. The economies of scale aren’t there when you’re talking about a market as small as the krelboyne demographic, and the savings in support calls achieved by locking down the platform are passed along to the consumer.

At those rates, it’ll make even more sense to cowboy up and buy a Mac.

This will work on desktops, laptops, and netbooks alike. It can be
adapted to fit tablets and sizeable smartphones. (We do have a problem
if you’re looking to go back before 24-bit color LCDs and 800×600 screens.)

The complete OS will have a POSIX-compatible backend. It will have a graphical
user interface similar to that of Chromium OS but, unlike Chromium OS, it
will not lack personal computing applications. It should be written in Common
Lisp or Scheme, but as much of the OS as possible should be text-based.
This means that as much as possible needs to be implemented in W3C standard
languages (in other words, as web pages), such as HTML, CSS, ECMAScript,
SVG, MathML, X3D, and perhaps much more. Of course, items such as the shell
and the parser need to be written in a compilable language.

Ideally, every URI scheme needs to be supported, and that means setting up
as many programs as possible to support them. For instance, an irc:// URI
should immediately redirect to a built-in client (NOT Mibbit). A feed://
URI should open an embedded aggregator (NOT Google Reader). About: URIs should
open up a tab reminiscent of the Windows Control Panel. Any unrecognized
scheme should open a shell.
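The scheme-based dispatch described above can be sketched in a few lines. This is an illustrative sketch only (Python rather than the proposed Lisp/Scheme, and the handler names are hypothetical, not part of any real OS):

```python
from urllib.parse import urlsplit

# Hypothetical built-in handlers; the names are illustrative only.
def open_irc_client(uri):    return f"irc client: {uri}"
def open_feed_reader(uri):   return f"feed aggregator: {uri}"
def open_control_panel(uri): return f"control panel: {uri}"
def open_shell(uri):         return f"shell: {uri}"

# Map each supported URI scheme to its built-in program.
HANDLERS = {
    "irc":   open_irc_client,
    "feed":  open_feed_reader,
    "about": open_control_panel,
}

def dispatch(uri):
    """Route a URI to its built-in handler; any unrecognized
    scheme falls back to opening a shell."""
    scheme = urlsplit(uri).scheme.lower()
    return HANDLERS.get(scheme, open_shell)(uri)
```

For example, `dispatch("irc://irc.example.net/#lisp")` would route to the IRC client, while an unregistered scheme such as `gopher://` would fall through to the shell.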

This OS should not (and I hope will not) be a source of incompatibility.
I believe virtually everything can be written in Common Lisp, Scheme, or
ECMAScript, so naturally there will be compilers for different languages
written in them. Should there be something which _must_ be written in
assembly or another incompatible language, there will be ways to work around
the problems (additionally through an extension language similar to GNU
Guile).

This OS, in theory, should be more of a success than Windows, OS X, or even
desktop GNU/Linux. The size of the OS will be small (as well as the source
files), the compatibility with standards will not be broken, and both
languages which the OS is written in can be interpreted or parsed, meaning
easier debugging. Even better, the OS, in its entirety, will be free software.

I have conceived this OS design out of the concerns that the Free Software
Foundation has raised over the SaaS aspects of Chromium OS. RMS has never
declared his assent to SaaS, and he has been a supporter of Common Lisp for
years. This should not take too much time to implement (a maximum of two
years with approximately ten programmers), and there is enough free software
written in the two languages that combining it all would not be too
burdensome a task.

> If you’re going to buy a PC with unlocked bootloaders, expect it to cost maybe 1.5 to 2 times the cost of a Windows-locked PC. The economies of scale aren’t there when you’re talking about a market as small as the krelboyne demographic, and the savings in support calls achieved by locking down the platform are passed along to the consumer.

Not just support calls. Imagine a locked down PC that comes loaded with Windows 9 – Starter Edition.
Now, the only way to load applications on Win9 – SE is via the Windows App Store, where Microsoft curates the store. Want Office? Only the latest edition will do. Oh, you wanted Star Office? That’s not available.

Microsoft gets a 30% cut, of course.

Now, Win9 SE is licensed to PC OEMs for $99 per year, but Microsoft will give you a $99 credit per unit if you put only the Microsoft keys in the bootloader. Effectively, it’s free in exchange for a non-open future.

Hey kids, let us block google for a year, and your second year of windows is free!

It’s going to make me smile when the open source crowd becomes the market for used Macs.

I had not installed GNOME 3 after reading many negative comments. But ignoring it without trying it is not good. I installed GNOME 3 and I am liking it. I like the simplicity and singularity. As a person who gets distracted very easily, GNOME 3 has forced me to focus on my current task. I have spent just a day with it; I will have to spend more time before committing to it.

@fake account
“FOSS hobbyists are free to acquire computer hardware that doesn’t have a locked-up boot loader, of course.”

Why the rant? Touched a sensitive point? Like, consumers are not free to buy the hardware they want, but have to jump through hoops and pay extra?

It is possible you are not following the literature about the effects of oligo/monopolies on market efficiency since, say, the 18th century. I have found the account of Adam Smith very readable. You might try it: http://www.gutenberg.org/ebooks/3300
Note that he chose “An Inquiry into the Nature and Causes of the Wealth of Nations” as the title. He sees oligo/monopoly actions as detrimental to the wealth of nations. The EU agrees.

MS have consolidated their monopoly for 15 years now. That is a lot of damage to innovation and consumers. On the other hand, you can easily evaluate the effectiveness of EU and USA anti-trust policies by comparing prices between the EU and USA. Say, where can you get cheaper mobile phones and subscription rates?

Libertards will always rail against the very regulations that promote and foster competition because they conflict with the libertardian dogma that more government == BAD. Meanwhile, here in the US people take to the streets in droves because permissiveness, deregulation, and huge bailouts have allowed big business to make off like the bandits they are; the rest of America… not so much.

Hell, there are authoritarian regimes less dysfunctional than the United States. Singapore comes to mind…

@Jeff Read
“If making the lives of millions of people easier and more pleasant through developing beautiful, well-designed technology that people want to use is bad, then baby, I don’t want to be good.”

I was talking about MS. But Apple is a good second (or first, depending on who you talk to).

It’s always amazing that folks can compare MS to Pol Pot, et al with a straight face. Hint: MS never killed anyone. On the scale of evilness they are a teeny tiny smidge more evil than Google. As in not very.

The FOSS worldview is, in my opinion, bankrupt because of this pervasive idiocy. They are not the “bad” guys because they are a proprietary company (they were bad guys for other reasons). You guys aren’t saving the world and closed vs open source is not an ethical dilemma.

@Jeff Read
“If Microsoft threatens to sue a handset manufacturer and then offers Windows Phone licensure as an alternative to lawsuit, well, that’s an easy way to get new Windows Phone “partners” on board.”

Sounds indeed like the bad guys winning. Are you also one who enjoys that?

I was saying that idolizing robbers and psychopaths is a deeply buried aspect of the human mind. You find it even for the worst of the worst. So it is no surprise to find it in those who get filthy rich with much lesser crimes. I could have used Bernie Madoff or Mexican narcos as examples. But that would not convey the message as well.

MS has been doling out a billion a year since the early nineties in fines and settlements for persistent law breaking. They make even more money than Mexican drug lords (and have better margins). So it is only natural that they attract admirers.

Sounds indeed like the bad guys winning. Are you also one who enjoys that?

No. But the solution to the problem is not to pretend that these patents just don’t exist.

You can be like Jessica Boxer and say that patents shouldn’t exist for software or anything else. Hell, you might even be right. But the fact is that these patents are legal in most regimes of the industrialized world. Sadly, even the EU. So other technology companies have to either play by the rules and license the technology that Microsoft owns or get out of the game. It’s really that simple.

Apple played by the rules in the late nineties, accepting considerable Microsoft investment and a cross-licensure arrangement, and still managed to outmaneuver them and surpass them as the world’s #1 technology company. That’s what I was talking about when I mentioned the difference between Steve Jobs stealing and everybody else stealing. It’s not about how stylish his thefts are. It’s the fact that he appropriated the ideas of others while still playing by the rules.

The really disappointing thing is by the time all the Apple, Microsoft, and Oracle IP is factored out of Android it will be quite unrecognizable as itself. Long-term, Android is not a stable or viable platform.

@Jeff Read
Read MS’ statements. Never do they say that they have patents covering Android (the OS). It is always about the phone manufacturers. In all of MS’ history, if they did not say it in black and white, it was not true. So if they do not say outright that they own patents that cover Android the OS, then they do not own such patents.

They are talking about medical and epidemological data here, but why not also the tools used to analyze that data? If you have free access to the data but the techniques used to draw conclusions from the data are behind a paywall, how do you know those conclusions can actually be derived?

It’s the same issue Eric was dealing with during “Climategate”. “Climategate” turned out to be a tempest in a teapot — there is oodles of climatological research supporting the AGW theory if one would care to but look — but the principles of open data and open analysis tools are still sound.

This may not affect Aunt Tillie on her desktop. But it is a serious issue, one that cannot be ignored.

Didn’t the SEC or somebody start requiring the source code to certain types of derivatives calculations be registered with the government, in Python?

In order to assert the patents against Android itself, they would have to go after Google. Even if they were utterly victorious, Microsoft would wind up on absolutely the wrong end of a David vs. Goliath narrative. Why buy the cow when you can get the milk for free? Hence the litigation against hardware vendors, not Google itself.

Microsoft is going after ChromeOS notebook manufacturers, too. THEY MAKE BLOODY NETBOOKS. There’s nothing special of Microsoft’s that isn’t in every other netbook. Except for ChromeOS, which is based on Linux, which infringes on Microsoft’s patents.

It’s actually in Microsoft’s best interest to ensure Android and ChromeOS continue to exist. They can then have something to point to when hauled before the court on antitrust grounds and say “See? We have competition.” At the same time, using their IP portfolio they can menace hardware manufacturers into integrating Windows because integrating Linux puts them into scary infringer territory.

It’s all very strategic and nasty, but it’s also legal, so what can you do?

@jeff, research costs money. If it is paid for by public funds, then the research is freely available to the country that paid for it (or to the sponsoring world organization). Research paid for by private funds isn’t, and might live behind a paywall.

Companies that organize conferences also put their proceedings behind paywalls, but authors of publicly sponsored research publish elsewhere as well, as is required.

The software to analyze the data is treated the same way. If public funds are used, then the source is/should be available. If not, then no. The algorithms represent significant investment and value to the company and researchers that created them. These are the foundations of their life’s work and the reason why they get the research grants rather than the other researchers in the next university over.

The fact is much software is available, but it’s not “open source” because the licenses are often non-commercial. I recall some FOSS proponent roundly castigating MS on license-discuss (or perhaps license-review) for releasing their bio analysis suite under their academic non-commercial license because it wasn’t “open source”.

FOSS has a very very narrow definition of open.

“Open tools” do exist. Researchers do often collaborate. But it is not unethical if they choose not to do so. It’s simply one decision among many others that a PI makes to further his/her research.

I tended to prefer collaboration and sharing, but that’s because it’s the way I like to work. Other PIs felt differently. Did that make me more ethical? Nope. Did that make me more effective? Sometimes, but not always. Relying on others, or having them rely on you, sometimes means waiting around for stuff promised. I was once late delivering a piece of research code, and that screwed up the other team, which had to scramble. Fortunately it was not a critical item, but when the IRBs are signed off, the subjects lined up, etc., the last thing you want to be doing is mucking about with someone else’s buggy code.

A large amount of time Open Source = Someone Else’s Undocumented Buggy Code. Academic/research code even more so.

Again, it’s not an ethical dilemma but a practical one. If the research must be able to be replicated then the general algorithm needs to be supplied and typically it is in the paper. The code? Eh, it would be nice but not required.

AFAIK, the companies who are paying Microsoft for Android fall into two categories: (1) those way too small to spend much on legal fights; and (2) those which also ship MS products such as Windows or Windows Phone. In either of these cases the short-term incentives weigh heavily in favor of paying the Danegeld.

It will be very interesting to see how MS v. Motorola Mobility and MS v. B&N pan out.

One thing that I find somewhat surprising in the comments from Jeff Read and others defending Apple or Oracle going after Android and various handset manufacturers is the sentiment that when Apple used prior inventions, ideas, or creations from other companies, they did it the right way, while suing a competitor out of the market using the incredibly dubious mechanisms of patent law is something to be cheered on or supported – or, at worst, something to be ignored or brushed off. What truly boggles my mind about this attitude is the lack of vision into the future, and how these same mechanisms, the ones that are currently no big deal when aimed at Google, will likely be used to crush Apple.

You know who currently runs the largest computer science research initiative in the world? Microsoft, aka MS Research. If that graph that fake account posted from Twitter continues its trend, do you truly think that Microsoft will not look to use the patents acquired from billions of dollars’ worth of research each year to extract some sort of fee from each and every iPhone, iMac, and iPod sold? Or, if it gets truly pissed off or wounded enough, to block the sale of OSX? At the moment Apple is a company with deep, deep pockets, and a legal battle would likely be protracted and expensive, so I can understand why MS is content to sue comparatively much smaller handset makers. But the thing is that cheering for “your team” when they win with a pretty dubious mechanism isn’t a victory, because it means they just might face an opponent using the same tactics the next time. So think a little bit next time you cheer Apple patenting a basic multi-touch feature with prior art going back decades, because as goes Android, so may go Apple.

Seriously. YOU. DO. NOT. FUCK. WITH YOUR USERS’ DATA. This should be the Prime Directive, tattooed on the foreheads of every software and systems engineer, everywhere. Everything else that has to be said about user-friendliness and making people’s lives easier, stems from this. As the great Orson Welles said, “Come on, fellas, you’re losing your heads!”

@jmg Apple would likely pay the license fees to MS. Yes, MSR has some world class research. Bill Buxton is there. They also do a lot more of the basic usability stuff (not at MSR IIRC but elsewhere) that Apple used to do.

Whoa. That is egregious, even for Apple. They’ve always had a dark history of being cavalier with users’ local data (see, for instance, the various iterations of their e-mail clients changing mailbox format and breaking everything), but actively deleting data? As part of normal operations? Holy crap.

@Jeff This is a design bug that they’ll have to iron out. The solution is straightforward: another directory that is permanent but not backed up via iCloud. Marco (Instapaper) posted about this a while back.

It’s not some nefarious plot and likely to get resolved correctly. Apple typically doesn’t respond as it figures out how to fix its mistake. This is sometimes infuriating but just part of the culture not to shoot from the hip with a fix or comment.

Okay, apparently iOS 5’s new Grim File Reaper only wipes out cache and temp files. Problem is, anything an app downloads must, according to Apple’s guidelines, go into the cache folder. So local copies of ebooks, Wikipedia or magazine articles, etc. may all wind up in the crapper under space pressure. Same is true for navigational charts, hence the article in an aviation magazine. So it’s not as bad, but still, the same principle applies. If the user downloaded it, he shouldn’t lose his copy of it. I don’t care if it’s been iClouded. Fuck the cloud. I download books onto my Nook precisely so that it will have them where and when I want them, whether WiFi is available or not. If my Nook pulled this shit, I’d take it out and blast it with a shotgun.

I was all ready to switch to a nice iPhone 4S as my next smartphone, maybe save up for a MacBook Air to go with it. Now this shit. Fuck everything about this, fuck Apple, and a posthumous fuck you to Steve Jobs. Imma stick with my geeky rooted Android kit for as long as there’s meat left on Android’s skeleton that hasn’t been pecked off by patent buzzards.

But it’s probably a generational issue. Some young turk decided “the cloud” is where data lives, and the flash on the iWhatever is simply a cache… This, they will probably be re-taught, and then remember for at least a few years. Either that, or corporate has made a conscious decision that it should be painful to be an iWhatever customer unless you also embrace the iCloud. Hanlon’s Razor says to assume the former, but I’m not sure what Occam’s Razor says in this particular case yet…

Well, it’s the potential for shit like that that makes some people obsessive about control over their computers and their data, and a lot of what open source and free software is about has to do with keeping that control where it belongs…

Re solution: The obvious solution is straightforward (add a directory). The problem is that if you have that directory, cruft builds up, and you depend on each app developer to correctly clean it out… which is not a good bet.

Most likely they’ll come up with a solution that supports both objectives…the ability to permanently cache data while being able to clean out cruft at the user’s discretion.

For the airplane charts, the app developer is just going to have to store that in documents and have Apple whine at them about too much stuff being backed up.

Perhaps that’s the second obvious solution with potential pitfalls. Allow files in the Documents directory to be flagged as “Don’t back up to iCloud”.
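The trade-off being argued over here can be made concrete with a toy sketch. This is a generic illustration in Python, not Apple’s actual API: an eviction pass under space pressure clears unflagged cache entries oldest-first, but anything the app has flagged as “don’t purge” (the user’s downloaded charts or ebooks, say) survives.

```python
# Toy model of cache eviction under space pressure. Each file is a dict
# with 'name', 'size' (bytes), 'age' (higher = older), and 'keep'
# (True = flagged "don't purge", e.g. user-downloaded content).
def evict(files, bytes_needed):
    """Evict unflagged files, oldest first, until bytes_needed is freed
    (or nothing evictable remains). Returns (evicted_names, bytes_freed)."""
    evicted, freed = [], 0
    for f in sorted(files, key=lambda f: -f["age"]):  # oldest first
        if freed >= bytes_needed:
            break
        if f["keep"]:
            continue  # flagged as permanent: never evicted
        evicted.append(f["name"])
        freed += f["size"]
    return evicted, freed
```

The design point in the thread falls out of the `keep` flag: without it, the reaper cannot tell a disposable thumbnail from an aviation chart the user is counting on, so everything in the cache is fair game.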

It looks to me like a very typical, and highly intentional, Apple move. Basically, to the app developers, “We have this new service. We want everyone to make full use of this service. If you do not want to use this service, you will get degraded UX because we made it that way. Change or die.”

“Here, let me just tidy up a bit before we get started with your test–er, app. Oh, I’m sorry. Were you using that? It didn’t look like the sort of thing you had any use for, so I threw it out. My bad…”

Nigel, the obvious solution is an OS that does what I call upon it to do and otherwise stays the fuck out of my way. I should have learned not to trust Apple to deliver on that very basic, fundamental promise.

@Nigel, it’s nice to think that Apple would pay the license fees, but what happens if they take the Steve Jobs approach? You know, the whole “go nuclear on this” option, the “I would reject 5 billion dollars; I have lots of money, I don’t need more.” (Both of those quotes have roughly the right intent and main wording but are likely not verbatim; I say this so that someone does not dig them up as evidence of intentional misquoting.) I ask this because: what if that market-share graph continues its trend?

Again, as people cheer on Apple’s attempts to entirely block competitors from selling their product, not just settling for license fees, you have to ask yourself what your response would be to a desperate or appropriately motivated Microsoft doing things like changing aspect ratios on court submissions, displaying photos of competitors’ products without the distinguishing badging, etc. – only this time to entirely stop Apple from selling you your “lickable” MacBook Air.

Look, the point of these posts is not to say the patent system should be entirely abolished, or that Apple has not done any innovation. It is to point out that software patents, and in many cases design patents (at least of basic archetypal features), are incredibly broad and dangerous, and should be viewed as a different class than the classic patents on physical items we normally think of. And companies ruthlessly exploiting the system today just might find themselves on the other side of the sword tomorrow. Why not instead cheer for Apple’s success in convincing customers through a just plain better product?

@fake account, I had read that too, but I have not found any substantial corroboration, and Wikipedia has since been edited to remove mention of his passing until it can receive the same. Do you have something you could share with us to either confirm or assuage our fears?

jsk, Apple is not above giving developers the buttshaft when it decides it wants to push out some new paradigm shift, and “change or die” is always the expectation. I should have learned more deeply the lessons of the Great libstdc++ Kerfuffle of 2005.

@jmg Then one hopes that Apple has sufficient patents to warrant a cross license agreement…if not, oh well. They’ll have to live without the patents and find a work around.

Folks cheer Apple because they’re carrying baggage from prior years. It’s human. It’s sometimes true that what goes around comes around.

As for your implication that Apple is deliberately changing aspect ratios on court submissions, etc. as shenanigans, I find that disingenuous. Samsung clearly has a strategy of mimicking Apple in terms of design and advertising – sufficiently so that one judge held up both tablets and asked Samsung’s attorneys to pick which was which.

Epic fail for the lawyers. The tablets SHOULD be distinguishable, but evidently to non-geeks it’s not immediately obvious – which is the whole point of the lawsuit. Heck, Samsung even copied the box design. How lame is that? When I unboxed my TouchPad there were a lot of things that had an Appleish feel, but not an obvious copy, and some uniqueness.

Apple isn’t winning preliminary injunctions based on bumpkin judges and bad photos. These judges have the physical devices in hand and some handle a lot of tech cases.

If Apple were slavishly copying MS’ designs to ride their coattails, then any MS legal ass-stomping would be well deserved. They would need to either license or stop.

Folks ARE cheering Apple’s success in convincing customers through better products. It’s just that some fans are ALSO tired of Apple being KIFR’d so blatantly by a company that has the resources to come up with something original.

Look at the Nokia N9. It’s a very nice looking phone and different from the iPhone. The Nexus Prime is a very nice looking phone that doesn’t look like a 3GS or 4 ripoff. It’s not going to get blocked.

jmg, Apple successfully sued eMachines on trade-dress infringement grounds for marketing a computer that was much uglier than the iMac and even the wrong shade of blue. Didn’t matter. It was close enough. By that standard, Samsung’s tablet infringes on Apple’s trade dress. That should be obvious from a mere glance at it.

@Nigel that’s a fair point, well said. I don’t want to come off as blindly supporting Samsung, as it is clear a fairly good case can be made about the blatant similarities in their devices. The aspect ratio flap and such was real, and was somewhat bush league; I don’t think that can be argued. But then again, this is law, and there are no points for good sportsmanship.

I agree with you about Samsung needing to make more original products, but I am still fearful of the methods by which Apple is seeking remedy. Some of the design patents seem incredibly broad: a tablet is a generally rectangular device, probably black due to how the device will wear with use and contact (white or light-colored plastics show dirt and smudging more easily). A rectangular grid of application icons is something my Palm Tungsten had back in ’03. Obviously Samsung goes further, and if Apple were suing over the much more specific aspects of their aesthetic I think it would be more warranted. If you can point me to some sources that would show my understanding is incorrect, I would appreciate it and retract my statements.

But my current understanding is that Apple is pursuing their largest competitor, and also their most egregious copy-cat, using pretty generic and broad claims. Going after the worst offender will make it easier to win the case and generate some precedent for these claims, at which point one has to ask if they will go after a less blatant case in another competitor (see HTC or Motorola), strengthened by their precedent. The genericness is what I object to in both design and software patents, and I think that is not something that should be cheered no matter how we may feel about the two sides involved.

Confirmed: John McCarthy, Father of Lisp passed away Oct. 23
I have called Stanford’s media relations and engineering numbers and have confirmed through two people at Stanford that Professor McCarthy has indeed passed away. There are no details available yet. A formal obituary will be coming.

The Wikipedia page has been modified back (after being locked, it’s now unlocked), confirming the death of John McCarthy.

The fact is that the more any business is regulated, the greater the advantage the larger organizations have. Banking is a case in point: it’s one of our most regulated industries, right up there with medicine and operating nuclear power plants. If you imagine that deregulation is how we got into this mess, then you’re nuts.

And what Apple is suing over doesn’t encompass functional features. It covers the ornamental fit and finish of the iPad. There are plenty of tablet designs out there that don’t look like the iPad. Most of them are crappy but still. The Nook Color is a fine looking little tablet that is definitely not an iPad; even if scaled up to 10 inches it would still not be an iPad. Even accounting for the aspect ratio difference, Samsung’s tablet looks an awful lot like an iPad. They infringe.

That may be true for some of what Apple is suing over, but, for example, making the front seamless so it doesn’t have cracks for stuff to get into is functional, and the easiest way to do that is to make it out of a single sheet of glass. This wasn’t that practical before touchscreen advances, but is pretty bloody obviously a good functional thing to do once the underlying touchscreen technology lets you do it.

Jeff Read, aspect ratio is clearly functional. It can’t be part of the trade dress. Further, just “looking like” something is not trade dress infringement. Apple has to actually establish that the similarity references elements that are indicia of source as well as not functional.

fast
configurable
unix-like
no .desktop directory; desktop objects are virtual like good old CDE, so you can pin what you are currently working on without moving the real file
directories are not called ‘folders’

Aspect ratio DOESN’T MATTER. That’s what I’m saying. Nerds have attempted to argue that Apple deliberately made Samsung’s tablet look more like an iPad by fiddling with its aspect ratio in Photoshop. And that may be true. But I am saying it doesn’t affect the substance of Apple’s case, which is that Samsung illegally tried to capitalize on the iPad’s popularity by releasing a product that looks too similar — aspect ratio notwithstanding. And the proof that Apple has a case is their victory in Apple v. eMachines, which was over a computer that bears less resemblance to an iMac than the Galaxy Tab does to an iPad.

IIRC, some of Apple’s own filings referenced the aspect ratio, and it was a big deal in the German court. So, while in your eyes Samsung’s kit looks too much like Apple’s regardless of the aspect ratio, I think that was part of the argument.

And once you run the equivalent of an abstraction/filtration/comparison test on the devices, I think you will find that the elements that eMachines added to make it look more like an Apple were more unambiguously non-functional than the elements that Samsung added. But that’s just my opinion.

Just chiming in a little bit late; I was using MATE for a little bit, a GNOME 2 fork, which allowed me to continue with a DE based on GNOME 2.32 on Arch Linux. But after finding and reading a blog post on customizing GNOME 3’s fallback mode (alt-right-click is weird, but I don’t frequently change the panel, so it’s OK), I reinstalled GNOME 3 and have been using it in its “fallback” mode. The panel does not behave 100% like it did with GNOME 2, but I actually like it better, since it keeps everything snapped into place. I’m now on GNOME 3 and I’m happy again, since I’m not using GNOME Shell :)
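For anyone wanting to try the same thing: on the GNOME 3 releases that still shipped fallback mode, it could be forced on from the command line. A minimal sketch, assuming a GNOME 3.0–3.6 install where the `org.gnome.desktop.session` schema still exposes the `session-name` key (later releases removed fallback mode entirely):

```shell
# Force GNOME 3 to start in fallback (Classic-style) mode on next login.
gsettings set org.gnome.desktop.session session-name "gnome-fallback"

# To switch back to the full GNOME Shell session later:
gsettings set org.gnome.desktop.session session-name "gnome"
```

The same switch was also reachable through System Settings → System Info → Graphics → “Forced Fallback Mode” on those releases.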

All things considered, GNOME 2 forks are the wrong way to go; they bite off too much of the applications and ignore all of the work that went into the underlying libraries and applications, pretty much on the premise of disliking GNOME Shell. If a fork is necessary at all, it should be a GNOME 3 fork, though making GNOME 3 itself behave better for users who just want the old desktop concepts is still ideal. If GNOME 3 just provided a clearly visible “Run exactly like GNOME 2 did” option, I don’t think as many people would be upset. Debian is doing just this in Wheezy, as a matter of fact: they ship GNOME 3, but the login screen lets you pick between GNOME Shell and GNOME Classic/Fallback.

This rant, and Jason Perlow’s recent ZDNet rant (http://www.zdnet.com/blog/perlow/why-ubuntu-1110-fills-me-with-rage/19103), capture most of my issues with Ubuntu 11.x, and Ubuntu 11.10 specifically. I tried to give the 11.10 Unity an honest go, but it sucks. And since I use a Mac alongside my Ubuntu system, the comparison between Unity and an actual, usable interface is all the more starkly apparent.

So, like you, I tried to fall back to Gnome, only to run into a host of problems. Gnome 3’s rendering is buggered on my machine, and I don’t have the inclination to figure out why. Gnome Classic is, as you point out, a turd. But worse, on my machine, unless I have a window bumped right up to the edge, Synergy fails to cross the barrier to the connected machine. WTF?

Eventually, I just bit the bullet and cut over to Xfce 4. It isn’t perfect, but it’s easy enough to use, and it’s a whole hell of a lot more memory-friendly.

The reason many of us switched to Ubuntu in the first place was because it provided a desktop that looked great and “just works.” Now that they’ve taken that away, I see no reason to stay with Ubuntu at all anymore.

Xubuntu does provide a quick exit strategy (apt-get install xubuntu-desktop) but my experience mirrors yours — there are a few things that aren’t quite right, that have been broken by Unity having been present.
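For the record, that exit strategy looks roughly like this. The Unity-related package names below are the ones shipped around 11.10, but they vary by release; treat them as assumptions and verify with `apt-cache policy` before purging anything:

```shell
# Pull in the Xfce session alongside whatever is already installed:
sudo apt-get install xubuntu-desktop

# Optionally purge the Unity bits afterward (package names per 11.10):
sudo apt-get purge unity unity-2d ubuntu-desktop

# Sweep out the now-orphaned dependencies:
sudo apt-get autoremove --purge
```

Even after this, some Unity leftovers (indicator services, themes) can linger and cause the “not quite right” behavior described above.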

I simply wiped out my desktop system (except for /home which was on NFS anyway) and installed Debian. It turns out that the only thing Ubuntu has over Debian these days is a better installer. Once the system was installed, the Xfce experience was superior to Xubuntu and my system was no longer littered with Canonical Crapware.

So yes, it is clear that Ubuntu has jumped the shark. It was a great run (five years in my case) but it’s over. And by the time Shuttleworth stops being openly hostile to Unity critics and realizes it’s all been a big mistake, it’ll probably be too late to save Ubuntu’s enviable but already declining position as the favorite Linux desktop.

IGnatius – I thought of going to straight Debian, but Ubuntu has two useful things: (a) taking care of my wifi without me having to argue with Debian (an experience akin to having a small RMS on your shoulder lecturing you, complete with smell) and (b) PPAs. Xubuntu has quite enough of the seat HOWTO experience for those who like that sort of thing.
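On the PPA point, this is the workflow that keeps people on Ubuntu; the PPA and package names below are purely made-up placeholders for illustration:

```shell
# add-apt-repository ships in python-software-properties on 11.x Ubuntu.
sudo add-apt-repository ppa:some-team/some-app   # hypothetical PPA
sudo apt-get update                              # refresh package lists
sudo apt-get install some-app                    # hypothetical package
```

Plain Debian has no equivalent one-liner; you hand-edit sources.list and import signing keys yourself, which is exactly the HOWTO experience being complained about.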

What really troubles me is that there are no distributions willing to maintain GNOME 2.x. It’s all going to be 3rd parties. Dropline, Mate…

RHEL will be the litmus test.

I don’t think anyone understands that this is truly the death of Linux on the desktop. Mom and Pop aren’t going to use GNOME 3. It’s just a résumé pumper. KDE 4 isn’t any better with its control bugs. The themes are terrible. I don’t even think there is a compliance standard these developers follow. Think about the visually impaired having to use this shit.

I dropped back to an older Ubuntu for my media centre. Unity was a PITA even for that, and that’s the type of application they seem to be aiming at.
It seems to me that the problem is that Linux has become too popular for its own good. Developers got caught up in a competition for users. The beauty of Linux was that it wasn’t aimed at the lowest common denominator. Aim for the lowest point and you get crap. First KDE, now GNOME. Think I may just use LXDE more.
Linux was supposed to be an alternative to Windows, not a competitor. They’ve lost the plot. Screw the dumbasses; give me functionality. If I want to customise my computer, let me. Otherwise I may as well just use Windows.

Ubuntu/Unity has a very bright future. It has exceptional management vision and corporate investment focussed on “consumer experience” above all. While such things may not be what open source is all about at its roots, in terms of consumer votes it’s worked for Apple and Ubuntu, and will continue to do so. Ubuntu also has what drove Windows’ success: it’s affordable and practical.

Unity is innovative and practical, and most of all, it’s got Ubuntu going for it. It’ll be on your TV, tablet, smartphone and desktop.

So I’m sure Unity will become the most popular Linux “desktop” before too long, e.g. as 11.10 and especially 12.04 are deployed on millions of desktops.

Mind, there are things about Mint you might find objectionable (not sure how distasteful you’d find having some proprietary stuff like Adobe’s Flash player built-in), but losing access to Ubuntu’s repos is not one of them.

So, well over a year after this post was published, I stumbled across it. I saw the mention of .config/dconf/user. I took a look at that file on my system. My face contorted into a wordless scream of terror. I’ve been trying GNOME 3 on my Arch Linux laptop because damn it, I’m going to give GNOME 3 an honest try! This little revelation, though, may very well be the straw (or, perhaps, tree trunk) that broke the camel’s back. I’ll probably end up purging most, if not all, of the GNOME stuff on my laptop and setting up something like what I have at work: the Awesome window manager with whatever I need to make it complete. Or, perhaps, I’ll go back to good old KDE.
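One consolation for anyone else staring at that blob: the `dconf` command-line tool can round-trip it as text, so deleting the whole file isn’t the only recourse. A sketch, assuming a dconf CLI recent enough to have the dump/load subcommands; the /org/gnome/ paths are illustrative examples, not a guaranteed layout:

```shell
# ~/.config/dconf/user is binary GVDB, but dconf can serialize it:
dconf dump / > dconf-backup.ini            # export all settings as a keyfile

# Inspect or hand-edit the text dump, then push it back:
dconf load / < dconf-backup.ini

# Or reset just one subtree instead of nuking the whole blob:
dconf reset -f /org/gnome/gnome-panel/
```

This would have allowed the panel-applet list to be edited as text rather than deleting the blob to get back to defaults.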