Honest question. Do you think the GNOME project is as healthy today as it was, say, 4 years ago? Benjamin Otte explains that no, it isn't. GNOME lacks developers, goals, mindshare and users. The situation, as he describes it, is a lot more dire than I personally thought.

Nice rant, but I can't agree with most of it.
Unix programming (and philosophy) have not attempted to produce anything significant on the GUI/productivity front.

Philosophies don't write code; developers write code. It's up to Linux developers to apply Unix philosophy in their designs, not the other way about. And where they fail to do so, they've no-one to blame for the resulting monolithic monsters but themselves.

I think part of the problem is that the early Unix designers were forced to employ humble parsimony by the extreme scarcity of hardware resources back in their day, but modern generations are spoiled by the massive glut of such resources in today's systems, allowing for the establishment of fat, grandiose designs that lack any hard checks against their own short-sightedness. They need to force themselves to be humble and parsimonious from day 1, so that by day 4000 they have not sunk beneath their own vast weight.

e.g. Me, I'm forced to employ such parsimony in my own work simply because I am not all that bright and my memory has all the capacity and reliability of a wobbly 16K RAM pack. So when I write a complex system, I put most of the effort into reducing or eliminating complexity as much as possible: relying heavily on high-level IPC and component-based construction to divide up the system and minimise the solid core, so it's as small and manageable as possible. e.g. When I needed to implement my own scripting language, I based it on the sort of simple Scheme interpreter they used to teach CS students to build in college; very high power-to-weight ratio as a result. Only the core data types are built in; all the actual functionality exists as plugins, even fundamentals like defining variables and performing flow control. And I'm just an art school drop-out and general bum, so the rest of you should have no excuses whatsoever.
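For the curious, here's a minimal sketch of what that plugin-based core could look like. This is my own illustration, not the poster's actual language: the evaluator knows only symbols, literals, and lists, and everything else, even `define` and `if`, is bolted on as a registered plugin.

```python
# Hypothetical sketch of a Scheme-style evaluator whose core handles only
# data types and evaluation; all behaviour -- including 'define' and 'if' --
# is registered afterwards as a plugin.

class Interpreter:
    def __init__(self):
        self.env = {}       # ordinary functions and variables
        self.special = {}   # special forms: receive their args unevaluated
        self.register_core_plugins()

    def plugin(self, name, fn, special=False):
        """Register new functionality without touching the core."""
        (self.special if special else self.env)[name] = fn

    def eval(self, expr):
        if isinstance(expr, str):        # symbol: look it up
            return self.env[expr]
        if not isinstance(expr, list):   # literal (number etc.)
            return expr
        op, *args = expr
        if op in self.special:           # special form controls evaluation
            return self.special[op](self, args)
        fn = self.eval(op)               # ordinary call: evaluate args first
        return fn(*[self.eval(a) for a in args])

    def register_core_plugins(self):
        # Even the fundamentals are plugins, added after the core exists.
        self.plugin("define",
                    lambda ip, a: ip.env.__setitem__(a[0], ip.eval(a[1])),
                    special=True)
        self.plugin("if",
                    lambda ip, a: ip.eval(a[1]) if ip.eval(a[0]) else ip.eval(a[2]),
                    special=True)
        self.plugin("+", lambda *xs: sum(xs))
        self.plugin("<", lambda a, b: a < b)

ip = Interpreter()
ip.eval(["define", "x", ["+", 1, 2]])
print(ip.eval(["if", ["<", "x", 10], ["+", "x", 100], 0]))  # -> 103
```

The power-to-weight ratio comes from the fact that the `eval` core is a dozen lines; everything users actually touch lives outside it and can be swapped or extended without growing the core at all.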

It's best in network services and HPC, but the computing world has long since moved on from these basic concepts over the last 30 years.

Pish and nonsense. Unix philosophy just doesn't favour Highly-Paid Enterprise Consultants who build their fortunes by creating their own problems to solve, and big-name proprietary OS and application vendors who benefit from the user lock-in business model, is all.

And even if we do limit ourselves to applying Unix philosophy to networks, what is a component-based framework such as OpenDoc if not a single-purpose network in itself?

I've been posting even longer rambles over in the hierarchical file system thread, most of which boil down to "So why exactly are we treating local (disk) data storage and remote (network) data storage as two utterly different things?" I frequently see very smart programmer types mentally slicing up service provision requirements according to technical implementation. Me, I approach the same challenge as a UI problem, and look at it in terms of overall usage patterns. They see the all the little differences: "disk vs network". I spot the broad commonality: "data storage". Which is actually the more crucial of the two? Ruthlessly consolidating overlapping ideas and squashing them down into their simplest common form is one of the core activities of Unix philosophy. So, for example, the Unix file system interface does double-duty as local storage and IPC mechanism. That's the sort of brilliant craftsmanship and creative efficiency that all OSS DE inventors should bring to their own work.
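That "broad commonality" point is exactly what the Unix file-descriptor interface delivers. A minimal sketch (mine, not from the thread): one consumer routine that neither knows nor cares whether its bytes come from disk storage or from an IPC pipe, because both are presented through the same interface.

```python
import os
import tempfile

def consume(fd):
    """Read all bytes from any file descriptor -- disk, pipe, socket alike."""
    chunks = []
    while True:
        chunk = os.read(fd, 4096)
        if not chunk:                 # empty read means end-of-stream
            return b"".join(chunks)
        chunks.append(chunk)

# (a) local disk storage
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"hello from disk")
    path = f.name
fd = os.open(path, os.O_RDONLY)
print(consume(fd))                    # b'hello from disk'
os.close(fd)
os.unlink(path)

# (b) IPC: the very same consumer, now fed by a pipe
r, w = os.pipe()
os.write(w, b"hello from pipe")
os.close(w)                           # closing the write end signals EOF
print(consume(r))                     # b'hello from pipe'
os.close(r)
```

`consume` is written once against "data storage" in the abstract; the disk-vs-network (or disk-vs-IPC) distinction lives entirely below the interface, which is the consolidation the post is arguing for.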

You're missing the DEs that aren't slavishly trailing Windows' / Apple's lead and instead go their own way. There are plenty of them (e.g. GNUstep, E17), and they have their loyal followings (which, as you admitted, is enough to sustain them). But they are still a minority after all these years.

But how many of these DEs adhere to Unix philosophy throughout their construction? External appearance is irrelevant; it's the rules they set down on how each brick should be placed together that makes all the difference. And even that by itself is not enough. A DE that achieves Unix philosophy within itself but still only runs the same old monolithic slabs of apps that infect all the mainstream DEs has also failed to bring enlightenment to the OSS desktop as a whole. Remember, a DE's value resides in the apps it runs. If the apps are all rubbish, the DE's ultimately worthless.

It's the old 'work smart, not hard' mantra: OSS simply doesn't have the vast time and manpower resources that Adobe, Apple, MS, etc can use to brute-force hammer their own giant monolithic apps into being somewhat decent.

I second the OpenDoc argument. A huge missed opportunity by IBM, who could have donated it to the community 15 years ago, letting talented people like Miguel de Icaza focus on writing functionality people actually care about.

Another anecdote: I have, for my many sins, just spent the last decade mastering Apple event IPC. (I'm perhaps one of a dozen folk in existence who truly grok it all the way down to its very bones. That's not a good reflection on me, mind; rather a poor reflection on everyone else.)

Amusingly enough, right after its birth in System 7, Apple shot the whole AE system in the kneecaps just so they could send a couple of its engineers to go get OpenDoc "ready to throw away" (as one internet wag memorably phrased it). Under pressure from their publishing users (back then still a key market), they tried reinvigorating all the AppleScript/Apple event stuff for OS X, but still didn't really understand what it was about and unsurprisingly made a bit of a muck of it, again. A few years later, they invented Automator as a less granular approach to desktop automation (Duplo blocks to AppleScript's Technic, not that there's anything wrong with that). Futzed that too.

A large part of the problem: you can't add componentization onto the side of an already monolithic app and expect to achieve wonders. You have to follow that philosophy from the ground up. And rather than build one giant does-everything app, build several small, simple ones that do one thing apiece and do it well, and ensure they can easily and effectively talk to one another. (My favourite target of hate here: email clients with built-in address book and calendaring facilities. Should be three separate applications every time.) And that's just with your classical application-centric desktop; if you go whole-hog along the OpenDoc route (as a truly *nix desktop should), that pattern will be implicitly ubiquitous anyway.
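To make the mail/address-book split concrete, here's a hypothetical sketch (the names and the one-line protocol are mine, not from the post): an address-book service and a mail composer as two components, each doing one job, talking over an IPC channel. A thread stands in here for what would normally be a separate process.

```python
import threading
from multiprocessing import Pipe

# Hypothetical illustration: two single-purpose components cooperating over
# IPC instead of one monolithic mail-plus-contacts-plus-calendar app.

ADDRESSES = {"alice": "alice@example.org"}   # the address book's only state

def address_book(conn):
    """Single purpose: resolve names to addresses. Knows nothing about mail."""
    while True:
        name = conn.recv()
        if name is None:                     # shutdown request
            return
        conn.send(ADDRESSES.get(name, "unknown"))

def compose_mail(conn, name):
    """Single purpose: build a message. Delegates lookup over the channel."""
    conn.send(name)
    return f"To: {conn.recv()}\nSubject: hi"

client_end, service_end = Pipe()
service = threading.Thread(target=address_book, args=(service_end,))
service.start()
print(compose_mail(client_end, "alice"))     # To: alice@example.org ...
client_end.send(None)                        # tell the service to shut down
service.join()
```

The point of the split: a calendar, a chat client, or anything else can talk to the same address-book service over the same channel, and replacing either side never means rewriting the other.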

My comment on the situation: Linux users want tweakability and choice, but few of them realize that every choice increases support complexity and testing exponentially. There simply aren't enough people in the community (or living on planet Earth, for that matter) to test and polish all the combinations these people demand be available.

Two things:

1. Linux DEs need to learn when to say NO to the tweakaholics. (GNOME, to its credit, did try.) When they whinge (as they invariably do), hit them hard with the LART stick. Repeat for as long as they continue to understand the value of everything and the cost of nothing. At the same time, be aware that having a really good, logical, intuitive default design in the first place helps take any legitimacy out of their complaints, and that this may require some hard investment too.

2. Linux developers need to remember that any time they're seeing exponential growth in something, it's a damn clear sign they're using an O(N*N) [or worse] algorithm instead of an O(N) or O(N*log N) one. Redesign the algorithm, which in this case is their whole DE construction philosophy. The CLI developers don't have it anywhere near that bad, and it's not like their habits are particularly rigorous.
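The exponential-growth claim above is plain combinatorics: n independent boolean options give 2**n configurations to test exhaustively, while checking options against a baseline or pair-by-pair grows only polynomially. A quick sketch (the numbers are just counting, nothing more):

```python
from itertools import combinations, product

# Why 'every toggle is cheap' is false: the full test matrix for n
# independent on/off preferences is exponential, while one-at-a-time or
# pairwise strategies grow polynomially.

def full_matrix(n):
    """Every combination of n on/off options: 2**n configurations."""
    return sum(1 for _ in product([0, 1], repeat=n))

def pairwise_interactions(n):
    """Distinct option pairs whose interaction could break: C(n, 2)."""
    return sum(1 for _ in combinations(range(n), 2))

for n in (5, 10, 20):
    print(n, full_matrix(n), pairwise_interactions(n))
# at n = 20: 1,048,576 full configurations vs only 190 pairwise interactions
```

Which is the O(N*N)-vs-exponential point in miniature: the fix isn't more testers, it's redesigning so options can't interact in the first place.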