Linux Future

Some time ago I came across yet another angry discussion[1] about systemd, and I have been reading and thinking a great deal about the design of Systemd and what it says about Linux. I’ve come to realize that the strife in the Linux community exists because an active and well-funded group of developers, who have been driving the direction of various core components, is not building UNIX. They are building some other philosophically divergent system on top of the Linux kernel, with roughly the same relationship to UNIX as Plan9[2]. For convenience I’m going to call the non-UNIX environment they’re building FLOS for the remainder of this post (F because the FreeDesktop.org folks, and their backers in the Fedora project, are driving this; L for the Linux kernel; OS should be self-explanatory). I intend this term to be value-neutral[3].

To me, the core of a UNIX system is a philosophical matter. To quote Mike Gancarz’s The UNIX Philosophy from 1994, UNIX has 9 paramount precepts:

Small is beautiful.

Make each program do one thing well.

Build a prototype as soon as possible.

Choose portability over efficiency.

Store data in flat text files.

Use software leverage to your advantage.

Use shell scripts to increase leverage and portability.

Avoid captive user interfaces.

Make every program a filter.

FLOS is a nearly diametrically opposed design, with design concepts like the following:

The components of FLOS communicate over D-Bus rather than sockets and pipes.

FLOS is built on a core of monolithic programs which attempt to synergistically manage multiple complex components.

FLOS leverages features specific to Linux and ignores portability.

FLOS prefers tightly integrated components to generic solutions.

I’m not sure that this is a bad design, but it is most definitely not UNIX or anything like it. I’ve seen some fairly convincing arguments that the FLOS design philosophy has serious benefits, and there are decades of convincing arguments that abandoning the UNIX way is the path to ruin. Systemd is the big realization of the FLOS design, but many projects, especially FreeDesktop projects, have been working this way for some time. I’m going to pick out a couple examples and talk about them under the fold.

I run Arch, so my examples will be focused there… although from the look of things the Fedora project’s choices are being taken as upstream gospel, so I’ll be pointing at them quite a bit as well. Allan McRae has been particularly vocal about whatever Fedora does being the right thing for Arch to do, such that no Arch-specific changes will need to be made for software that is compatible with FLOS. Arch’s own philosophy, The Arch Way, was[4] a big part of what attracted me to Arch in the first place, but its execution is making Arch a particularly bloody place while the larger community decides whether Linux will continue to be a UNIX-like system.

In retrospect, the first strong signs of the FLOS design showed up with Udev in 2003. Udev was easy to miss, because it was a necessary change – the old DevFS system did not deal well with dynamic devices, was largely unmaintained and ill-understood, and there is a cogent argument that it is safer and more flexible to perform device management in userspace rather than in the kernel. Udev initially demonstrated FLOS design in its use of a fleet of logic-less configuration files, but it has since swallowed Linux’s old hardware abstraction layer, HAL, becoming a sort of monolithic hardware management process, and switched to communicating with other processes via D-Bus. Udev’s source tree and development process have also recently been merged into systemd’s, to further integrate the pieces, as a FLOS design would advocate.
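For a sense of that early, still-recognizable udev style: a rules file is a flat series of match (`==`) and assignment (`=`) pairs, one rule per line, with no conditionals or loops. The file name and device details below are illustrative, not from any real ruleset:

```
# Hypothetical /etc/udev/rules.d/99-example.rules -- flat key==match /
# key=assignment pairs; no branching, no loops, no shell logic.
SUBSYSTEM=="block", KERNEL=="sd[a-z][0-9]", ENV{ID_FS_USAGE}=="filesystem", \
    SYMLINK+="disk/by-example/%k", GROUP="storage"
```

Each rule either matches a device event or it doesn’t; all the “logic” lives inside the udev daemon itself.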

One of the earliest really apparent examples of the difference between UNIX and FLOS is the move to ignore the FHS. Ignoring the FHS is happening in two ways visible from my Arch system at the moment.

One form is that the Udisks developers (FreeDesktop.org folks, naturally) took it upon themselves to move automount paths from /media/$DEVICE_NAME to /run/media/$USER/$DEVICE_NAME. I’ve already bitched about that here, but to summarize: it makes working with removable media from the command line a miserable exercise in digging around long, unpredictable directory hierarchies, makes it much harder to script around removable media, and creates weird problems with device ownership and mount paths for removable devices attached when the number of users logged in is not exactly equal to 1 (0 being the most frustrating and common exception). I’d like to specifically quote the Udisks2 command documentation, because it makes my point for me:

This program is not intended to be used by scripts or other programs – options/commands may change in incompatible ways in the future even in maintenance releases. Scripts and/or other programs should either use the D-Bus APIs of udisks2-daemon(8) or native low-level commands such as mount(8).

In other words, Udisks2 is not to be used with scripts or via a textual interface; you communicate with it via D-Bus. It is tightly integrated with upstream (udev, and the *Kits) and downstream components (desktop environments) – FLOS, not UNIX.
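The scripting problem is concrete: under the old scheme a script can glob one predictable directory, while under the new one it must also guess which user’s subtree to look in. A minimal sketch, simulated with temporary directories and hypothetical device/user names rather than real mounts:

```shell
#!/bin/sh
# Simulate both layouts in a scratch directory (no real devices involved).
root=$(mktemp -d)
mkdir -p "$root/media/USBSTICK"             # old: /media/$DEVICE_NAME
mkdir -p "$root/run/media/alice/USBSTICK"   # new: /run/media/$USER/$DEVICE_NAME

# Old way: one static, user-independent glob.
old=$(echo "$root"/media/*)

# New way: the glob needs an extra wildcard for the user, and a script run
# from cron or as root has no obvious value to put there.
new=$(echo "$root"/run/media/*/*)

echo "old: ${old#"$root"}"
echo "new: ${new#"$root"}"
rm -rf "$root"
```

The extra path component is trivial interactively and miserable from cron, where `$USER` is root or unset and there may be zero or several logged-in users to guess among.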

The more drastic form is the effort to throw everything into /usr and symlink stuff out to where the FHS says it should be – and where nearly every piece of software ever made for a UNIX-like system expects it to be. The Fedora project has a vast document here defending and documenting this change, which makes circular references to the similar page at the FreeDesktop project.

Historically, the basic distinction was that / is required to bring up the system, and /usr can be mounted later, from the network, or from a slower/more complicated storage device… or not at all if something goes wrong. This also means that a single image of the entirety of /usr should be readily shared between similarly configured systems, bolstered by a rule that things not performing system administration should never write to /usr. There is a reasonably recent spat about this stuff here; do not just read the first message, which tries to trivialize the distinction despite supplying still-relevant reasons – a few messages later, more counterarguments show up. I’m going to exclude a longer discussion of this, because it is instructive only as an example of “We don’t care about UNIX,”[5] rather than a good exposition of the FLOS design.

Another big piece of FLOS-style design showing up on Linux was the rise of the *Kits – PolicyKit, ConsoleKit, PackageKit, etc. These are now all but required to run a normal-looking interactive Linux system. They are designed to be easy to use via an API, but difficult to use from the command line. They support configuration files and discourage scripting. They have basically built a FLOS-style system to supersede various pieces of UNIX, particularly around user management and permissions, breaking various UNIX conventions in favor of finer granularity and binary APIs.
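To see what “configuration files over scripting” looks like in practice, here is a trimmed, illustrative PolicyKit action declaration (the action id is hypothetical and the DTD header is omitted; the real files live under /usr/share/polkit-1/actions/):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<policyconfig>
  <!-- Hypothetical action: who may mount a filesystem, by session state. -->
  <action id="org.example.filesystem-mount">
    <description>Mount a filesystem</description>
    <message>Authentication is required to mount the filesystem</message>
    <defaults>
      <allow_any>auth_admin</allow_any>           <!-- remote/other sessions -->
      <allow_inactive>auth_admin</allow_inactive> <!-- inactive local sessions -->
      <allow_active>yes</allow_active>            <!-- the active local seat -->
    </defaults>
  </action>
</policyconfig>
```

Note what is being expressed: per-action, per-session-state authorization that the classic user/group/other permission bits simply cannot represent – finer granularity, at the price of a D-Bus-shaped API instead of a scriptable one.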

ConsoleKit and its successor, systemd-logind, are particularly interesting. ConsoleKit is a kludgy abomination, built to bridge between FLOS-designed parts and UNIX-designed parts – it spawns on the order of 60 idle threads, and manages the interaction of normal UNIX processes with FLOS-style behaviors in session management, permissions, and hardware interaction. systemd-logind is the new mechanism, which is integrated into systemd and simply ignores the old UNIX way of doing things. It is naturally much more elegant, as it doesn’t have to bridge paradigms, but it also leaves anyone not using a suitably enabled session type out in the cold.

I’ll only mention Systemd in passing, even though it is clear to me that systemd is the transition point from “Linux as UNIX” to “Linux as FLOS,” simply because I haven’t dealt with it enough to feel comfortable commenting on the implementation yet, and wanted to get this finished and posted. The simplest argument for Systemd as the realization of FLOS is to take a look at the list of features in this justification for Systemd. Even a Linux sysadmin/power user will be spending some time with Google figuring out what many of the things Systemd does are – it doesn’t do one thing well, because it does fucking everything. It does init’s job. It does inetd’s job. It integrates a bootsplash mechanism. It talks over D-Bus. It is configured with a pile of logic-less configuration files and binary tools. It integrates with the *Kits. It makes heavy use of Linux-specific features like cgroups. It handles disk management. It handles user authentication. It has more bells, whistles, and kitchen sinks than many simple Linux-based systems I use regularly, and it is just the init process. I find reading about systemd rather draining, because it consistently makes design decisions that are absolutely opposite to the UNIX way, backed by otherwise mostly cogent arguments. The real argument is whether preserving the UNIX philosophy is more important than adding features and making ease and performance gains for specific use cases – it is not about systemd in particular.
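For reference, the “pile of logic-less configuration files” looks like this – a minimal, hypothetical service unit, standing in for the shell init script it displaces:

```ini
# Hypothetical /etc/systemd/system/exampled.service -- declarative
# key/value sections; any conditional logic lives in systemd, not here.
[Unit]
Description=Example daemon
After=network.target

[Service]
ExecStart=/usr/bin/exampled --no-daemon
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Compared to a SysV init script, there is no case statement, no PID-file juggling, and nothing to pipe or source – which is exactly the trade at issue: the dependency ordering, supervision, and restart logic all move out of inspectable shell and into systemd itself.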

It’s too early to tell if FLOS will fall on its face like all the other not-quite-UNIXy-enough OSes that have come by, or if mainline Linux is going to retain its position but become largely unrecognizable as a UNIX system. I’m not entirely decided if that is a bad thing (for the record: I mostly think it is, but there are a handful of convincing arguments to the contrary in specific cases), and I’m not sure what I’ll end up running in that eventuality. The various more UNIX-like *NIXes (the *BSD flavors and Minix and such) are third-class citizens, and having nothing that behaves like UNIX around dramatically reduces the joy of It’s a UNIX system! I know this!.

Now that I have the product of some weeks’ musings and readings out in a wall of text, I’d like to know if my interpretation of the situation makes sense to anyone else.
—

[1] Anyone who objects to most of what I’m talking about on the internet seems to be immediately, and ironically, accused of ad-hominem attacks on Lennart Poettering, so I’ll go ahead and address that – I’m interested in the philosophical situation, not the individual developers. Poettering has been a prolific adherent to the FLOS philosophy in words and code, so he naturally comes into the discussion, but this isn’t about him.

[2] I don’t mean to equate what Linux is turning into with Plan9. Plan9 is, frankly, more UNIX than UNIX in its philosophy and adherence to that philosophy. Plan9 was also very carefully, if you’ll pardon the pun, planned, while FLOS looks to be very much an ad-hoc effort, making a sort of Cathedral/Bazaar example (link is to a recent discussion, not the old ESR book).

[3] We’re talking about nerds and philosophy, so things are obviously contentious. FLOS suggests both “FLOSS” (Free/Libre Open Source Software) and “F*ing Loss”, so the reader is welcome to hear the value distinction they want in it.

[4] As much as I love Arch, some things really piss me off about the community, in particular that development decisions for some years have been made by insular cabals on high-volume mailing lists, and if anyone in the user community complains about decisions on the nice modern manageable BBS that looks like the center of the community, they are told they “should have read the mailing list a year ago,” while non-members of the cabals are angrily dismissed if they speak up on the lists.

[5] The merged /usr (which is an ass-backward name, btw) proponents point out that Solaris has been working this way for 15 years. Solaris has always been an odd duckling among the *NIXen, which eschews many conventions that most others follow. I for one find the hall-of-mirrors nature of the Solaris file system really fucking aggravating – last time I dealt with Solaris boxes, I was working on a system configured from a pile of directories and subdirectories haphazardly cross-linked between /etc/ and /usr/lib, which is absolutely fucking insane.

Since you appreciate typo spotting, a couple of more it’ses to itses: ConsoleKit and its successor, to retain its position.

This was very interesting to read, thank you. I wish more people could see these issues as resulting from conflicting design philosophies and would put their effort on improving implementations of their choice, rather than arguing that the other choice is inferior (let alone those ad hominem attacks against the other camp). I for one believe both designs have their uses, and that we shouldn’t be advocating either as the be-all end-all.

Elsewhere I’ve used the phrase “trying to make that kind of design fit in to Unix just makes both worse.”

That I think BeOS is one of the more beautiful pieces of software ever written makes the perfect contrast: it did a lot of things in un-Unix ways, but it didn’t claim to be Unix, and it didn’t subvert any traditional functionality. Far more importantly, what it did was transparent and consistent and elegant. Someone in the /r/linux thread brought up the BeOS API with its various *Kits (which I’m reasonably sure is what the FreeDesktop folks are trying to invoke with their naming, even though they have little in common) – in my opinion that was better than the conventional Unix way, because it was cleaner and even more powerful. To put that another way: shared libraries changed the world in ways that are hard to reconcile with parts of Unix, and BeOS found a nicer way to handle them than anyone else I’m aware of. They even had a nicely-supported compatibility layer for doing UNIXy things when you wanted to. (Can you tell I’m rooting for the Haiku folks?)

I’ve yet to decide where I stand on this. I’ve read a lot by the systemd team over the last 6 months, and many of the things they’ve accomplished are fantastic, but it’s such an all or nothing system and a bit difficult to interact with that I’m still unsure where I stand with it.

I worry that the core *nixes will be so fragmented in the next few years that advances in *nix software will need to be rewritten instead of ported. I don’t think that’s good for the community as a whole, but I don’t know how to address it either.

I think part of this has to do with many current Linux devs being big fans of OS X, so they tend to treat Linux like ReactOSX. They want a Mac, but they want it for free or without Apple’s strings attached. I think that’s one of the core problems that has created this desire for vertically integrated parts of the system that don’t play well with the unix philosophy. (systemd [and other things Poettering has written] share a lot with similar Apple technologies.)

I think Marcus makes a good point above (ReactOSX), and I’d like to broaden it to the discussion in the post: FLOS seems to be competing with non-UNIX-like systems. And they’re “fighting fire with fire.”

My understanding of the UNIX philosophy is that it transcends the competitive fray. Its purpose is administrators’ sanity, portability… freedom. It’s sound engineering that helps everybody – even competitors – because the goal is larger than the success of any singular OS.

Thing is, Linux systems generally no longer have administrators. Linux is likely already the most widely deployed OS kernel of personal and embedded systems. I’m assuming here that there are more Android phones than Windows PCs in use. Lennart P et al. are designing for the Android world, not for the workstations that stay put on top of their desks, have fixed network connections and either a professional administrator or a knowledgeable power user.

I can understand the points against what is called FLOS design in the blog post. At the same time, I think that the Strong Belief in the Unix Way makes people look at the System V style init through rose-tinted glasses. It has a *lot* of problems in current usage scenarios, where mobile machines with all kinds of detachable bits and pieces are a lot more common than workstations and servers.

Honestly, as far as I’m concerned, X11 is a crufty piece of shit, and if they do a good job with Wayland and it is feature-equivalent before it becomes hard to avoid I don’t have a problem with it in principle. I’ll certainly miss the network transparency (and yeah, it is something I use fairly regularly), but there are very good reasons that everything is trying to get away from X.

There is NOTHING in Wayland that is even remotely attractive. Just because it is new does not make it better.

The ideal step forward, as with all of the other problematic packages you listed, is to watch what the FLOS guys do well, then take those lessons, improve X11, and help move us to X12. The X guys have had a list of all the things that we can improve X11 with; taking all of these lessons and getting some funding to build the right solution is the right way forward.

I haven’t played with Wayland enough to have a strong opinion about it; most of my perspective here comes from the Maemo (with an X server) / Android (with SurfaceFlinger and Skia) contrast, where there is a tradeoff but no clear loser.

Do you have a link to the X developer’s wishlist? I’m sure I’ve come across something like that before, but it would make interesting reading to see how it is now.

I did a quick search for said document but the new x.org wiki is horrible and I cannot find it right now.

I remember seeing this document about 4 years ago; I will keep on looking to find it. I hope they did not lose it when their wiki crashed a while back – it was incredibly useful, as it painted a terrific road forward for X.

I am all for trying new ideas, it just sucks when we throw out the baby with the bath water.

I’ve been following Wayland development since it started; it’s not new, and it’s not revolutionary.

A good description of Wayland is “Take X11R7. Remove the bits that aren’t used by modern applications (anything that uses GTK+, wxWindows or Qt to render). Having done this, fix the problems in what’s left, including known design flaws that cannot be fixed in X11R7 without breaking compatibility with X11R6”.

In many senses, Wayland is X12 – it’s the same people as have been working on X11 implementations for the past decade, fixing the problems that they’ve not been able to fix without breaking X11 compatibility. Often, in the process of doing this, they find ways in which X11 is already broken, and no-one has noticed – if you followed X.org development, you’d know that the Wayland developers are significant contributors of bug fixes.

GNU = GNU is not Unix
Linux != GNU
Hurd = GNU
Linux = Unix Clone
Linux is a kernel that is pretty much Unix. Linux has claimed to be 99% compatible with Unix many times over.
GNU/Linux is starting to become a pain in the …

I am currently running Fedora 17. If I were to desire a more Unix-standard OS, where should I go? Please recommend a Linux distribution (preferably a root distribution – for example, Debian rather than Ubuntu) and also which BSD you would suggest for a laptop workstation.

I’m not all that intent on “Unix standard,” I just want transparency and composability.

Pretty much any distro is more “traditional” than Fedora, there are good things to be learned from Debian (about stability/staleness and portability), Gentoo (about the trade-offs for choice and tuning), Arch (about stability/staleness and definitions of simple), RHEL (in the form of CentOS or Scientific, I certainly don’t like what RH is doing lately enough to pay them – the lessons here are similar to what one used to learn to hate on Solaris) and Slackware (about the tradeoffs on the competing definitions of simple), but naturally nothing is exactly what I want… and I don’t think it could possibly be.

There really isn’t a BSD that works for my every day needs, because the software and driver support is just too lacking, especially to be fun on laptops. I think a lot of the things in NetBSD are beautiful (particularly the way most of their configuration is carefully designed such that the manual and permanent syntax is exactly the same), and a few of the things in FreeBSD are really interesting (devd makes interesting contrast for udev, they both have problems).

If only this were true. Want to run GNOME in the future? You need systemd – hard dependency.
Want to use udev? “Oh, bad luck, we don’t develop it further for use without systemd, eat it or die!”

The problem with FLOS is not that it exists – we couldn’t care less about just another init system. The problem is that the proponents want to force it on everyone else; any distro that does not want it is deprecated and deserves to die.

I think that the real problem is not that “any distro that does not want that is deprecated and deserves to die”. The problem is that nobody (yet) has questioned this seriously enough (i.e. with enough money, propaganda and developer talent behind him).

Yeah, I’ve been watching the Congress streams for the last couple years, I remember seeing that in 2011. I didn’t clue in to why it was such a shitshow until later.

I have a general respect for people who back up their ideas with code, even if I don’t like those ideas. I also don’t have a huge problem with, uh, “strong personalities” in the software community. Lennart manages to push the limits of both positions.

Lennart seems to take issue with anyone that doesn’t agree with his vision. If you read his Google+ pages it’s full of mockery and disdain for anyone that doesn’t immediately take his projects and declare them genius. I find this amusing because at the same time he seems to have a disdain for the tone of the LKML and is open about that. So apparently you’re allowed to mock people and be smug, but just being blunt about things like Linus makes for hurt feelings. His attitude certainly doesn’t help with the public perception of him or his projects.

The systemd mailing list is much the same (it reminds me of the PulseAudio mailing list) – several bugs that are reported are immediately blamed on other projects.

I actually read the systemd mailing list and follow his G+ feed as well.

I call bullshit on your accusations. The systemd mailing list is one of the nicest development mailing lists I’ve read. Furthermore, in his feed Lennart actually talks about specifics.

For your “several bugs are immediately blamed on other projects”: I’ve seen loads of submitted patches which have been applied. The only thing I notice is that it sometimes takes a week before they respond.

In any case, I think I’ve been specific enough. If the systemd mailing list is oh so terrible, then please take the last 10 threads and show that bugs are blamed on other projects. Should be easy enough, right? 😛

I am aware that there are some factual inaccuracies in what I presented. The problem is: I did my research and usually what I found was a serious lack of information. Here’s a suggestion: Try to figure out what ConsoleKit really does. Or PolicyKit. There’s absolutely no documentation on what they do on a system administrator’s level. Oh yeah, there’s the API documentation, but that is even thinner in information density than the MSDN docs on Win32 API security attributes.

The example I pulled with several redundant conversion steps between PulseAudio and GStreamer was an extreme one. But it can be reliably triggered. All it takes is a buggy ALSA driver (IMHO the whole Linux audio architecture development is a fast-moving trainwreck watched in slow motion).

But so far nobody has sent me an erratum showing where I was completely and utterly wrong. It would be nice to get such feedback, though, like “on slide … you stated … while on the contrary it’s …”. If somebody points out errors on my side I’d put that on my blog – unlike other people, I actually enjoy admitting when I was wrong about something; after all, it means I learnt something (new).

For example, when I wrote about my concerns about Wayland, even the core developers themselves admitted to me (on the Wayland mailing list, BTW) that I pointed out some sore spots that need(ed) to be addressed.

PolicyKit authenticates and authorizes dbus calls on a per-user, per-call basis, allowing for fine-grained control of who gets to do what.

And ConsoleKit provides a dbus interface to assign ownership of a seat to a user for the length of a session. It’s basically a more universal, more powerful, login(1).

The key thing here is that ConsoleKit and PolicyKit are structured around dbus. Once you accept that dbus — not files, pipes, or sockets — is the central integration tool for Linux, everything else falls into place. Including systemd which is a general event-driven dbus-based system management framework.

Wow, this really covers a lot of my complaints with modern Linux systems. I’m working mostly on Lubuntu now, but with systemd’s proliferation there’s just no escaping the scripting-unfriendly nature of the environment. I’ve been a Unix fan since 1984, and this ain’t the Unix I remember. (Or the one that I want.)

I do appreciate a system that works well as a whole, but to throw out so much of what I like (the CLI) to obtain it: no way. I fled back to FreeBSD for some of those reasons, as well as others (like ZFS and DTrace).

Anyway, thanks for addressing the deteriorating state of gnu/linux as unix.

People who think that Ubuntu is relevant and Debian less so as a result simply don’t understand the last 10 years of Debian community history at large.

Roughly speaking, until Ubuntu came along, Debian was suffering not just from the kind of stupidity we see from the Unix Haters Handbook types like Lennart; the project was suffering from being overrun by newbies that really had no clue or understanding and no intent to learn or gain understanding.

Before I get too far, let it be said that there is nothing wrong with the Lennarts of the world giving us the finger. It’s all good, they are free to explore and find a solution that works for them, I support their freedom and I respect that they are speaking with code. Likewise, no one really blames newbies for wanting to stay newbies and have everything sort of just work for them with no effort on their part. Newbies staying as pure lazy users is totally cool and fine. The problem is that both of these communities created so much noise and hate that it created the idea that Debian was dysfunctional when the project was working really well.

What happened around the time that Ubuntu came along is kind of interesting, actually. I was not sure what to make of Ubuntu at first. What actually happened was a ‘fork of the community’, not a fork of Debian (although it is now that as well). The Ubuntu project basically pulled all the noisy assholes out of the community and into this new project where they were GOING TO FIX EVERYTHING THAT IS WRONG WITH THE UNIX WAY. You could see this right away in the #ubuntu channel on freenode, and you see it now metastasizing as really dumb things like Unity.

Well, as we have all seen, they have fixed nothing really. Most of the early Ubuntu users went to Mint and what remains is a large base of know-nothings. The Unix haters went who knows where, but I don’t see too many hateful posts in the various Debian community areas these days. Somewhere along the way the Lennarts of the world ended up at Red Hat and Ubuntu as well as various other projects, and are experimenting with new and sometimes interesting ideas.

The reason Debian is more relevant today than ever before is that in being stable, in being steadfast about free software, they are providing a stable base so that we can breathe a sigh and give ourselves time to analyse all of the ideas out there. The Lennarts are trying to solve a real problem. The problem is they are simply solving it the wrong way. The newbie communities are also trying to solve a real problem; unfortunately, they can really only consume new ideas and complain when they don’t like what they taste.

Debian is more important and more relevant than ever SIMPLY BECAUSE the community is now far more focused, with a lot less noise from communities that, in the end, deserve to be in their own communities.

You have done a masterful job articulating the vague thoughts floating around in my head for many months — and from the other comments here, it looks like several others have these same thoughts too.

So — while leaving aside the debate of the pros/cons of FLOS … can both FLOS and traditional Unix-style userspace (“ULOS?”) continue along in parallel, allowing time to tell which is “better”? Or could/will the momentum of FLOS force technical changes that will make it infeasible for the Linux ecosystem to maintain a parallel traditional userspace?

Partially it is the friction of well-funded support for bad ideas rubbing against unfunded, well-reasoned ideas; partially it is the frustration of watching all of these conversations and distro gatekeepers be dominated by 12-year-olds whose only mantra seems to be “if it’s old, it sucks.”

I do think that one important thing to remember here is that it probably isn’t an all-or-nothing situation. One can look at each of these terrible projects, splay them out, and look at what has been done right and what has been done wrong. The Unix haters are trying to solve things they consider problems. Perhaps there are good ideas we can keep; throw out the bad and rework the rest into something good.

I really have no idea how it’s going to settle out. An awful lot of funding and developer resources have been redirected into FLOS, and it pointedly doesn’t play well with others, so it will likely be around for a while. Unix has survived more than 40 years, and the BSDs (and to some extent a few of the Linux distros) are still pointed that way, so it isn’t going anywhere either. We may even (this is really likely) have a situation where there just isn’t a terribly cogent complete Linux stack for a while, because everyone is dependent on some parts of each.

I’d be much happier with a legacy-free OS (like Haiku) and a more traditional Unix as the dominant breeds in the free software ecosystem, so the border was better defined and both communities were more free to do the right thing under their model then cross-pollinate after the relative merits have been evaluated. Also if everything were under share-alike licences so it could all cross pollinate, but that is very much personal preference.

[troll]
Universal Linux distributions (like Debian and Gentoo) that want both GNOME and, say, Nginx are what prevent ULOS and FLOS userspaces from existing in parallel. Dear world, please just accept the fact that there are going to be two GNU/Linux-based operating systems, one ULOS-based and one FLOS-based. Don’t try to build distributions trying to be both.
[/troll]

Please consider choosing some other acronym that doesn’t increase the existing level of confusion.
* OSS – Open Source Software, a development system
* FOSS – Free Open Source Software[*1], a development system *and* a license. Because so many people can only ‘think’ of one meaning for “free” (only concerned for their wallet), the term FLOSS is employed.
* FLOSS – Free Libre Open Source Software

Linux is just a kernel, as is kfreebsd and HURD. Kernel development and OS development have as much in common as “specific” and “The Pacific”.

systemd says *nothing* about GNU/Linux. Feel free to ignore Fedora, you probably should. Try insserv and a sensible, Universal, OS 😉
I seriously don’t understand the obsession some folks have with this “only one way of doing things”…

I just made up a term to avoid the post having a “Those assholes” tone to it; I agree that it is terrible to overload an already confusing pronunciation. It wouldn’t have gone any further than this one post, but then there were many thousands of hits in the last 24 hours… I think this is how most unfortunate terms come into being.

“Tightly Integrated”? As far as I can tell, the philosophical difference here amounts to “do you want highly scriptable, shell-oriented, traditional UNIX tools, while often sacrificing tight integration”, or “do you want tightly integrated tools that fit together really well as long as you don’t try to pick and choose and don’t care about shell scripts”. Upsides and downsides to both, but I think “tightly integrated” nicely contrasts with the “loosely coupled” traditional UNIX mindset.

I think you should’ve tried to understand systemd before attacking individual pieces of it. If, for example, I take this excerpt:

… what many of the things Systemd does are – it doesn’t do one thing well, because it does fucking everything . It does init’s job. It does inetd’s job. It integrates a bootsplash mechanism. It talks over d-bus. It is configured with a pile of logic-less configuration files and binary tools.

Do you really think that a program XYZ is not implementing function ABC “well” if program XYZ at the same time implements D-Bus? Are all programs that use D-Bus therefore not done “well”?

Also, “pile of logic-less configuration” does not sound value-neutral to me. Maybe this is because I’m German and my English isn’t good enough, but “pile” has for me a similar connotation to trash, garbage, chaos. These non-Turing-complete configuration files are, from my point of view, much less chaotic than what they replace.

Finally, systemd is not configured by binary tools. If you really think that something like “systemctl enable foo.service” is configuring via a binary tool, then all of Linux is configured with binary tools, e.g. “vi”, “cat”, etc.

I’m wondering how well you know the components if you write:

The components of FLOS communicate over D-Bus rather than sockets and pipes.

Don’t you realize that this is a contradiction? D-Bus uses sockets (or FIFOs) for communication; it “just” puts a protocol format on top of them, and allows not only app-to-app communication but also app→D-Bus-daemon→app communication.

So, all in all, from a technical point of view I was very disappointed by this article.

It would have been better if you had really shown what part of systemd isn’t implemented “well”. For example, is it the dependency handling of the jobs? How systemd detects which PID belongs to which job? Is the D-Bus protocol not well thought through; do you miss something? A statement like “doesn’t do one thing well” without facts is just FUD. Also, if I find just one thing that it does well, your sentence becomes false. As the meaning of “well” isn’t well defined, your statement is therefore, for many people, false from its conception. 🙂

I think you didn’t get the meaning of the UNIX mantra “Do one thing and do it well”.
It does not mean that systemd can’t do anything well because it does so much; it means that systemd is contrary to the UNIX philosophy because it does so much.

It’s indeed easy to misunderstand what is meant by this blog. The technical reasoning written by pappp is so high-level that it’s hard to apply to things like systemd. This blog doesn’t really tell you how to improve systemd.

What is really hard to grasp from the blog is that, I think, pappp is not saying the problems systemd solves are solved badly. Systemd probably does its job and is functional.

I do fully agree with the points made by pappp, namely that the solution chosen introduces a lot of new problems.
New problems introduced are not easy to describe; it’s like telling a software developer he needs unit tests. How do you explain to him that his software will be better if he writes them?

People who have been around for a long time have learned the hard way that one tool that does a lot will eventually be replaced with something better, smaller, and more focused.
The same goes for all the other hard choices systemd makes that go against the UNIX way.

My favorite point is the (kind of true) statement:
“Another thing we can learn from the MacOS boot-up logic is that shell scripts are evil. ”
leading to the conclusion that:
“Most of the scripting is spent on trivial setup and tear-down of services, and should be rewritten in C”

The C language is probably the single choice that is worse than shell scripting. Sure, it’s faster, but it’s much less accessible: it requires a compiler, and it requires a lot of non-trivial training. There are dozens of other choices (busybox, Python, etc.).

“The technical reasoning written by pappp is so high-level that it’s hard to apply to things like systemd.” I’m guessing that was sarcasm? At any rate, claiming that something is “high-level” does not excuse the blatant factual errors.

Blatant factual errors do not negate the relevance of the idea(s) communicated by this article.

People seem to think that if they can find one factual error in a thesis, the entire thesis is invalid. If I tell someone “Move because a squirrel is about to hit you”, and they retort “Squirrels are not found in this area”, then the car that I failed to correctly identify will still injure them…despite my mistake.

Is it possible that the Linux user and developer base has grown so enormously in recent years that there is enough support for the development and maintenance of several competing solutions? Or are some people afraid that there might not be enough manpower to support several solutions in parallel?

There already are quite a number of userlands built on top of the Linux kernel – most of the desktops are running GNU/Linux (w/ GNU libc and udev), most of the appliance-like devices are running Busybox/Linux (w/ uClibc and mdev), and most of the mobile devices are running Android (w/ Bionic). I actually rather like this situation despite the duplication, because the userlands are well optimized for their intended use cases. The FreeDesktop folks seem to think they are working toward a unified userland, which accumulates all the baggage of all the use cases, getting in the way of what you want to do, and because it is “unified”, they justify integration up and down the stack, which makes any one piece difficult to extract. Coming back to my least favorite rationale for decisions: who cares if your router or server or personal laptop supports multi-seat interactive logins? But now it has the warts that come with making that work. Maybe, MAYBE there is a case for that in entertainment devices (nope, everyone I’ve ever seen either shares their accounts or is doing demanding things with the system that take the whole machine over anyway) and multi-personality phones, but I’m dubious.

I am a Linux user, not a developer. I contribute to the wikis of Debian and Arch.
Recently I moved to Debian. The community is far friendlier and more welcoming to newcomers than Arch Linux’s. I did not like the software direction Arch recently took, and the iron-fist attitude the mods show in the forums is certainly not welcoming. Arch Linux is not a community distro at all. The AUR is a nightmare, with very low package quality by any standard. It is a bunch of packaging work done by 60-odd devs, and they basically don’t want any kind of discussion by the users about the technical design.
They generally don’t fix any bugs; they just direct you to deal with the upstream.
They don’t want to maintain much of the Arch-specific stuff like initscripts.
It’s just a my-way-or-the-highway attitude in Arch.

When it comes to Debian, it has a balance of new and old stuff, and it values the users more than anything else. Fedora, meanwhile, is mostly just Red Hat’s beta platform, and the Arch Linux devs think they can just piggyback on Fedora’s work.

This is a lot harder on the Arch devs and community than I would be; you are just describing side effects of The Arch Way (like all such documents, it has seen bitter arguments of interpretation, particularly over the definition of “simple” and the precedence of its parts).

Sometimes (and I will admit that this happens distressingly often) the “learn by doing” philosophy gets really inappropriately aggressive, but Arch is, in part, conceived as targeted at the DIY audience, so the answer to “Why isn’t it spoon-fed like on Ubuntu” is “because that isn’t who Arch is for.”

The AUR is community space — if you happen to want some weird piece of software, or an unusual configuration, and have figured out how to build it, the AUR is a place to share. Thinking of the AUR like a curated repository on another distribution is entirely the wrong idea; it is somewhere between a repository and a forum for sharing software, which works in part because PKGBUILDs are so much easier to deal with than most packaging systems, and Arch’s user base tends to be more technically inclined (see previous point).

“Deal with it upstream” is also (usually) a feature in both directions: Arch by policy ships exactly as upstream, which is good for users because it means no weird distro-specific behaviors (which are often hard to override), and good for developers (then users) because it means there are users running exactly the software they distribute, so there are uncontaminated bug reports coming back, and thus bugs get fixed upstream for everyone. This breaks down a bit when you have obstinate developers (see the udisks2 example – iirc Debian patches around the /run/media/$user/$label bullshit because they 1. have standards documents that contradict it, and 2. realize that optimizing for multi-seat graphical use cases over single-seat command line use case is completely insane, and the developers refuse to make it optional in mainline) but is still generally good policy.

Yes, great post and good explanations.
Arch is going the wrong way :( I had to reinstall a broken Arch system but gave up on Arch’s ISO (the installer is gone now!?), which is ever more user-unfriendly. That is a step down the ladder in my mind!!
That’s why I love to install OpenBSD: one simple script that works 99% of the time.

That pretty nicely summarizes what I’ve been thinking about Linux for the last few years. Much better than my “BSD is for people who love Unix. Linux is for people who hate Microsoft.” I don’t get the feeling that Apple is abandoning the Unix philosophy quite so much as the Linux people are, but then I actually choose to use Macs, whereas I’m forced to use Linux.

It’s unfortunate that the article is full of disdain and inconsistencies. I’m surprised just one person took the time to write about that.

Frankly, all that this article achieves is a further schism between people who do not understand the design choices made by the udev/systemd team, and others.

Nothing in the article brings any criticism to the systemd development team (or any developer for that matter) that can be used to enhance it or make development for others more accommodating (by e.g. making it easier to only use parts, or stay compatible).

I’m reminded by this article that even among software developers, people tend to think “blue” vs. “red” and stop being open-minded. This article certainly displays many of the same aspects of arguments you will find between conservative and progressive groups that clash. Political smear? Check – Lennart’s Wikipedia page gets vandalized. Misrepresentation of facts? Read the above article. I’m just waiting for someone to mention “but think of the children” or “if you’re not with us, …”.

This article actually convinced me that the arguments against systemd really are mostly gut reactions against change, from people who don’t know what it involves, since the arguments were so vaguely written that I had to look up the facts for myself (“D-Bus instead of sockets”, hehe, that was the best one).

I’ve recently converted my two home Arch boxes to systemd, with no hitch at all. Yes, I had to actually learn something new, but once I looked at the .service files, I really understood some of the reasons for using systemd. The way daemons are dealt with in sysvinit was just horrid; PID files and forking are such a hack. If you want to be able to kill a PID safely, the only safe way is to be the parent of the process; otherwise, Linux gives you _no_ guarantee that the PID won’t be reclaimed. Systemd does this correctly, by letting you execute a process without forking it off, keeping it as a child of systemd until you need it stopped/restarted. Seeing as it’s even so much simpler for daemon developers to write code that doesn’t need the PID+fork mess, I don’t understand why we didn’t have this much earlier. I’ve been running personal daemons and development servers in screen sessions on startup, for dog’s sake, not knowing I was working around the lack of something like systemd. Now I can create a daemon running as any user I want, to run on startup or whenever I choose, with 7 simple lines in a plain text file:
[Unit]
Description=My development test server
[Service]
User=test
ExecStart=/home/test/testserver/run
[Install]
WantedBy=multi-user.target

Then, to enable it on boot, either just “systemctl enable” it, or, if you don’t trust those scary binaries, simply create a symlink in /etc/systemd/system/multi-user.target.wants/.
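To make the no-scary-binaries route concrete, here is a rough sketch of what that symlink amounts to. This runs in a scratch directory purely for illustration, and the unit name is made up; on a real system the paths live under /etc/systemd/system.

```shell
# Sketch of what "systemctl enable" effectively does: drop a symlink into
# the target's .wants directory. Done in a temp dir here for illustration.
demo=$(mktemp -d)
mkdir -p "$demo/multi-user.target.wants"
touch "$demo/testserver.service"
ln -s "$demo/testserver.service" "$demo/multi-user.target.wants/testserver.service"

# The link's target is the unit file itself.
link_target=$(readlink "$demo/multi-user.target.wants/testserver.service")
echo "$link_target"
```

No D-Bus, no binary tools, just a filesystem link any shell script can create or remove.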

I still haven’t seen an actual argument against systemd. And “It’s not UNIX” is not an argument. And “it doesn’t use sockets” or “it’s monolithic” is just plain wrong.

(reposting here) The one bold piece of text in my original post is “I’m not sure that this is a bad design, but it is most definitely not UNIX or anything like it.” right after I define UNIX as aligning with Gancarz’ 9 precepts, then work by points. Somehow, Lennart (and about 1/4 of commenters) automatically read that as “Everything the FreeDesktop folks make is terrible and everyone should hate it, also the developers are dumb.”

I wrote that because I’m really not sure. There are clear tradeoffs, and it’s a question of relative values and use cases. Differing ideas about “What UNIX is” was the whole point.

And it *is* one of the most popular arguments people throw against systemd – the idea that it’s anti-UNIX in terms of design.

But that’s somewhat a matter of interpretation – Lennart has commented before that “do one thing and do it well” is exactly why systemd is assimilating functions like cron/at. Starting other programs is the most fundamental thing systemd does, and if it does it well, why have other services that just duplicate that same core function?

Is it really the UNIX way to have many different daemons, depending on whether you want to run something on boot (sysvinit), on connection (inetd), as a one-off scheduled task (atd) or as a recurring task (crond)? That’s a lot of redundancy, and a lot of different tools that need to be learned, compared to one daemon that runs stuff based on any of that set of conditions.
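As a sketch of what that consolidation looks like in practice, a recurring cron job maps onto a service/timer pair. The unit names and the script path here are made up for illustration:

```ini
# backup.service -- what to run
[Unit]
Description=Nightly backup job

[Service]
Type=oneshot
ExecStart=/usr/local/bin/backup

# backup.timer -- when to run it (roughly "0 3 * * *" in crontab terms)
[Unit]
Description=Run the nightly backup at 03:00

[Timer]
OnCalendar=*-*-* 03:00:00

[Install]
WantedBy=timers.target
```

Enable the timer and the same daemon that handles boot-time startup also handles the schedule, with the same logging and dependency machinery.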

I’ve tried reading your article, but it’s a bit awkward to figure out what your point is. You talk for a long time about ConsoleKit, while that is deprecated. You talk about /run and how it was forced through, while it was actually agreed upon between loads of distributions. You overload an existing acronym (FLOS), which just confuses things. You then make all kinds of claims which are just wrong. Nothing is really factual or accurate.

e.g. “FLOS prioritizes ease of machine manipulability over human manipulability.”
You mention systemd, but that is very easy for any person to change, and those changes can also be scripted. It just does not make much sense.

I suggest actually explaining what problems you face instead of being vague. Also, when talking about actual problems, please do your research. Meaning: it’s a bit pointless to talk about ConsoleKit, for instance.

I’ve been noticing much the same for a number of years now; thanks for voicing what I have been thinking. Personally, I don’t have a problem with it — while I think the UNIX philosophy is excellent, the ways people have been using and want to use Linux have changed, and it’s not inappropriate for the architecture of Linux to change with them. Whether or not what you refer to as FLOS is the best way to go about it I can’t say (it has been an overall positive experience on my lone desktop computer, but that’s hardly an exhaustive analysis), but the basic shift away from UNIX design principles isn’t inherently bad, from my perspective. We’ll see over the next few years. Who knows, maybe the BSDs will get an influx of new developers as people jump ship from Linux.

I strongly disagree that D-Bus is incompatible with the nine bullet points of UNIX philosophy you lay out. You make other points which are solid, but much is wrong with your point of view if it hinges on D-Bus not being a loosely coupled, scriptable interface, and you owe it to yourself to human-up and equip yourself to deal with this new kind of loose coupling you seem to disdain.

In other words, Udisks2 is not meant to be used with scripts or via a textual interface; you communicate with it via D-Bus.

Congratulations on this somewhat older interesting post making the rounds again: certain to be a touchstone for many. A pity you don’t have a better appreciation for how UNIX’y it is, but still a great post, thanks.

There are D-BUS command-line tools, and even an implementation of a D-BUS client that runs inside a Unix shell. Everything with a D-BUS interface therefore also has a command-line one. I’ve been driving Linux desktop services from the command line for years now. Given the state and track record of the UI, who wouldn’t?
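To make that concrete, here is a guarded sketch of driving the bus from a shell. dbus-send ships with the reference D-Bus distribution; the guard is there because the call only works where a session bus is actually running, so this is an illustration rather than something that works everywhere.

```shell
# Ask the session bus daemon itself which names are currently registered.
# Falls back to a placeholder where no session bus is available.
if command -v dbus-send >/dev/null 2>&1 && [ -n "${DBUS_SESSION_BUS_ADDRESS:-}" ]; then
    names=$(dbus-send --session --print-reply \
        --dest=org.freedesktop.DBus /org/freedesktop/DBus \
        org.freedesktop.DBus.ListNames) || names="(bus query failed)"
else
    names="(no session bus available)"
fi
[ -n "$names" ] || names="(empty reply)"
echo "$names"
```

Every D-Bus service is scriptable this way; whether it is as pleasant as a pipe is a separate argument.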

Yes, parts of D-BUS are terrible, but the problems it solves exist whether D-BUS does or not, and the alternative was implementing new IPC methods in the Linux kernel (or, ironically, having something like systemd running to do socket-triggered daemon spawning). Apparently you have to ship hundreds of millions of machines running patched Linux kernels to make a new kind of Linux kernel-level IPC happen, and it still takes years. Even if you are successful, just about everyone who wants software but not Linux will hate you and use something that runs in plain Unix userland instead…like, say, dbus-daemon.

The *Kit architecture is far more Unixy than what it is replacing. HAL was an abomination that tried to unify everything users might want to do with devices in the majority of use cases in a single daemon, which is a patently absurd idea. Replacing it with a bunch of distinct and purpose-focused D-BUS services (which can, at the implementer’s option, be implemented by one daemon, several daemons, or a lot of tiny daemons and shell scripts) is at least a better thing than the immediately previous thing. The *Kit approach gives an administrator the ability to kill the thing that messes up device permissions without killing the thing that lets non-root users mount and umount removable media and network filesystems, or vice-versa, as requirements and preferences dictate.

My recurring complaint is that the implementations of these brave new services tend to have lots of high-impact bugs. Just off the top of my head, I’ve had to disable or forcibly prevent the installation and invocation of fam, gamin, hal, upowerd, and udisks-daemon over the years, because each one had some easily-triggered bug that ruined an otherwise working system while providing no useful capability.

upowerd’s failure mode where it maximizes CPU power usage was particularly ironic. Not only did it consume egregious amounts of power, it also broke previously working ACPI sleep button behavior. I find myself asking, “Why?” To tell userspace the system was about to suspend? Properly implemented, suspend would be instantaneous, and userspace would receive only a single D-BUS message to notify it that the system had been asleep and was no longer.

I am usually introduced to these new services when I discover they are the root cause of some big-picture failure of a machine to do the job it was built for. This makes me extremely unsympathetic to the original developer’s intent. Instead of reporting the bugs or submitting fix patches, my least-effort response is now to apply malware countermeasures against the packages that present the problems. If others act as I do, nobody will volunteer to fix the default configuration because everyone capable of fixing the broken defaults will simply avoid them instead. I know more than a few capable fixers who have defected to Mac because of this, even in the face of considerable evidence that this decision isn’t rational. This does not bode well for the long term viability of Linux.

pulseaudio and systemd would be in that malware list except that they also implement features I actually use. This is where the lack of severability of functions becomes most painful. I have to run pulse under gdb all the time so I can debug its individually rare but collectively frequent SEGV bugs. I sabotage any attempt by systemd to use the cgroup filesystem, because systemd uses cgroups for evil, and I can’t be arsed to understand why the config file settings that Google and Lennart tell me implement correct behavior do not.

There are great concepts in FreeDesktop, like the XDG directories for managing user directories like Music and Video. Software that honours this specification integrates more consistently with the system. It’s a good feature: I want to download some file? Then it will go, by default, into the XDG Download directory, and all applications should do that.
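A quick sketch of how that looks from a script. The xdg-user-dir tool ships with the xdg-user-dirs package; the fallback branch is my assumption for systems without it:

```shell
# Resolve the user's Download directory per the xdg-user-dirs convention,
# falling back to a conventional default when the tool is absent.
if command -v xdg-user-dir >/dev/null 2>&1; then
    download_dir=$(xdg-user-dir DOWNLOAD)
else
    download_dir="${XDG_DOWNLOAD_DIR:-$HOME/Downloads}"
fi
[ -n "$download_dir" ] || download_dir="$HOME/Downloads"
echo "$download_dir"
```

Any application or shell script that resolves the directory this way agrees with every other one, which is the whole point of the spec.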

Another really important FreeDesktop thing is the menu specification. Without it, it is awful to manage .desktop entries for each desktop. With that clean specification you can write a menu entry for your application and it will work under GNOME, KDE, and Xfce.

PackageKit isn’t a bad thing either. There are too many Linux distributions out there. With that specification, we can write one single package manager and then write backends for apt, urpmi, BSD ports, whatever. It’s not bloat, it’s not ugly. Why should we rewrite a package manager for each distribution when we can just write a backend?

There was a similar problem with computer devices. HAL wasn’t much loved, but the concept is great too: you can mount devices from ANY operating system (BSD, Linux), and the desktop doesn’t need to rewrite a device access layer.

I tend to agree. Many things have been bloated up or complicated, and have replaced older functionality that was a lot simpler and just worked. Systemd and Udisks2 are great examples. Not only are they troublesome, but they keep changing, and cause other issues with other software. Thankfully a lot of the core Unix functionality is still there. But with the pace that everything is changing, I am hoping that distros will keep to the simpler methods.

Echoing Scott’s criticism, this is a rather interesting article that tries to discuss an important issue that sadly I feel may ultimately be flamed or ignored by many all because of one single rather poor choice of abbreviation.

I mostly use LaTeX and a text editor. Different markup language, same workflow. So much less aggravating than fighting with an overgrown text editor that has a shitty typesetting system bolted on.
(Clever form fill there, although as I understand it, he kept using the “dmr@bell-labs.com” address until he died.)

I was a bit lazy, but I also wish to share some thoughts: yes, I am experiencing the same issues with the latest Linux systems. I especially dislike that, because the BSDs do not use D-Bus, many functions in window managers don’t work there. I also find it ridiculous that what used to be pretty easy and simple is now just overcomplicated, allows more possibilities for failure, and makes tracing issues a lot more complicated.

On the other hand, I enjoy using my Xubuntu and LibreOffice; luckily LibreOffice started cleaning up its code, and now it works much better on older machines as well (I had a T60 until this year and LO was really nice to use). I really hope others will follow the same approach of simplifying and cleaning code, so we can get rid of many unnecessary things.

P.S.: Someone described systemd as a bloated idea. I tend to think of it a bit differently. It solves some problems during startup, especially by NOT using shell scripts but its own method instead, which would allow the integration of systemd into any UNIX-like system. Of course I don’t have much experience with it, but I see some portions being useful.

I completely agree with your points. Including that it is not really clear whether the FLOS direction of development will be beneficial or not. Perhaps it will be, I’d be really sad if all that effort that has been put in it was wasted. On the other hand it is really amusing to me how the FLOS developers and advocates feel the need to criticise those who would prefer to keep the UNIX philosophy.

Ah, and you forgot to mention that the *Kits always get broken. I used Debian Unstable for years until this *Kits crap began. Now I have decided to stay with Debian Stable, because at least I know the *Kits will get broken only every two years.

Ah, and BTW, Debian Wheezy works awesome with init instead of systemd (and boots much faster than systemd if you use readahead-fedora or e4rat).

The argument is whether or not preserving the UNIX philosophy is more important than adding features and making ease and performance gains for specific use cases, not about systemd in particular.

“There is nothing more gray, stultifying, or dreary than a life lived inside the confines of a theory.” –Jaron Lanier

The success of Windows — and indeed the triumph of Mac OS X over any other user-facing Unix — seems to suggest that when it comes to developing modern applications, the developers of today adhere to a different philosophy:

1) Build components that are designed to work together as an integrated whole. For example, Microsoft Office, or the integrated applications in a GNOME desktop.

2) Each component should do one thing and do it well, but that one thing can be a relatively abstract concept. For example, systemd starts and manages background tasks — throughout the whole system, not just at startup.

3) Don’t repeat yourself.

4) It’s better, from a security standpoint, to confer the minimum amount of permissions and capability possible on any component or service while still enabling it to do its job.

5) Components should communicate according to well-defined APIs that are known by both endpoints of the API call. D-bus is eating all other forms of Linux IPC for a very good reason: it takes all of the ad-hoc parsing and guesswork out of IPC. It also brings Linux closer to Windows in terms of flexibility and power. Windows COM automation was doing things that Unix users could only gawp at in envy during the 90s, and inspired Miguel de Icaza’s 1999 call to rethink the primacy of Unix philosophy which was titled “Let’s Make Unix Not Suck”.

6) Components should use well-defined, universal file formats: for example, XML or JSON. In an open-source environment, binary file formats are to be preferred if they have a clear advantage, the binary format is well-defined, and the tools for handling it are open source. Systemd’s much-groaned-about binary logs are an example; they have tampering-attestation features that plain-text logs do not, and can be easily perused or searched with journalctl.
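As a concrete illustration of that last point, the binary journal stays perfectly usable from a terminal. This is a guarded sketch; journalctl obviously only returns entries on a machine with a readable systemd journal:

```shell
# Pull the five most recent journal entries as plain text; fall back to a
# note where there is no readable systemd journal on this machine.
if command -v journalctl >/dev/null 2>&1; then
    recent=$(journalctl -n 5 --no-pager 2>/dev/null) || recent="(journal not readable here)"
else
    recent="(journalctl not installed)"
fi
[ -n "$recent" ] || recent="(empty journal)"
echo "$recent"
```

The format is binary, but the interface to it is still a filter you can pipe through grep.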

Like it or not, this philosophy dominates the hearts and minds of today’s developers. It has also proven more up to the task of developing complex, large-scale applications where security is a concern. So Linux has to adapt to it or risk marginalization. Traditional Unix, as a design philosophy, is dead. And systemd is a fait accompli; most of the major distros not made by Canonical have switched or are considering a switch.

Definitely agreed here. If I wanted a system for being a consumer and not a producer or developer, I’d use MacOS or Windows.
For a server, UNIX rules. Nothing else even comes close.
For embedded/realtime : UNIX mindset works ideally. Keep it as simple as possible and no bells/whistles unless explicitly designed in.

For UI:
command line is mandatory for development work. Not being able to communicate with devices over command line means it’s difficult to test code effectively.
Windowed environments are nice but sometimes expensive.
Touch interfaces are at this point consumer-only.

Had this in my “to read” file for ages till I finally got around to it. Wish I’d written most of it — great job writing up thoughts that’ve been percolating in my head while struggling with the platform I love seeming to go sideways, backwards and forwards at the same time. I don’t quite know how you got in there though, and assuming squatter’s rights don’t apply you can expect to receive an invoice for back rent soon.

My feelings exactly. I do not think this train can be stopped, however. Not since Fedora bug 534047 https://bugzilla.redhat.com/show_bug.cgi?id=534047 and the ensuing discussion. (TL;DR: it turned out FC12 was suddenly allowing console users to install signed packages without prompting for the root password. The developers responsible for the change argued endlessly that the change was for the best, and that the legacy Unix mindset was preventing them from doing cool things.)

It’s “Because of the mouldy figs we cannot be kewl” and “We should not be afraid of complexity” (these folks have never heard of the Blob antipattern http://sourcemaking.com/antipatterns/the-blob). Too bad Redmond was not hiring more coders, or the problem would have been solved.

What will probably stop this move to turn Linux into Windows is the growing irrelevance of the Linux desktop, which will turn these developers’ sights toward mobile development. Then the server people and saner attitudes will be given a chance.

(I didn’t know that systemd shared authorship with PulseAudio. Now that I know, I will have to ponder a platform transition away from Red Hat for my servers. Although it would have to be to some *BSD*, as I am not very partial to the Ubuntu vagaries either.)

OK, while I buy the fact that systemd is giant, I just recently tried it on a fresh Gentoo install on an old laptop. It boots in half a second. If I load an older Ubuntu, Gentoo (with OpenRC), etc. on it, it takes 4-6 seconds.

I’d like my init system to be simple, but if it’s technology or code from 1965, I’ll try something new. It only took 30 years for init to work out its bugs; I dealt with plenty of init bugs over the years.

Most of you sound like a bunch of grumpy old men (my guess is you’re probably old and balding, with grey ponytails; we have a few at work, and while their experience counts for like 10%, their inability to think outside the box, try new things, and innovate is depressing).

Why don’t all of you who are complaining contribute code, fix bugs, and suggest new features and ways to implement them with init, or really just keep quiet? init is provably slow and lacks modern support, so each new piece of hardware needs 10 hours of custom shell scripts to get working. Wake up, it’s not 1992 anymore. At least the systemd guys are making an effort, and in the UNIX world there will always be purists with loud mouths and nothing to back it up but more talk.

I wish I hadn’t read your essay. Do we want to go back to the ’90s and spend hours bringing up graphics cards for X in Linux or FreeBSD? Well, actually that would be easy. (Meanwhile I go to work and cuss everything about Windows and their super crazy Office suite. Actually, I do this out loud; it apparently upsets my coworkers. Gee, what’s not to like about the I-remember-your-user-name/Switch User feature, four-week password changing, wasting paper to get the printout to look like the screen, reading heartfelt “reply all” emails, the “x-ribbon”, and working with people who store everything on the desktop? Some folks seem upset that they can’t just click a “remember my password” button on the login screen. Okay… it’s so absurd… it’s funny.)

So, I was muddling along with Debian at home on a tiny little laptop my son gave me. But then, no, frustrating as Debian was, that was too easy; I needed to try Fedora 20. God, I wish I hadn’t done that. I’m tearing up… okay, I’m all right again. It took me a month to see that they have completely lost their minds in the last ten years, in ways Oracle/Solaris can only envy. Had enough of GNOME 3’s let’s-act-like-Windows-in-every-possible-way (especially by hiding everything and making it a big secret)? Poor KDE, which is broken in sad ways but at least provides configuration tools, some broken, some not? Now I’m on Slackware 14.1 with Xfce 4, building applications using autotools, with Qt everywhere I turn. And I have come to pity Patrick Volkerding, and admire him too. I’ve killed ConsoleKit, I threw away NetworkManager, and I’m on the hunt for anything Mono/CLI-based; anybody silly enough to stick with that can’t possibly write decent code. Okay, so I won’t go there… language silliness is what it is. Even pkg-config seems just stupid. Am I supposed to build everything as root before it can see my new libraries in /usr/local/lib?
Yes, I fixed the config file in /etc… Also, /run/media was already addressed, but what was wrong with /mnt? whatis and apropos are still around, thank god, although they are ignorant of a lot of things running on my system. They’re no help as to why three instances of udevd and dbus-daemon are running. And why on earth would someone port VNC to Linux with X on board? Why do I even want a lib and a lib64? Where the hell did X go? Oh, that’s right, we really needed to move it under /usr/bin, /usr/lib, or /usr/libexec, with config files in /usr/share/something, is that right? I’m so confused. UEFI was necessary for Microsoft, not Linux; Microsoft needs the help, let’s face it. The list goes on forever.

There are no ad-hominem attacks here, and no good technical arguments for this stuff. It’s either just silliness or maybe, as I age, a touch of paranoia. But with a lot of “devs” in the Linux world, what we have amounts to ignorant exuberance and corporate money being used to “help out.” Heck, I boot to the command line, log in to my account, and run startx &. I’m pretty sure that, of itself, gets to the crux of the freedesktop.org/Red Hat ignorance regarding interoperability, portability, and most of all system security.

UNIX and C would not have become so vastly important, popular, and copied in the ’80s if not for the fact that they addressed those issues at every turn. MIT teaches a graduate course, 6.828, using xv6, a reimplementation of Unix Version 6. It’s very logical and even simple-minded, but not as good as the huge success that Version 7 was. Then there are the BSD extensions, bringing loads more helpful facilities. SVR4 was a turkey because they dropped the BSD extensions, and SVR5 brought them back, which, when copied by Sun, made SunOS so popular. Anybody could learn UNIX, program, script, and make stuff happen in wonderful ways. And they did wonderful things for the web with the X Window System.
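(As an editorial aside, the duplicate-daemon puzzle above is at least inspectable. A quick sketch for listing who owns each instance; process names and counts vary by distribution, so treat this as illustrative. Typically there is one system udevd running as root plus one dbus-daemon per login session:)

```shell
# list every running udevd and dbus-daemon with owner, PID, and full
# command line; the [b]racket trick keeps grep from matching itself.
ps -eo user,pid,args | grep '[u]devd'       || echo "no udevd found"
ps -eo user,pid,args | grep '[d]bus-daemon' || echo "no dbus-daemon found"
```

The full command line usually answers the question directly: a `--system` instance versus per-user `--session` instances are different buses, not redundant copies.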
Who needs this kind of logic, access to system information, and simplicity? Everybody: the user, the administrator, the coder, the high and mighty architect. Heck, HTML5 needs that. I’ll bet X’s cruftiness comes from nobody wanting to sign on to build or update device drivers, or even take seriously that stuff from the ’70s; the so-called “modern” OS doesn’t want to go there at all. UNIX on a modern cellphone? Actually, it wouldn’t even take up a smidgen of the system’s memory, leaving plenty of room for device drivers and kernel modules built on that framework. Page swapping, shared libraries, and numerous other things were created to address very costly memory and to optimize speed, and those problems are well sorted now. So why do we keep using them, with some people claiming execution speed and executable size don’t matter at all because their super-duper new paradigm/language is that good? It’s time someone looked at system-level development to counter such claims, and to keep every new version of the desktop from running just as slow despite faster processors and huge amounts of unused RAM.

Let alone the OOA/OOD/OOP crowd showing up with “hey, we’ve got everyone’s problems solved.” Um… look around the interwebs: no, they don’t. There’s more traffic on how to make that stuff work than anything. No wait, I’m wrong, I forgot Microsoft’s yet-to-be-fixed bugs. What you’ve got is a f-ing mess. Developers want to have a stable career and have to worry about how to develop for the whole collage of UI and OOP linkages. I sat in a meeting and watched a developer explain to management why it would take a year to get a server working again because IBM dropped a set of OO libraries from AIX. I check code into the ever-changing software revision control systems and then check it out to rebuild all the servers. The only code we can count on is the stuff written in that horrible old misfit C, talking to an AIX, HP-UX, or Solaris certified-UNIX OS.
And it runs faster, and just as reliably, with each new OS update. Commercial Linux with udev, D-Bus, the *Kits, and systemd takes massive manpower and time to configure, test, and maintain, and it signals a trend of change, change, more spiffy destabilizing change. It’s looking more like MS and Apple (“oh, hi, we’re a home computer company from the eighties, but we’re super rich and we can do anything; let us completely blow you away with this expensive sales (mis-re)presentation”). They have a business model based on change and new stuff to make money, and so now does Red Hat.

The thing that needs to be held on to is the oddly simple idea of cruising around the system with the command line, checking, editing, and fixing things. That’s what the whole free software movement is about. If we have to write code in an IDE (or Eclipse) to try to debug our systems, we’re screwed; might as well go work on your UML diagram in Visio ’til retirement. It’s all the way back to the bad old days of “we’re waiting for a fix from the vendor.” If managers and developers don’t know that MS and Apple copied UNIX® in order to make their systems better, or even usable, they won’t get it. And if Red Hat tries to play that game, they’ll lose. Something other than Linux will take up where they left off a decade and a half ago. The nerds will inherit the open-source OS world again.

POSIX compliance is a good thing; the degree of it depends on what you want to do with an OS. CUPS is a good example of what it means today. The *NIX philosophy means nothing today. It’s all right that every OS still has a CLI, but most users do not want to use it; it was for sysadmins in the eighties with no interest in multimedia, and multimedia is what we want today. I don’t think they sold many AT&T PCs in Europe anyhow. Unix is for sysadmins. I am happy Linux chose to be a multimedia OS. Real programs for users are what count. Faster boot? Thanks a lot. Wayland instead of X11? Thanks a lot. If I want to record, why use the prompt? If I want to look at pictures, why use the prompt? Videos, the same. I am a greybeard, but I don’t want a nixy *NIX that suits some old *NIX philosophy.