Lennart Poettering, the author of systemd, has announced: "I just put a first version of a wiki document together that lists a couple of easy optimizations to get your boot times down to [less than] 2s. It also includes a list of suggested things to hack on to get even quicker boot-ups."

The document specifies that it works best on systems with SSDs. IOW, computers that will already boot in a few seconds using a normal init system. A 2-second boot on a machine with an SSD is not what I'd consider a big win.

I'll also note that

a) Last time I set up Arch Linux on my netbook - using sysvinit, and a normal hard disk drive - it booted to the login screen in about 10 seconds. This wasn't a major feat either; all I had to do was customize the initrd and background a few daemons.

b) Boot time contests don't matter, because desktop environments like Gnome 3 and KDE 4 have bloated to the point that login is now slower than the rest of the boot process. Right now Ubuntu can take 20 seconds to get to the login screen, and 30 to get Unity into a usable state.

(And I don't know about you, but I care about the login time more than the boot time.)

The document specifies that it works best on systems with SSDs. IOW, computers that will already boot in a few seconds using a normal init system. A 2-second boot on a machine with an SSD is not what I'd consider a big win.

I have an Atom netbook (maybe 4 yrs old now) with which I occasionally swap the storage drives. SSD runs Debian Sid and the stock 5400rpm HDD runs Arch Linux.

Arch + systemd alone wasn't that great, but Arch + systemd + e4rat (only works with ext4) gives me around 7 sec from GRUB to prompt. Login time is usually 1-1.5 sec after I do "startx", since I run lightweight tiling WMs most of the time.

You might want to check out the ArchWiki's e4rat page if you're on ext4 on a regular HDD.

I actually care about my computer's hibernation state even more than the boot time. I would rather not have to reboot my laptop; however, after a few sleep sessions my resources start to fade away.

I'm very interested in systemd, and getting boot time down to 2 seconds sounds pretty amazing. However, the current version of Fedora is way slower, even slower than Ubuntu's Upstart. So I'm wondering if this new super-fast boot setup will find its way into the next Fedora release? I'm currently an Ubuntu user, but I'd give Fedora another look if they could come up with a 2-second boot-up.

If such amazing boot times can be achieved, maybe Mark Shuttleworth would reconsider systemd. It's kind of troubling that Linux initialization is becoming ever more fragmented with SysVinit, Upstart, OpenRC and systemd all competing for mind-share. That's got to be making things more complicated for developers.

I have had similar results. On my hardware Ubuntu (with Upstart) boots faster than Fedora (with systemd). Not by a lot, I wouldn't say the time difference is either large, or important. However, it does make me question whether systemd is worth the effort to implement and adopt.

If you read the list, you will notice that some of the things that slow down systemd boot time are support for advanced storage options in Fedora, like LVM, iSCSI, etc. Fedora defaults are the problem (for a pure desktop user), not systemd.

It's kind of troubling that Linux initialization is becoming ever more fragmented with SysVinit, Upstart, OpenRC and systemd all competing for mind-share. That's got to be making things more complicated for developers.

That's the default behavior of competing Linux distributions: touting features here and there. But it's nothing new; almost all of these features are just updated versions of software packages. Meanwhile over in the GNOME camp, they are still debating where to put the POWER OFF button, or whether to hide it behind a modifier key. Instead of improving the desktop for "home users", "developers (ISVs)", and "business users", they are all busy with things that only they themselves (internal developers) care about. Ubuntu is the exception; I think Ubuntu's strength is being a user-centric distro.

LXDE is quite good on that front if you don't mind your DE being fairly barebones. (Just check "Preferences > Desktop Session Settings" to make sure no undesired GNOME/KDE/Xfce autostart components came along for the ride.)

Functionality is more important to me than raw login speed, so I'll stick with KDE. I admit that I haven't really investigated improving login times under Kubuntu, but it is something I maintain with Windows. I think I'll take a peek later tonight.

Windows is especially bad, since it seems that every time you install or update something, commercial software wants to add something that launches automatically, so every time I install something I fire up msconfig and check to make sure only what I want to launch at startup actually launches.

Apple is especially bad. I have banished it from my desktop. If I want to add something to my iPod, I fire up a virtual machine dedicated to iTunes.

Speaking of which, I just removed and added a bunch of software, so I'd better check...

EDIT: Then again, KDE can be overwhelming with features, so I think I'll check it out...

Functionality is more important to me than raw login speed, so I'll stick with KDE. I admit that I haven't really investigated improving login times under Kubuntu, but it is something I maintain with Windows. I think I'll take a peek later tonight.

[...]

EDIT: Then again, KDE can be overwhelming with features, so I think I'll check it out...

Since you're on Kubuntu, I'd suggest installing LXDE with `apt-get install lubuntu-desktop`. That'll give you a Lubuntu option in KDM with a more polished, comfortable default theme and configuration than bare LXDE.

I realize you're not being serious, but... That is not the same kind of functionality. Way too much stuff on Linux is tied to heavy desktop environments - networking and automatic power management in particular stand out. Getting this stuff working under a standalone WM tends to require extensive configuration or ugly, insecure hacks.

(And in the case of NetworkManager, it doesn't work in a command line environment, period. nmcli is a complete joke.)

Pretty annoying. Especially seeing as laptops, which could theoretically benefit the most from a lightweight environment, are currently left with crippled functionality.

I realize you're not being serious, but... That is not the same kind of functionality. Way too much stuff on Linux is tied to heavy desktop environments - networking and automatic power management in particular stand out.

Yes, it was mainly tongue-in-cheek as I understand most people don't like tilers, but I have to disagree with you on the networking and power management. There's ceni, netcfg and good ol' wpa_supplicant for networking, and the glorious but little-known Linux-PHC undervolting (in combination with the usual cpufreq) which used to give me more juice than what the manufacturer stated on the box. I say "used to" because the battery is a bit old now, sadly. As far as I know, these work in almost any DE or WM, though not quite as automated as what you might find in Gnome or KDE.

The only downside (for some people) is....

Getting this stuff working under a standalone WM tends to require extensive configuration or ugly, insecure hacks.

.... like you mentioned. You do need to spend a little time editing configs. No security implications that I've noticed (so far) though, but then again, Arch isn't exactly an OpenBSD rival on that front.

Oh, and some tilers come with systray functionality these days, so you can pretty much dock your nm or wicd or cpu-scaling applets there as well.

wpa_supplicant provides no simple way to connect to an encrypted network on the fly; you have to be able to write to the wpa_supplicant.conf file.

ceni requires the root password.

Suspending and hibernating the computer requires long, unintuitive dbus commands. Either that or sudo, which as I understand it is a security hole.

Mounting stuff in a file manager requires either a working consolekit session (which is not possible on many distros, ranging from Debian Squeeze to Ubuntu 12.04), or messing around with PKLA files (which is again probably a security hazard). Alternatively you can use one of the various immensely bloated login managers...

I actually posted a rant about this on the Arch forums, and a lot of people seemed to agree with me. Basically it appears to me that, while Linux based GUIs for doing this stuff have improved recently, the friendly CLI environment to back it up isn't quite there.

Edit: And I should point out that by "friendly" I mean "friendly to experienced users," not "friendly to complete novices." No CLI is friendly to novices, but a good CLI must be friendly to people who know what they're doing; i.e. it shouldn't make things more complicated than they have to be. And almost every Linux CLI thing that involves wireless, power management, or device mounting makes things more complicated than they have to be.

Use a tiler (Xmonad, dwm... etc). You'll get both speed and functionality.

The funny thing is, I've actually been meaning to swap out LXPanel and Openbox for AwesomeWM for a while now... I just haven't had time to implement the hybrid tiling/floating mouse/keyboard interaction I want in Lua yet.

...and I'm not sure whether the Lua API for AwesomeWM is powerful enough to implement drag handles for adjusting the tile sizes. I might have to expedite my plans to learn Haskell and just de-GNOMEify Bluetile.

(Bluetile is an attempt to make XMonad friendly enough for the average GNOME 2.x user, but it's too GNOMEy for me, even though it implements more of the features I want in a tiler than any other tiler config I've ever seen.)

Not sure what you meant by "drag handles", but if you want varying-sized tiles, then you're probably looking for a dynamic tiler.

No idea about Awesome. I'm not a fan of Lua.

You've basically highlighted the main problem with tilers. They need to fit you like a second skin, or they're bollocks. Probably why you see a lot of people WM hopping until they find one that works for them.

Not sure what you meant by "drag handles", but if you want varying-sized tiles, then you're probably looking for a dynamic tiler.

Bluetile can be configured to let you resize your tiles by grabbing the border between two windows and dragging. (Like the splitter widgets used to implement things like file manager sidebars)

What specifically do you mean by "dynamic tiler"? I've never heard the term before.

You've basically highlighted the main problem with tilers. They need to fit you like a second skin, or they're bollocks. Probably why you see a lot of people WM hopping until they find one that works for them.

Not necessarily. I'd settle for something that implements enough of the de facto standard interactions from floating WMs that I could ease myself into it. (Again, something Bluetile scores well on. dwm-derived WMs like AwesomeWM seem to revel in making sure as few keybindings are shared with floating WMs as possible.)

I don't have enough screen real estate to feel comfortable with a tiling WM.

Windows and KDE both snap windows to the left and right sides of the screen easily enough. That is useful often enough for what I mostly do at the moment. Most of my other tasks would tend to be maximized, even on high-res screens.

I miss having a 1600x1200 display, but my computer now is a laptop with a 1366x768 display.

A lot of people here think boot time doesn't matter. I know hibernation makes boot times less frequent and less relevant to some people. However, sleep/hibernation modes represent a great deal of complexity between OS/BIOS/hardware, and are frequently the cause of driver bugs. If turning on a computer were as fast as waking it from sleep, it might eventually enable an OS to do away with the ugly complexities of hibernation.

In my opinion one should be able to turn on and use a computer much like they turn on and use a TV - only waiting for the display to "warm up".

If turning on a computer were as fast as waking it from sleep, it might eventually enable an OS to do away with the ugly complexities of hibernation.

Hibernation has other benefits, like keeping all your programs/documents open. Even if you have a session manager that saves state on shutdown, it still needs to reopen everything one by one, as opposed to loading an image into RAM.

"Does sleep/hibernation support in drivers really add complexity, or just require drivers to be written correctly?"

It's both actually.

Unless the system is placed in light sleep where peripherals continue to draw power, they will have to be reinitialized upon restart, but now we need new mechanisms to ensure the OS state that was saved to disk upon hibernation can be fully restored. This kind of state synchronization is far from trivial, especially when there are physical bus changes between hibernation sessions like USB devices being changed around.

"Also, saving the state of the 30+ terminal and app windows I have open is not simple for my desktop -- no desktop I've ever used has done it correctly."

Yeah, unfortunately most applications and operating systems weren't designed to enable applications to save and restore their session state. I have read about an OS/API that does it, but despite searching I wasn't able to find its name again.

Hibernation adds shutdown delays, although these are less annoying than bootup delays. In theory though, a well-tuned normal bootup should beat a hibernation bootup, because hibernation saves and restores fragmented RAM, which is wasteful.

Does sleep/hibernation support in drivers really add complexity, or just require drivers to be written correctly?

Yes, it does add complexity, because when you have a device completely powered down, it's no longer maintaining state - e.g. the graphics chip no longer knows what mode it should be in, the contents of video memory are gone, etc.

Which means that when it wakes up again, the video driver has to put it back into the state it was in before. Re-initialise the hardware (effectively a bootup sequence for the GPU), switch to the right mode, then ensure that userspace does a redraw to finish things.

Hibernation rarely works in the Linux world.
It's always something - your Wifi, soundcard or whatever you have connected to USB - but one single faulty driver makes the whole hibernation-as-bootup concept useless.

#1 -- don't care. I use my laptop like my phone, always has to be at the ready.

#2 -- not sure. I've never had RAM fail on me despite 24/7 operation. It's true that constant charge/drain on a Li-ion battery is not good. On my Thinkpad I set the charge thresholds using the SMAPI interface and leave the laptop plugged in most of the time.

#2 -- not sure. I've never had RAM fail on me despite 24/7 operation. It's true that constant charge/drain on a Li-ion battery is not good. On my Thinkpad I set the charge thresholds using the SMAPI interface and leave the laptop plugged in most of the time.

Well, the only components I have ever had to change late in a computer's lifetime, and thus consider vulnerable to aging, are...

-> Laptop batteries : Always, because most other computer components are designed to last more than 18 months.
-> Hard drives : From time to time, but these are not affected by permanent sleep since they are pretty much turned off.
-> Screens, RAM : Rarely, but it happens, especially with the cheap no-name components they put in commercial computers. Failing RAM is especially annoying because it is tricky to diagnose, though MemTest is helpful when it works.

#3, #4 -- not really a problem in Linux

On Linux, most background services are only restarted during an OS reboot. So while their files on disk may be updated, the running copy of the software isn't. Because of this, if you leave a computer in sleep or hibernate for months or years, you effectively take nearly the same security risks as if you had disabled updates altogether (though "user" software will be kept up to date on its side, since you regularly close and reopen it).

Linux also has its own issues with sleep and hibernation. Basically, some drivers can sometimes end up in a garbled state and freeze upon sleep and resume. If you can isolate which kernel module exactly is failing (in my case ath9k), you can ask the OS to unload it on sleep and reload it on resume, but this approach has its problems.

I've always found it bewildering how long so many commercial operating systems take to boot up.

The following bottlenecks probably deserve most of the blame:

1. Unnecessary Serialization.
Most devices are independent from one another and can be initialized at the same time, but many coders are lazy or not good at multithreaded/multiprocess programming and fall back on critical sections, disabling interrupts, etc. Even drivers which support parallel execution might be prevented from doing so by the OS. Several Linux distros suffer a huge performance penalty because their init scripts load everything serially instead of in parallel.

2. Unnecessary delays.
I've seen this so often it makes me disappointed in my peers: race conditions "solved" by adding arbitrary delays throughout the code. For example, back in the modem days it was very common to see companies shipping code that delayed between AT commands, and even some cases where the code had hard-coded delays between characters. These delays only become less optimal as hardware evolves. One company thought they were being clever and made the delays configurable - how thoughtful. Removing these delays means actually solving the race conditions the original developer couldn't be bothered to solve.

3. Disk thrashing.
This one is easy enough to fix by using an SSD. For HDDs, placing everything in an initrd and/or laying out startup files linearly on disk will significantly reduce the need for numerous slow disk seek operations. I've seen defrag tools do this for Windows, but I've never seen anything similar for Linux (can someone clue me in?).

(4. Network delay)
I hesitate to add this one, since an unresponsive network should not really cause local bootup delays. But I have in fact seen cases where serially loaded processes get blocked by a process waiting on DHCP or a network connection.
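To illustrate points 1 and 2 above, here's a small sketch in Python (the device names and timings are invented): the serial path pays for every init in sequence, while the parallel path starts all independent inits at once and waits on explicit readiness events instead of guessing with fixed delays.

```python
import threading
import time
from concurrent.futures import ThreadPoolExecutor

def init_device(name: str, ready: threading.Event, seconds: float):
    """Simulate bringing up one independent device."""
    time.sleep(seconds)  # stands in for real hardware setup work
    ready.set()          # signal readiness instead of making callers
                         # wait an arbitrary hard-coded delay

def serial_boot(devices):
    # Point 1's anti-pattern: one device at a time.
    for name, secs in devices:
        ev = threading.Event()
        init_device(name, ev, secs)
        ev.wait()        # total time = sum of all init times

def parallel_boot(devices):
    # Independent devices initialized concurrently.
    events = []
    with ThreadPoolExecutor() as pool:
        for name, secs in devices:
            ev = threading.Event()
            events.append(ev)
            pool.submit(init_device, name, ev, secs)
        for ev in events:
            ev.wait()    # total time ~= the single slowest init

devices = [("disk0", 0.2), ("net0", 0.3), ("sound0", 0.1)]

t0 = time.monotonic(); serial_boot(devices);   serial = time.monotonic() - t0
t0 = time.monotonic(); parallel_boot(devices); parallel = time.monotonic() - t0
print(f"serial {serial:.2f}s, parallel {parallel:.2f}s")
```

With the made-up timings, the serial path takes roughly the sum (0.6s) and the parallel path roughly the maximum (0.3s); real init systems get the same win when the dependency graph allows it.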

(4. Network delay)
I hesitate to add this one, since an unresponsive network should not really cause local bootup delays. But I have in fact seen cases where serially loaded processes get blocked by a process waiting on DHCP or a network connection.

I've also occasionally run into parallel loading where neither the developers nor the system were smart enough to take prior boots into account: the network init was started so late that the init system ran out of tasks it could run in parallel before the network was up.

The system goes idle while waiting for the network to come up, and only then gets busy loading again. Ideally the loading would never stop or block until everything is fully loaded, even if the network isn't ready.

The problem is more complex than meets the eye, since many daemons will fail at bind() and listen() if the network isn't up yet. Existing standards lack a way for these processes to load all their resources before the network becomes available; they'd probably have to poll for the network, which would only make the problem worse.

The last connection reveals an interesting fact: it is possible for a Linux process (the second socat in Terminal A) to hold an open listen socket on an interface that is not yet enabled. If we could somehow get into this state on bootup, it would solve our dilemma perfectly. Unfortunately, I'm not aware of a direct way to enter this state; somehow we'd need the Linux API to permit the first command on Terminal A to succeed.
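EDIT: There does seem to be a knob for exactly this: the Linux-specific IP_FREEBIND socket option lets bind() succeed for an address that isn't configured on any interface yet, so a daemon can open its listen socket before the network is up. A sketch (the helper name is mine; the numeric fallback 15 is IP_FREEBIND's value from <linux/in.h>, for Python builds that don't export the constant):

```python
import socket

# IP_FREEBIND is Linux-specific; fall back to its <linux/in.h> value.
IP_FREEBIND = getattr(socket, "IP_FREEBIND", 15)

def listen_freebind(addr: str, port: int) -> socket.socket:
    """Open a listening TCP socket on addr:port, even if addr is not
    yet assigned to any interface (Linux IP_FREEBIND)."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.setsockopt(socket.IPPROTO_IP, IP_FREEBIND, 1)
    s.bind((addr, port))  # without FREEBIND this raises EADDRNOTAVAIL
                          # for an address not configured locally
    s.listen(5)
    return s

# e.g. listen_freebind("192.0.2.10", 8080) succeeds on Linux even while
# 192.0.2.10 is not configured anywhere; connections start being
# accepted once the interface actually comes up.
```

(systemd's socket activation attacks the same problem from the other side: init opens the listen socket early and hands it to the daemon once it starts.)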

"Consider disabling SELinux and auditing. We recommend to leave SELinux on, for security reasons, but truth be told you can save 100ms of your boot if you disable it. Use selinux=0 on the kernel cmdline."

Hey, I can do a lot in 100ms! I could literally blink in that huge window of time, or have a fleeting thought - why, I could even scan halfway across my screen with my eyes!

Sarcasm aside, I don't use SELinux myself as I'm not that paranoid about security. My router's firewall has never been breached (in fact I've tried to get through it myself while at work, to no avail), and I also use iptables on the computer itself. So I'll be able to blink at least one more time during boot!

Well, if you ask me to list a number of ways to improve things, the most obvious and most helpful would be at the top, while those that make no sense would be at the bottom, or not on the list at all. Though I might descend from the slightly helpful to the not helpful at all before entering the realm of the absurd. So it wouldn't be bad at #12, if #16 were "throw the computer into the river" and #20 were "obtain a microscopic black hole...".

Well, backups are a gazillion times more important to me than boot time improvements, so no. Heck no.

Number two also requires not using LVM. So double heck no to that.

Did you notice that the article is mostly about configuring systemd for appliance-type hardware, not for a desktop or server?

This is for an environment where a) hardware is a constant and so modules aren't required, and b) the root partition is going to be on a simple disk setup, probably an SSD. In that environment, neither LVM nor initrd are required.

I am sorry. I can see that there are some benefits with systemd, but doesn't anyone else feel like systemd is implemented in a way that's pretty much the exact opposite of what we always valued about Unix-like operating systems?

The boot process (I myself always liked BSD-style rc.d, as in Gentoo or Arch Linux) never was something I thought would need reimplementation.

Also, I don't really like that everything is being replaced by "magic" in the form of services that guess everything on their own. I know this is a bit off topic, but take LightDM for example. I really do like its concept, because it tries to be a display manager for everyone, whatever desktop or window manager you use, whether you prefer GTK, Qt or whatever. What's really bad about it is that when accountsservice is installed (not even enabled or anything), you can't configure it by hand anymore. And since accountsservice usually comes in as a dependency, you are locked into this unless you get hacky.

I know one shouldn't always have to deal with configuration files, but what was so bad about having them set up automatically by some configuration frontend (or a script that runs on installation), while remaining easy to edit by hand? Also, why did everyone start using XML? Were other formats really a problem? Wouldn't something like YAML or (I can't believe I am saying this) even INI files be better, and preferred by most admins?

Sorry, I don't want to rant, but I feel like just a few years ago everyone would have screamed "NO WAY!" at these things and I am not sure what has changed.

I am not saying everything is bad and everything should always stay as it was, but I don't really see how Lennart Poettering simply gets broadly accepted by everyone. Sometimes it looks like a desperate attempt to bring Linux to the desktop.

If that's really what people want, I think there would be huge potential in overthrowing Windows by doing Windows 8 (HTML5/JavaScript) the right way (Qt has the right tools; QML and WebKit are amazing).

Sorry, if this sounds like a rant. Maybe I am just overlooking some important things. I used to be more involved some time ago, so maybe that's my problem in seeing a lot of sense behind all this.

I am sorry. I can see that there are some benefits with systemd, but doesn't anyone else feel like systemd is implemented in a way that's pretty much the exact opposite of what we always about unix-like operating systems?

I humbly disagree. Lots of people are dissing systemd on theoretical grounds instead of examining what it's actually doing.

As far as I'm concerned it's a) taking much of the suckiness out of traditional sysvinit, which is layers upon layers of shell hacks, while b) being relatively lean (most of the extra stuff people complain about is just small programs bundled with the systemd source and doesn't affect the daemon) and c) standardizing some of the stupid different-because-no-one-coordinated differences between distributions that just waste time for everyone.

It's like we had 40 years to collect dust bunnies and now finally someone comes along with the big vacuum cleaner. I expect that in a couple of years, my base system will boot much faster, consume less resources, support more dynamic features, be less confusing and overall more friendly.

I know, one shouldn't always have to deal with configuration files, but what was so bad about having them set up automatically by some configuration fronted (or script that runs on installation) and then can be easily configured by hand. Also, why did everyone start using XML? Were other formats really a problem and wouldn't something like YAML or (I can't believe I am saying this) even INI files be better and preferred by most admins?

Oh, hey - your wish is granted. Systemd *doesn't* use XML for its configuration files. And thankfully, it doesn't use YAML either - that's a good readable format, but kind of obscure, dragging in extra libraries to parse it. No, Systemd uses INI files, nice and readable by humans, and easily parsed without needing to link against special libraries.
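To that last point - "easily parsed without needing to link against special libraries" - here's a sketch of reading a unit-style INI file with nothing but Python's standard configparser. The unit text is invented for illustration, not a real shipped unit:

```python
import configparser

# A made-up service definition in systemd's INI-style syntax.
unit_text = """\
[Unit]
Description=Example daemon
After=network.target

[Service]
ExecStart=/usr/bin/exampled --no-fork
Restart=on-failure
"""

# systemd keys are case-sensitive; configparser lower-cases option
# names by default, so disable that transformation.
parser = configparser.ConfigParser()
parser.optionxform = str
parser.read_string(unit_text)

print(parser["Service"]["ExecStart"])   # /usr/bin/exampled --no-fork
print(parser["Unit"]["After"])          # network.target
```

Caveat: real unit files allow things like repeated keys (e.g. multiple Environment= lines) and line continuations that configparser doesn't handle, so this is illustrative only - but it shows how far a stock INI parser gets you compared to XML.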

No LVM? Whut? If you're into btrfs, it can be skipped. However, not using LVM is most of the time a time bomb (and please don't be stupid and say that one large partition is good enough; it's not).

Bypass initrd..... sigh.

disable auditing and selinux/apparmor. what....?

disable syslog...

Oh my god, typical developer again. He already did no good with PulseAudio and systemd. And for what?

Not to offend any developer, but if he or she doesn't understand how a system works and what things are needed for, we shouldn't let him proceed. Look at the merging of all the paths he's also in favor of.

How to kill linux... hire Lennart.

Remove cron... GNOME 3.4... etc.

This man is full of painful ideas that are nowhere near useful. As if I would restart my servers daily - I don't. Besides, testing 192GB of memory takes a bit longer than a boot cycle. As for my laptop: I just suspend. That works for months; the only time I recall a reboot being needed was twofold: a new kernel, and filesystem checking.

How can this man be stopped... can we stand up, or do we keep having things forced down our throats?