
Not only is this an impressive accomplishment, but if this can be applied generically to most distributions then it should present an excellent opportunity for advertisement. Showing how you can boot, check your email, read the latest news, and finish everything you needed to do while a fellow Vista machine is still booting says a lot. Even if we can get most distributions down to a 15-second average, it's a huge leap. Grats to these guys.

All modern OSes suck in boot time.
CP/M was probably OK but my Zilog-based PC had floppies only so it sucked too.
MS-DOS 3.0 was up and running in 1-2 seconds (assuming you had a hard drive and an empty config.sys and autoexec.bat).

Then MS rewrote DOS in that punky and slow new language "C" and everything has gone downhill since. The next thing you saw was that HIMEM.SYS driver carving your precious memory out of the 640KB for the promise of semi-useless XMS memory for overlays. Oh well...

Now my kernel sits on 4GB of RAM during boot, probing for USB devices that aren't present and waiting for eth0 to figure out DHCP.

I guess Bill Gates was right: 640KB is the right amount for everybody, so the OS doesn't get confused by all those bits lying around.

Fail! The desktop editions of Windows XP and Vista will ignore any RAM remapped above 4GB (that's the RAM displaced by your video card and other memory-mapped devices) even with PAE enabled, for compatibility reasons.

The 32-bit limitation is 4GB of address space (total). Your video card's memory is mapped into the same address space as your system RAM, but from the top of the address space downward, rather than like system RAM (mapped from the bottom of the address space upward). The video RAM has priority, thus if you have 3GB of RAM and a 1GB video card you are using 100% of your address space; any more system memory installed will be masked by the video RAM and thus is "lost". -nB

Actually, though, the 4GB limitation is the virtual address space size. Most 686-class CPUs have a 36-bit address bus and PAE extensions for up to 64GB of physical memory. (For the uninitiated, PAE allows the page tables to set the high 4 bits of the address bus.)
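To make the arithmetic concrete, here's a toy sketch (Python, with invented values) of how a PAE page-table entry extends the physical address past 32 bits while the virtual address stays 32-bit; the field layout follows the IA-32 PAE format, where bits 12-35 of the entry hold the physical frame:

    # Toy PAE translation: a page-table entry carries physical-frame
    # bits 12..35, so the physical address can exceed the 4GB line
    # even though the virtual address is still 32-bit.
    PTE_FRAME_MASK = 0xF_FFFF_F000          # bits 12..35 of the entry

    def pae_translate(pte: int, vaddr: int) -> int:
        """Combine a PAE page-table entry with the page offset of a
        32-bit virtual address to form a 36-bit physical address."""
        return (pte & PTE_FRAME_MASK) | (vaddr & 0xFFF)

    pte = (5 * 2**30) | 0x1                 # present bit set, frame at the 5GB mark
    print(hex(pae_translate(pte, 0x00403EEF)))  # 0x140000eef: above 4GB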

There is no such thing as a single end-user OS boot time. It depends a lot on device drivers and system background utilities. For example, some piece of hardware AND some release version of its driver may be causing your trouble... especially if that hardware has been removed and the driver probes for a long time just to be sure. Same for the AV software doing weird things in order to "secure" the system *before* user interaction... At least on the hardware side, this applies to Linux too.

Actually Vista with 4 Gigs of RAM boots pretty quickly. It's once it's up that it is slow.

Microsoft seems to have performed a bit of trickery to make you perceive that Vista boots quickly. The desktop on my wife's Vista laptop appears fairly quickly but it is simply unusable for a couple more minutes. This is different from XP which is fairly usable as soon as (well, shortly after) the desktop appears. It's rather like the desktop is the bootsplash on Vista.

The only issue is that they had to cut some corners to make this work. Axing sendmail? Ok, I understand that (I think I was arguing for that 10 years ago -- I still wonder why it's on by default in desktop distributions). But "The 'done booting' time did not include bringing up the network"? Um, ok... no. With the proliferation of devices solely used to read information from networks (netbooks, those "quick-loading" Linux apps some laptop manufacturers are including so people can check their email, etc.) accessing the network is one of the main purposes for turning on the machine in the first place. It would royally piss people off to have a quick loading screen, log in and then see "Hold up, still starting up the network". (Just as frustrating as starting a Windows or Mac machine, getting to the desktop and still waiting while services and programs are loaded.)

Come to think of it, what people really need to do is take a good look at modern OSes and determine EXACTLY what still needs to be there and what's cruft. Some of the daemons/services we're launching made sense 15-20 years ago. Does the fax daemon really need to start on my Mac? Does the Group Policy Client need to be started on my Vista box when I'm not on a domain? There's lots of stuff that at one point probably made sense to someone but now is just extraneous.

Sendmail's main purpose in the typical Linux desktop configuration (say, Fedora) is delivering logwatch output to root. [Logwatch attempts to distill the important stuff from system log files.]

But sendmail can be started lazily (in the background) so as not to slow the boot. Or sendmail can be replaced with a lighter weight smtp daemon. Truly though, logwatch-by-email should die for non-enterprise desktops. It's so 1980s it just hurts.

IMHO logwatch should be replaced by some kind of graphical notification widget which requires authentication to actually view the details, since they can be sensitive. As it is, I haven't read my logwatch emails in months, but if SMART is complaining about an imminent disk failure I'd *really* like to know.

Yeah, but you don't need a full ESMTP server for that - a wrapper for the local delivery agent that speaks classic SMTP (but ignores most of it) should be sufficient. In fact, if you're only using it to deliver to root, you have a choice: do a tiny bit of text formatting and whap the whole block of text onto the end of root's mailbox yourself, or do a tiny bit of other text formatting and use the local mail delivery agent to do all of the work.
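As a sketch of that first option (hedged: the spool path is the conventional one, and a real delivery agent would also lock the mailbox and escape "From " lines in the body):

    #!/usr/bin/env python3
    # Minimal "deliver to root" in the mbox style: no SMTP, no daemon,
    # just an append. Sketch only; real agents lock the spool file.
    import sys, time

    def deliver_to_root(subject, body, spool="/var/spool/mail/root"):
        with open(spool, "a") as mbox:
            mbox.write("From logwatch %s\n" % time.asctime())
            mbox.write("Subject: %s\n\n" % subject)
            mbox.write(body.rstrip("\n") + "\n\n")  # blank line ends the message

    if __name__ == "__main__":
        deliver_to_root("logwatch digest", sys.stdin.read())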

If you've only one login account (the rest are for daemons or accessed via sudo), then the login code is excessively heavy. There's effectively only one user and effectively only one password. Those need to be in a password/shadow file for compatibility with other apps, but for machines that are essentially single-user, where the data is fixed-length, you don't need search algorithms, routines to scan for the correct column, etc. You store two fixed-length blocks of data and then do a string compare and a byte compare. No files to open, no multi-layer authentication modules, etc. For a straight single-user desktop, you don't need such weight for a console login. You do for servers and other remote activity, but not for the console.
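A sketch of the spirit of that idea (the record format and path are invented, and unlike the ideal above it still opens one file): one fixed-length record, two constant-time compares, no column scanning:

    # Sketch of a single-user console login check: one fixed-length
    # record (name + salt + hash), read whole, compared in constant
    # time. The file layout and path are invented for illustration.
    import hashlib, hmac

    USER_LEN, SALT_LEN, HASH_LEN = 32, 16, 32

    def check_login(user, password, path="/etc/fastlogin"):
        with open(path, "rb") as f:
            rec = f.read(USER_LEN + SALT_LEN + HASH_LEN)
        name = rec[:USER_LEN]
        salt = rec[USER_LEN:USER_LEN + SALT_LEN]
        want = rec[USER_LEN + SALT_LEN:]
        got = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 10000)
        padded = user.encode().ljust(USER_LEN, b"\0")[:USER_LEN]
        # compare_digest keeps both comparisons constant-time
        return hmac.compare_digest(name, padded) and hmac.compare_digest(want, got)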

XDM/GDM/KDM could be rigged to work under GGI or XGGI; they don't need the full X system. You can finish booting while the user logs in.

What really needs to happen is for there to be an informative display of what is happening while the system is loading, something that is one of my favorite things about Linux. Most people wouldn't gripe about how long it takes for their system to load if they knew what it was that was loading. Sadly, I have stopped being amazed by the people who complain that "Windows loads slow" and then turn out to be incapable of saying "No" to any application that wants to install itself on their system. If you want the iTunes Helper and 6 different IE toolbars to load then you accept that this requires time. If your fancy all-in-one fax/printer/scanner/roaster has some special monitor that has to load, suck it up and accept a slow load, but at least allow the user of any OS to see exactly what is getting put in memory when their system starts up.

But "The 'done booting' time did not include bringing up the network"? Um, ok... no.

Consider that "bringing up the network" generally involves communicating with at least one other device. You can't really call it a metric of OS boot time when merely plugging into a different network (or none at all) might change the result.

Come to think of it, what people really need to do is take a good look at modern OSes and determine EXACTLY what still needs to be there and what's cruft.

I would much prefer simply cutting them out of the boot process, or making a smarter boot process.

For example: Maybe I do want a MySQL server running. But I certainly don't want you to delay my login screen while I wait for it to start, and I probably don't want it eating up resources while it sits idle, either.
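One way to get exactly that is socket-activation-style lazy start: hold the service's port open cheaply and launch the real daemon only on first connection. A toy sketch (the port and daemon path are examples, and it assumes a daemon that can accept an inherited listening socket):

    # Toy lazy-start launcher: listen on the service's port, sleep
    # until a client shows up, then exec the real daemon with the
    # listening socket inherited. Port/daemon path are examples only.
    import os, select, socket

    PORT, DAEMON = 3306, "/usr/sbin/mysqld"

    def lazy_launch():
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("127.0.0.1", PORT))
        srv.listen(16)
        select.select([srv], [], [])   # block until someone wants the service
        os.dup2(srv.fileno(), 3)       # pass the socket as fd 3 (inheritable)
        os.execv(DAEMON, [DAEMON])     # the daemon accepts the pending client

    lazy_launch()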

It would royally piss people off to have a quick loading screen, log in and then see "Hold up, still starting up the network". (Just as frustrating as starting a Windows or Mac, getting to the desktop and still waiting while services and programs are loaded).

They wouldn't be the first, unfortunately. The Intel wireless driver on my Dell Inspiron laptop works that way. Somehow they've completely disabled all of Windows' built-in network configuration and replaced it with a tray app that starts as part of the normal startup.

but if this can be applied generically to most distributions then it should present an excellent opportunity for advertisement.

Not going to happen. If you read the article, you'll see that they compiled all drivers directly into the kernel, so it is essentially an embedded device now. Also consider the fact that they are using an SSD, which is going to decrease boot times regardless of any boot-process improvements.

So basically, you could never apply these speed increases to a generic distro.

Is there any good reason the kernel can't be compiled with the installed devices' drivers built in, automatically, at installation time? Even if drivers change, it can ignore the compiled-in one in favor of a module, and/or offer a "recompile kernel for currently installed devices" button.

I also doubt the SSD is that big of an issue. Windows already offers the option of using flash RAM sticks in the same way, and loading boot data sequentially from the fastest portion of the disk is also a viable (and popular) way to boost speeds.

Once the distro installer has finished, it would attempt to boot the system to the graphical login. If the login screen came up, it would save the state of the machine to a fast-loading RAM image that GRUB could load straight into RAM.
Reading ~100MB of system should take seconds on any machine, and the code area taken up by the GRUB routine could be overwritten with a memory-offset command embedded in the first few bytes of the image.
Once the image is in RAM, execution starts up again immediately, waiting for your login details.

Of course, the hardware would have to be hashed to make sure that the image was still compatible with the machine and that the disk hadn't been moved to a different one. Upgrading the hardware or the kernel, software updates, et cetera would require the image to be resaved, but those are easily achieved.
Taking into account the size of the image, I guess someone could code the installer to compile the kernel with the modules the system uses built in. Maybe as a function of the exit procedure: "Optimise load time - warning! This will take quite some time."

Basically I guess what I'm saying is something like a hibernate file, but one that is rarely changed and only contains the system, not the applications running in a session.

We talked with both the Fedora and Ubuntu developers at the LPC, and even they agreed that a LOT more drivers should be compiled into the kernel instead of being modules (c'mon, ext3 as a module? really?).

99% of what we did to make this work in 5 seconds applies directly to generic laptops and even most people's desktop systems.

The speedups are _still_ relevant with generic spinning media too. Maybe those are not as fast as SSDs, but the principle is still the same (IOW, for instance, reading data in the order that you need it is better than reading it in the order that it is scattered across the hard disk).
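The spinning-media point in miniature (the file and request list are invented): issuing reads sorted by on-disk offset turns N seeks into one sweep of the head:

    # Toy illustration: reads issued in offset order make one sweep
    # of the disk head instead of seeking back and forth. The file
    # and the request list are invented for illustration.
    def read_in_disk_order(f, requests):
        """requests: list of (offset, length) in the order code wants them."""
        data = {}
        for off, length in sorted(requests):   # sorted = one forward sweep
            f.seek(off)
            data[off] = f.read(length)
        return data

    with open("/boot/vmlinuz", "rb") as f:
        chunks = read_in_disk_order(f, [(81920, 512), (0, 512), (4096, 512)])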

speeding up the kernel to boot in 1 second is TOTALLY applicable to generic distros (not only that, it's relatively easy and we basically already did that).

speeding up X startup to be 1.5s is TOTALLY applicable to generic distros.

How about: the first time the system boots, it profiles which drivers are installed. It then recompiles the kernel to include those drivers.

On subsequent reboots it uses the recompiled kernel and then, once the system is up and running, checks whether something that is compiled in is no longer needed, and whether something has been added that should be compiled in.

So basically, you could never apply these speed increases to a generic distro.

Oh come now! Never say never! You could:

- Boot with a modular kernel.
- Probe devices and get a list of loaded modules.
- Recompile the kernel with said modules built in.
- Boot with that kernel from now on.

It's relatively scriptable - in fact, I think there's a "probe loaded modules and generate a new .config" script already about the place. If the user is unwilling to wait for a kernel recompile during install, just stick with the modular kernel and incrementally compile during idle time.

It's trivial. I'd code it up myself, but I'm a little busy at the moment, you understand.
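The probe half really is trivial; a rough sketch (hedged: the naive module-name-to-CONFIG-symbol guess is the hand-wavy part, which is what the kernel's own localyesconfig-style targets solve properly using module metadata):

    # Sketch: list loaded modules and emit a config fragment that
    # builds them in. The naive name->CONFIG_* guess is the hand-wavy
    # part; the kernel's own tooling maps names via module metadata.
    def loaded_modules(path="/proc/modules"):
        with open(path) as f:
            return sorted(line.split()[0] for line in f)

    def config_fragment(modules):
        return "\n".join("CONFIG_%s=y" % m.upper().replace("-", "_")
                         for m in modules)

    print(config_fragment(loaded_modules()))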

I see a lot of comments on the LWN article from people saying that starting services after the user sees the desktop is cheating. However, I ask, does this really matter? I'm not sure how everyone else uses their computer, but I only need to boot my Linux machine about once every 30-60 days. I don't need to dual boot like I did back in, say, 2002, and comparatively, the amount of time it takes for Linux and X to start up is practically irrelevant. I can imagine laptop users may feel much differently about this, but I thought that was the point of being able to suspend/hibernate.

One thing that worries me is a focus on ensuring a quick boot at the expense of a potentially less stable system; that would not be a good thing. Fortunately, however, quick booting is not something that Linux requires; it's something that distributions can decide to do or not, which is one of the strengths of the open source/Linux way.

Okay, so I kid. But when you have to wait every morning for your bogged-down workstation to load all sorts of services and client-side junk that IT installs on their XP boxes, you get tired of it very quickly. The system is so sluggish and unresponsive for 3-4 minutes after login that I can usually brew a cup of coffee before clicking on the start button actually has any effect. That's 3-4 minutes during which I could be reading Slashdot, er, I mean doing something productive.

How ironic, with all the Vista bashing that tends to go on in threads like these. Vista boots relatively quickly, and mine hasn't been powered down for weeks since suspend/wake works perfectly.
But at least someone, somewhere can boot Linux in 5 seconds.

Agreed. The only time I reboot my laptop is after updates/new software install. IMO the power management in Vista is the best I've seen.

Being able to boot in NN seconds isn't so impressive when you look at the incompatibilities it creates.

On my networks, the servers connect to the DHCP server and get back not only an IP but also the names of the NIS servers, which in turn return (among other things) autofs maps that are used to mount the home directories, as well as providing login authentication. The xdm login window returns a list of currently available X servers. In other words, there are reasons why things run in the order they run, and any deviation will cause things to stop working.

Improving things is fine, but not when it's at the expense of current and well-known functionality.

On my networks, the servers connect to the DHCP server and get back not only an IP but also the names of the NIS servers, which in turn return (among other things) autofs maps that are used to mount the home directories, as well as providing login authentication.

What you describe is similar to what Windows calls "domain authentication". Not every computer logs on to a domain, especially in the home or home office environment where a fast boot is paramount.

so you are saying that you would rather stare at a hung boot in text mode instead of having the possibility of working in offline mode in X?

that does not make sense at all :)

for network-client setups like you describe, we should still start X immediately and, if the network fails or is slow, at least provide some interaction with the system (work offline, nudge the network with a login attempt, etc.).

I've administered network-authenticated openSUSE machines and they definitely benefited from booting faster (compared to older versions of openSUSE) - after all, the sooner the kernel finishes, the sooner you can start waiting for that DHCP lease... The key is that the moment someone says they want to run NIS/LDAP/NFS, you just say "start everything that doesn't depend on the network while you wait for the network to come up". In your case NIS/NFS/autofs/xdm DO need the network, so they have to wait until it's up.
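That scheduling rule fits in a few lines; a sketch (the service names and the DHCP wait are stand-ins, not a real init system):

    # Sketch of "start everything that doesn't need the network while
    # you wait for the lease": two lists, one gated on DHCP. Service
    # names and wait_for_dhcp() are stand-ins, not a real init system.
    import threading

    NO_NET = ["syslog", "dbus", "X", "xdm"]       # start immediately
    NEEDS_NET = ["ypbind", "autofs", "nfs"]       # gated on the lease

    def start(svc):
        print("starting", svc)                    # stand-in for real init work

    def wait_for_dhcp():
        pass                                      # stand-in: block until lease

    for svc in NO_NET:
        threading.Thread(target=start, args=(svc,)).start()
    wait_for_dhcp()
    for svc in NEEDS_NET:
        start(svc)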

If you're talking about a server I agree, but for a desktop, well, do you leave any lights on at your house 24/7? I don't. Electricity isn't free, and I don't like contributing to global warming any more than I have to.

TFA showed no indication of the system being less stable. With a machine that boots virtually instantly, why would you NOT shut it off when you're not using it? IMO wastefulness is shameful. You should stop!

Funny, it's usually us geezers who are portrayed in being stuck in your ways. I look f

I don't need to dual boot like I did back in, say, 2002, and comparatively, the amount of time it takes for Linux and X to start up is practically irrelevant. I can imagine laptop users may feel much differently about this, but I thought that was the point of being able to suspend/hibernate.

Unless you end up stuck with a machine whose suspend/hibernate sequence is defective. On various computers running various operating systems, I've had no video, or no audio, or no network, or no mouse after coming out of sleep.

Seems like there is a market for a fast-boot Linux distro which just installs DHCP, network services, an X server, and an E-mail client/web browser to read E-mail. Perhaps you could call it a thin-client Linux.

I'd love to see Linux in general have better boot times. My install of Ubuntu on my PC takes about 2.5 minutes to boot while XP is up and running in about 1.5. On my laptop it's the reverse (Windows taking forever, openSUSE being relatively quick).
As far as 'cheating' by loading services at the login screen: GO FOR IT! It's not cheating if it's making things better for the user, it's called being more efficient.

And no cheating. "Done booting means CPU and disk idle," Arjan said. No fair putting up the desktop while still starting services behind the scenes. (An audience member pointed out that Microsoft does this.) The "done booting" time did not include bringing up the network, but did include starting NetworkManager.

It seems to me that the five seconds could conceivably be brought down to virtually zero with cheating! My work PC slows down so much sometimes from antivirus, inventory controls, etc. that it takes longer than that to add a record or open a table in an Access database. With a keyboard buffer you could stick a fake desktop and login screen in, and have the real desktop and login take over before the user finished typing his password.
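For what it's worth, Arjan's non-cheating definition is checkable from userspace; a rough sketch that polls /proc/stat and /proc/diskstats until both go quiet (the thresholds, poll interval, and device name are arbitrary choices):

    # Rough "done booting" detector in Arjan's sense: wait until the
    # CPU and the disk are both idle. Thresholds, the poll interval,
    # and the device name are arbitrary choices.
    import time

    def cpu_counters():
        with open("/proc/stat") as f:
            vals = [int(x) for x in f.readline().split()[1:]]
        return vals[3], sum(vals)                 # (idle jiffies, total)

    def disk_ios(dev="sda"):
        with open("/proc/diskstats") as f:
            for line in f:
                p = line.split()
                if p[2] == dev:
                    return int(p[3]) + int(p[7])  # reads + writes completed
        raise ValueError(dev)

    idle0, total0 = cpu_counters()
    ios0 = disk_ios()
    t0 = time.time()
    while True:
        time.sleep(0.5)
        idle1, total1 = cpu_counters()
        ios1 = disk_ios()
        busy = 1.0 - float(idle1 - idle0) / max(1, total1 - total0)
        if busy < 0.05 and ios1 == ios0:          # quiet CPU, no disk I/O
            print("done booting at %.1fs" % (time.time() - t0))
            break
        idle0, total0, ios0 = idle1, total1, ios1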

I agree, I absolutely HATE Access. I rather liked dBase, didn't have any problem at all with FoxPro, and absolutely loved NOMAD. But as to NOMAD, I'm not sure they even have a mainframe any more. I wonder if there's a PC NOMAD?

Yes, please. More common users would benefit from this greatly. I would love a prompt for user and password while it is booting, because most users do not leave their computer on all day, especially laptop and handheld/UMPC users.

With desktop computers, I really think that booting into Ubuntu, THEN getting a GNOME login prompt, THEN loading the services and desktop is a silly idea.

My stepfather still has an old Pentium III laptop with Windows 95 running on it. Booting the laptop to read an E-mail takes around 20 minutes. His advice to anyone who wants to use it: "switch on the PC, do something else like have a bath, do the lawn, read the newspaper, or have a coffee, and the PC will be ready to use before you know it".

I find that if a PC is dragging its feet it's either a memory or hard drive problem. I was doing some development on a 1.7GHz system with a really old 40GB drive and it took forever to get the program loaded. I popped in a brand new hard drive and the program loads in a few seconds. The problem was largely getting information from the disk.

If you decide to just get rid of the laptop, get an external enclosure and use the drive as a backup drive for any other system.

I call bullshit--unless he's doing some serious work for spyware companies and sending e-mails peddling "Vla*g-ra" -and/or- has a serious hard drive or memory problem. Let's see, Win95 required a 386DX with 4MB RAM--although that setup was painfully slow. Win95 recommended a 486 with 8MB RAM. So, unless you are running a lot of startup apps, Win95 (including Win95B) should boot with 8MB RAM and have a WIN386.SWP of 0 bytes. So, even if he has a PIII/450 with 64MB RAM (even then most laptops came with 128MB

O'Reilly News recently interviewed Arjan
van de Ven [fenrus.org] about his efforts to improve Linux performance and reduce power
consumption. Arjan works for Intel in the Open
Source Technology Center [intel.com]. This interview is approximately 30 minutes.

One of the projects you're probably most known for in the past
couple of years is the PowerTOP [lesswatts.org] utility, which I
found very fascinating. Looking at some of the gains you've made over the past
18 months, it seems like Linux-based devices are saving a lot more power than
they used to. What do you consider the big successes in the past year and a
half?

To be honest, we fixed effectively the entire Linux desktop space. It's not--what we fixed with PowerTOP is not individual pieces. We fixed everything. For me that was the success.

Is that everything in terms of not just desktop but servers as
well?

Yeah; we fixed not just Evolution. We fixed Firefox; the thing with
Firefox was that it wasn't one thing that was broken. Everything had problems
and we had to fix all of it. So for me the success was how quickly everything
got fixed; it was just amazing.

In this context you consider
fixed--everything is no longer broken in the same way or--?

Everything is no longer keeping the CPU out of idle basically.

Do you have a reference machine? I guess I'm asking what's your
benchmark for this, a particular software configuration stack or particular
type of machine, or are you willing to say it's pretty much every Linux based
machine out there?

I'm looking at several machines--my own laptop, but to be honest, what runs on my own laptop is what I care about most. At least that's where I got more battery life; this is where I see the changes. I tend to run a quite rich environment on my laptop, but I also look at servers. We look at all kinds of machines and we see the same trend everywhere: all the various pieces that were polling or keeping the CPU up--they all got fixed.

In fixing this, is there a component of education, for example,
saying "Instead of doing a busy wait on a select loop or continually polling
you should set a kernel timer and wait for that to call you"?

That's part of it but the biggest thing is that you had no visibility. Just
two days ago at IDF I spoke with a developer of the GNOME desktop and he said,
yeah; when I saw it happen I fixed it in 10 minutes, but you don't know it's
there until you see it from PowerTOP. Adding the visibility turns out to be
enough for people to start fixing it. They know how to fix--how to not poll
most of the time.

You can't fix something you can't measure.

If you don't see that it happens you don't know it happens and you can't fix
it.

Are you getting the same sort of results from other
projects you run into?

GNOME was there, but it's almost everybody goes "oh yeah, we should not have done that"; either they fix it themselves or a lot of people send them the fix, and in general it's like "oh yeah, we shouldn't have done that". Unless you see what's happening you don't know what to fix, so the biggest thing that PowerTOP did was add visibility. We can see under the hood what's going on and then we can fix it. And quite often the fix is very simple.

It sounds, then, like I should be able to say that just about everybody is happy to see this. Is that the case?

Yes; people--all the developers I've worked with, and that's quite a few--they all go "oh yeah, thank you for the fix; we shouldn't have had this problem in the first place. We didn't know about this; it's fixed now." In the beginning, when PowerTOP was very new, I did most of the fixing, and nowadays the people do it themselves. The developers learn.

In an attempt to head off the inevitable, here's a link straight to the existing "Interesting but how useful, really?" thread. (Yes! No! I have a Mac! I use suspend! I use hibernate! Suspend is broken for me! Hibernate is broken for me! Hibernate takes too long with 500Mbytes! Why do Linux people always say change your habits? Etc.)

What I really want to know is what can be done about usb-storage and pciehp (PCI Express hotplug). I have an EeePC 900 using a kernel with Arjan's fastboot patches [lkml.org] and with USB entirely disabled and pciehp turned off the kernel mounts the root filesystem in just over one second. With USB on and pciehp in use it's over 5 seconds....

We've sent a patch to Greg KH making USB initialization go in parallel, which reduces USB initialization from [N * 0.1] seconds (where N is the number of USB ports in your system) to [0.1] seconds. This patch is currently in linux-next afaik.

I'm wondering why you would even have PCIe hotplug turned on on an Asus 900 :)
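The shape of that parallel-init win, in miniature (sleep stands in for the real port probe): N serial probes cost N delays, N parallel probes cost roughly one:

    # Toy model of serial vs. parallel USB port bring-up: each probe
    # blocks ~0.1s, so 8 serial probes take ~0.8s and 8 parallel ones
    # ~0.1s. time.sleep stands in for waiting on the hardware.
    import time
    from concurrent.futures import ThreadPoolExecutor

    def probe_port(port):
        time.sleep(0.1)                    # stand-in for the hardware wait
        return port

    def serial(n):
        for p in range(n):
            probe_port(p)

    def parallel(n):
        with ThreadPoolExecutor(max_workers=n) as pool:
            list(pool.map(probe_port, range(n)))

    for fn in (serial, parallel):
        t0 = time.time()
        fn(8)
        print("%s: %.2fs" % (fn.__name__, time.time() - t0))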

I'd love for my MythTV box to boot faster. Since it's not silent (though the TV fans are louder, the TV isn't always on either), I leave it turned off, and the long boot time makes it less appliance-like.

Note that nowhere in the article is there any mention of the processor, its speed, or the number of cores. There's also not one word about how much RAM the machine has. With enough RAM, you can load your entire system into a RAMdisk, and even if you don't have an SSD, access time becomes (effectively) zero. Also, of course, a 2GHz quad-core machine is going to boot faster than a 1GHz single core. I'm not saying they're cheating or anything, but these specs are something you need in order to evaluate what they've done, and they're not telling us.

And we used the 'stock' SSDs in these systems. From the bootcharts you can see that our SSDs top out at 25-30MB/sec throughput on read, far below what most regular hard disks or server-grade SSDs can do.

The real lesson from all this load-time business is that our compilers still really, really suck. I mean, the truth is that when you boot your computer there is only a tiny bit of logic that really needs to run, because only a small amount of stuff changes between any two boots (and less between a boot and a power-off).

A truly well designed system wouldn't care about arbitrary boundaries between this program and that one; it would hunt down optimization opportunities everywhere and automatically reduce boot-up to an extremely lean and quick procedure, without adopting the harms of merely loading an old image.

I mean, to take one example, a substantial amount of time during startup is probably spent searching for and then parsing configuration files. So long as there is no cross-cutting OS-level JIT compiler that can deal with both system I/O code and application code, there isn't much we can do about this without a massive investment of effort. However, in principle there is no reason the system couldn't simply read the preparsed data from a cache and jump directly to the real substantive logic that needs to run on boot (checking out network conditions, looking for changed hardware, dealing with changed configs).
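The preparsed-cache half needs no JIT at all; a sketch (the cache location and format are invented): parse once, store the result keyed on the config file's mtime, and on later boots skip straight past the parser:

    # Sketch of a preparsed-config cache: parse the text file once,
    # pickle the parsed dict keyed on the source's mtime, and reload
    # the pickle on later boots. Cache path and format are invented.
    import configparser, os, pickle

    def load_config(path):
        cache = path + ".cache"
        mtime = os.stat(path).st_mtime
        try:
            with open(cache, "rb") as f:
                saved, parsed = pickle.load(f)
            if saved == mtime:
                return parsed              # fast path: no parsing at all
        except (OSError, pickle.PickleError, EOFError):
            pass
        cp = configparser.ConfigParser()   # slow path: parse, refresh cache
        cp.read(path)
        parsed = {s: dict(cp.items(s)) for s in cp.sections()}
        with open(cache, "wb") as f:
            pickle.dump((mtime, parsed), f)
        return parsed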

ROM was a wonderful thing. Simply flip the switch and the software is already loaded into memory. There was about a second or two of initialization (on a ~1MHz 8-bit processor!) and you were ready to go. It's still possible to create such fast boot times using ROM. Especially with re-flashable ROM. These sorts of boot times are seen in systems like Game Consoles.

Unfortunately, desktop OSes are so complex that using re-flashable ROM adds a great deal of complexity and cost to the design. Thus you aren't likely to see any systems keep their OS in Flash. Compounding the problem is that modern OSes are rarely designed to boot from a ROM configuration and would require substantial changes to boot properly.

desktop OSes are so complex that using re-flashable ROM adds a great deal of complexity and cost to the design.

Flash is almost dirt cheap. $10 buys you more flash memory than most systems have RAM. Just save the state of a freshly booted OS in flash, and when the computer starts, load just what you need to access the flash and handle page faults, then proceed as if everything were already in RAM while copying from flash to RAM in the background. Whenever a page fault occurs, load that page from flash next. This way you don't need to wait until everything is copied to RAM.
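The page-fault half of that already exists in userspace as mmap: map the saved image, start running, and let faults pull in only the pages actually touched. A toy analogy (the snapshot path is made up):

    # Userspace analogy for the flash-snapshot idea: mmap the saved
    # image and let page faults pull pages in lazily as they are
    # touched, rather than reading it all up front. Path is made up.
    import mmap

    with open("/boot/ram-snapshot.img", "rb") as f:
        snap = mmap.mmap(f.fileno(), 0, prot=mmap.PROT_READ)

    first_page = snap[:4096]    # only now does the disk read happen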

It's still possible to create such fast boot times using ROM. Especially with re-flashable ROM. These sorts of boot times are seen in systems like Game Consoles.

Uhh, if I turn on my Mac Pro and my Xbox 360 at the same time, I'm up and running with OS X a few seconds before the 360 starts to load the game. I for one hope my computer DOESN'T have boot times like a console.

My Commodore 64 booted many times quicker than that and was probably far more useful.

Well, it did run the popular games of its day...

Sometimes I become nostalgic for the days of the C=64 and think about getting back into it all again with some of the groups that are still out there, but then I stop and catch myself: do I really want to waste all those hours writing demo code for the 120 people who are likely to ever see it?

But after you ruminate on it for a while, you realize that people just assume a long boot time.

I still get surprised when my gaming rig boots up in 8-9 seconds, and it's using RAID0 on four drives (with frequent off-system backups of saved games). Having an OS ready with a login prompt at five seconds would make it seem blazing fast.

So the answer to your first question is: soon you will be able to set the final video mode in the kernel. As for your second question, doing it in X is not the best solution (as doing it in the kernel means less flickering when X starts, the ability to support graphical kernel panics and nicer virtual terminal switching).