Posted
by
timothy
on Wednesday July 15, 2009 @01:23PM
from the all-it-does-is-operate-the-power-button dept.

Sam writes "A new goalpost has been set in the race for faster bootup times. MontaVista Software announced (and demonstrated at the Virtual Freescale Technology Forum) a dashboard application going from cold boot to operational in one second flat on their embedded Linux platform. Although this is unlikely to immediately benefit your average Linux user, previous real-time patches have eventually made their way into the main kernel."

I'm surprised that this is news. I remember working a few years ago on booting Linux (also the MontaVista version) in 600 million cycles flat, which, for a CPU running at 600 MHz, is exactly one second as well.
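The conversion behind that figure is just cycles divided by clock rate; as a trivial sanity check:

```python
# Cycles-to-wall-clock conversion for the figure above.
cycles = 600_000_000     # 600 million cycles
clock_hz = 600_000_000   # 600 MHz clock

boot_seconds = cycles / clock_hz
print(boot_seconds)  # 1.0
```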

It is not news. I work for an embedded software company and we have operating systems that boot in well under a second. It's not a big deal. Really. It's just a pleasant side effect of the fact that a lot of embedded operating systems don't really have to do a whole lot on boot.

Impressive and would be a huge improvement over the current state of things.

But then again, my 1 MHz Apple ][ could cold boot in just a couple of seconds. Of course, loading Applesoft BASIC from tape took an additional two minutes, but Integer BASIC was in the ROM.

Michael Abrash wrote a great article about this in Dr. Dobb's magazine in the 90s. His young daughter (5 years old?) asked him why he never used his "fast" computer. Abrash was using a state-of-the-art 266MHz DX2 powerhouse and couldn't figure out what she meant. She was referring to the old VIC-20 in the corner that would boot in just a few seconds. Windows 3.0 took several minutes to load. IIRC, the article was titled "Perception Is Everything".

I wonder how much of the boot slowdown has to do with architectural change (loading from slow disk to plentiful RAM vs. a small amount of RAM and lots of stuff burned into ROM, the rise of networking as a more or less assumed part of the boot process, the increase in the number of highly complex peripherals that need to be negotiated with), and how much has to do with the OS gradually grabbing more of what applications historically had to do (DOS loaded like the wind, but didn't actually load very much).

I wonder how much of the boot slowdown has to do with architectural change

None of it - at least not the architecture you're meaning (ROM vs. HD loading)

The post above yours talked about an A1200 that booted in 3 seconds - from hard drive.

I had an A3000 that booted from HD in 6 seconds - and it was only that slow because it loaded up drivers for my network card, set up TCP/IP, initialized the (additional) display card, and a bunch of other things (desktop customization.)

Good catch -- indeed, that was a goof. I meant to write 66, but my fingers had other ideas. Sorry. But at any rate, I don't think the specific numbers are that important. The point was that the new machine was computationally hundreds of times faster. But in actual use, it was slower in some areas that really matter, to the degree that even a young child noticed! BTW, it's been a few years since I read it, but I believe this story is included in Abrash's book titled "Michael Abrash's Graphics Programming Black Book".

FACT NAZI Observation:
The 486DX came in 20, 25, 33, or (if you were unlucky) 50 MHz variants. Consequently, a clock-doubled (DX2) 486 was not capable of anything close to 266 MHz. That wasn't achieved until the Tillamook Pentium much later.

CoreBoot (formerly known as LinuxBIOS) will boot a full Linux kernel on a general-purpose machine in 3 seconds.

Except one can't easily install coreboot on a random PC because as I understand it, most motherboard makers have declined to cooperate. Therefore, any machine to run coreboot would need to be purpose built. So if a machine is advertised as compatible with coreboot, is it really "general-purpose"?

Coreboot actually works on a very wide range of motherboards, in much the same way that Linux had drivers for hardware where the vendors weren't cooperating (reverse-engineering). I maintain (when I have time) the Freshmeat record for Coreboot and just about every entry I've done for it has had vast numbers of new drivers.

Okay, I haven't been using desktop Linux on a day-to-day basis since around 2003; but even then, sleeping and hibernating worked reasonably well, so I didn't reboot all that often. On my Mac, the only time I reboot is when an update forces me to. So (serious question): why are faster boot times all that important? I wouldn't think devices w/ embedded Linux would shut down regularly, but maybe I'm wrong...

Certainly you don't want them to shut down regularly, but if it does spontaneously shut down for some reason (power interruption comes to mind) then getting your embedded device back online fast may be very important, depending on the importance of the embedded device in your system.

Yeah, that's exactly it. In my business, we try to design for zero failures, but that's unrealistic. So we also design in restarts that are fast enough that the outage isn't as noticeable to whoever is using the services. The less our customers' customers have to deal with outages, the less our customers have to yell at us, or do bad things with contract clauses. If my PPC embedded controller comes back to life and working in 1 second, then my peripheral services can come back that much faster, and the network i

In extremely broad terms, there is "embedded" as in "essentially always on", which covers things like routers, NAS boxes, LOM cards, and watches; and "embedded" as in "should turn on and off as fast as the device it is embedded in", which covers things like TV electronic program guide systems, car engine computers, and the like. The line between the two can be rather blurry (my cellphone is on most of the day, to receive work calls; but waiting 40ish seconds

Outside consumer devices it could be even more important, such as solar-powered wireless sensors. For example, every hour it powers on an embedded device which transmits the data back to a server then powers down. The boot time has real effects on power requirements. Which is either solved by larger solar panel ($$$) or fewer updates.
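The energy math behind that point is easy to sketch. A minimal back-of-envelope calculation (all numbers here are illustrative assumptions, not measurements from any real sensor):

```python
# Back-of-envelope energy budget for a duty-cycled, solar-powered sensor node.
# Every constant below is an assumed, illustrative value.

ACTIVE_POWER_W = 0.5    # power draw while the board is awake (assumed)
TRANSMIT_TIME_S = 2.0   # time to sample and send the data (assumed)
WAKEUPS_PER_DAY = 24    # one report per hour, as in the comment above

def daily_energy_j(boot_time_s):
    """Joules spent per day as a function of cold-boot time."""
    awake_s = boot_time_s + TRANSMIT_TIME_S
    return ACTIVE_POWER_W * awake_s * WAKEUPS_PER_DAY

slow = daily_energy_j(10.0)  # a 10 s boot
fast = daily_energy_j(1.0)   # a 1 s boot
print(slow, fast)  # 144.0 vs 36.0 -- boot time dominates the budget
```

Under these assumptions, cutting boot time from ten seconds to one shrinks the daily energy budget by 4x, which is exactly the solar-panel-sizing tradeoff described above.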

My phone gets no reception at work, so I turn it off. I turn it on again when I leave, and I usually need to use it at that time. It's frustrating that it takes longer to boot my phone than to actually make the phone call.

It depends what you are doing. Shutdown != hibernate/suspend. Suspend is used as a workaround for slow boot; hibernate is also used, but even that can only be so fast: ~2.5 s per gig of RAM in use when hibernated. But both are just ways to save power while allowing you to continue your current task later, and should not be considered replacements for shutdown/startup when you're done.
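Taking the ~2.5 s/GB figure from the comment at face value, the resume time scales linearly with memory in use (the RAM figure below is just an illustrative example):

```python
# Rough resume-time estimate from the ~2.5 s per GB figure quoted above.
SECONDS_PER_GB = 2.5   # figure from the comment, taken at face value
ram_in_use_gb = 4      # illustrative amount of RAM in use at hibernate time

resume_s = SECONDS_PER_GB * ram_in_use_gb
print(resume_s)  # 10.0 seconds
```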

I wouldn't think devices w/ embedded Linux would shut down regularly,

Embedded Linux can go everywhere; in this specific case it's a car system that 1) has very low power usage requirements (no suspend) 2)

My gaming workstation at home goes from off to the Windows XP desktop in 9 seconds. Minimal POST, no pagefile, auto-login, 4 HDDs striped, yadda yadda. If I put Firefox in the startup folder, your 10-second requirement would be met. Of course, it's a bad disk just waiting to happen, but it loads games faster than my Wii or Xbox.

Somewhat off-topic, but related to your post... my gaming machine takes forever to boot. Part of the problem is that when it's powered off, and I push the power button, it spends ten seconds just spinning the fans before the BIOS POST screen shows up. I have minimal POST enabled, but I suspect the issue is my video card's initialization (which, presumably, must finish before the BIOS can initialize and display its POST screen). My machine has a Core i7 920, an MSI X58 Platinum motherboard, 12GB RAM, and

That might be kinda hard, considering that in my experience it takes no less than five seconds, and sometimes as long as twenty seconds, for the wireless card to complete a connection with the router, especially if there's encryption involved.

You're also assuming that latency is trivial. That's probably an invalid assumption in the most common use case for the sort of machine where this would be useful (i.e. netbooks getting online via 3G or similar).

Give me a call when they can go from off to Google in less than 1 second. (OS boot, wireless initialization, browser start, google reply).

That would depend on two things: 1. how fast you can type in the passphrase to unlock the keyring that holds your WEP/WPA/WPA2 keys, and 2. how fast your router (whose operating system you usually do not control) responds.

I work as an embedded driver software engineer and set up our company's OpenEmbedded build system to provide an end-to-end build environment for our embedded offering. While I can't find the link at the moment, the one-second boot time has been done before and was posted on TI's OMAP developer site a while ago. If I remember correctly, it's mostly about U-Boot and how it copies the kernel into memory (byte by byte as opposed to streaming it), which is where you get the majority of your time decrease.

Either way, MontaVista is not the first on this one and it's a shame they're pretending they are.

The one-second boot time is also never going to benefit regular PCs, as they achieve it due to the nature of embedded systems -- you build a distro for your specific hardware, which means no probing, none of that BIOS junk. No looking for the 'first' boot device... U-Boot can be configured to automatically jump to the booting phase, so you're already faster there. Beyond that, load and decompress your kernel (it'd be faster if your kernel weren't compressed too, wouldn't it?)...

So, chalk this up to having a kernel built specifically for your hardware and a boot-loader that is set to only boot one way, ever.
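As a sketch of what "only boot one way, ever" looks like in practice, a U-Boot environment for a fixed embedded image might be set up roughly like this (the load address here is made up for illustration; the variable names are standard U-Boot):

```
# Illustrative U-Boot console commands -- the address is a placeholder.
setenv bootdelay 0                  # skip the autoboot countdown entirely
setenv bootcmd 'bootm 0x80000000'   # jump straight to a kernel image already in memory
saveenv                             # persist the environment
```

With `bootdelay` at 0 and a hardwired `bootcmd`, the loader never waits for input or probes for boot devices, which is a chunk of where the embedded time savings come from.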

All you have to do to gain the same benefits on your PC (besides throw away the BIOS and replace it with Coreboot) is compile the kernel for your system. There's no reason this can't be done on every system. Gentoo has proved that. Everyone compiles kernel modules on-demand these days... might as well recompile the kernel.

Now, if only kexec would work on more platforms... or for that matter, work reliably on x86.

This is great but I also want a completely power-loss tolerant file system that doesn't need any fscking on restart. If I'm building a true Linux-based appliance, not a general purpose computer, laptop or netbook, basic criteria would be fast boot and the ability to turn it off by disconnecting the power without telling it to shut down gracefully. Basic toggle switch control and no fancy hardware to keep power available while it's shutting down. This would be battery powered and an end-user should be abl

Some people (including myself) are kind of anal about turning things off when they're not being used. On the other hand, a lot of people also just turn on their computer and walk away to get something to eat/drink/use the restroom while the machine boots, so it isn't really a big problem.

The simple solution, of course, is to add in, say, 4GB of flash storage to laptop motherboards for exclusive use in hibernation. It could also double as swap space during normal usage.

Even as it is, my laptop comes out of hibernation in five seconds or so (in Windows); it doesn't take very long for your average hard drive to spam your data back into RAM (assuming I had a lot running when I went into hibernation).

It sort of already exists [wikipedia.org], but it's not used as you suggest (yet). It would be nice if it were one drive with two virtual drives, separately accessible. MSI has a netbook with both SSD and HDD [osnews.com] (separate). Provided you could select where Windows stores its hibernate data (don't know, don't use Windows), you could probably accomplish what you suggest fairly easily.

Provided that the SSD is used only for hibernation data, the cost of the drive is more than the cost of the data, meaning that you can replace the drive for the cost of the drive + nada. If, however, you wish to rely on the drive for anything more than hibernation, you run into a case where the data is more valuable than the drive, and additional measures should be taken to protect that data. Knowing most people, though, they'll just curse and turn purple when their report is gone and the drive is dead.

That limitation is effectively irrelevant [storagesearch.com]. And that article is from 2006. And it accounts for the case of constant, high speed writing; hibernating out to disk happens a few times a day, and as such is barely noticeable.

Think about it. Let's say you've got a crappy flash drive: only 1 million writes per sector. And it's exactly the size of your main RAM, so no wear-leveling algorithm will help. If you woke up and hibernated that machine 100 times per day, it would still last 10,000 days, or 27.4 years. That
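The wear-out arithmetic above, spelled out with the same pessimistic numbers:

```python
# Flash wear-out lifetime under the worst case described above:
# no wear leveling, every hibernate rewrites every sector once.
WRITES_PER_SECTOR = 1_000_000   # pessimistic endurance for "crappy" flash
HIBERNATES_PER_DAY = 100

days = WRITES_PER_SECTOR / HIBERNATES_PER_DAY
years = days / 365.25
print(days, round(years, 1))  # 10000.0 days, ~27.4 years
```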

Even desktop users who use Linux often have to dual-boot into Windows. Sometimes virtual machines or Wine are good enough for what you want, but for something like games or software that couples closely to hardware (e.g. AnyDVD or most games), this doesn't work. Having a faster boot on Linux makes switching between OSes nicer. On my machine, Debian already boots faster than Vista (I forget by how much; I'll have to remeasure it), and that's including running some slow services at startup for Linux like u

Google for Pico-ITX. I believe full-tilt consumption is something like 30 watts, which is about double the power consumption of a compact fluorescent bulb. Assuming you leave it running all the time and your electricity is 7 cents/kWh, your power bill every month for it would be $1.50 (assuming a constant 30-watt draw). $1.50 is not really noticeable on an electric bill when you take into account delivery charges and taxes.
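That monthly figure checks out, assuming a 30-day month:

```python
# Monthly cost of leaving a 30 W box on 24/7 at $0.07/kWh.
POWER_W = 30
RATE_PER_KWH = 0.07
HOURS_PER_MONTH = 24 * 30   # assuming a 30-day month

kwh = POWER_W * HOURS_PER_MONTH / 1000   # 21.6 kWh
cost = kwh * RATE_PER_KWH
print(round(cost, 2))  # 1.51
```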

Linux has insane uptime. I can usually keep my box on indefinitely. I'll only turn it off when I accidentally pull out the cord when I'm reaching behind my desk or when I blow a fuse by running the microwave, toaster, and dishwasher at the same time.

Why have such a quick boot time when you hardly need to boot in the first place?

Because, in my experience, laptops are far less well-supported and far less reliable. My desktop machine currently has 100+ days of uptime, and the last power cycle was because of a scheduled power outage in the building. That uptime is typical for my desktops. In contrast, my laptop rarely goes more than a couple of days without needing a reboot because some driver or another gets into a fubar state. I use my desktop 8-10 hours per day, and my laptop 1-2 hours per day, so factor that in as well.

If you dual boot between Linux and Windows, like I do, quick boot times are important. I often find myself just staying in Windows to do things I would be better off doing in Linux because I don't want to wait for the computer to reboot. Waiting for Windows to shut down and then waiting for Linux to boot up takes a while (in terms of attention span). I already have a hard enough time motivating myself to be productive at home;)

On an unrelated note... why are your microwave, toaster, dishwasher, and comp

They just want to simulate the effect of printing marketing materials on a worn-out laser printer in need of toner because that looks sooooo professional.

Nothing says we're professionals and have important information for you like a crooked illegible photocopy except perhaps a grade-school spirit duplicator. Expect funky light purple text next. The holy grail, of course, will be a wrinkled paper background that actually makes it look like they dug the web page back out of the trash and gave it to you.

I won't be impressed until they track your visits and gradually lighten the text in shades of brown, while yellowing the background as time between visits passes. A true old-school professional look akin to yesterday's thermal printers.

Whether or not you like his writing, I think Maddox hit the peak of usable web design: dark background, with large-font bright text. If you don't like the yellow, you could go with old-CLI-style green. Either way, it's the easiest webpage on the internet to read.

The BIOS isn't always the problem... if it takes three seconds for the video card to become usable (fans running, memory initialized, etc), you're not going to get less than a three-second perceived boot time, no matter how fast you make everything else happen. The same goes for other hardware. If they happen in series (or worse, if they have to happen in series), then that can add up - that can be mitigated by the BIOS, of course, but I can see why boot times might get longer.

This is absolutely part of the problem. The power supply needs to turn on its fans and generate stable voltage, then the case fans and mobo power conditioning need to stabilize. Then you get to touch the BIOS, which probably does a staggered startup of most devices to prevent power-supply droop. As stated, all of this hardware then needs to reach a usable state, both mechanically and electrically.

In a car, the power supply is DC to start with, the hardware is smaller and simpler (requiring fewer moving

I think Bakkster's comment (sibling to yours) has the idea. To wit: my power supply has to stabilize first, and given that it's a 750W, I wouldn't be surprised to find it take several seconds by itself.

I'll reboot when I get home and time it from off to the BIOS screen being displayed, from there to grub, and from grub to the login screen in each OS. I may be overestimating the OS boot time; I just know that the time between pressing the power button and the screen showing the BIOS startup info is like on

All the hardware was purchased brand new just one month ago, and appears to work perfectly once it's started up. I don't know how to verify whether something is wrong with the hardware during startup but not any other time.

During the boot process, the BIOS provides an opportunity for the user to hit a hot key that terminates the boot process and instead displays a menu used to modify various platform settings. This includes settings such as boot order, disabling various processor or chipset features, modifying media parameters, etc. On an embedded device, BIOS setup (and any similar settings provided by an operating system loader) is more of a liability since it gives the end-user access to BIOS features that are potentially untested on the device. It is better to have a set of setup options that may be chosen at BIOS build time. Removal of BIOS setup also saves significant BIOS POST time.

Probably they should. I have never seen a single credible justification for a boot time of over one second for any desktop operating system.

I don't think the eventual target is desktops.

From TFA:

For industrial automation and other similar applications, fast boot and response time is critical to successful operation. Applications must be fully operational at power on and cannot be delayed due to the volatile nature of the platform and environment. Variables such as power fluctuation, network failure, device availability, and memory management must be responded to with no loss of performance and functionality. These same demanding requirements are found not just in Industrial Automation applications, but automotive, aerospace, and military applications as well.

I can see other reasons for linux based kernel devices like web/net appliances, game consoles, cell phones, etc... to have really low boot times.

I fly a *lot* and I haven't had to do that in a very long time (it's in suspend all the time just in case).

I flew with 4 co-workers last week, and 3 of us had to boot our laptops. All had our laptops and/or laptop cases swabbed. One of us had to take off his shoes and socks, and submit to having the waistband of his pants searched by hand.

During the summer I shut down my desktop daily. Besides the electricity used directly, it also means my AC has to work harder to keep a certain temperature.

I'd love to have a reasonably powerful desktop machine that idles at 20W or less, but for now it idles at 100W, and that's quite a bit of heat to be needlessly generated in a small apartment in the summer.

Depends on your definition of "reasonably powered", but this [mitxpc.com] has a dual-core 1.6 GHz Atom processor and nVidia graphics, certainly pulls less than 30 W at idle, and has no fans, so coupled with SSD storage it would be completely silent. It also has a kit to hang it on the back of an LCD, so it would be pretty invisible as well.

I have an Athlon 64 4200+ low-power machine with a fanless motherboard and an underclocked (when not running games) GeForce 9600 GSO. It idles at about 50 W with 2x 7200 RPM HDDs and maxes at ~150 W. It plays most games at 1080p30, or close enough that I don't care.

Depends on what kind of $$ you have, but you could do what I do in my house:

The wife and I share a laptop. 90% of our activity revolves around using the laptop to browse and chat with friends. The other 10% I can walk over and use my desktop - or, if I'm particularly lazy but need the raw HP for something, I can turn it on using WOL and VNC/RDP to it. My desktop PC generally remains off then, and the laptop (when full on battery) only uses 15-20 watts. And it's not a particularly new laptop, either.