
Pizzutz writes "Softpedia reports that Ubuntu 9.04 Boots in 21.4 Seconds using the current daily build and the newly supported EXT4 file system. From the article: 'There are only two days left until the third Alpha version of the upcoming Ubuntu 9.04 (Jaunty Jackalope) will be available
(for testing), and... we couldn't resist the temptation to take the current daily build for a test drive, before our usual screenshot tour, and taste the "sweetness" of that evolutionary EXT4 Linux filesystem. Announced on Christmas Eve, the EXT4 filesystem is now declared stable and it is distributed with version 2.6.28 of the Linux kernel and later. However, the good news is that the EXT4 filesystem was implemented in the upcoming Ubuntu 9.04 Alpha 3 a couple of days ago and it will be available in the Ubuntu Installer, if you choose manual partitioning.' I guess it's finally time to reformat my /home partition..."

This is one of my pet peeves: why can't computers boot in a second or less?

Imagine a visionary like Steve Jobs (by the way, enjoy your leave of absence and please come back). He goes to his team and says "I don't care what it takes, build me a computer which boots in one second".

Ignore the past, the legacy of decades of layer after layer of OS software. Can it be done?

A 3 GHz dual-core processor can process 6 billion instructions in that first second. I know the disk is a problem. I'm not asking for all possible OS services to be up in a second... But I'm sure this could be improved greatly. It's all out there in the open. People want this.

Imagine a visionary like Steve Jobs (by the way, enjoy your leave of absence and please come back). He goes to his team and says "I don't care what it takes, build me a computer which boots in one second".

I agree totally. 21.4 seconds is incredibly slow, and that's only to get to the login screen... which is typically only halfway there. I know that it is possible to boot Linux in 5 seconds for some special cases. However, the boot time should be even quicker.

I think there's more to it than that, though. For example, you'd have to completely bypass all checking, device discovery, etc., on boot (it takes time to discover drives and PCI/PCI-E/ISA/USB devices). Yeah, you could just have that set up in the BIOS or something and just use that configuration, but that could be a pain, too.

Now, if we're talking about post-POST boot-up, I think something could be done there. Even if it were just the option of, say, 8GB of onboard memory dedicated to a fast-boot operating system.

As far as the extremely fast-boot idea goes, though, isn't that sorta what Good OS's Cloud partnership with GIGABYTE is supposed to be? The GIGABYTE Touch Netbook M912, to be precise. Link here [thinkgos.com]. It was mentioned on Slashdot a while ago as well.

So why wait for them sequentially? Even better, why not supply optional kernels tuned to modern hardware? I don't own a system with an ISA bus (even a faked-up one on the southbridge or similar), so let me skip probing for one.

Yeah, yeah, compile my own and all. But surely big distros like Ubuntu could make a legacy-free kernel available that skips ISA, serial, parallel, etc.

Why does it take so long to discover those drives and other devices? Why does a CD-ROM drive take hundreds of milliseconds to be recognized during a POST? These things should happen basically instantly at modern hardware speeds, and yet they don't.

It reminds me of NFS timeouts. Years ago when I worked in an environment where everyone NFS mounted a shared filesystem, there would occasionally be outages on the server or in the network. My local system would lock up and hang for MINUTES while it timed out on requests to the NFS server. I could never understand why the thing didn't just time out in seconds rather than minutes. Even at that time, we were running 10 MBit or maybe 100 MBit network connections; if the remote system is going to respond, it's going to happen at MOST after a few second delay. Waiting for minutes just seems dumb.

The same sort of thing happens a lot with web browsers too, which wait far too long for servers to time out. If the server doesn't respond in 10 seconds, it's not going to respond. Ever. There's no reason to wait 30 seconds or longer to time out an HTTP connection...
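Most HTTP clients do let you tighten this yourself. For example, curl exposes exactly these knobs via its `--connect-timeout` and `--max-time` flags (the URL here is just a placeholder):

```shell
# give up if the TCP connection isn't established within 5 seconds,
# and abort the whole transfer if it exceeds 30 seconds total
curl --connect-timeout 5 --max-time 30 http://example.com/
```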

My local system would lock up and hang for MINUTES while it timed out on requests to the NFS server. I could never understand why the thing didn't just time out in seconds rather than minutes.

This was the default setting: hard mounts. If the server went away the NFS client was told to hang so that any program trying to access the export would block to minimize the chance of data corruption. Once the NFS server came back, things unblocked.

You can of course configure NFS clients to use soft mounts, so that an error is returned to the process that called read(2) or write(2), and you simply hope that the application code does error checking.
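For reference, the distinction is just a mount option. A sketch of the corresponding /etc/fstab entries (the server and export names are made up):

```shell
# hard mount (the default): callers block until the server comes back
fileserver:/export/home  /mnt/home  nfs  hard,intr  0 0

# soft mount: after retrans retries of timeo tenths of a second,
# return EIO to the caller instead of hanging forever
fileserver:/export/home  /mnt/home  nfs  soft,timeo=30,retrans=3  0 0
```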

You know how everyone wanted a Linux-based operating system that "just worked" on a wide variety of hardware with drivers for everything? And didn't throw a shit-fit if you moved the hard disk to a completely different machine and tried to boot it up?

That's why Linux takes so long to boot these days. You can have very good hardware compatibility or you can have very good boot speed. You can't have both. (Well, until someone invents persistent RAM.)

Why does it take so long to discover those drives and other devices? Why does a CD-ROM drive take hundreds of milliseconds to be recognized during a POST? These things should happen basically instantly at modern hardware speeds, and yet they don't.

The CD-ROM does respond to the BIOS very quickly. What takes forever is the BIOS checking each controller, chain, and bus location for a device. Waiting for those probes to time out is what takes so long. This isn't just the BIOS either, it's the Linux kernel too and any OS that might want to speak to whatever hardware might happen to be there.

Even at that time, we were running 10 MBit or maybe 100 MBit network connections; if the remote system is going to respond, it's going to happen at MOST after a few second delay. Waiting for minutes just seems dumb.

Seems dumb to you, the user. Didn't seem dumb to the programmers who wrote NFS and whatever application you were using. Why? NFS is 1) a block device, and 2) largely a hack. The way UNIX was designed, block devices just don't disappear from the system. Just like wheels (ideally) don't go flying off your car while you're driving down the road. But with NFS, a block device can suddenly become unavailable, and as far as the OS is concerned, that's just really really bad for all sorts of reasons. The programmers figured that in order to make the system as robust as possible, they'd extend the timeout as long as tolerable to reduce the chances of data loss and corruption as much as possible. It's conceivable that a large number of problems could be resolved in a matter of minutes (say, somebody tripped over the power cord for the network switch), thus preventing the loss of what could be very valuable data.

The same sort of thing happens a lot with web browsers too, which wait far too long for servers to time out. If the server doesn't respond in 10 seconds, it's not going to respond. Ever. There's no reason to wait 30 seconds or longer to time out an HTTP connection...

You click a mouse button. This initiates a request which, after all of the appropriate nameservers have been consulted, hops from your machine over dozens of routers, switches, and cables owned by different countries and corporations. It travels thousands of miles away to some place you can't even pronounce. Once there, the server recognises the request and acts on it, sending you back a mix of static content, images, and database content several orders of magnitude greater in size than your original request. The content then travels back to you another few thousand miles, perhaps via a different path until it eventually reaches your machine where it is processed and displayed in a mostly-legible fashion. And you have the gall to complain that sometimes it takes longer than 10 seconds for all of this to happen?

Good. Fucking. Grief.

I'm continually amazed that it works at all and I'm a sysadmin at a web hosting company. Almost every day I run across a site I want to visit that takes longer than 10 seconds to respond in full. There are lots of very good reasons that a website might take between 10-30 seconds to load in your browser. The authors of the HTTP protocol, web server software, and web browsers having a personal grudge against you sure isn't one of them.

No, it's a Network File System (hint, the initials). NFS supports filesystem primitive operations - see:
http://tools.ietf.org/html/rfc1094 [ietf.org]
Networked block devices are possible, but NFS isn't one of them.
However, some of the points are still valid - unix doesn't particularly like it when mounted filesystems go away unexpectedly.

NFS is just a real mess. It has all kinds of security issues, and has no concept of users beyond the local machine. If /etc/passwd isn't synced across the network, all kinds of stuff goes wrong. It also has a lot of limits around permissions, etc.

Probably the next closest usable network filesystem for Linux is Samba - which really isn't ideal (for one, it is almost entirely reverse-engineered and depends on a spec that isn't open). That files

But why does it take more than a few milliseconds to discover a device? I've never understood this. We have had CPUs that have had sub-microsecond execution cycles for DECADES now, and yet the timeouts for communicating with devices are still measured in seconds. Why?

Device discovery should take no more than a few milliseconds for an entire machine, with the possible exception of disk drives which presumably need to spin up and verify correct operating speed to report back on a self-check.

The PCI spec also has required delays from power good to reset negated and then another required delay from reset negated to first configuration access. The second delay alone is about 1 sec (2^25 clock cycles).

Intel engineers have significantly reduced the boot time. However, there are a lot of hacks that need to be done to do so. Hopefully, a lot of those changes will make it in. At 10 seconds, you've got a PC that boots really fast. At 5, you've got a PC you're more likely to shut down than hibernate or sleep (Linux has session restore which, for the most part, counteracts some of the advantages of hibernating or shutting down).

So in order to be a "visionary", I merely have to decide what consumers might want (not that hard, being one myself), and then ask people smarter than myself to make it happen, with no actual technical insight on how to do it?

The boss of Volkswagen did this after they bought Bugatti. He said "let's build a car that produces 1000bhp and goes 400kph". Then it took years for the engineers to figure out how such a thing might be possible. In the end, they did it, and it's probably the greatest car ever made.

Right. At least one car (the SSC Ultimate Aero) has beaten the Bugatti's speed record for a production car, but the Bugatti is simply an engineering marvel. Most "really fast" cars are designed to hit their speed limit a few times, and F1 cars are designed to do a couple races, but the Veyron is designed to last 20 or 30 years of road driving.

The Top Gear presenters kept comparing it to Concorde. That's how big of a leap forward it was.

In the business world that pretty much sums it up. You don't need to know how to do something. However, despite what you say, figuring out what consumers really want and are willing to pay for is damn hard. Companies spend billions trying to answer this question. Most of the results are complete failures. A few ideas make a few people very rich.

Geeks can be absolutely brilliant in their field. Given the right direction they can come up with the next big thing. However, most geeks spend their time on little pet projects that will never make a dime. The sad part is when the business man comes up with an idea and the geek implements it, the businessman usually doesn't give the geek enough credit, aka $$$.

The most rare of exceptions is when the geek comes up with an idea that becomes the next big thing.


The BIOS. The BIOS is pretty much the sole reason PCs take so long to boot. For example, at home I have a Core 2 Quad Q8200. When I push power, I get the XFX logo up while the POST runs. This POST takes approximately 20 seconds alone to run because of the inherent slowness of actions like writing ones and zeroes to every byte of RAM and then reading them back to test whether any memory is faulty, or initialising the Video BIOS, so on. Power on to OS loaded (even if it's still spinning up services) is impossible in 1 second, because it takes about that time for the CPU itself to start!

The Linux BIOS replaces the normal BIOS found on PCs and other machines. The BIOS boot and setup is eliminated and replaced by a very simple initialization phase, followed by a gunzip of a Linux kernel. The Linux kernel is then started and from there on the boot proceeds as normal. Amongst many other things, it provides a very fast boot time: 3 seconds from power-on to Linux console

And sadly, if a 1000-to-1 bet has 1000 punters, chances are that one of them will win it. And will be called a visionary, and people will crap on about how he was amazingly insightful, and a genius, and all that, when in actual fact he just happened to be the one chump who got lucky.

A 3 GHz dual-core processor can process 6 billion instructions in that first second. I know the disk is a problem. I'm not asking for all possible OS services to be up in a second... But I'm sure this could be improved greatly. It's all out there in the open. People want this.

Hard to say if there's really a point to booting up before the services are running.

What good is the PC being 'at the desktop' if the search service still hasn't started, the network still hasn't obtained an IP address, half my tray icons aren't up, and the hard drive is still madly churning to get everything else running? Anything I try to launch is just going to be thrown into the queue, and it probably depends on something that hasn't started up yet anyway.

Seriously, how much stuff could you really -defer- to after seeing the desktop and have a useful system?

Remember the average hard drive moves under 50MB/s. Even a fairly modest Ubuntu desktop requires several times that much RAM. If the hard drive started loading data at maximum speed, you've got maybe 50MB you can load in that first second, and probably far less in actual practice. That means your kernel, drivers, HAL, desktop environment, localization, firewall, network, background, theme, etc. ALL have to fit in under 50MB. And you'd need some sort of impossible situation where the CPU could run all the initialization code for all that in parallel, without waiting... never mind that it almost has to be initialized in sequence due to the layer dependencies.

If you want instant-on PCs, the only real solution is to never turn them off; waking from suspend-to-RAM is about as good as it's going to get for the foreseeable future.

First, the main thrust of my post was really to comment that getting to the desktop BEFORE everything else is running is a victory simply not worth fighting for.

I.e. deferring things to start up AFTER you arrive at the desktop, to give you the appearance of a faster boot time, is pointless if you need those things to actually use it... or even if the fact that those things are still starting up is pegging your CPU/hard drive, making it essentially unusable even if you aren't loading something dependent on the items still loading.

That said...

The entire distribution is 50 MB and it includes networking, a GUI, etc... Based on your numbers above, we should be able to load this entire distro into memory within a second or two, maybe another 2-3 additional seconds if you want to add a 3D desktop like Compiz.

I think potentially, yes, this is theoretically possible. There is a laptop out there, for example, with an instant on Linux distro flashed into the BIOS, that you can use to quickly browse the web etc, without having to boot up the OS off the hard drive.

So this absolutely -can- exist. I'm not sure just how instant "instant-on" is here, but it sounds like it's in the 3-5 second range.

Other services could potentially be loaded in the background after the login screen and/or desktop are available.

I think this is a bad idea. See above, for why.

I see little reason why an OS like Ubuntu can't reduce boot times down to the sub 10 second range with a little work. It's all about scheduling.

Sure, I agree 10 seconds is quite conceivable.

Much beyond that, though, and I think coping with querying the hardware itself will take longer than that. Just querying all the buses etc. to make sure nothing has changed will probably take a few seconds.

If the OS has to do a bunch of initializations every time it starts up, why can't it just do a memory dump after those initializations, then only load the ones that change every time the computer starts?

Why bother reinventing the wheel? We ALREADY have "suspend to RAM" and "suspend to disk", and that is basically already what they do. Trouble is, the device drivers have to support it for it to work properly. And it turns out that, for suspend-to-disk at least, reading the big ballooned-out memory image back from disk is usually SLOWER than just booting clean, because of all the extra data involved.

And on top of that you STILL have to wait for a pile of device initialization because simply loading in your network/video/audio/etc driver to a particular ram image state doesn't do a thing towards actually putting the network/video/audio/etc device into a suitable state.

(This is in fact precisely why you need a dedicated protocol to communicate you are going into and out of suspend and device driver support for it.)

This is one of my pet peeves: why can't computers boot in a second or less?

Cripes, I'm all for innovation, but damn, if you're literally counting the half-seconds sucked from your obviously insanely demanding lifestyle waiting for your current OS to boot up, then what the hell are you doing reading Slashdot? ;-)

Hell, while we're on the topic of the damn-near unobtainable, I'd simply settle for true open-document standards, and a pop-up free Internet. Give me that, and I'll go get another cup of coffee while I wait for my OS to boot.

Sorry to break it to you, but boot time is measured from the push of the power button to a usable desktop. You may enjoy your 26 seconds of pretending that "this is not really happening" - most other people don't.

You'd have to have some sort of auto-login setup, but it'd be disingenuous to call your PC booted when it's just sitting at the login screen. On my Ubuntu box I'd estimate a good 50% of my boot time is after the login screen, before I'm able to do what I wanted to do.

I set up an auto-login for my Ubuntu laptop, and then have the session-manager lock the screen immediately after logging on (before the panel or nautilus have even loaded, so while the desktop is still unusable). This way, after pressing the power button, I don't have to interact with my computer at all until immediately before I want to use it (i.e. to type my password in order to unlock the screen).

Unfortunately, just putting `gnome-screensaver-command -l` into the session manager won't work because it doesn't seem to load immediately. Instead, I made it run a script that executes that command in between calls to `sleep 1` six or eight times. It works for me.
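The workaround above boils down to retrying the lock command until the screensaver daemon is actually up. A minimal sketch of such a script (the `gnome-screensaver-command -l` invocation is from the comment; the retry helper itself is illustrative):

```shell
#!/bin/sh
# Retry a command until it succeeds, up to $1 attempts, one second apart.
retry() {
    tries=$1; shift
    i=0
    while [ "$i" -lt "$tries" ]; do
        "$@" && return 0
        sleep 1
        i=$((i + 1))
    done
    return 1
}

# In the session manager, run something like:
#   retry 8 gnome-screensaver-command -l
```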

For cold boot to be as fast as hibernate, it would have to be able to run through the bootloader, do hardware detection, and generate a system image as fast as your system could copy one into ram. It's like trying to write a book versus trying to read one. Well parallelized init processes, cool hardware tricks, and bootloader shenanigans can get you pretty close to it, but its still not exactly a fair fight, and as another poster pointed out, the two have very little in common from a technical perspective.

Use your common sense. "Boot time" obviously refers to the time that the user is waiting for the machine, and not the other way round. And "usable desktop" is obviously the point in time when the user can begin launching his applications without significant slowdowns from boot-tasks still grinding in the background.

This is obviously always an apples-to-oranges comparison but with just a tiny bit of common sense it can still be more meaningful than "OSX boots in 4 seconds".

Yes, but then, as we're comparing penis sizes, lets do it fairly. TFA explicitly states that they time from after the boot loader is finished, to when the login window appears. Boot your mac, and time between the grey apple with a spinner appearing (the grey apple is displayed while the boot loader does its thing), until the login window appears.

TFA explicitly states that they time from after the boot loader is finished, to when the login window appears.

Not quite. It's when the login window process sleeps. Pretty close. Some people are arguing that this is too narrow-sighted, and that we should wait for the GNOME login process to sleep before punching the stopwatch.

If it boots in less than 1/3 to 1/6 as much time as ext3... surely there will be an improvement in overall performance?

I seriously doubt the major factor in boot time improvement is the file system. They're also continuing to work on Upstart, their replacement for the SysV init daemon, and one of Upstart's primary goals is to increase parallelism in the boot process. The traditional boot process is quite linear and as a result spends a lot of time waiting around.
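Upstart's trick is to express those dependencies as events rather than as positions in a sequential rc script. A hypothetical job definition (the service name and events are illustrative, written in the later /etc/init conf style rather than Jaunty's /etc/event.d layout):

```shell
# /etc/init/mydaemon.conf - hypothetical Upstart job.
# The job starts as soon as its prerequisites' events fire,
# in parallel with anything else that is already unblocked.
start on (local-filesystems and net-device-up IFACE=lo)
stop on runlevel [016]
exec /usr/sbin/mydaemon
```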

My desktop uses nfs as its root filesystem so it is easy to measure how much data it will need to read on boot by measuring network traffic. A complete reboot with "shutdown -r now" generated only 44 megabytes of traffic (including both read/written data and ethernet overhead) so there is clearly no need to read a GB. The system runs debian gnu/linux 3.0 with linux 2.6.18-4-486.

The iPhone emphatically does NOT pull off a quick startup. I've just timed mine, it took 43 seconds to go from pushing the power button to having the springboard appear. It does, however, sleep and wake as quickly as any other mac computer.

But there's no reason the computer shouldn't function before the "real" drivers are installed - we have standards for a reason. Ever notice that your video card works before you install drivers for it? Windows is using a standard API to access the card.

People who boot their laptops two or three times a day want it. (Suspend doesn't work in Linux on my laptop right now, and my battery is almost kaput anyway. Shutdown then boot later is my only option.)

For most users, no it will not work. One of the major features of ext4 is extents, which basically reserves space for a file to continue writing at a later date. This will decrease file fragmentation and improve performance.

If however, you disable extents, then yes you can mount it as ext3. And as you know, ext3 can be mounted as ext2 without the journaling.

I agree that the win32 ext2 drivers need updating. I would hate to lose access to ext partitions for dual boot systems.

At this point, tweaking filesystems to accommodate not-really-random-access media seems like backwards thinking.

Over the next couple of years, SSDs' performance-benefit-to-price-premium ratio may increase to the point where they are usually the primary and often the only drive on new desktop and laptop systems, but Linux is more than an operating system for the newest desktop/laptop hardware. It's also for servers, and older hardware, and...

From the wipe man page:

== NOTE ABOUT JOURNALING FILESYSTEMS AND SOME RECOMMENDATIONS (JUNE 2004)

Journaling filesystems (such as Ext3 or ReiserFS) are now being used by default by most Linux distributions. No secure deletion program that does filesystem-level calls can sanitize files on such filesystems, because sensitive data and metadata can be written to the journal, which cannot be readily accessed. Per-file secure deletion is better implemented in the operating system.

Encrypting a whole partition with cryptoloop, for example, does not help very much either, since there is a single key for all the partition.

Therefore wipe is best used to sanitize a harddisk before giving it to untrusted parties (i.e. sending your laptop for repair, or selling your disk). Wiping size issues have been hopefully fixed (I apologize for the long delay).

Be aware that harddisks are quite intelligent beasts those days. They transparently remap defective blocks. This means that the disk can keep an albeit corrupted (maybe slightly) but inaccessible and unerasable copy of some of your data. Modern disks are said to have about 100% transparent remapping capacity. You can have a look at recent discussions on Slashdot.

I hereby speculate that harddisks can use the spare remapping area to secretly make copies of your data. Rising totalitarianism makes this almost a certitude. It is quite straightforward to implement some simple filtering schemes that would copy potentially interesting data. Better, a harddisk can probably detect that a given file is being wiped, and silently make a copy of it, while wiping the original as instructed.

Recovering such data is probably easily done with secret IDE/SCSI commands. My guess is that there are agreements between harddisk manufacturers and government agencies. Well-funded mafia hackers should then be able to find those secret commands too.

Don't trust your harddisk. Encrypt all your data.

Of course this shifts the trust to the computing system, the CPU, and so on. I guess there are also "traps" in the CPU and, in fact, in every sufficiently advanced mass-marketed chip. Wealthy nations can find those. Therefore these are mainly used for criminal investigation and "control of public dissent".

People should better think of their computing devices as facilities lended by the DHS. ==

My understanding is that ext4 provides some very nice features, but faster data access isn't necessarily one of them. I'd imagine that an ext2 fs, which doesn't have journaling to slow it down, should be even faster.

This is a truly disappointing news item. Instead of setting the bar higher and truly trying to reduce boot time, they have not done much more than shave seconds off the existing boot time.

For a generic desktop distro, 20+ seconds is still terribly long. 10 seconds should realistically be easy to achieve, especially as it took Arjan and me only a few months to get to 5 seconds on a netbook. We sure cut some corners, but we did not even use ext4 on those netbooks, and we still had buggy X starting times of 1.5 seconds, something which we can probably do in 0.5 seconds with kernel modesetting.

I hate to see everyone settle down with "20 seconds" being "the next 5 second boot". This is really not progress at all, but rather, complacency.

This is a truly disappointing news item. Instead of setting the bar higher and truly trying to reduce boot time, they have not done much more than shave seconds off the existing boot time.

I just checked, and it does seem that a fast boot time was one of the goals that Mark Shuttleworth set for Jaunty.

There are some specific goals that we need to meet in Jaunty. One of them is boot time. We want Ubuntu to boot as fast as possible - both in the standard case, and especially when it is being tailored to a specific device. The Jackalope is known for being so fast that it's extremely hard to catch, and breeds only when lightning flashes. Let's see if we can make booting or resuming Ubuntu blindingly quick.

Given that, I must confess that I'm also a bit disappointed that the boot time isn't closer to five seconds.

I love your work with the 5 second boot, and I look forward to that technology being implemented widely. On a modern super fast CPU with a solid-state hard drive, I should hope that a desktop computer could boot as fast as a netbook. (And I'd be willing to install Coreboot [wikipedia.org] to get that speed.)

You haven't had to restart Windows due to a networking configuration change in almost 7.5 years. You haven't had to restart Windows due to a driver change for almost 3.5 years now. Please get your facts correct.

Server maintainers care, because people pay them a ton of money to get a guaranteed 99.999% (extreme case, like the NY stock exchanges, etc.) or more uptime. That's only 5(!) minutes of downtime a year, and if you can boot in 5 seconds (and let's say shut down in 5 as well), you can reboot 30 times a year for security updates. If you reboot in 30+30 seconds, that's only 5 reboots.
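The five-nines arithmetic is easy to sanity-check from the numbers above:

```shell
# downtime budget per year at 99.999% availability,
# and how many reboots of a given length fit inside it
awk 'BEGIN {
    budget = (1 - 0.99999) * 365 * 24 * 3600   # seconds per year
    printf "budget: %.0f s\n", budget          # about 315 s, i.e. ~5 min
    printf "10 s reboots: %d\n", budget / 10   # ~31 reboots
    printf "60 s reboots: %d\n", budget / 60   # ~5 reboots
}'
```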

Imagine having a SCSI RAID array which takes 1 minute to initialize. A 20+20 boot+shutdown time would give you barely 3 boots per year. A 5+5 boot+shutdown almost gives you 5 reboots in the same time.

You care for netbooks. The batteries are small; if you waste one minute at boot and a minute at shutdown, during which the CPU and SSD (or worse: hard disk) are working hard, you lose two minutes of battery time, which translates into 5+ minutes of idle or internet-browsing time. Reboot your netbook to quickly send a blog update from the airport a few times, and you've lost half an hour of effective work time.

Bottom line: shorter boot (and shutdown) means more _net_ work time available, for both AC-connected and mobile devices.

If you're guaranteeing 5 nines, you'd be stupid to be using a single machine. Update a test environment, verify it works, then take down your cluster a machine at a time updating each one. No downtime if you do it right, that way you can "bank" your downtime to deal with network outages and such that are outside your control.

When I start up my IBM ThinkPad (1.5 GHz single processor, 512 MB RAM, garbage video card) running Windows XP, it takes roughly 10-15 seconds to get to the user log-in interface from the moment the power button is pressed.

But, once you log in, you are talking two to three minutes where background applications and processes are opening, explorer is loading, and applications that launch at start are loading.

After you log in, does that time count as boot time? Considering it takes me 10-15 seconds to get to the sign-in screen, not that much time, but after logging in it takes well over two minutes for me to be able to actually run anything at normal capacity.

That's about the same amount of time it takes my machine, but I can type my password in ~1 second and a second later be at the desktop. IE generally will start in another second or two. This is because I have disabled most of the crap that puts itself into the system tray. Everything continues to work, except when I want to change my desktop resolution I actually have to right-click on the desktop instead of the system tray... anyway, most of that crap that rattles your disk after login is probably useless an

However, the good news is that the EXT4 filesystem was implemented in the upcoming Ubuntu 9.04 Alpha 3 a couple of days ago and it will be available in the Ubuntu Installer, if you choose manual partitioning.' I guess it's finally time to reformat my /home partition..."

From what I understand, there's no need to reformat. Just as EXT3 was layered on top of EXT2, EXT4 should just be another mount option as long as the kernel supports it.

I have a couple EXT4 partitions I'm testing... It's been rock-solid so far...

I migrated some of my non-critical partitions over to EXT4 and hit a race condition that corrupted my filesystem and resulted in data loss (the bug has since been fixed). I'm waiting a little longer before converting my important partitions over.

Switching from EXT3 to EXT4 is as simple as a flag change and a remount. HOWEVER, your existing data will still be laid out without extents, so you'll miss out on a lot of the improvements in EXT4. Eventually an online defragmenter will be written to defrag existing files into the new format.
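For the curious, the flag change described above looks roughly like this. To keep the example safe to copy-paste, it runs against a scratch image file; on a real system you'd point tune2fs at your actual (unmounted) partition, and you'd back up first. The file paths here are placeholders.

```shell
# Make a scratch ext3 filesystem to demonstrate on (a real migration
# would target something like /dev/sdb1 instead, while unmounted):
dd if=/dev/zero of=/tmp/demo.img bs=1M count=16 2>/dev/null
mkfs.ext3 -F -q /tmp/demo.img

# Enable the ext4 on-disk features on the existing ext3 filesystem:
tune2fs -O extents,uninit_bg,dir_index /tmp/demo.img

# Changing uninit_bg requires a full fsck before the next mount
# (e2fsck exits 1 when it fixed things, which is expected here):
e2fsck -fp /tmp/demo.img || [ $? -le 1 ]

# On a real partition you would now change ext3 to ext4 in /etc/fstab
# and remount, e.g.:  mount -t ext4 /dev/sdb1 /mnt/point
```

As the parent says, only files written after the switch get extents; existing files keep the old indirect-block layout until a defragmenter can rewrite them.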

I've been using Ubuntu for a year. I was quite happy with 8.04, but unfortunately I've switched to 8.10 64-bit (to support 4GB RAM). You know what? I couldn't care less about how fast it boots. I do, however, care about these things:

- Switching from dual display to presentation (clone) mode and back totally messes up the X config; I have to uninstall and reinstall the Nvidia drivers.
- In dual-screen mode, Nautilus opens on the first display. I have to open a terminal and run nautilus & to launch it on the second display.
- In dual-screen mode, the keyboard keeps focus on the previous screen. I have to minimize/maximize a window on the "new" screen to move keyboard focus.
- The RDP client crashes X in some cases (it does not close the drop-down list of used servers... and bang).
- Oh, and NO, it's not AN ERROR if I close the RDP window. If I want to reconnect, I will; don't hide under my active windows and bring the RDP window back in 30 seconds. That's just plain stupid.
- Java and window decorations don't play well together (popups without buttons, etc.).
- How about opening a connection to a new server in a new tab, not in a new Nautilus window?
- Flash stops working. I just see a gray square where Flash is supposed to be.
- Firefox is not very stable.
- Windows become gray and unresponsive when there's a lot of disk activity.
- I've seen Ubuntu crash on me many more times than I've seen a BSOD on the same hardware.
- If I lock my computer, I want it to be locked. I don't want it to be locked for a minute or so and then display what was last on my desktop. Sure, you'd have to log in to get access, but there could be things for my eyes only on that screen. So don't you ever roll your eyes at Windows security, OK? You've got your own issues.

I could probably think of more, but this is just a list of things I remember off the top of my head. Sure, you'll downmod me and say I'm trolling. Maybe I am. But my point is: there are MUCH more IMPORTANT things to fix than the FUCKING BOOT TIME. Who the fuck even cares about boot time? Can't you just grab a coffee while it boots? What kind of idiotic metric is this?

I guess SW development is hard and complex. And we've reached a point where maintaining these beasts is hard, for either open source or commercial products.

Nvidia is notorious for awful drivers, especially for dual display. The screensaver issue is also probably from bad Nvidia drivers.

Adobe Flash is unstable in Firefox, especially on 64 bit systems. Open Source alternatives are also very outdated and slow.

Third-party plugins like Java and Flash can hang Firefox. The Mozilla team needs to reduce the browser's dependency on in-process plugins and move to more of a sandbox model, so that a crash or hang in a plugin will not freeze all of Firefox.

Programs become unresponsive during heavy disk activity because disk I/O gets higher priority for throughput reasons. This is a design problem, and it should be easy to configure on or after install.
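To be fair, Linux does already expose per-process I/O priorities (with the CFQ scheduler) via ionice from util-linux; the commands below are a small illustration, with placeholder jobs and PIDs:

```shell
# Run a disk-heavy job in the "idle" I/O class so it only gets disk
# time when nothing else is using it (the command is illustrative):
ionice -c3 cat /etc/hosts > /dev/null

# Demote an already-running process instead (PID 1234 is a placeholder):
#   ionice -c3 -p 1234

# Show the I/O scheduling class of the current shell:
ionice -p $$
```

The grandparent's point stands, though: this is something a user should be able to set from the GUI, not something you should have to know a command-line tool for.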

And it is Canonical's job to test that software and choose which version they are going to ship with. The last release of Ubuntu, all sorts of software broke on my computer that used to work before. This is their fault for choosing to package bad software.

Also, for what it's worth, I've been having the same problem that he is having with Flash when using Gnash and swfdec as well. It seems like ndiswrapper has some issues in the latest Ubuntu that were not a problem in previous releases, beyond the fact that the flash plugin sucks.

Fire up the system without the proprietary Nvidia blob and file bugs! Ubuntu, like all FOSS software, needs _you_ to file the bugs so that the kinks can be worked out. Do not assume that what you see is what the devs see.

This is exactly why I left the Linux scene and consider it a failure. Filing bug reports isn't answered with a solution or bug fix, but with one of these:

- The bug report already exists.
- You're doing something wrong; it's not Ubuntu/Linux.
- It's your hardware, not Ubuntu/Linux.
- It's because of these evil hardware companies, not Ubuntu/Linux.
- You have the source code, fix it yourself.

Nearly everyone else working on it is a volunteer doing it in their spare time. We're working on it, I assure you. If a bug report already exists, that's important to know. If there's a workaround, it may still be that there's a usability issue, and that's valid. If it's a problem with your hardware, what on earth do you expect them to do about it? And if you can live without your shiny 3D eye candy (or buy an Intel graphics card), you don't run into the evil-hardware-company issue.

The problems I encounter in Windows are a few orders of magnitude smaller than those in Linux. I think the last time I had problems with sound, video or internet on MS products was the early '90s. The reason is fairly simple: no matter how complex these peripherals have become, they are ubiquitous. It's 2009... an OS failing to provide basic multimedia? Big failure.

I see the humour in your reply, but to be honest: Ubuntu makes me feel ashamed of being in the S/W business.

Your comment about a locked screen showing the contents of the desktop is probably caused by a screensaver that takes a screen capture and uses it. This can be fixed by running the xscreensaver-demo (I think) program to configure the screensavers. You may have to install the xscreensaver package to get it. When it comes up, it will warn you that xscreensaver isn't running -- that's OK, just ignore it. You can still set the options for the various screensavers.

Okay, you're right; resuming from power savings modes works perfectly in Vista.

Now, run a test for me. Attach a secondary monitor, and place it to the LEFT of your laptop. Configure everything to work well. Reboot, and notice everything is still good. Open a few applications, move them to the secondary monitor, then close them. Something mainstream, like Outlook, will do.

Now, suspend your laptop. Undock it, and walk to a conference room. Wake it up. Note that many applications now open on the (nonexistent) second monitor, including mainstream applications from major software companies (Outlook, for example).

Suspend it. Take it back in and dock it. Wake it. Notice that Vista now believes that your secondary monitor is on the RIGHT of your laptop.

Heaven help you if you connected your laptop to the conference room projector when you were there.

Yep, Vista works exceptionally well for all common usage scenarios with suspend/hibernate.