Friday, March 20, 2009

Here's something silly. It seems like y'all are concentrating on boot times these days. Boot in 20 seconds! No, 15 seconds! No, 10 seconds! But first, you need to answer me this question:

Why the fuck do I care?

Seriously, when's the last time I cold-booted my desktop? Uhh, a month ago? My laptop? ummm, three weeks ago? Oh, but I really wish I saved 10 seconds back then. I could have gotten a couple extra jerks in this month.

Or, is it that we're moving toward a Linux where the kernel updates every 30 minutes? So, if you want to stay on the train, then you better optimize your rebooting.

Those of you who still think boot time is important, go find your friend with a Mac. Ask them to show you how the desktop is back up even before they finish opening the lid. Ask them how many times a year they explicitly choose "Shutdown". Now multiply that by the number of seconds they could possibly save with a faster boot, and compare that total with the time they could save by not listening to your freetard come-ons.

The sad truth is, boot time hasn't mattered to most of the world's computer population in a long time. S3 sleep solved that problem. Perhaps this is Linux's totally awesome way of solving the same problem by ignoring existing technologies.

Think about your phone. When's the last time it booted? My BlackBerry takes minutes and minutes to boot, and yet nobody cares. Should RIM spend more time optimizing a process that happens maybe once every 6 months, or work on bettering their battery life, which affects me every day? Hmm, that's a toughie. Let me ask some freetards for advice.

The only place where boot time kinda matters is for these bolted-on-the-side Linux firmwares like splashtop and such. But even then, who cares if I can get to a crippled desktop in 5 seconds when I could resume my suspended useful OS in just as much time? Oh, but this is where Linux EXCELS. I mean, it's open source, so you can totally strip out all the features and BLOAT that you don't need, so that you can boot faster.

Hey guys, I have an OS that boots in like a nanosecond. It's called GRUB-OS. It even has a text editor, just hit "e". Pretty sweet huh?

1193 flames:

Hehe, beside your valid point: my Vista at home boots in like 10 seconds (not counting the BIOS bootup stuff), even after having been installed for 2 years. And I've got SQL Server 2008, a virus scanner, Office 2007 autostart stuff, etc. etc. starting up at boot time. Then I read on /. that a newly installed Ubuntu takes around 40-50 seconds to boot up... and freetards are really proud of that.

I think I'd actually care a bit more if *shutdown* were faster. There's approximately a 1:1 ratio between reboots and shutdowns, but the shutdowns are more likely to occur when I need to get up and go somewhere and can't afford to wait minutes for my computer to accomplish a simple task.

Does anyone remember Two Kernel Monte? It let you upgrade your (2.2.x only) kernel in-memory. That way, you didn't even have to sit through the BIOS part of the boot cycle (which, sorry to say, first Anonymous, totally does count). Since it appears to be a dead project (a real surprise, I know), I'm going to have to say that boot times for Linux are actually worse than they used to be.

But listen to me, I'm totally buying into the "boot time matters" meme. I need some fresh air.

I love how freetards seem to think it takes Windows a fortnight to completely boot. My XP desktop takes about 45 seconds from pressing the button to having a fully functional desktop. Sometimes it takes longer for my old POS LCD monitor to get itself going! (I think it's a power supply issue.) My laptop running XP MCE takes just a bit longer (about a minute) because it's running the OEM installation it came with, as opposed to the desktop's retail XP.

Still nothing compared to my old Mac SE/30. That thing cold boots to System 7 in 10 seconds flat.

How long will it take for the saved boot time to make up for all the time you wasted tinkering with settings to save time? And why should boot times matter, anyway? I thought Linux, being a Unix-like OS, didn't need to be rebooted 965123^5 times every day, like Windows allegedly does.

Linux boot times took a permanent turn for the worse during the early 2.6 era where it was no longer practical for someone outside distribution management to roll his own system. Now that we're stuck with vendor provided kernels, we must sit through autoscans of pointless and bizarre hardware like weirdo multi-port serial adapters upon every boot. Naturally, Linux provides no easy interface to disable the unwanted functionality outside of recompilation. As Linux Hater explained, suspend/hibernation would mitigate this issue, but that doesn't work, either.

I don't think boot time is so important on any system, be it a netbook, notebook, or phone - I boot them rarely as they all are designed to run in some sort of low power mode when not in use.

"Now that we're stuck with vendor provided kernels, we must sit through autoscans of pointless and bizarre hardware like weirdo multi-port serial adapters upon every boot. Naturally, Linux provides no easy interface to disable the unwanted functionality outside of recompilation."

Bullshit. Nearly all drivers are modular; it's trivial to only have the ones that apply to a given system loaded. You could make some sort of image with all the right drivers to be loaded in RAM (like a RAM disk) when the system initializes. You might call it initrd. That way, only the drivers needed to load the rest of the OS are loaded at boot time. Then, you could have some sort of code that loads only the modules for the hardware installed in the system based on things like PCI IDs.
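That "loads only the modules for the hardware installed" step is the modalias mechanism: every PCI device exposes a modalias string in sysfs, every module declares alias *patterns*, and matching is plain shell-style globbing. A rough sketch (the device and alias strings below are invented examples, not real IDs):

```python
# Rough sketch of udev-style on-demand module loading: each PCI device
# exposes a "modalias" string in sysfs, each kernel module declares alias
# patterns, and matching is ordinary shell-style globbing.
# The device/alias strings here are hypothetical examples.
from fnmatch import fnmatch

# What you'd read from /sys/bus/pci/devices/<dev>/modalias:
device_modalias = "pci:v00008086d00001237sv00000000sd00000000bc06sc00i00"

# What `modinfo -F alias <module>` would report for a couple of modules:
module_aliases = {
    "intel-agp": ["pci:v00008086d00001237sv*sd*bc*sc*i*"],
    "8139too":   ["pci:v000010ECd00008139sv*sd*bc*sc*i*"],
}

def modules_for(modalias):
    """Return the modules whose alias patterns match this device."""
    return [mod for mod, patterns in module_aliases.items()
            if any(fnmatch(modalias, p) for p in patterns)]

print(modules_for(device_modalias))
```

In the real system, udev does exactly this match and then runs `modprobe` on the winners, so only drivers for hardware that's actually present ever get loaded.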

Anonymous: upgrading the kernel in memory was incorporated long ago; it's called "kexec". Skipping a few minutes of POST on servers sounds fun, but servers are almost never rebooted. LH, Macs resuming during lid-open time... that's my envy! I even reported the issue to the Ubuntu guys: https://bugs.launchpad.net/ubuntu/+bug/135083 . Their answer... "but is it slower than other Ubuntu boxes?" Full cluelessness.
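For anyone who hasn't seen kexec, the usage is roughly this. The kernel paths and command line are hypothetical; the commands are echoed as a dry run, since actually executing them (as root, with kexec-tools installed) replaces the running kernel on the spot:

```shell
# Stage a new kernel in memory, then jump straight into it, skipping
# BIOS/POST entirely. Paths and kernel command line are examples only;
# drop the `echo` prefixes to really do it (requires root).
KERNEL=/boot/vmlinuz-new
INITRD=/boot/initrd.img-new
CMDLINE="root=/dev/sda1 ro quiet"

echo kexec -l "$KERNEL" --initrd="$INITRD" --append="$CMDLINE"
echo kexec -e   # execute the staged kernel immediately
```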

Doesn't matter how fast Linux boots, you can't do shit with it anyway. Freetards can get the boot time down to negative seconds but it still wouldn't make any difference in the world. Yeah, you can boot faster to do nothing except launch Firefox & bash Windows on Slashdot, play with your spinning Compiz cube, let PulseAudio fuck up your sound, yay for you, Loonix Lusers.

No shit, but this blog deals with desktop Linux. And anyway, you say to think outside of home usage & the desktop? Huh? What have Lintards been pushing all this time? "This year is the year of Linux on the desktop!!!1"

> efforts in linux (the kernel) to reduce boot time *are* important. efforts in desktop distros - not really, unless they also target laptops.

Bullshit. Laptops shouldn't be rebooted. Laptops should hibernate. Laptops should go on standby. On both counts, Linux FAILS.

Same with desktops. Linux is not green; you have to keep your desktop on all the time -- enlarging your vpenis with big uptimes -- so you don't have to shut down all your software and forget what you were working on.

Why Safari? Why didn't you go after IE or Firefox? It's really simple. Safari on the Mac is easier to exploit. The things that Windows do to make it harder (for an exploit to work), Macs don't do. Hacking into Macs is so much easier. You don't have to jump through hoops and deal with all the anti-exploit mitigations you'd find in Windows.

It’s more about the operating system than the (target) program. Firefox on Mac is pretty easy too. The underlying OS doesn’t have anti-exploit stuff built into it.

I am trying to come up with some angry invective here, but I am failing...

As soon as your computer takes ~60 seconds to boot, it doesn't really matter anymore. I see there are a couple of fosstard-based quick-boot operating systems, severely gimped, that boot in about 30 seconds, and I can't for the life of me think of a time that I needed to boot a computer in 30 seconds instead of a minute or so.

Quick, an asteroid is going to hit the earth destroying all life... Um, nope.

Hurry! Boot the Linux-based cardiac defibrillator... er, no.

Um, I got nuthin.

And I can't come up with a single case where that 30 seconds would matter...

So let's say I add up all the time I have spent waiting for Windows to boot in the last year... 20 minutes maybe? Versus the time I've spent trying to get Linux to play video without stuttering over the last year...

This just validates that Macs aren't protected by some magical barrier like their advocates like to claim. You can bet Linux desktop would display the same manner of nonsense were it a target of any level of appeal. Despite Windows' lousy reputation, it's the only desktop OS that's actually targeted and therefore the only one that's battle hardened.

Production server environments, where availability is measured in 'how many nines' uptime you provide. In this case Windows is super-fail because services (what matters in this case) are still churning up a good 30 seconds after a logon prompt is presented.

Or maybe folks who cold-boot their machines daily. There are a lot of us, you'd be surprised.

Your points here are only valid if everyone in the world used their computers exactly like you do.

this is QNX booting straight to an OpenGL application on an Atom reference board, taking 17 seconds with a normal BIOS, 2 seconds with its custom IPL. before dismissing this as fake, one could remember that using the BIOS takes as long as it takes because of the amount of serialized, unoptimized (like waiting for some kinds of devices to be "ready" - HDDs have to spin up - or detecting devices to create tables that the OS has to later reparse, etc.) or unneeded (like initializing devices that have to be reinitialized by the OS) work it does

actually, achieving fast boot by performing initializations as late as possible or not at all makes clear that having functional stacks (graphics, storage, networking etc) initialized the old way - that is, from "the bottom" (from pci devices) "up" (to the os abstraction for them) - is inherently slow as far as cold startup is concerned

now, on a server one couldn't care less, but on a desktop, laptop or netbook things are quite different imho - on a desktop one would want to see a reactive and ready graphical environment as soon as possible (i'd not even mind seeing networking progressively get from uninitialized to "ready" in the background AFTER i've logged on to the desktop, as long as i can start working or playing locally) - and no, hibernation is not a solution, but a way to bypass and ignore the real problem: things that it would be useful and even logical to defer are often done first, because of how things are structurally ingrained and the inability to think out of the box in order to understand how to change them

for example, in order to present the user with a graphical bootup screen (and then with a full desktop) it would be nice to initialise graphics first, even before networking, input, or storage - possibly before even detecting and initialising storage or networking *devices* - that can be done immediately after

and it would be nice to set the ip address and other minutiae at the last moment before the user starts surfing the web (AFTER he has logged on - unless his profile resides on a remote server, but that's an uncommon case for a netbook) BUT, that would require implementing "top down" init and lazy init in the kernel - AND, services that are started before storage or networking is initialised would have to be redesigned to ensure they don't rely on it, and/or fall back sanely to a default behaviour

anyway, come to think of it, the PC is the only device in the home that needs from tens of seconds to minutes to get from power off to a usable state...

besides, i can't think of many other (if any) electronic appliances that employ off-board, external (and moreover, mechanical) storage for what amounts to their vital functions, and are absolutely and totally helpless without that external storage drive...

one could object that it's for the sake of choice (to allow for the installation of as many OSs as needed) - but do we really need that choice? cannot the possibility of making that choice be reconciled with the possibility of having the OS of choice resident, and booting as quickly as it can?

one could also object that resident OSs are a relic of a past in which machines were tied to the OS supplied with and for them, so it made sense to have OS functions in on-board ROMs... but the iphone, the olpc, many netbooks, and the low cost of flash as of today prove that wrong

The only place where boot time kinda matters is for these bolted-on-the-side Linux firmwares like splashtop and such.

so, in the light of the previous, putting a relatively complete and usable OS (not necessarily linux - on the contrary, the more "industry standard" the OS, the better) directly on board is a great idea imho. i'm actually hoping and expecting such resident OSs to be functionally overhauled to the point of eliminating the need for an HD-installed one, thus becoming the norm...

it would really be interesting to see what splashtop lacks compared to a normal HDD installation of linux - i cannot see big differences apart from:
- the onboard kernel configuration;
- lack of swap;
- lack of user profiles (from passwords to wallpaper);
- lack of space for configuration customisation in general;
- lack of the possibility to install new drivers and programs

now, imho it could be overcome by:
- defining a streamlined but complete platform (kernel, runtime libraries, most important drivers, the wayland graphic server, the taskbar and so on) and putting an image of it in the on-board ROM
- using the NVRAM to hold the most important system settings (swap space and path, mounting options, path of the "system profile" - system configuration key/value pairs)
- using the TPM to hold the most important user data (passwords + the path to the files making up the "desktop profile" - with a default profile used if the specified one is not found)
- using a union filesystem to accommodate built-in software as well as add-on applications

these are not overly technical impediments, so it seems what's missing is the *will* to promote the on-board OS to full (or privileged) citizenship...

"Hourly billing, on-site, New England area: $150.00/hour plus .95/mile round trip. Minimum 2 hours on site...If that sounds like a lot of money to you, please keep in mind that I have a LOT of experience. Very often I take literally minutes to fix problems that other people might spend hours on. Look over this web site to get an idea of the breadth and depth of my knowledge"

Yeah, he can't even remove a friggin' simple Antivirus 360 BHO, and he left the poor user unprotected with Norton, a poor choice of AV for those of us "in the know". I would not trust this guy to fix my kid's computer.

Dude, ever hear of using the right tool for the job? Maybe "Hiren's Boot CD"? And the many FREE antivirus like Avira, Avast, AVG?

Amateur Lintard Luser posing as a special IT fixit guy for windows..HA!

That article about the Mac has me thinking. Linux is probably similarly vulnerable. We know about the "Linux Virus in 5 Easy Steps" article (http://www.geekzone.co.nz/foobar/6229). It would be interesting to one day see a mass-infection of a generic distro such as Ubuntu. Maybe some disgruntled freetard Ubooboo developer will infect the repository before leaving the project. We all know what extreme assholes those types can be.

Instant-on is desirable in pocketable systems that are full x86 systems and draw more power than phones. Hibernation requires fast disk access and a good sleep state requires good hardware that doesn't waste power - something you don't always get.

On desktop systems boot time doesn't matter - as long as it's not much longer than on a lean Windows install. Some users DO turn them off.

Fast boot is the new DailyThingThatWillTakeLinuxMainstream(TM), period.

Lintards need a dose of that on a regular basis so they can keep the faith/delusion. After all, we all know how the revolutionary spinning cube turned out to be totally irrelevant, and don't even make me mention KDE4 and its revolutionary desktop and change of paradigm...

As far as the latest fosstard to come out and hate on windows for a program that he can't make quit... At least it is possible to do that somewhere.

OMG, if I had a nickel for every time xorg crashed on me, losing all my work... Or mysql, or this or that asterisk daemon. Or having a computer that is fscking for some reason I cannot determine, with no apparent way to fix it...

Or OMG, having a crontab for samba that kills all the runaway processes every now and again and then restarts samba.

Nope, the occasional gig wiping out malware is no problem, not compared to what I'd have to do over on that side of things.

If he wanted to have that discussion... oh back in 2003, I might agree with him, but not anymore.

Working on Windows struggling to make your software stop is better than Linux, where you struggle to make your software run...

Antivirus 360 is cake. I've coached people over the phone with this one, and the first time I only needed 120 seconds or so to get background information on Google.

I can believe he spends "literally minutes" on jobs since he admitted to leaving his customer without solving the problem. Anyone in the business who gets repeat customers knows it's key to "hang around" to ensure the problem's fixed, even if it seems like a waste of time.

People who are actually good at this stuff say things like, "I stay until the job's done," not, "I am uber 1337 h4xor. I can has $300 in 10mins k tx bai."

That article about the Mac has me thinking. Linux is probably similarly vulnerable.

It's probably even worse. Think about it. Apple is infamous for monoculture, yet, despite a tight team working toward the same goal, they still can't catch flaws in ways their system interacts with itself.

Now take Linux, which is constructed by people who take pride in not communicating with each other. These people have no idea whatsoever of the implications of stacking all this random software on top of each other. Anyone who says they've "read the source" of all this is not only lying directly but severely overestimating his ability to calculate all the possible permutations that combining code generates.

This actually shows real intention: nitpick everything, no matter how trivial it is. What is the problem if people are able to boot faster? Come on, every minor point counts. I turn my PC on/off every day. I save energy; why keep wasting energy (even hibernating takes energy)? I will be happy if it boots faster. So I am happy that people take efforts to make it boot faster.

On the other hand, I am sure there would otherwise have been another thread criticizing Linux for being slow to boot. In short: criticize, no matter what.

I am not saying Linux boots pretty fast - it is acceptable - but I would still prefer it to boot even faster.

Production server environments, where availability is measured in 'how many nines' uptime you provide

You mean ISPs that brag about millisecond differences in uptime as a lame marketing gimmick? I wouldn't call that a typical server environment. In the real world, the last priority when choosing a server OS should be boot time.

But still, according to Netcraft, the host with the third-best uptime in February 2009 was running Windows Server 2003, and given that most ISPs run Linux, I think it is safe to say you should return to Slashdot, where rambling about the eliteness of Linux while stoned in your mom's basement will MAKE YOU A GOD AMONG MEN.

Fucking freetards still can't get over Jordan K. Hubbard's words from 1996:

"The upshot of this is that right from the opening "Windows" screen, Microsoft has got us somewhat outclassed and have already detected the mouse and modem at the point that we're still loading a kernel off a floppy."

How can Linux advocates even talk about boot time when there are still standby/hibernate issues?

Have a look at this Mini 9 experience: http://blogs.zdnet.com/gadgetreviews/?p=2165

The Dell Ubuntu forgets its screen brightness settings, and even reverts to default settings after a period of inactivity. The brightness hotkeys stopped working intermittently. Touchpad deactivation - so you don't mouse around while you type - hasn't been activated. The entire computer comes back from standby in seconds, but it takes the Network Manager "at least 30 seconds" to re-find the Wi-Fi connection.

Well, the main idea is that if you have the skills, talent and ability, you should be focusing on some other development area rather than boot time. It's not like it's taking an hour or even five minutes to boot up. Focus that talent on improving the many, many other areas where desktop Linux is lacking.

But one of the few good things about open source is you can often find someone passionate about something that isn't contributing in some other way.

Why must this be exclusively the realm of open source? Most contributors to the computing cause start their own project anyway. There's a whole shareware/freeware ecosystem that built the Windows and Macintosh infrastructure. Some of them ended up as large companies, but others were weekend warriors that happened to come up with something that was useful to a significant number of people.

The difference between Mac/Win and Linux is that sane development platforms allow these would-be jokers to accomplish more before they burn out or lose interest, thereby increasing the odds that others will find the work useful.

Face it, the average user's expectations have been irrevocably set by the Macintoshes and Windows boxes, and while the X community should have been off inventing all those good things and trying to put them into an easily accessible framework, they were focused instead on seeing which competing GUI group ("OpenLook!" "No, no, Motif!" "Argh, NeXTStep you philistines!") could kick the other the most times in the crotch.

Jordan had it figured out 13 years ago already. No wonder I always liked FreeBSD a lot more than cluster-fuck Linux.

> We know about the "Linux Virus in 5 Easy Steps" article (http://www.geekzone.co.nz/foobar/6229).

This is already fixed in KDE.

> It would be interesting to one day see a mass-infection of a generic distro such as Ubuntu.

I doubt this will ever happen if you enable AppArmor protection for the browser.

> But still according to Netcraft the host with the third-best uptime in February 2009 was running Windows Server 2003, and given that most ISPs run Linux

Jerkface, you got your answer on this last time; why do you insist on lying?

> Linux was, Linux is, and Linux will always be years behind Microsoft Windows.

If you say it enough times, you might believe that it's true.

Boot time seems so unimportant, eh? Let's cover the little list of things you have to get right to get fast boot times.

1) Drivers able to operate in parallel with each other. That's a nice big one. You don't want driver X holding a lock that blocks driver Y, or a driver sitting waiting on a spinlock, blocking the driver that holds the spinlock from getting CPU time quickly.

Yes, defects that affect general runtime have to be fixed for this.

2) More services started on demand instead of at startup.

Again, this addresses defects affecting general runtime: fewer services also equals less memory consumed by the OS and more for your applications. It also frees up CPU time for your applications.
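Point 1 is the classic coarse-lock-versus-fine-lock problem. A toy sketch of the difference (device names and probe delays are invented for illustration): with one big lock, probes serialize; with a lock per device, they overlap.

```python
# Toy model of driver probing: one big lock serializes every probe
# (slow boot); per-device locks let independent probes run in parallel.
# Devices and probe delays are invented for illustration.
import threading, time

DEVICES = ["sata0", "eth0", "usb0", "sound0"]
PROBE_TIME = 0.05  # pretend each hardware probe takes this long

def probe_all(locks):
    """Probe every device on its own thread, honoring the given lock map."""
    done = []
    def probe(dev):
        with locks[dev]:
            time.sleep(PROBE_TIME)   # the "hardware is waking up" part
            done.append(dev)
    threads = [threading.Thread(target=probe, args=(d,)) for d in DEVICES]
    start = time.time()
    for t in threads: t.start()
    for t in threads: t.join()
    return done, time.time() - start

big_lock = threading.Lock()
serialized, t_serial = probe_all({d: big_lock for d in DEVICES})          # one shared lock
parallel,  t_par     = probe_all({d: threading.Lock() for d in DEVICES})  # lock per device

print(f"one big lock: {t_serial:.2f}s, per-device locks: {t_par:.2f}s")
```

Same work either way; only the locking granularity changes, which is exactly why a lock-contention fix made for boot speed also helps general runtime responsiveness.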
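Point 2 is basically what inetd has done forever: keep one cheap listener around and only pay a service's startup cost when someone actually connects. A minimal sketch (the payload string is arbitrary; the port is chosen by the OS):

```python
# inetd-style on-demand service: at "boot" nothing exists but a cheap
# socket listener; the (pretend-)expensive service work only happens
# when a client actually connects. Payload and port are arbitrary.
import socket, threading

def handle(conn):
    # Imagine expensive service initialization deferred to this moment.
    conn.sendall(b"service ready\n")
    conn.close()

def start_listener():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))      # port 0 = let the OS pick a free one
    srv.listen(5)
    def loop():
        while True:
            conn, _ = srv.accept()
            threading.Thread(target=handle, args=(conn,), daemon=True).start()
    threading.Thread(target=loop, daemon=True).start()
    return srv.getsockname()[1]     # the port clients should hit

port = start_listener()
client = socket.create_connection(("127.0.0.1", port))
print(client.recv(64))
```

The boot-time win is that the listener costs almost nothing, so dozens of services can be "available" without any of them having actually started.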

Boot time is an interesting benchmark because of the number of things you have to get right. A Linux bootchart (see bootchart.org) gives you a very detailed view with which to work out how well drivers and services are loading.

"S3 sleep solved that problem." Bull it doesn't. S3 sleep is a hack around the problem. Mac and Windows machines using it go unstable from time to time because of S3 sleep. If you have something in your hardware that is not tolerant of S3 sleep, boot times are important.

Currently some prototypes restore from S3 sleep and cold-boot in the same amount of time. The difference here is that a true boot will be 100 percent stable on all hardware out there unless it's broken. S3 sleep will not be.

For someone who has claimed to be a programmer in the past, Linux Hater was a complete idiot here.

1) Move the DDX code for per-card drivers into the kernel. This allows you to save the state of otherwise write-only and therefore unrestorable video mode registers, etc.. X must use an interface that does not require process cooperation to restore the screen contents.

Says some guy in 1996. It's been only 13 years, and now kernel modesetting for Intel chips has been merged. Yay for rapid open-source innovation! It's, like, totally different from the glacial pace of proprietary software development!

I feel very strongly that too many unix vendors have completely given up on the desktop, and are now targeting the desktop market by being "servers" for it (or then they are ignoring the market altogether, and are going exclusively for the traditional UNIX type computing market).

I shudder every time somebody says that UNIX is a server OS, because if that is your view on UNIX, then UNIX _will_ die. Maybe not today, maybe not tomorrow, but in the reasonably near future. Because the desktop OS's are what are going to take over, as they get more powerful and give you more of the features "traditionally" found in the large computer OS's.

This blog is about Linux on the desktop. FreeBSD on the desktop is even smaller than the struggling-to-remain... relevant? (that gives too much credit)... visible? noticeable?... Linux desktop. I would say FreeBSD has accomplished very little.

Oh, Darwin, you say! Nay, Darwin's maintainers have taken extreme efforts to eliminate everything that is a problem with Linux culture and replace it with a desktop stack of their own design. For example, Darwin-based distros (with the notable exception) all suffer from the same problems as Ubuntu.

The problem with "Linux" or even "FreeBSD" has very little to do with the tiny fraction of a desktop operating system that is the kernel, and everything to do with the culture and software that gets piled on top of it.

For example, I see that after a short hiatus Oiaohm has returned. The guy seems totally oblivious to what computers are and what people use them for. He also seems ignorant of the basic fact about sleep on Linux: for all the work that has been done, most of the time it still fails.

Simple algebra... The time I've spent dealing with sleep issues on Vista over the last year is something like zero. On Linux, it is greater than zero. Maybe someone who uses OSX can check my calculus?

For those of us that use software, as opposed to the extreme minority of people who use operating systems, this is an unacceptable state of affairs. There is no reason to think that it will be different a year from now, or even 10 years from now.

The good thing is I'm seeing a lot more "I tried Ubuntu and went back to Vista" type stories on various sites. Ordinary users that had bought into all of the lies of Freetards gave Linux a try, saw how badly it sucked, and went running home to "horrible" Vista. They probably now appreciate Microsoft & Windows more than ever. Classic example:

"I just had a very frustrating experience with Ubuntu. And frankly going back to Vista from Ubuntu was like coming home from a war zone."

You miss the point. Even when retreating to the comfortable territory of server roles, Linux still has difficulty stacking up against a considerably smaller project. Whether we're talking about Linux server, desktop, embedded, or supercomputing, the pervasive lack of management and goal setting is endemic.

Recently, a co-worker was hanging out in my office. During the 30 minutes or so he was present, he must have opened and closed his Macbook half a dozen times. Every time, the thing came alive in about a second, and it never gave him shit about anything.

Through VMWare Fusion, he has Windows XP and Fedora images running full time. He never bothers to close anything or even to consider how the hell his crazy setup works at all.

Meanwhile I reminisced about how Windows 95/98 rarely came out of sleep and added insult to injury with haughty messages like "Windows is scanning your computer because YOU FAILED to shut it off properly you miserable human being". In the middle of explaining it I realized that there isn't much difference between Windows 95 and Linux 2010 with respect to functioning sleep modes.

Experience: sadly, the hype is just hype. Neither of these Linux babies boots faster or runs faster than my XP install, which is now at least twice the size of either of the Linux flavors. Add to that the hype that Windows has all kinds of processes running in the background that slow down the machine. So, let's check Task Manager. XP = 55 processes running, 250-260 MB of RAM in use. Linux = 120 processes running and 250-260 MB of RAM in use. Plus way more (and I do mean WAY more) hard drive thrashing going on.

The sad fact is that the boot time really doesn't bother me that much. Longer or shorter, for either Windows or Linux, it won't affect me much either way... except in how fast I can run and grab a snack or take a bathroom break. On the other hand, 120 processes? With NO programs up and running? Hell, my Vista here only has 74 processes running even with a ton of programs open and working right now. If you want to impress me, these things need to be fixed. I've got a couple of old computers around I would like to get up and running a bit faster, but so far, I'll be sticking with Windows.

P.S. Don't bother with the whole BSOD line of tripe. I've not had one in over two years. The last one I had was because of a fault in Flash that was causing crashes... a fault that was discovered and fixed very quickly. Thanks to Vista for pointing out the cause AND the solution.

OK, I will make it simple. Give me another benchmark, other than boot time, that is not going to be affected by user action, to validate that kernel locks and everything lower down that could trigger lag are not causing problems.

The boot-time benchmark has a valid place even on Windows. It always has, and it always will.

A lot of the fixes being done for fast boot are making Linux more responsive - something desktop guys complain about all the time: why does Linux pause for no good reason? Those driver and application pauses also show up in boot-time tests, but in a dependable way, not as some random combination of applications triggering them at random times. Random events are really hard to fix.

Yes, the boot process can be used to build test cases that display, in a repeatable way, the defects you as users are suffering from, so developers can cure them.

A perfect boot time requires a lot of the things you are requesting to be corrected. Yes, a lot of general user problems have to be fixed in order to fix boot speed.

The simple case: so-called developer Linux Hater is simply trying to sabotage Linux developers' use of a valid benchmark that fixes a lot of user problems. Sorry to say, you guys are dumb enough to fall for it.

Boot time is a very important diagnostic tool.

Again, I will pull Windows apart just as much as Linux.

The hard disk thrashing bit is being worked on. The cause is fsync(fd). The hard way is working out how to cure it without causing data loss. Changing from ext3 to ext4 will reduce the thrashing a lot without changing applications.

You will find particular services and applications trigger it. Yes, Linux disk thrashing can be stopped.
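The fsync effect is easy to see from userspace. The snippet below only demonstrates what the call itself does (force one file descriptor's dirty data out of the page cache), not the ext3 journal interaction that made chatty fsync callers thrash the whole disk; the record count and sizes are arbitrary:

```python
# fsync(fd) forces one file's dirty data out to stable storage. On
# ext3's data=ordered mode circa 2009 it could flush far more than one
# file's data, which is why applications that called fsync constantly
# made the whole disk thrash. Record count/size here are arbitrary.
import os, tempfile, time

def write_records(n, sync_each):
    """Write n small records; optionally fsync after every one."""
    fd, path = tempfile.mkstemp()
    start = time.time()
    with os.fdopen(fd, "wb") as f:
        for _ in range(n):
            f.write(b"x" * 512)
            if sync_each:
                f.flush()
                os.fsync(f.fileno())   # the expensive part
    os.unlink(path)
    return time.time() - start

buffered = write_records(200, sync_each=False)
synced   = write_records(200, sync_each=True)
print(f"buffered: {buffered:.3f}s, fsync-per-record: {synced:.3f}s")
```

On a real spinning disk the fsync-per-record run is dramatically slower; that per-call latency, multiplied across services during boot, is part of what a bootchart makes visible.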

Everyone forgets that Windows has also done a lot of boot-time and shutdown profiling to get to where it is today.

S3 sleep also does not cure the need to update kernels. Please try keeping all your Windows updates applied without doing reboots. I am sorry, but there are a lot of laptops out there where over half the current Windows updates are not applied because they have not been rebooted recently.

What Linux Hater is saying is security-idiotic as well. Do you guys like getting infected with viruses or something? If rebooting is cheap, it becomes simpler and faster to keep your system current on all updates.

S3 Sleep is nothing more than a hack. As with all hacks it has major downsides along with its advantages.

There is work on Linux to allow all applications to be hibernated and the kernel replaced. This is a feature you should all be yelling for.

Stop being dumb sheep; start looking at what is truly required and start requesting the right things.

"OK, I'll make it simple. Give me a benchmark other than boot time, one that is not affected by user action, that can validate that kernel locks and everything lower down that triggers lag are not causing problems."

Fail. Boot time isn't a valid test to produce metrics for what you're looking into.

Also consider that some boot sequences might be more crash resistant than others.

"Fail. Boot time isn't a valid test to produce metrics for what you're looking into. Also consider that some boot sequences might be more crash resistant than others."

Yes, Linux developers are more than aware that some boot sequences are more crash-resistant than others. Any alteration entering the mainline kernel must pass crash-resistance tests.

Give me a test that can produce dependable results for it.

Boot time is a valid metric. Note that bootchart records timings through the complete boot process; it also measures CPU, disk, and memory usage.

It looks for areas where hardware is not being used to its fullest. Most articles talking about boot time on Linux also print bootchart data.

Time is one of the key pieces of information that must be measured to show performance improvements from alterations. Yes, something might use less CPU, less RAM, and less disk access, do the same work, and still be slower than something that uses more of all three.

Without time the other measurements are useless.

A stable time measurement is something you must have. Again, tell me how you are going to get one without using boot timing. Stable time measurements depend on the user not being able to disturb them, which leaves only two places: boot-up and shutdown. The most complex operations happen at boot-up, so boot time is the best thing to measure.

Time is the hardest metric to get clean. Without it, the CPU, RAM, network, and disk data are very hard to interpret; you cannot isolate how good or bad the performance is.
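The point about clean time measurement can be made concrete: whatever you benchmark, you need a clock the user (or NTP) cannot disturb mid-run. A minimal sketch using a monotonic clock, with names of my own choosing:

```python
import time

def timed(fn, *args):
    """Run fn(*args) and return (result, elapsed_seconds).

    Uses a monotonic clock, which never jumps backwards when the
    wall clock is adjusted, so the elapsed figure stays "clean"
    in the sense argued above.
    """
    start = time.monotonic()
    result = fn(*args)
    return result, time.monotonic() - start
```

The same principle is why boot timing is attractive as a benchmark: during boot there is no user activity to perturb the clock readings.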

Lock issues turn up in time values. Threads not running well side by side turn up in time values. Poor optimizations turn up in time values. Loading services in the wrong order turns up in time values.

The list of faults that show up in time, if you can get clean time measurements, is long and important. It does not matter which OS you are running: if you can get clean time measurements, you can find lots of problems.

Just because something seems unimportant to the end user does not mean it cannot be of major importance to a developer trying to repair problems.

The flat values some places are quoting are the wrong approach. Most open-source reporters are using the correct methods: Linux boot-time metrics come from bootcharts. http://www.bootchart.org/images/bootchart.png

Most open-source articles about boot times start off with something like "the complete boot took 5 seconds", then show a complete bootchart of where all those 5 seconds went.

Then you get dumb groups comparing Windows to Linux. Since they don't know how to produce a bootchart out of Windows, they take raw measurements. There is in fact a way, using Windows development tools, to have Windows spit out something equivalent to a Linux bootchart.

This is where you are screwed. Boot time is a timeline; people comparing OSes are not getting this yet. Also, bootcharts stop when the last service is started. Driver developers on Windows use boot charts all the time.

Well, if your goal is to have a technically perfect (if only in your mind) operating system, then he has a point...

But as a person who uses software, and not operating systems, when I turn my computer on, I don't really care how it gets to a point where it presents me with a screen, so that I can look at my software.

If someday Linux can boot in the one or two seconds I get with suspending and hibernating, that's great; I will regard Linux as usable in that one area. At the same time, I am not holding my breath.

On the other hand, we have yet another post from oiaohm that yet again demos his near-total detachment from how everyone else in the world interacts with and relates to their computers.

I am sure every engineer type out there, and even carpenters and craftspeople, have a story about how some hack job worked better than "doing it the right way the first time" and resulted in innovation.

For what its worth, I cut my teeth on mini computers and the first 10 years of all PC tech looked like a hack job to me. And how about that A-patch-y web server. Originally that described how it was made, and was not intended to produce mental images of native americans...

"If you start to think about that, that's a pretty significant technical challenge that we (open source developers) need to overcome and it's a hard one because it spans multiple open source projects... the hardest problems are the ones that span multiple open source projects because then you have to get people that don't know each other, to talk to each other and sort how to fix it and they kind of pass the buck around, user space v. kernel space is the classic example..."

Once again you argue against a strawman, and, once again, you choose to examine an argument in the narrowest possible fashion. Aren't you the one always talking about context?

What you're talking about is boot stability, not necessarily speed. Yeah, everyone agrees that stable, debuggable boot processes are valuable. So what? And, yeah, it'd be great to boot in five seconds and do away with hibernate.

But you know what? You miss the point. Boot time doesn't even make the top ten of Linux's problems. Taking anything more than a cursory look at improving matters in this area is a waste of resources. That's Linux Hater's point. Once Linux has a stable platform and killer software, sure, make it boot faster, but, as someone previously stated, all the boot time improvements will go toward showing users Linux's uselessness faster.

You are missing how it plays: competition. If you can measure something correctly, you can compete to do better.

Competition for boot speed among developers brings stability. Crashing in a boot test is like turning in an unlimited-time answer: you lose.

Making boot-time improvements happen has knock-on effects for general usefulness.

You want general desktop performance to be more stable. The boot-time competition, where performance can actually be measured, is having that direct result.

The problem is: can you provide me with a benchmark that works in standard desktop mode and dependably shows the kernel locking glitches that cause all those random pauses affecting the general usability of Linux?

Boot times and bootcharts do. OK, it's not exactly where you would expect to have to work to improve the responsiveness of the desktop.

Part of making Linux a stable platform is sorting out the locking problems, CPU usage problems, and memory usage problems. To search and check for defects, you need benchmarks/test cases.

Sorry, the Linux kernel has a lot of quality controls before something can get into mainline. Major patches take two years of testing before they are included. This is why Linux does not respond quickly to change: developers will be talking about something for about three years before end users see it.

Does this now explain why Linux Hater and you guys sometimes ask where stuff is? It's locked in the quality-control process.

So far no one has ever managed to define what a good desktop should look like, so it cannot be benchmarked, and so it cannot be cleanly competed over.

List Linux's top ten problems and you will find some of them are linked to the work being done to improve boot time. That is the problem you guys have: no concept at all that everything running on the kernel, including the boot-up, has the kernel in common. Your desktop applications suffer from the same problems that hurt boot-time performance.

From the Linux kernel's point of view, there is no difference between a service starting up and a user-run program.

It is a complete failure to understand that something you claim is independent is not.

Do you also fail to notice that Windows machines which take longer to boot than another Windows machine on the same hardware, due to services or fragmentation, have slower overall performance?

The rule that boot-time performance affects the complete operation of the OS applies to almost all OSes out there.

I never said everyone's motivation was faster general performance. Most of the motivation is competition to be the fastest; this is embedded in a lot of people's natures.

Most of the time, the people making alterations for faster boot times are not aware of the knock-on effects they are having.

Splitting buses, allowing multiple devices on the PCI bus to be talked to at once: these are all things being done to get faster boot times. Fixing these issues improves the overall performance of Linux, not just boot time. I could make a huge list of all the things the faster-boot-time guys are altering that affect both boot time and general desktop usage.

The issue here is seeing the big picture; most of those in the boot-time competition don't.

The people going after faster and faster boot times are, without being aware of it, helping people who are not after faster boot times. Killing the competition is not helping you.

The quality controls applied by Linus and others force the people who want faster boot times to do it right for everyone. You have to drop the idea that Linux is a free-for-all. There are rules, and those rules guide development.

Basically, if you have a good benchmark, you can start a competition around it and get work done, even if the people doing the work don't know the overall importance of what they are doing. All you have to apply is quality control, to make sure the work they produce is good quality and provides the features that are needed. You have them motivated, and it is far simpler to get motivated people to do stuff, particularly hard and complex stuff. A lot of what the faster-boot-time camp has to fix is complex.

That is why I am saying: if you don't want boot time and bootcharts to be important, you have to find another benchmark that gets done the jobs the current benchmark is getting done.

The hard bit is finding other great benchmarks that competitions can be run around to motivate people.

It's like the old air races and the solar car races. The people in those competitions were not aware of the R&D they were doing, which in time improved solar-generation tech and the effective running of solar farms (a motor is used to tilt the panels). They were too focused on winning.

For example, the reason a bootchart.org chart only stops when the last service is started is to prevent offset cheating, so you cannot start X11 first and show everyone a chart claiming you had it all started like lightning.

Guess what: Vista uses offset cheating. The GUI is started before the last services are loaded, so benchmarks of Windows boot times are incomplete.

"The people going after faster and faster boot times are, without being aware of it, helping people who are not after faster boot times. Killing the competition is not helping you."

That's only if one takes your word for it that there will be a correlation between the two that results in an observable gain in desktop performance.

Sorry but windows and osx already have acceptable performance after booting. If the only way to get lintards to produce acceptable performance is to trick them, then consider me unimpressed. And that's also assuming that working on faster boot will actually produce the desktop performance.

Since we are totally incapable of creating a useful, competitive desktop OS, let's concentrate on those things we CAN fix. For example, now that we're talking boot times, how about changing the bootsplash artwork?

"Guess what: Vista uses offset cheating. The GUI is started before the last services are loaded, so benchmarks of Windows boot times are incomplete."

That's not true. Boot time for a desktop system is over as soon as the user can start doing his work using his programs. If certain background services that are not directly related to the desktop itself have a delayed start, that is insignificant for the user. In fact, it sounds like a pretty good idea to me that you don't have to watch your system starting stuff like Apache and MySQL before you can actually log into it.

I notice a pattern in your arguments. Basically, you'll take any insignificant issue and blow it up into this huge matter that, if addressed, will fix all of Linux's problems. This is why you're handily associated with the freetards; you act like one. autopackage will solve all problems. DRI2 will solve all problems. Faster booting will solve all problems.

And every time, everyone on the planet, including the developers themselves, are "idiots with their eyes closed". Only oiaohm has the answer, but no one will listen to him.

What's funny is I think as far as boot and run time stability are concerned, most people would concede that Linux does just fine here. Congratulations, you've once again succeeded in making Linux look even worse than we were going for.

And you miss it again. When we talk about "stability", we're talking about the non-existence of a Linux "platform", not kernel panics.

Another way you make Linux look bad is with the illustration of poor goal setting. If developers were looking into, say, video stuttering (though I don't see what the kernel has to do with this), and thought boot time affected this, the goal would be "examine the boot process to find locks that affect video performance." But that's not what we're hearing. It's all about shaving seconds off the process itself (which no one cares about) or reducing the need for sleep (which no one cares about). Once again, only you tell us that the "real goals" are completely contrary to what the rest of the Linux PR machine says. And we're the "idiots".

OK, I hate Linux (no, let me take that back: I hate Linux on the desktop and I hate Linux freetards on Slashdot, Digg, and any other online forum where the same three pathetic people who have lived in their mother's basement all of their life can post multiple times using different anonymous identities), but I feel that boot time does matter.

Why? Because it's sometimes the only way to work around a Windows bug. For example, there is a bug with my graphics driver where I cannot see my girlfriend while we're talking on Skype if I open up Skype after playing a video game. The only way to fix this issue is to reboot the system; then I can see her fine on Skype.

Thankfully, since my system is a pretty fresh install of Windows XP, this whole process takes under two minutes.

The reason why Linux freetards finally care about boot time is that Linux was taking a good deal longer than Windows XP to boot up. Also, the freetards, as much as they talk about how evil "Windoze" is, boot up into their copy of Windows every time they want to do anything with their computer besides post masturbatory comments on Slashdot about how much better Linux is than Windows.

So, yeah, boot time does sometimes matter. It's about time Linux stopped starting up sendmail, cupsd (why does Linux need to start up a daemon that has security problems and binds to ports just to print a document?), nfsd, crond, gnome's stupid configuration daemon (why do we use a daemon to store GUI configuration data? Whatever happened to the UNIX way of using plain text files?) and other crap you don't need on a desktop OS.

"why does Linux need to start up a daemon that has security problems and binds to ports just to print a document"

Because in Linux, EverythingIsAService(TM) and EverythingHasNetworkTransparency(TM).

"Whatever happened to the UNIX way of using plain text files?"

Because they finally realized (but failed to communicate to the world out of embarrassment) that Microsoft had the right idea with its registry in 1995. With the explosion of simultaneous applications and the increase in their complexity, an improper shutdown with hundreds of open files is disastrous. Microsoft saw this one coming back in the DOS era when every new application wanted to increase the number of simultaneous open files to the point where the average user had something like FILES=80 in his config.sys. And this was for a single tasking operating system.

GNOME has a PR problem with "registries are bad", so gconf is more hidden than it should be. gconf isn't as well thought out as Windows' version, and gconf is barely documented.

"Microsoft had the right idea with its registry in 1995. With the explosion of simultaneous applications and the increase in their complexity, an improper shutdown with hundreds of open files is disastrous."

Except that it's disastrous only on a Windows system, because critical system and general application settings are stored in the same file! KDE works fine with thousands of configuration files and a binary image resembling the registry for performance reasons. Even with gconf, if you delete the registry, you'll get the defaults. Don't let the facts get in your way, keep going...

1996 - "The upshot of this is that right from the opening "Windows" screen, Microsoft has got us somewhat outclassed and have already detected the mouse and modem at the point that we're still loading a kernel off a floppy." Quote by Jordan K. Hubbard

1999 - "Anybody? Semaphore theory used to be really popular at Universities, so there must be somebody who has some automated proving program somewhere..." Quote by Linus Torvalds

2002 - "The fact that Linus *does* have to pass on all such patches, and is dropping a lot of them on the floor, is the clearest possible example of the weaknesses in the present system." Quote by Eric Raymond

2004 - "If you start to think about that, that's a pretty significant technical challenge that we (open source developers) need to overcome and it's a hard one because it spans multiple open source projects... the hardest problems are the ones that span multiple open source projects because then you have to get people that don't know each other, to talk to each other and sort how to fix it and they kind of pass the buck around, user space v. kernel space is the classic example..." Quote by Havoc Pennington (fast forward to minute 8:47)

Guys, when talking about operating systems for desktop usage, the actual OS doesn't matter anymore, because all OSes that can at least handle basic desktop usage are good right now.

Windows 2000/XP/Vista/7, MacOSX, Linux, BSD and Solaris all have all the required features already.

They all have journaled file systems, so you won't lose files on power loss and other failures.

They all support most x86 hardware that are cheaply available now. (You can easily choose hardware to work pretty well with any of the above OSes)

They all have good security (even if that security includes an antivirus program). Bad security caused by ignorant usage can't be prevented on any OS.

They are all free (except MacOSX, which is deliberately tied to Mac hardware for style reasons). Windows is free for those who don't want to pay or can't (from a real-life friend's copied CD or from pirate downloading); this is part of Microsoft's strategy.

They all have free basic support (*nix from forums/irc etc, Windows from the same or real life friends).

They all support easy file/storage management and can use all widely available I/O (input/output) devices.

So the truth is that all of the above OSes are perfectly good.

However the best OS by itself is useless if you can't use the software you want on it.

replace "of the above programs" with "of the desktop users" if you want, it is the same thing.

If you think that the problematic PulseAudio, the messy Xorg, or the repetitive usage of the command line are the causes of Linux's low desktop usage, you may not be a freetard, but you are an OSTard, because really the OS doesn't matter.

"If you think that the problematic PulseAudio, the messy Xorg, or the repetitive usage of the command line are the causes of Linux's low desktop usage, you may not be a freetard, but you are an OSTard, because really the OS doesn't matter."

Suuuuure. This way, you can argue that Linux doesn't get apps because BlameLiesOnOthers(TM) and It'sAllAConspiracy(TM)

Most of what freetards harp on is OS minutiae. The remainder revolves around easily observable and long-standing issues such as poor-quality sound and video playback.

Mostly this site deals with the lack of Linux adoption as a function of its lack of platform stability and of the frameworks necessary to secure vendor interest. For example, Linux totally lacks anything to combat DirectX or QuickTime, which is a pretty big deal. Freetards constantly claim that stuff like SDL, OpenGL, and OpenAL bolted to each other is good enough, but ActualDevelopers(TM) of games and multimedia projects tell a different story.

It sounds like maybe the old UNIX way of using plain text files for all configuration doesn't make sense on a modern end-user desktop. The traditional UNIX way is to have an application read its configuration file at startup and have the sysadmin edit the configuration file with a text editor (vi, usually). Should an application need to apply any changes to its configuration, one restarts the application or sends it a HUP signal.

All of this goes out the window when you want something easy to use, where changes are made with a GUI, applied instantly, and kept the next time one restarts the application or reboots the system.
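The traditional HUP-based reload pattern described above is tiny; here is a hedged sketch (the class, file layout, and parser hook are illustrative, not any real daemon's code):

```python
import signal

class ReloadableConfig:
    """Classic UNIX daemon pattern: re-read the config file on SIGHUP.

    The signal handler only sets a flag; the main loop notices the
    flag and re-reads the file, so no unsafe work happens inside
    the handler itself.
    """
    def __init__(self, path, parse):
        self.path = path
        self.parse = parse            # caller-supplied parser (assumption)
        self.reload_pending = False
        self.settings = self._read()
        signal.signal(signal.SIGHUP, self._on_hup)

    def _read(self):
        with open(self.path) as f:
            return self.parse(f.read())

    def _on_hup(self, signum, frame):
        self.reload_pending = True    # defer the real work to the main loop

    def poll(self):
        """Call periodically from the main loop; returns current settings."""
        if self.reload_pending:
            self.reload_pending = False
            self.settings = self._read()
        return self.settings
```

The GUI-driven model the commenter contrasts this with (gconf, the registry) replaces the edit-then-HUP cycle with a daemon that applies changes instantly, which is exactly the trade-off under debate.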

My annoyance with Gnome trying to emulate the Windows way is that I need to run their stupid gconf in order to have Firefox not use gconf's ugly default tiny font size for the dialogs and window items. Last time I used Firefox on Linux, something happened that made gconf not work at all; these days I use Opera when developing in my Linux virtual machine and want a web browser.

And, oh, I use Windows XP on the desktop because Linux gave me constant crashes. Windows XP is solid as a rock; Linux is the unstable POS.

Back in the dot-com days, Linux may have been a PITA to use, but it was sure a lot more stable than Windows. Microsoft saw this, adapted, and came out with XP that is solid as a rock.

Linux tried to catch up with Windows in terms of ease-of-use, but simply can't because Linux is (and always has been) too fragmented to come up with a unified easy-to-use interface.

Look at those Launchpad idiots wasting time with the various Ubuntu bug reports. I used to be like that until I saw that my efforts were either ignored, or the fix would be completely broken with the next 6-month update.

"I tried to switch compiz to metacity (checking for another bug). The X-server froze and I switched to tty2 to stop gdm. This took a long time and afterwards even the Xorg process hang."

I dual boot Vista and Fedora10. Both of them boot in under a minute. I don't see why anyone cares beyond that. In fact, Fedora often crashes on me, forcing me to reboot by holding the power button, never had to do that with Vista. So I guess I can see why it matters on Linux.

One of the huge disadvantages that FOSS has is its usability development process. When Microsoft, Adobe, and Apple develop a GUI dialog, for example, they do actual scientific research on actual end users to see whether a change has improved or worsened user perception of the dialog or work task. Judging from the GUI work that goes on in fosstardia, a lot of it looks like it was developed by the "hmm, that looks good" methodology, and there is never any actual usability research going on.

That's why the best elements of fosstard software are obviously and plainly stolen from OSX and Windows. The fosstard'sphere does no usability research of its own. That's why we get usability nightmares like the spinning cube, multiple virtual desktops, and that horrible dialog that Hater dissected a few weeks back.

The simple fact of the matter is, a good GUI is the result of empirical science. You can't have some developer somewhere declaring by fiat what a good UI looks like. It is the rigorous application of a design philosophy that "you can please some of the people all the time, and you can please all the people some of the time, but you can't please all the people all the time". Sure, a developer can develop an intuition for what users will like, but they never get it right all the time.

With the internet, we have developed a culture of feelings entitlement, where every pissant out there feels entitled to tell the world why and how they think something sucks. Worse, it empowers them to do something about it, and the result is desktop Linux: the cream of the complainers, designed by people who think they know better than anyone else (see any oiaohm post for example).

I take an example from a TQM handbook: a publishing house where it is everybody's job to check spelling, but there is no ultimate spelling authority. The end result is that lots of misspellings make it into print, because it is no one person's job to make sure spelling is correct; everyone is preoccupied with their own job. Does that situation remind you of anything? Is it fixable in the fosstard'o'sphere?

That was entirely reactive and irrational. They were angry that the window of opportunity had closed.

But it was more than that, it was sort of a perfect storm of hate. It was linux users, OSX users, Sony users, and software pirates all angry that the small amount of progress they had made was being undone.

It was driven primarily by Gen-Y with their overwhelming sense of entitlement (lol, and I thought Gen-X was bad, wtf was I thinking???) at having everything NOW. And it can't just be free, it has to be beyond free! You have to pay me to use your shit! (see the housing bubble, and how Gen-Y has paralyzed the economies of the world)

We should build camps for gen-y, and put them all in them, and make the little fuckers work!

My favorite thing about virtual desktops is that open source spent thousands of man hours over two decades implementing and reimplementing them across every single window manager with no innovation whatsoever aside from "the spinning cube". Then comes Leopard and smashes all their work to itty bitty pieces with their radically more functional and intuitive Spaces. I mean, they even got the name right! Spaces! So obvious. Now "multiple virtual desktops" looks and sounds like the ugly girl at the punchbowl.

And now look, here we are on the verge of Windows 7, which is by all accounts another leap forward. Vista was supposed to be Linux's "chance"; users were going to be fed up with "M$" after that and begin mass migrations to Ubuntu (yeah, even leaving behind all their familiar, feature-rich apps). Not only did that fantasy window of opportunity close, but it's about to be sealed & welded shut forever by Windows 7.

"see the housing bubble, and how Gen-Y has paralyzed the economies of the world"

You can't blame this on Gen Y; only about half of them are aged 18 or older! The aging Gen Xers, who were shielded from economic troubles in their youth and never saw them in adulthood, are much more to blame, and, even then, the movers and shakers of the financial systems are even older than Gen X.

Gen Y responsible for the new era of hyper-disposable consumerism? Sure, I'll entertain that. But the housing crisis? Unless there's this huge glut of emancipated teenagers with liar loans I don't know about, I doubt it.

BTW, the average age of first-time house buyers is somewhere in the early 30s, depending on the country.

"Not only did that fantasy window of opportunity close, but it's about to be sealed & welded shut forever by Windows 7."

Yeah, it looks like MS got their shit together. Win7 could be the best Windows so far.

The sad thing is, Linux is improving. It sucks less now than it did, say, two years ago. It's just moving too slowly to catch Windows.

Linux missed two or three windows of opportunities, actually. Win95 sucked at launch. Linux wasn't even remotely ready. XP sucked at launch and had big security problems until SP2. Linux wasn't ready. Vista was the third window of opportunity.

I don't think Linux missed it entirely, largely thanks to Ubuntu and the netbook craze. Linux gained users. It just didn't break through.

We'll see what happens when Win7 launches. Probably some OEMs will continue to offer Linux on low-spec machines. But I wonder if it's ever going to gain something like 10% or even 5% market share on the desktop.

For a little while, netbooks were a Linux thing, but now that Linux marketshare on netbooks has fallen below 10% in the one area where desktop Linux has shown any signs of life, I'd say that opportunity has closed as well...

Linux in 1995 was very good for what it was: A free UNIX-like operating system and server for small office use.

The GUI Linux had back then, FVWM1, had a steep learning curve, but, once learned, is a very productive environment. I still use FVWM1 today when I'm developing software in Linux (these days, in a virtual machine running Linux on my Windows XP desktop).

But, yes, this was long before KDE. Gnome was a reaction to KDE because of some licensing issues which are no longer relevant today (thank you, Nokia, for finally making QT LGPL).

Bowman/Afterstep was also available around that time, and it was arguably more usable than the first two or three years of GNOME five years down the road. I think Enlightenment was also available as a primitive FVWM fork, but I can't remember whether it was actually any good.

Anyway, Linux of the mid-90s wasn't the dark ages today's freetards make it out to be. FVWM and its derivatives were highly usable compared to competing technologies. If anything, FVWM got closer to feature parity of Mac/Win than anything since and demolished some of the ancient crap still trotted out by the UNIX vendors of the day.

I know of a few Vista machines where you are logged in in under 15 seconds yet have to wait two minutes for the system to be responsive because services are still loading. Pushing service loading later means that when users go to use the machine, they have to fight the services for CPU time.

Really, what is the point of being logged in quickly if you cannot do jack? It would have been better to stay on the login screen longer. That is the reason the bootchart benchmark for Linux does not stop when the login screen is displayed: if the CPU is still pegged at something like 95 percent usage, it is pointless to log in at that moment.
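That criterion implies a measurable definition of "boot actually finished": the point where CPU usage settles rather than the point where a login screen appears. A minimal sketch of computing the busy fraction between two /proc/stat samples (field layout follows the kernel's /proc/stat cpu line; the function name is my own):

```python
def cpu_busy_fraction(stat_line_a, stat_line_b):
    """Fraction of time the CPU was busy between two /proc/stat samples.

    Each sample is a line like: 'cpu  user nice system idle iowait irq ...'
    Busy time is everything except idle (field 4) and iowait (field 5).
    """
    def totals(line):
        fields = [int(x) for x in line.split()[1:]]
        idle = fields[3] + (fields[4] if len(fields) > 4 else 0)
        return sum(fields), idle

    total_a, idle_a = totals(stat_line_a)
    total_b, idle_b = totals(stat_line_b)
    dt = total_b - total_a
    return 1.0 - (idle_b - idle_a) / dt if dt else 0.0
```

A boot benchmark built on this would stop the clock only once the busy fraction falls below some threshold, which is exactly why the bootchart convention of running until the last service starts is the honest one.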

Linux and the open-source world in general run on competition, in two major forms.

There is controlled competition, like the boot-speed competitions at the moment; that will see overall performance improvements.

Then there is uncontrolled competition: Linux distributions.

I don't see anyone putting forward competitions to encourage unification.

"Profound fragmentation" has a cause and an effect: with no pressure for unification, it doesn't happen.

Das Boot topic was asking why was time wasted on it. Problem is its not wasted it something that needs to be done. Linux kernel internally is not the cleanest designed kernel out there.

"Sorry but windows and osx already have acceptable performance after booting." That is correct. Linux does not have acceptable performance on booting a lot of the time. About 8-9 seconds are wasted even before the first service is started on Linux, on a process that should take 1 second, due to locking between devices. Those same locks cause audio and video pauses for users in general usage.

The internal faults that cause large sections of the poor boot speed are causing other problems that neither Windows nor OS X has.

The simple fact is, repair work on the boot of Linux is required.

Is it the only work that is required? No.

Stabilising the interface framework? Yep, required. What is the point of being able to run all the applications you want if the interface locks up?

Remember, the current work on improving boot times is targeting the low-hanging fruit. One of the highest fruit that will have to be taken on to improve boot times is the compiler itself, for the reason that there is only so far you can go before the size of binaries and disk transfer speeds become an issue.

In the end, the work on boot times will eventually affect the performance of every application on Linux.

Now, what about other desktop applications? That was not the topic of this.

But on the server or in a production environment it's another story. Think outside of the home user and desktop for a change.

Talk about moving the goalposts... This whole instant boot time thing is about Splashtop, which is being marketed as a desktop OS.

Further, I'll agree that boot time is important, but not anywhere near as much as you'd like to think - even, and especially, in a server environment.

You want to minimize reboots and shutdowns, stretch them out as much as possible.

First priority is minimizing the need to reboot during normal operation.

Second, you minimize updates requiring reboots - and as much as freetards love shitting on Windows for that (not being able to overwrite files which are in use has its advantages, mind you), Linux, and especially Ubuntu, is guilty of this too. Run a dist-upgrade, you'll see.

Third comes boot time. It doesn't matter how fast you boot your server up if you're rebooting it frequently enough to warrant ignoring the previous two points and skipping straight to speeding up the boot process.

Besides, in a production environment, or at least a proper production environment, you have redundancy to cancel out the cost of booting up. Don't tell me you'd seriously even consider bringing down a production server without bringing up a backup first?

I'm all for faster booting servers, but as a supplement to more reliable, more shutdown resistant, redundant servers, not a replacement.

A LOT (!!!) (99%?) of people shut down their computer/laptop.

And 97% of all statistics are made up on the spot.

Maybe when they don't sleep properly. The only time my iBook ever powers down is when it runs out of battery power and isn't plugged into an outlet (and even then, it could just be put to sleep and kept on indefinitely until I can get it to an outlet). Why shut it down when I can put it to sleep, stuff it in my backpack, and have it come out of sleep before I'm even done lifting the lid?

My desktops, both the Windows machine and the Macs, don't power down or reboot unless it's for updates that require one, which only happens every few months, hardware upgrades, or the odd power outage. And even then, it's all of 30-45 seconds to boot up, which isn't terribly inconvenient.

Even so, people do shut their machines off, fine. And I'm all for faster boot times on a PC or laptop, too, but it's only a minor piece of the puzzle. I don't care how much faster Linux boots if I still can't get my video editing, audio recording or Photoshop. It's the whole pitching of instant-on as the be-all-end-all instead of what it is: a single piece of the puzzle, which on its own is neat, but still not very useful.

Linux on the desktop, I would say that FreeBSD has accomplished very little

FreeBSD isn't designed as a desktop OS, it isn't intended to run on the desktop, and in fact the thread you're responding to, which quotes Netcraft server uptimes, doesn't even consider desktops.

Nobody mentioned the desktop there, and further, the point of the post was to illustrate that Win2k3 competes on uptime.

Oh, Darwin, you say! Nay, Darwin's maintainers have taken extreme efforts to eliminate everything that is a problem with linux culture and replace it with a desktop stack of their own design. For example, Darwin based distros (with the notable exception) all suffer from the same problems as Ubuntu.

Mac OS X is built on top of Darwin (and frankly it's the only place it makes sense to use Darwin). Darwin's maintainers (Apple) have actually put in very little effort to eliminate anything having to do with Linux; it's a BSD. Moreover, it's NeXT's BSD.

The problem with "Linux" or even "FreeBSD" has very little to do with the tiny fraction of a desktop operating system that is the kernel, and everything to do with the culture and software that gets piled on top of it.

That's the problem with Linux, I agree. Things are different in BSD land; at least in FreeBSD country there's no point in pursuing politics. It's all about writing a good, free, high performance server OS, and they succeed at it - much better than Linux, at any rate.

Anyway, the point was that FreeBSD proves that even in "OSS", the cathedral model trumps the bazaar model any day.

For example, I see after a short hiatus Oiaohm has returned. The guy seems totally oblivious to what computers are, and what people use them for.

Kharkhalash, nice that you agree to that. It explains it. Both of you lack the same thing: the means to see the big picture.

Some things affect end users directly. Others indirectly.

The boot time competition is an indirect one. If you have some brains, Kharkhalash, spend some time making a list of everything that screws up boot time, then make a list of everything that stops applications running smoothly. Compare. Guess what: you will find a lot of matches.

You will also find some entries in that list whose repair reduces boot time and also increases the throughput of servers. Boot time alterations are not about boot time alone.

You are purely tunnel-visioned, blind to the big picture of interrelationships.

If this were a chess game, you would only be able to see your pawn, because that is what we are talking about - not that when the pawn moves it allows checkmate.

It's the other effects of the actions that you are missing.

The true final way to minimise reboots is Ksplice. http://www.ksplice.com/ Simply replace the kernel and keep on running. This is a non-stop upgrade. There is nothing in Linux that cannot be replaced on the fly on well-set-up servers. FreeBSD is still quite lacking in this regard.

Kharkhalash, you might know FreeBSD tech but you are clueless about Linux tech. There is no reason to be rebooting a Linux server other than hardware problems. Some distributions out there, like Ubuntu, are stupid and have to be fixed up by good server admins so they work right.

You completely missed the server-room reason for fast boot times: so reserve servers don't have to be running, chewing power. If you can bring them online quickly enough, it will not matter that they were off. It's a really tight time window you are aiming at: a max of 10 seconds, if possible, from nothing to running.

Hibernation sounds like a good idea until you have to wake up large servers; it does not have a hope in hell in most cases of getting inside 10 seconds, and there is the instability it can introduce. The transfer of the hibernation file back into RAM alone will take longer than 10 seconds.

FreeBSD fixed a lot of the issues the boot time competition on Linux is going after. Is what Kharkhalash fears that his preferred server OS will soon lose at everything, so he has to spread misinformation?

Remember, I see the world as a complete picture of what users want and what is required so they get what they want. So at times my answers seem strange. The question should be more: what are you missing in the relationships between the pieces at play?

The same issue of tunnel vision blocks you from seeing a complete stack of important relationships.

You took ext4, which is being tested at the moment in development releases, as an example of data loss - hardly a typical example (the corruption in this case will be fixed in the final release). If the Windows registry were used on ext4, a crash would make the system unbootable (worst case) or lose all the settings edited in the last session. gconf settings are actually stored in a tree of XML files, so you will only lose the settings of the applications you had open in the last session.

All of this goes out the window when you want something easy to use, where changes are made with a GUI, applied instantly, and kept the next time one restarts their application or reboots their system.

What does this have to do with text files? Any technical reason that justifies this? What's wrong with having text files and an expendable binary or daemon for fast access?

Except that it's disastrous only on a Windows system, because critical system and general application settings are stored in the same file!

LIES.

Are you saying that the system hive doesn't include settings like file type-program associations, explorer settings etc (MS defaults, third-party programs make even heavier use of it for almost everything) ?

It doesn't matter that the windows registry doesn't corrupt unless the hard drive is failing, they have declared by fiat that the registry is a single point of failure and they way they do it is better, so there!

Re: losing registries. This is yet another Win9x nightmare scenario that freetards still think applies today. Remember that time when you set up Win98 with the 16 bit Sound Blaster drivers on floppy disk because you didn't know any better and your registry was blank the next time you booted? Yeah, that totally applies to modern NT stuff.

All of these goes out the window when you want something easy to use where changes are done with a GUI, applied instantly, and kept the next time one restarts their application or reboots their system.

Interestingly, IIRC, all of this was possible on the Amiga (!) in the 90's, without using daemons. So-called application "Preferences" could be applied instantly after modification, even if modified externally - this was likely done by triggering an update of the application's internal variable and passing it the new value, via some dedicated automation message port.

On a unix-like system, instead, it'd take polling for events on the file descriptor corresponding to the config text file, then reparsing it to get the new values. But since config file parsing is usually done at initialization time, the process must be restarted. And since processes don't usually restart themselves arbitrarily, afaik, what's needed is a daemon monitoring changes to config files (better if in charge of managing the "config namespace" - in itself a job better suited for the kernel) and restarting affected services. In light of the simplicity and elegance inherent in change notifications via message ports, all this seems quite old-fashioned and lame. Besides, it can't be helped: since the linux guys are still struggling with the idea of adopting libelektra for fetching/saving configuration key/value pairs, implementing message ports for asynchronous value change notifications is out of the question. (Another method would be to let the kernel itself store key/value pairs for applications - actually the kernel already has configfs and sysfs that do such a thing; it would just take extending and generalizing the use of configfs outside the kernel, but surely linux developers would be in uproar at such an idea, since it would bring linux closer to the dreaded windows...)
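The poll-and-reparse approach described above can be sketched in a few lines. This is a toy illustration, not anyone's real implementation: the `app.conf` file name and `[settings]` section are made up, and the watcher simply checks the file's mtime and reparses the whole file on every change - exactly the overhead being complained about.

```python
import configparser
import os
import time

def load_settings(path):
    # the cost in question: a full reparse of the text file on every change
    cp = configparser.ConfigParser()
    cp.read(path)
    return dict(cp["settings"]) if cp.has_section("settings") else {}

def watch(path, on_change, interval=1.0, max_polls=None):
    # no change-notification port here: poll the mtime, reparse when it moves
    last = None
    polls = 0
    while max_polls is None or polls < max_polls:
        try:
            mtime = os.stat(path).st_mtime
        except FileNotFoundError:
            mtime = None
        if mtime is not None and mtime != last:
            last = mtime
            on_change(load_settings(path))
        polls += 1
        time.sleep(interval)
```

A message-port design would instead push the new value to the application the moment it changes, with no polling loop and no reparse.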

Moreover, the UI paradigm Amiga applications often adhered to had "Use" (for using a new setting without saving it, i.e. using it only in this session, with the old - and safe - one restored at next boot) and "Revert" (which retrieved the previously used value), instead of the much less useful and flexible "Apply" present in current ubiquitous interfaces...

What's wrong with having text files?

Text files are human readable (with a text editor program, that is), but the application needs to do parsing to retrieve data. XML files are human readable too (again, with an editor), must be parsed too, and their parsing is even heavier than plain text. A binary file, or a key/value pair, is not human readable (not with a text editor, though a tool with the interface of an editor could be made available), but retrieving data involves little to no overhead. Also, saving data is more efficient in the latter case than with a number of small text/XML files. Moreover, key/value pairs (or groups of pairs) are as securable or more so than individual config files (especially if the k/v pair is managed by the kernel).

And an expendable binary or daemon for fast access?

A daemon MAY provide *distributed* (cross-process) access, though this depends on the design of the whole protocol and the features implemented (e.g. one wants that, if process A alters something belonging to B, B is notified of the change). A daemon SHALL provide *remote* access, i.e. in the case of networked access I'll most surely need a daemon/service (but we're talking application configuration - so a networked configuration registry!?). But "fast access", no: since every operation will need at least an RPC transaction, with marshaling over sockets and a round trip, a daemon does not provide that.
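The round-trip cost being described is easy to make concrete. The sketch below is a deliberately minimal toy, not any real config daemon: the socket path and the one-key "protocol" are invented. Every remote lookup involves marshaling, a send, a context switch into the daemon, and a blocking recv, whereas the in-process lookup is a plain dictionary access.

```python
import os
import socket
import threading

SOCK = "/tmp/cfgd.sock"  # made-up path for this toy daemon

def config_daemon(store, ready):
    # minimal "config daemon": receives a key, sends back its value
    try:
        os.unlink(SOCK)
    except FileNotFoundError:
        pass
    srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    srv.bind(SOCK)
    srv.listen(1)
    ready.set()
    conn, _ = srv.accept()
    while True:
        key = conn.recv(256)
        if not key:
            break
        conn.sendall(store.get(key.decode(), "?").encode())
    conn.close()
    srv.close()

def get_remote(conn, key):
    # every lookup is a full RPC round trip: marshal, send, block, unmarshal
    conn.sendall(key.encode())
    return conn.recv(256).decode()

store = {"color": "blue"}
ready = threading.Event()
threading.Thread(target=config_daemon, args=(store, ready), daemon=True).start()
ready.wait()
cli = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
cli.connect(SOCK)
val = get_remote(cli, "color")  # round trip through the daemon
local = store["color"]          # in-process lookup: no sockets at all
cli.close()
```

Both lookups return the same value; only the second one is "fast access" in the sense the comment means.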

If my system is based on a kernel that can unmistakably be seen as an archetypal traditional monolithic one (especially one which even has some grave lacks and disadvantages compared to other monolithic or hybrid kernels), I want kernel facilities to be exploited and, if needed, extended, to ensure functions are provided consistently with the monolithic kernel philosophy. On a monolithic kernel system one has the privilege of being able to consolidate "cross-process arbitrated" functionality in the kernel, critical per-process functionality in user libraries, and dedicate daemons to optional (if local), or REMOTE, services.

Needing auxiliary service processes to provide vital, non-optional functionality to *local* applications contradicts the rationale of having a pure monolithic kernel in the first place (it's a trait of microkernel systems). Doing so denotes either a biased developer culture, or scarce ability or willingness to decouple what needs to be done locally to/by a single process from what needs to be done across processes or across the network (doing clean and objective SW design and adopting a sane approach, that is).

Again, proof that linux is "designed" with a server-oriented, rather than desktop-oriented, mentality...

Moreover, there would be an interesting side effect, should the system widely implement kernel-managed key/value pairs...

Essential subsystems and services (I'm thinking of the graphics server/UI) could be started without relying on the file system, while initialisation of the storage stack, mounting, fsck etc. could be deferred, instead of having to be performed before anything the user can act upon.

In addition, WRT onboard-resident OSs (I prefer this term since "embedded" refers to an application field, but also says something about the OS's functionality - here I'd refer just to the deployment type), the configuration set could be mapped to the CMOS NVRAM instead of being part of the filesystem image. There wouldn't be any need to flash the firmware to change something.

And it MAY drive manufacturers to introduce larger NVRAMs into their designs - the current CMOS size is determined by the legacy BIOS's need for settings, just as the size of the onboard flash ROM chip is dictated by the size of the BIOS image - but in a world with CoreBoot and larger onboard flash ROM chips, installing the OS of choice directly *on board* instead of on a hdd partition would be possible...

Text files are also error-prone due to their lack of structure. Furthermore, unlike Windows .inis, there's a total lack of standards with regard to Unix text configuration files. Taking for granted that '#' means a comment everywhere and that white space doesn't matter can land you in trouble.

In contrast, registries are rigid in structure and formatting, but I guess good design cuts down on "freedom" and "choice". XML is a terrible, terrible choice for something intended as machine readable.

Except that EFI, which I've known of as long as it's existed (ca. 1999), is not a complete solution per se. It's just another BIOS (although a more modern and legacy-free one), in that it is a *loader* for, and provides *services* to, a full-fledged OS that has to be installed on some attached storage. (Interestingly, there is an open "standard" - IEEE 1275, aka Open Firmware - that actually does the same things except for the implementation details: C instead of Forth bytecode for expansion ROMs, PE in place of ELF for executable files - yes, the point of both is directly loading the kernel, bypassing the MBR bootloader - ACPI for the device tree, etc. ... Suppose Intel was too eager to impose their own standard...)

But it's this very fact - that a PC mainboard, even a high-end one, is as helpless as a paperweight without a storage device holding the SW required to support the MB (together with the rest of the electronic devices in the system) - that I have never liked nor understood, being 32 and raised with home computers of the past and their resident BASIC interpreters. Now I want my OS (be it windows, haiku, solaris, *whatever*) to be installable directly to onboard flash, and my home PC of today to be able to at least "boot" without an attached storage device (then SATA/USB drives would be used for *data files* and *addon applications* that do not fit in the n GB of onboard flash, rather than for the core OS...)

"Taking for granted that using '#' everywhere for comments and white space doesn't matter can land you in trouble."

This is what threw me off the *many* times I had to edit menu.lst, in the BEGIN AUTOMAGIC KERNELS LIST section. One hash means the line is still read, but two means it's a comment - unlike everywhere else, even inside the same text file. What the fuck?
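For context, the convention in question looked roughly like this (a paraphrased fragment of an Ubuntu-era menu.lst with made-up values, not an exact copy): lines starting with `##` are true comments, while lines starting with a single `#` inside the automagic section are live directives that update-grub actually parses.

```
### BEGIN AUTOMAGIC KERNELS LIST
## ## Start Default Options ##
## default kernel options, e.g. kopt=root=/dev/hda1 ro
# kopt=root=UUID=0000-example ro

## should update-grub create a memtest86 boot option, e.g. memtest86=true
# memtest86=true
## ## End Default Options ##
### END DEBIAN AUTOMAGIC KERNELS LIST
```

So deleting the "commented-out" `# kopt=` line, or adding a second hash to it, silently changes what update-grub writes for every kernel entry - the trap the comment above describes.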

Text files are also error prone due to their lack of structure. Furthermore, unlike Windows .inis, there's a total lack of standards with regards to Unix text configuration files.<...>

Even worse when one tries to apply a centralized configuration tool to the mess of anarchic tools, services and components... the tool likely has to keep all configurations in a database of its own (or whatever), then write the settings (and changes) out to the respective config files, according to the syntax of the respective service...

And if the syntax changes across versions of the service, the configuration tool will have to be updated too. So, to avoid performing too-invasive changes to the "core" of the config tool, and to allow it to support all the installed OS components included with the distribution, developers come out with "backends" for the config tool itself. Utter nonsense, but nonsense that is necessary in the miserable world that is linux land...

The lack of a standard Linux base makes it that much harder to create those kinds of central admin tools across the distros. It's like you have to create a patch for a patch for a backend hack for a tool. It's just a mess, really. I already stopped looking at Linux as a desktop OS; the only place I touch it now is my webhosting provider.

Vista Home/XP full is about 100 bucks, but as far as usability and breadth/depth of programs go, there's no reason to use anything else. Slap on Firefox and Avira or AVG free antivirus, keep a good backup schedule, and you're golden.

What you describe is one of my biggest frustrations with the modern packaging infrastructure.

Once in a while I'll try Linux again with the foolish belief that it can handle some specific task without extreme headache. I know the "old" way of doing things through text files, which comes in handy because plenty of stuff that should come with GUIs doesn't. So I'll download whatever programs and libraries, open up the config files, and....

I have no idea what I'm looking at. Everything looks completely different from what I remember, and there's all sorts of stuff put there by the package manager. I can't tell what's intended to modify default behavior and what's the package manager talking to itself.

The project documentation is now useless because it expects you to configure, compile, and install from upstream source. The distribution may have split and renamed the config files. This kind of thing wasn't too bad in Gentoo because of its stellar documentation, but most distributions don't bother. They assume you've been reading the mailing lists and package changelogs since 1996 so you'll know all this already.

A recent example of this was my investigating using LIRC in Ubuntu with the IR sensor on my old Creative Labs Infra CD-ROM drive. After several hours of reading documentation and poking around, I hadn't the faintest idea of what to do or even where to start. What the LIRC website said should be happening didn't match up with Ubuntu's reality. I started going the compile-it-myself route, but then I needed dev-this and dev-that. As I got pissed off that this multi-gigabyte system lacked even a rudimentary development setup, I realized that I put this kind of thing behind me long ago and that I have neither the time nor the inclination to chase "the dream" anymore.

Yeah because ultimately the biggest determinant of a quality desktop operating system is whether you have to reboot after installing software. Whether the operating system supports useful software is completely secondary to whether you have to reboot after installing said software.

Thank you for reminding us of just how in touch lintards are with reality.

I thought we care about productivity and shit now. It's 2009 and Windows still can't figure out how to install a web browser without rebooting multiple times. I thought Windows was so fucking high tech and shit.

Come on dudes, keep it up. I'm glad Windows works well for you all, but my l33t Linux skills entitle me to a high-paying and demanding job, so I cannot respond to all your retardations personally. Sorry!

Linux Hater is a fucking poser and a fucking hypocrite. Shelly the Republican has been hating Linux before this fuck even heard about the OS and she runs her blog on Windows unlike this hypocrite shit. For real Linux hate visit Shelly the Republican! Oh and fuck Obama!

I thought we care about productivity and shit now. It's 2009 and Linux still can't figure out how to install a webcam without compiling multiple times. I thought Linux was so fucking high tech and shit.

I thought we care about productivity and shit now. It's 2009 and Windows still can't figure out how to install a web browser without rebooting multiple times. I thought Windows was so fucking high tech and shit.

Actually, I only had to reboot once. And nobody's saying Windows is perfect, but at least I can get a new version of the browser without having to upgrade to the latest Bugbuntu Jambo Mambo every 6 months.

[ Not to mention that my hardware fucking works, and that Windows has actual useful applications, of course ]

Yeah, we all know Windows isn't perfect. But only some of us apparently know it sucks ass. Oh wait, you are on a blog talking shit about another OS. Excuse me while I reboot to install anti-virus updates. BRB

This reminds me of the asshole retards on Slashdot recently claiming it took Vista 15 minutes to boot up for them. They're either lying, delusional, or running on a fucking 486DX66.

Lintards are pretty stupid, though - so stupid they can't take 5 minutes to properly secure a Windows system, which must be the reason they constantly complain about "virus&&omgspywares". So maybe they truly didn't know what they were doing.

After a lot of thinking, I worked out what I take as second nature when comparing boot times.

It's boot charts. The simple fact is, Linux people are not just talking about boot time. They will say "I got X boot time" and then always back it up with a boot chart.

The talk about boot time is linked to monitoring OS effectiveness.

A bootchart can show you everything that was loaded; the most common form just shows services. So if the boot times are different and everything that was loaded is the same, something more efficient has to have been used in the faster system. Could be a driver, could be a different compiler... it's something. Without bootcharts and boot time comparisons, you would not know as quickly that you had created something more efficient.

Bootcharts time everything starting. So if you create an inefficiency, like a slower-starting service, you know.

The biggest recent change in Linux boot times without tweaking was the change from ext3 to ext4. This clearly showed ext4 was performing better, so more detailed examinations could be done and that information taken forward into future file system driver design.

I.e. bootcharts allow you to compare apples with apples when using boot time. Since there is no user intervention when a bootchart is formed, it is a dependable way to find out if a kernel or service alteration has caused a performance change.

Really, there is no reason why benchmarking services could not be created to run on startup to give even more detail. Boot is the cleanest place to get the information on whether an alteration has improved or harmed the performance of the OS.

The lack of boot charting on Windows really does make boot time a useless number, since it is boot charting that enables you to confirm that people are comparing the same thing, and to locate the problems. It might be as simple as one version of a driver being highly ineffective. People who are tweaking Windows startup performance are mostly doing it blind, for lack of good information. People cannot dream how much drivers and services affect boot time, and then go on to affect general operation.

So yes, Windows users focusing on boot time is a waste of time, due to lack of information.

A boot chart on Windows would allow the media a closer view of what is causing the alterations in Windows performance. Boot charts do force better-performing drivers and services; slack off, and you risk being bawled out, with a boot chart used to prove the problem.

The catch is, all the embedded OSs I deal with have something equal to boot charting. So it's status normal for me to be using them to compare. Run-of-the-mill Windows users have no clue what you need for a boot time value to have meaning, so when they see Linux people talking about it, where it does have meaning, they simply don't get it.

When a person just talks about boot time without a matching boot chart or equivalent, they need their asses kicked, no matter the OS.

The Registry is not a good thing; neither are non-standard configuration files. MS created a plus of standardized storage, but also created a minus of not knowing which application is linked to which registry key.

Stupid as it sounds, the ideal for performance and security is a hybrid of both ideas, Registry and individual files: basically, a binary config file per application, in a standard format.

More files provide resistance to a single write bringing the complete house down. More files make it a special operation for an application to access another application's settings.

Then there is still an issue. One of the biggest things that is liked about a lot of Linux/Unix configuration files is the simple fact that the comments list all the features of a particular program and what they do. This is not a speed thing, but would it not be great if you could right-click on a registry key in regedit and be told exactly what the key is for and what information goes in there? Even better, right-click and be told every application using that section of the registry.

Both sides are missing features. You really need to start looking at why the other one has done what they have done and the advantages they get out of it. No OS out there has a perfect configuration system.
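The hybrid sketched above - one standard-format binary key/value file per application, rather than one giant registry or ad-hoc text files - can be illustrated with Python's `shelve` module. This is a toy stand-in for the idea, not a proposal of an actual format; the application name and keys are invented for the example.

```python
import shelve

def open_app_config(app_name):
    # one binary key/value store per application: a corrupt write damages
    # only this app's settings, never a system-wide registry
    return shelve.open(app_name + ".conf")

# each application writes only to its own file
with open_app_config("myeditor") as cfg:
    cfg["font"] = "monospace"
    cfg["tab_width"] = 4

# keyed lookup at startup: no text parsing, no shared single point of failure
with open_app_config("myeditor") as cfg:
    tab = cfg["tab_width"]
```

This captures both halves of the trade-off the comment describes: registry-style fast keyed access, with the per-file isolation of Unix config files.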

Well, it'd be great if Microsoft showed account names alongside their associated SIDs in the registry. It'd also be great if they totally separated OS registry keys from application registry keys - put them in completely different hives.

If the application is properly written, its settings should be in the user hive, located at %HOMEPATH%\NTUSER.DAT. I think Vista even includes mechanisms to enforce this policy for badly written software, but don't take my word for it.

It's true that checking a user SID is difficult outside of Active Directory (in which it isn't that much easier), but I also can't think of much use for such a thing. If a non-networked PC is messed up to the point where you need to examine user SIDs, it's probably time to blow it away. What am I missing?
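In .reg terms, the hive split being argued over looks something like this (ExampleVendor, ExampleApp, and the values are invented for illustration):

```
Windows Registry Editor Version 5.00

; per-user application settings: HKCU, backed by the user's NTUSER.DAT
[HKEY_CURRENT_USER\Software\ExampleVendor\ExampleApp]
"FontSize"=dword:0000000c

; machine-wide settings: HKLM, backed by the system hives
[HKEY_LOCAL_MACHINE\SOFTWARE\ExampleVendor\ExampleApp]
"InstallDir"="C:\\Program Files\\ExampleApp"
```

A well-behaved application writes its own settings only under the HKCU key; losing one user's NTUSER.DAT then costs that user's preferences, not the machine's configuration.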

The biggest recent change in Linux boot times without tweaking was the change from ext3 to ext4. This clearly showed ext4 was performing better, so more detailed examinations could be done and that information taken forward into future file system driver design.

Wow, that's just great. Ext4 brings performance. Well, it did it at the expense of reliability, but who cares? Robust file systems are for sissies.

Which is totally the reason why it comes with a DE by default! Oh wait, it doesn't. FreeBSD's emphasis has always been on the server; why else would there have been such development effort given over to stack-smashing protection and mandatory access controls? Not to mention the new work on ZFS. That's totally a desktop-oriented filesystem! [/sarc]

The performance gains of Ext4 were not reduced by fixing the bug that caused the data loss in it.

So sorry. Reliable file systems are important. So is a file system that is both reliable and fast.

Windows NTFS does the same thing as ext4, not saving straight to disk. Yet I don't hear Windows users yelling from the rooftops. Oh, because it mostly does it with your registry hives, so you have to reinstall. I would think you would be upset.

Apparently a reliable file system is not important to Windows users. Unless you call it reliable in the sense of reliably self-destroying. I speak from personal experience, having had to do data recovery on them.

Kharkhalash, nice that you agree to that. It explains it. Both of you lack the same thing: the means to see the big picture.

Oh, for fuck's sake, I didn't even comment on, much less agree or disagree with, whether or not debugging the boot process can potentially result in advances which may (or may not) be useful in other, not directly related sectors. It can. Except, as usual, you're going off on a tangent completely irrelevant to the point being discussed.

It's about things like Splashtop and the way they're being marketed. In this case it's being done solely because "instant-on" is the new be-all-end-all product of the month. It's purely about boot speed, and any resulting advance useful elsewhere is at best a pleasant but unintentional byproduct, not the result of a conscious effort.

The boot time competition is an indirect one. If you have some brains, Kharkhalash, spend some time making a list of everything that screws up boot time, then make a list of everything that stops applications running smoothly. Compare. Guess what: you will find a lot of matches.

My comment was about how instant-on, has absolutely no bearing on things which matter to me:

Photoshop: Instant-on does not result in CMYK support, they don't result in support for higher bit-depths, or Pantone (which doesn't exist in Linux both because GPL3's anti-patent clauses, and the refusal commercial distros or developers to bite the bullet and pony up for the license fees. No technical advance will change this), nor does it result in adjustment layers, or any other of the plethora of application-level features that as a whole, are available to Photoshop, but not in any Linux offering. Instant-on won't change the lack of a standard colour management library to rule them all.

And especially, instant-on won't help the fact that even if development on Photoshop were to stop today, and GEGL were ready tomorrow with all of those features, it would still take a decade for GEGL to reach robustness/maturity parity with Photoshop, and even at that point the lack of Pantone support would still make it useless for print and pre-press.

So I'll reiterate my initial comment: I don't care how fast Linux boots, I still have no Photoshop equivalent.

Audio recording/editing: Instant-on doesn't change the fact that the horrible mess that is the audio stack in Linux is fundamentally flawed, by design, no less. It doesn't get rid of the half-dozen or so competing, overlapping and only half-working sound systems and abstraction layers. Furthermore, it doesn't magically address the issue of there being no audio editing or recording solution on Linux offering anything even remotely resembling the power, feature set or maturity of things like Cubase/Nuendo, Pro Tools, Live or NI Kore. It doesn't magically result in a decent, loop-based step-sequencer. It doesn't make the sound engine or the quality of stock samples in LinuxSampler suck any less, and it doesn't address the lack of a studio/instrument/effects-processor assembly framework with the power and robustness of Reaktor. It doesn't make my high-end audio hardware work, and it doesn't result in a Serato analogue. Nor does instant-on change the fact that VST/i plugins still require an archaic Wine build to load (when they even make it that far, never mind the overhead of loading instruments via Wine, which nullifies any timing or latency advantage provided by Jack), and it doesn't change that both DSSI and LADSPA are so far behind ReWire, RTAS and AudioUnits, never mind VST/i.

I'll reiterate my initial comment: I don't care how fast Linux boots, since it still sucks for recording, editing and live DJ'ing.

Video editing: Instant-on doesn't change the lack of a video editor of the caliber of say, Final Cut Pro, Premiere, Shake or After Effects.

If this were a chess game, you would only be able to see your pawn, because that is what we are talking about, not that when the pawn moves it allows checkmate.

If this were a chess match, you'd have gotten distracted, forgotten that the purpose of the encounter was a chess game, gone off on a tangent about the importance of peanut butter in HPC computing, wandered off into traffic, and declared victory.

The true final way to minimize reboots is Ksplice (http://www.ksplice.com/). Simply replace the kernel and keep on running. This is a non-stop upgrade.

Agreed, this is how it should be done; hence my point regarding minimizing forced reboots. Ksplice, however, is not widely deployed, even on Linux, and is not default behavior. It's a moot point, and will remain so until this is standard behavior.

Even so, it's nice to have, and a good supplement to redundancy, but not a replacement for it.

It's largely rendered moot by the increasing prevalence of virtualization on blade servers, mind you; aside from the initial purposes of saving space and consolidating hardware, they have the added side effect of facilitating multiple levels of redundancy. But I digress.

Quick bootup times are a supplement to redundancy, not a replacement (there is no replacement for redundancy).

What if your kernel update or recompile (due to human error; but you did test the upgrade/change on a backup system, if not just for sanity, then for the sake of picking up on any unexpected behavior or new bugs, right?) results in an unbootable state? What if there's a hardware failure? What if a hardware upgrade is necessary (and you did test the new parts on a backup before putting them in a live server, right? Right?)? Some other form of human error? What about updates?

10 seconds is an eternity on a production server. What about your hardware replacement? The machine is down for more than just the 10 seconds required to boot, after all. What if something goes wrong during the procedure? What if your hardware fails, or a drive corrupts? Even if nothing does go wrong, do you really want to take the chance that it does? Anyone willing to take that chance shouldn't be allowed anywhere near a data center, even for a visit. Besides, whatever is "acceptable", you always aim for zero downtime, no more.

Hot-swappable Sun Blades mitigate some of these issues, but part of the reason for going with a blade server is to have multiple levels of redundancy anyway.

If you still can't understand the importance of redundancy and uptime in production environments, then a) there's no point in continuing this discussion, and b) you shouldn't be allowed anywhere near a datacenter.

Kharkhalash, you might know FreeBSD tech, but you are clueless about Linux tech.

You would be surprised at how far my familiarity with Linux extends, I have more man hours and real world experience with it than many of even the most rabid zealots, but more on that later.

There is no reason to be rebooting a Linux server other than hardware problems.

Some distributions out there like Ubuntu are stupid and have to be fixed up by good server admins so they work right.

Which is why I expressly stated that ubuntu was especially guilty of this.

Hibernation sounds like a good idea until you try to wake up large servers.

Nobody even mentioned hibernation on servers except you, just now. It's a stupid idea, and not because of instability, but because it does nothing when a forced power-down arises. It does nothing when the need for a hardware upgrade or replacement occurs, and a sleeping machine is effectively off anyway. You want a production server to be up and awake 24/7/365.

The only place I mentioned hibernation was in regard to putting my iBook (it's a laptop, not a server) to sleep instead of powering it down. And you know what? I go weeks at a time without powering down, and get no weird behavior.

What, Kharkhalash, do you fear? That your preferred server OS will soon lose at everything? So you have to spread misinformation?

You assume entirely too much. And why is it always about fear? Why is it that you freetards always assume hidden agendas and fear are the reasons people don't use Linux?

Actually, Solaris (preferably on SPARC) is my preferred server OS.

FreeBSD is my preferred choice for commodity x86 hardware Solaris won't run on. I find that, as things are, it is more robust, more reliable, higher performance, more fun and less of a headache to maintain, better designed, and more stable (in both senses) than Linux, and it has a configure-once-and-forget-about-it feel to it, which Linux sorely lacks. As it stands, FreeBSD is a better Linux than even Linux: it provides and maintains ABI compatibility with Linux, while Linux itself does not retain ABI compatibility with itself. Beyond that, FreeBSD provides (optional!) mechanisms for backward compatibility with several versions of itself, unlike Linux. Unlike Linux, it's designed expressly for server use, to boot.

Before FreeBSD, though, I used Linux for 12+ years. (Manrape, Debian, Slack, Arch, Source Mage, Lunar, Ubuntu, Linux From Scratch, Crux, Gentoo (I can still perform most of a stage 1 install from memory), SuSE, SLES, RHEL, CentOS, even Yggdrasil; you name it, I've used it, or been in a situation where I had to work with it.) I'm rather familiar with Linux and its internals, easily more so than with any other OS, but obviously my disdain for Linux couldn't possibly stem from that familiarity and being forced to work with it for so long, could it? It's obviously fear and a lack of understanding, because it's not as if I'm limited in knowledge only to my personal preferences, right? I could never possibly have the skills to simply fall back on Linux should the unlikely situation arise where Linux surpasses both FreeBSD and Solaris, right?

I also possess varying levels of proficiency and familiarity with AIX, IRIX, HP-UX, NT4, Win2k, Win2k3, Netware, OpenBSD and NetBSD. Being a one-trick pony doesn't get you far in tech.

It's pretty funny, though, given that I don't even work in tech or IT anymore (other than some consulting on the side and tech implementation/maintenance at the shop), but rather in design and music.

So, what's your schtick? Why the insistence on trying to convince people that your way is the only way? Why take yourself so seriously? Is it because nobody else does? Is it because you fear that if your pet OS goes down the tubes, you'll have nothing to fall back on?

@oiaohm:Windows NTFS does do the same thing as EXT4 of not saving to disk.

Hey, write-behind caching has been around on PCs for the last 20 years and more. Does SmartDrive or Norton Cache ring a bell? We're talking about a different story here.

Seeing the bug reports about ext4, it is painfully clear that it uses an excessively long write-behind window, thus dramatically increasing the chance of errors. The design is clearly off-balance. Pay heed, this is not my statement; it is a fact, proved by the numerous bug reports.
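For what it's worth, the ext4 failure mode everyone was arguing about mostly bit applications that rewrite a file via write-then-rename without forcing the data out first: with delayed allocation, a badly timed power loss could leave a zero-length file. A minimal sketch of the defensive pattern (Python; the `atomic_write` helper name is mine, not from any of the bug reports):

```python
import os

def atomic_write(path, data):
    """Write data to path so that a crash leaves either the old contents
    or the complete new contents, never a zero-length file (the classic
    delayed-allocation failure mode)."""
    tmp = path + ".tmp"
    with open(tmp, "wb") as f:
        f.write(data)
        f.flush()             # flush Python's userspace buffer to the OS
        os.fsync(f.fileno())  # force the data blocks out of the write-behind cache
    os.rename(tmp, path)      # atomic replacement on POSIX filesystems
```

Losing only the data "in the gap" is exactly what this buys you: after the fsync, either the old file or the complete new one survives the crash.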

OK, maybe because it mostly happens to your registry hives, so you have to reinstall. I would think you would be upset.

Oh, come on, have you ever heard of Last Known Good Configuration? To get the system into an unstable state because of a registry error, the system must fail during a driver setup or a Windows update. In either case, restoring it to the Last Known Good Configuration will simply bring you back to square one, where you can start working again. You obviously talk crap.

oiaohm, I can understand your urge to prove yourself always right, but at least try to keep some dignity.

oiaohm exposes himself with his bizarre judgments on NTFS. Pro tip: some of us actually maintain sizable NT infrastructures and know the ins and outs of NTFS reliability. Some of us also maintain Linux and know the horror of Reiser and ext failures. You can't trick us. We aren't basement-dwelling hobbyists.

Oh yes, the magical Last Known Good Configuration. Are you aware that it only works if windows\system32\config contains some files? Even recovery mode from CD fails. The failure I am referring to leaves that directory completely blank. Nothing: no SAM, no registries, no backups (which are also stored in config). Basically nada. It lost the complete goddamn directory contents.

Now, if you can trick the OS into running from that, you might be able to use a restore point. Stress on might, as long as the user did not turn it off.

Nice bit of hell.

An ext4 write-behind failure only loses the data in the gap.

It doesn't vaporise everything in a directory from existence by updating where the directory points on disk before the information has actually been written.

That is completely screwed-up write-behind caching.

I see about 12 reports of these a year, servicing a customer base of about 50,000.

I can understand why you guys would not be aware of how bad the screw-up is down there. Next time you find a directory on an NTFS filesystem that is completely empty, and you are sure you put files there, you truly might have put files there and had bad timing.

Partial destruction tells a person far more directly that there is a problem. Full destruction makes a person think they just misplaced the file, or did not put it where they thought they did.

Photoshop: Instant-on does not result in CMYK support, they don't result in support for higher bit-depths, or Pantone, nor does it result in adjustment layers...

With GIMP's recent addition of CMYK support, it finally caught up with Photoshop. Version 2.5, that is: a 1992 vintage, straight from the Win 3.1 era.

There are seemingly no plans to add adjustment layers, something we've taken for granted since v4, way back in 1996. Never mind anything they've added since, which I can't even be bothered trying to list.

Even disregarding its god-awful GUI, it's at best on par with 17-year-old software. Almost two decades behind.

And that's the best Linux has to offer. No Photoshop (not even Elements), no Bridge, no Camera Raw, no Lightroom, no Aperture, no iPhoto, no Paint Shop Pro, no Capture One, no Bibble, no Canon DPP, none of the nicer HDR/pano tools... If it's nice, worth using, or you happen to need it, then they *don't* make it for Linux.

About the audio side: the biggest problem is (as for graphics) the support for VST. Ardour works really well (I mean, at a professional level), and so does Reaper (not FOSS, but everybody knows how well supported it is under Wine, and it's a VST host). NI stuff works under Wine (well, NI supports it on the Receptor). PD is made by the same person who did Max/MSP; if you use both, you know how similar they are, despite the free/nonfree bullshit. LilyPond is the best music typesetter out there; if you think it's difficult, you've never tried Score or other professional stuff (I mean, not Finale/Sibelius). MuseScore is a healthy project and I'm quite sure about its success. LinuxSampler reads GigaSampler files... Jack is better than ReWire (I don't know why you put it together with the other plug-in formats...). I generally agree with what you say, and I use Mac and Windows 99% of the time as a professional musician, but the situation under Linux is not at all so dramatic; it's the wrong attitude and a blind ideology that kills Linux. Let's be precise and not make the same mistakes Linux fanatics do!

The GIMP sucks even as a basic image editor, on its own, without comparisons to Photoshop. Every time it's brought up on Slashdot, the response is that most home users don't need CMYK or the advanced features of Photoshop, which may be true. BUT has anyone even used the GIMP? I mean really used it? It's slow, the windows always load in the wrong place at the wrong size, and functions that are lightning fast in basic Windows programs, like color correction, levels adjustment, and auto-correct, are slow. At this point it doesn't even compete with the latest IrfanView, a Windows-only app (FREE of course) which I couldn't live without.

Not to mention Paint.NET totally trashed GIMP after six months of development.

UI-wise, yes. Actually pretty much anything would totally trash Gimp UI after six months of development.

Unfortunately, Paint.NET doesn't have all the functionality that Gimp does. Basic stuff like a proper grid with snap-to-grid is missing, among other things. And Paint.NET only supports 24-bit RGB, so it's not a Photoshop killer either.

"Oh yes, the magical Last Known Good Configuration. Are you aware that it only works if windows\system32\config contains some files? Even recovery mode from CD fails. The failure I am referring to leaves that directory completely blank. Nothing: no SAM, no registries, no backups (which are also stored in config). Basically nada. It lost the complete goddamn directory contents."

An extremely unlikely event, and one from which there are still numerous ways to recover.

Linux Hypocrite caught using "windows" on a daily basis to do accounting:

http://aplawrence.com/Microsoft/cutepdf.html

While his wife complains about her "slow and troublesome" XP. You know, the job he's "supposed" to be good at... fixing computers?

I'll bet his wife keeps XP out of spite. Dealing with these people on the Internet is bad enough, can you imagine waking up next to one every day? The constant preaching, nagging, belittling? Also, imagine how his wife feels knowing that her husband could probably fix her problems in an hour or two but refuses because his hatred of Microsoft is stronger than his respect for her.