Posted by timothy on Thursday May 14, 2009 @07:47PM
from the which-layer-does-what dept.

jhfry writes "In an interesting development by an unexpected source, Phoenix Technologies is releasing a Linux-based, virtualization-enabled, BIOS-based OS for computers. They implemented a full Linux distro right on the BIOS chips, and by using integrated virtualization technology, it 'allows PCs and laptops to hot-switch between the main operating system, such as Windows, and the HyperSpace environment.' So, essentially, they are 'trying to create a new market using the ideas of a fast-booting, safe platform that people can work in, but remain outside of Windows.'"

The Paranoid Conspiracist in me says: "This is an essential step for the trusted computing platform, where a government or corporate owned rootkit could exist on your computer, with little to no ability to be replaced or removed by the owner of the machine."

He basically makes the argument that TPM is a dual-use technology: it can be used for good or evil. Problem is, the evil uses could easily be disabled without impairing the good uses... but that hasn't happened.

"Remote attestation" is for DRM, plain and simple. It's evil. There is no reason I'd want my computer to produce a report of what software I'm running without giving me the ability to change that report before it's sent out. That feature is useless for me as a user; it's only useful to third parties that want to restrict the software I'm allowed to run (e.g. by refusing to send me a video stream unless they know I'm using their preferred player).

If they removed remote attestation from the TPM spec, or simply put a switch on the side of the computer so the owner could forge attestations whenever he felt like it, it wouldn't be evil. So the question is, if Trusted Computing is such a boon for users, why does it still have features that only serve to undermine those very users?
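To make the objection concrete, here's a rough Python sketch of the verifier's side of remote attestation (the digests and names are made up for illustration; a real TPM signs its report with a key the owner never controls, which is exactly the point being argued):

```python
import hashlib

# Hypothetical allowlist: digests of the only software the remote party accepts.
APPROVED = {hashlib.sha256(b"their-preferred-player-v1").hexdigest()}

def attest(software: bytes) -> str:
    # What the TPM effectively produces: a report (digest) of what's running,
    # signed by hardware so the owner can't edit it before it's sent out.
    return hashlib.sha256(software).hexdigest()

def verifier_accepts(report: str) -> bool:
    # The streaming server's side: refuse service unless the report matches.
    return report in APPROVED

print(verifier_accepts(attest(b"their-preferred-player-v1")))  # True
print(verifier_accepts(attest(b"some-other-player")))          # False
```

The "switch on the side of the computer" proposal amounts to letting the owner substitute an arbitrary value for the `attest()` output.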

> So the question is, if Trusted Computing is such a boon for users, why does it still have features that only serve to undermine those very users?

Or you might consider a slightly bigger world than your basement and uses for computers besides downloading porn and playing WoW. Remote attestation might not be something you care for, but if you were designing an ATM system you might feel differently about the ability to know, with a pretty high confidence, that the remote terminals are uncorrupted.

You are stuck on the idea that it is YOUR computer and that will always be so, that the person in front of the display owns the machine. But that just isn't true in a great many scenarios. I'd really like a system that allowed me to know if one of the workstations around here had been compromised. All of our machines are 'mine' in the sense I'm the one responsible for them, the employees sitting in front of em just use em.

Even remote attestation can be used for either good or evil. The key is to resist when big media tries to use it for evil. And it's evil because the machines aren't TimeWarner's yet they want to assert ownership over them just because they are displaying their precious IP.

Remote attestation might not be something you care for, but if you were designing an ATM system you might feel differently about the ability to know, with a pretty high confidence, that the remote terminals are uncorrupted.

Fair enough. But if I were designing an ATM (or a kiosk, or any other public-facing terminal where remote attestation might have a legitimate use), I could put whatever additional hardware in there I wanted. I'm already adding a keypad, card reader, touch screen, etc. so why not one more thing?

Remote attestation isn't something that needs to be built into the average PC. On a typical user's desktop, remote attestation doesn't really have any legitimate uses, only evil ones.

I'd really like a system that allowed me to know if one of the workstations around here had been compromised. All of our machines are 'mine' in the sense I'm the one responsible for them, the employees sitting in front of em just use em.

If those workstations came with a switch on the side for forging attestations, and you didn't want users doing that, you could simply disable the switch. Just like you can already disable CD-ROM drives, USB ports, or whatever else users might use to compromise the workstations.

Remote attestation isn't something that needs to be built into the average PC. On a typical user's desktop, remote attestation doesn't really have any legitimate uses, only evil ones.

As a system administrator, I disagree in the strongest possible terms. I'd love to be able to have the domain clients here restricted to an authorized software list. I could let users install things they needed or wanted instead of having to do everything for them, but I could restrict the list of available code to things I'd verified were safe and wouldn't cause system issues, security problems, etc. It'd also offer significant protection against resident malware. It'd be great.

Even being able to detect when a machine had unauthorized software on it would be a huge plus.

The parent poster's point is an excellent one - often the user of the computer isn't the owner, and/or isn't the person responsible for managing and maintaining it. In these cases remote attestation becomes highly attractive.

As a system administrator, I disagree in the strongest possible terms [...] often the user of the computer isn't the owner, and/or isn't the person responsible for managing and maintaining it. In these cases remote attestation becomes highly attractive.

Hi, and thanks for reading the first two paragraphs of my comment!

Since you're a system administrator, I'd like to extend a special offer to you: click here [slashdot.org] to read the final paragraph of my comment, absolutely free! I think you'll find it specifically addresses your concerns.

You're both right, which is why the parent's point is valid. I administer a large number of workstations and would love the capacity to know what's running on them, to recognize whether they're compromised; but on my home computer, I still don't want Big Media spying on me. Somebody owns every computer out there, and that somebody should have the right to determine what kind of data about the computer's operation, and about what it is being used for, is being shipped out, and to whom.

I'm assuming here that you're some sort of administrator or something. Based on that assumption I offer this perspective: Your job only exists to enable them to do theirs. You're a meta-worker, they're the workers. Certainly there is some allowance for pride in your work in that it's "your" network or "your" computers, but you're really only there to enable them. Without them, you wouldn't be necessary. As long as you keep that in mind, everyone benefits.

I'm still listening to that darned episode, but they've only been babbling about SSL certificates and other items in their listeners' mailbag.

My point was that the OS in BIOS was an essential component, as the TPM is also. I never tried to say that TPM == trusted computing, rather that it is just a component of it. Hardware virtualization is also an essential component (it's also dual use, and I run virtual machines very frequently). A builtin hypervisor (or rootkit, depending on your point of view) is another essential component.

It's a tool, and can be used for good or ill. I actively build/buy servers and laptops with TPM functionality because it allows me to enable encryption with BitLocker, save the recovery key someplace secure (safe deposit box), and from there on out, the encryption is completely forgotten about. On laptops, I enable the PIN functionality, so an intruder would need the tech of a chip fab to coax out the information needed to grab the HD contents. Even though TPM chips are not hardened against physical attack, few thieves outside of intel agencies have the tech to rip open a chip's package and attach probes to the chip's microscopic pads.

Either way, servers can reboot unattended while the data is encrypted, and laptops are protected against brute force password attacks. If an intruder tries to repeatedly guess a PIN, the TPM will just keep forcing longer and longer delays, if not permanently locking.
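The anti-hammering behavior amounts to something like this sketch (the base delay and cap here are invented; the actual lockout policy is vendor-specific):

```python
def lockout_delay(failed_attempts: int, base: float = 2.0, cap: float = 86400.0) -> float:
    # Delay (seconds) before the TPM accepts another PIN guess.
    # Doubles per failed attempt, capped at a day in this sketch.
    return min(base * (2 ** failed_attempts), cap)

for n in (0, 1, 5, 10, 20):
    print(n, lockout_delay(n))
```

Even a short PIN becomes impractical to brute-force when each wrong guess makes the next one slower.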

All a TPM is, is a cryptographic token that is on the hardware, with two pieces of additional functionality: The ability to validate that the MBR and booting parts of the hard disk have not been tampered with, and remote attestation.

The ability to check for tampering is important because in theory, someone can put a keylogger on the boot sector, then pass the info onto the real preboot authentication system (PGP or TrueCrypt) while saving the boot passphrase for an attacker in some safe area. If someone tries to tamper with the BitLocker subsystem, the TPM won't allow the machine to boot and it will be obvious that something is fishy.
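The tamper check boils down to a hash chain. Here's a minimal Python model of the TPM 1.2 PCR extend operation (the boot-stage names are invented; a real TPM seals the disk key to the expected PCR value, so a modified boot chain simply can't unseal it):

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    # TPM 1.2 PCR extend: new PCR = SHA1(old PCR || SHA1(measured code))
    return hashlib.sha1(pcr + hashlib.sha1(measurement).digest()).digest()

def measure_boot(stages):
    pcr = b"\x00" * 20  # PCRs reset to all zeros at power-on
    for blob in stages:
        pcr = extend(pcr, blob)
    return pcr

clean    = measure_boot([b"MBR", b"bootloader", b"bitlocker-stub"])
tampered = measure_boot([b"MBR+keylogger", b"bootloader", b"bitlocker-stub"])
print(clean != tampered)  # True: any change anywhere in the chain shows up
```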

Remote attestation is controversial, but you don't have to turn it on in BIOS. Same with Intel's vPro stuff.

Finally, by the TPM spec, all TPM chips are shipped turned off and disabled by default, so a software maker can't depend on one for DRM reasons.

In the fourth case, the core security software grabs input and output from the network and disk to check the data for security threats. In that case, "you won't even really know you are using hyperspace," Hobbs says.

BIOS flash memory is simply mass storage that, just like the hard disk, retains its contents when switched off.

They didn't talk about it in the article but I'd be surprised if there wasn't some way to recover if it gets corrupted, either deliberately (virus), or accidentally (buggy software). Maybe protected memory that does initial boot or provides a re-flashing mechanism.

If there's no such hardware protected method for recovery then yes, root kit hell.

> So is this fundamentally different from Asus putting SplashTop on some of their netbooks and motherboards?

Very different. What Phoenix is doing is pushing Windows into a VM, permanently. The machine boots Linux from the BIOS and loads Windows into a VM container in the background while you have a basic Linux desktop to browse the web, read email, etc. You can flip between Windows and Linux with a hotkey. But Windows stays in the VM. This offers a hope of eventually containing the menace from Redmond. The question is whether Phoenix will want to go there.

Imagine a real firewall dropped between the virtual NIC in Windows and the real one. Even better, just forget the network in Windows for most uses, use the Firefox on the 'other' more safe system that is a hotkey away. Push this tech a bit more and have seamless Windows(tm) windows running rootless on the X side. Now we don't even need to worry about two different displays. Basically, this tech offers the potential to blur the line between Windows and a real Internet ready system in ways impossible to predict. This could erase enough of Windows' defects to keep it viable or it could remove enough of the reasons to run Windows it hurts it. But Pandora's box is open and it will be interesting.

You are asking the wrong question. Try "What is the point of running Windows this way?" Phoenix isn't trying to push "The Year of Linux on the Desktop" here.

> you can do this now: run Windows from a VM under an ordinary Linux distro.

In theory at least. What they hope is different is that it is Phoenix doing it. They think they have the power to establish a standard here. If they succeed in pushing Windows on a large percentage of desktops into a secure sandbox...

Once Windows is virtualized, it loses nearly all the security gripes I have against it. When you don't need to run antivirus on it, or any of the security updates, it boots and runs quickly. Here's how I imagine it: all user docs are kept outside the VM, and the VM is destroyed and replaced periodically with a standard image (every day, every hour, or at the whim of the user).

Imagine that, a mere 10 years after LinuxBIOS (now coreboot) first provided a full Linux system on the BIOS (with near-instant booting into the OS of your choice), Phoenix gives us this remarkable invention (complete with the standard idiotic fawning by Rob Enderle).

If you've got a web browser in your BIOS, you will probably want to update it one day. Like when there is a critical security fix to prevent site A from grabbing your private data from site B - something that will not be addressed by not mounting your hard drive. A flash drive with a physically adjustable write-protect knob would do nicely. Otherwise you would just have to stop using this feature after a couple of years, or after the first hardware upgrade, and set up your own dual boot.

... would be much better, and a lot safer. I would prefer it if BIOS writers just left the BIOS as is but allowed users to simply choose which drive to boot from, so no dual booting is required.

So you are able to disconnect/switch the other disc electronically via some solid-state mechanism, rather than having to go into the BIOS, using jumpers and dinking around with settings. You should just be able to change the channels and choose which drive to boot from externally: no virtualization software, no dinking around.

I hope that someone who is more familiar with this will fill in the details, but as I recall one of IBM's mainframes did this back in the 1970s. Basically, every user who logged onto the system got their own virtualized private OS.

DOS was a BIOS based OS. It passed a large number of its calls directly to the BIOS. We all know how well that worked out.

That said, I would rather have a read-only, default, fallback, usable OS in the system firmware. You know, something that could be used for:

OS installation.

Basic networking.

Backup and recovery operations.

Performing basic system utilities.

The PC is one of the few platforms where the hardware is actually useless to the end user without an installed operating system. Reflashable BIOSes further compound the problem by allowing a software command to render the hardware unbootable and unrecoverable (that is, unless you happen to have a flash programmer and another computer lying around...). The PC has perhaps the worst architecture and implementation of any major platform, and it's about time they did something to fix that.

In fact, with the falling prices of flash, why not just flash a Linux kernel into the BIOS?

A bootable, usable Linux system with BusyBox can fit into 4 MB of flash.

A 64MB flash (possibly much less) could support the above, plus MicroWindows or similar.

Why bother having a separate OS when the kernel could fit on the firmware?

Let the rest of the system - libraries, apps, configuration, etc... reside on the disk, but keep the hardware related parts (i.e. drivers, etc...) on the firmware itself.

With kernel drivers *in the hardware itself*, one would never have to worry about getting the correct driver, etc...

I have wanted that for years, but not just for basic tasks. I want everything from /boot, /bin and /sbin at the least, and possibly /etc, /usr/lib, /usr/bin or paths under those. Give it a physical lock to switch between read-only and read/write.

I have fond memories of the Acorn Archimedes machines at school that booted in seconds and ran some pretty cool full 3D graphics stuff in 1991. Even today's latest greatest PCs seem like a step backwards in some ways. They take longer to get to a state that's usable.

Let the rest of the system - libraries, apps, configuration, etc... reside on the disk, but keep the hardware related parts (i.e. drivers, etc...) on the firmware itself.

That would work for drivers for the chipset, integrated peripherals, and devices that have a class driver (e.g. USB HID, USB storage, SATA storage, SATAPI optical storage). But where would drivers for plug-in PCIe and USB devices go?

You know nothing about computers or DOS. DOS didn't have virtual memory. DOS was not a BIOS-based OS; it passed a lot of calls to the BIOS, but that can be done just fine; it's a little slower than direct access. Windows did the same, which is why it couldn't access more than 8 gigs of HDD on an old BIOS but when LBA32 showed up it magically could (i.e. Windows 98 first edition, on a non-LBA32 BIOS vs. an LBA32 BIOS).

DOS was a BIOS based OS. It passed a large number of its calls directly to the BIOS. We all know how well that worked out.

Let's just call this a gross oversimplification and be done with it, shall we?

Why bother having a separate OS when the kernel could fit on the firmware?

For security reasons. Your firmware OS might have exploitable privilege escalation bugs, so you don't want to run untrusted software under it directly, only in a protected virtual machine environment. That virtual machine environment must have its own OS, and that would be a disk-based OS which is easier (and safer) to update in the event that security holes are found. It's preferable if the whole boot environment is as near to read-only as possible, just to reduce the possibility of malicious exploit. It shouldn't even be possible to re-flash the system without physical intervention (such as changing a jumper).

With kernel drivers *in the hardware itself*, one would never have to worry about getting the correct driver, etc...

This is true for the flash-based OS and the built-in hardware, which is why you can boot into a usable system so long as enough of the hardware is integrated on the motherboard. Don't forget plug-in cards and external peripherals, though. There's no avoiding the need for those drivers, in general.

In fact, providing a web form is being generous.. they could accept requests only by dead tree.

Considering that the files are already on their site to download (but you have to jump through hoops to get to them), it seems like they are just trying to make it more difficult to get to the source code. That's lame.

Why don't they just start to work on coreboot [coreboot.org]? The piece of code shipped currently as BIOS could be so much better. There is an excellent Google Talk about coreboot's improvements [youtube.com].

There are a number of boards and chipsets that work with coreboot, but there are many more that do not.

My guess is that Phoenix is trying to jump on the it-runs-linux bandwagon, leverage a bit of the benefits of the kernel to make a shiny app, and not really contribute back to the FOSS community any more than they have to. I could be wrong here, and I'd be more than happy to have someone from Phoenix correct me, but that's what these new quick-to-boot environments sound like.

Or at least pee on it and create a wall of FUD. Their mighty and perfect OS usurped by lowly BIOS - and a BIOS running Linux. How totally non-elegant!

It's a great idea and I would actually have a reason - a real reason - to upgrade my hardware. But I can see M$ telling Dell, HP, etc.: if you want to put Windows on a BIOS-OS system... no OS discount for you!

However I would love to see the industry find a way to shove this down Ballmer's throat.

Most of these comments make me want to puke. I've worked on everything from OS and drive code to firmware/BIOS code. The one thing I've learned is that you _DON'T_ want a heavyweight BIOS/firmware. There is a certain appeal to having a system which ships with a hypervisor, and a heavyweight BIOS that can do everything from configuring your memory subsystem to allowing remote web-based console visualization. On the other hand, you have massively complicated and restricted your system. Everyone thinks that putting all this functionality on the motherboard is a good idea because you only have to flash your BIOS once in a while.

If you want an example of where a heavyweight BIOS leads, you only have to look at the EFI or OpenFirmware specifications. They are so full of technical holes and complexity that nothing works right, and in the case of EFI you have to update the drivers in the BIOS as often as you have to update them in your OS. So, instead of one driver you have two. Plus, flashing cards or upgrading firmware drivers is _NEVER_ as easy as installing a new OS driver. There is always some technical or human factor that kicks you in the rear.

I've had this discussion with other people in the field, and basically aside from the zealots a lot of other people agree. The core concept of the PC BIOS is really close to the ultimate design. Of course it's 25 years old, so it's gained a lot of cruft and bugs, but if you were to start over, the goal should be a modern version of the BIOS rather than some embedded OS, hypervisor, etc.

What you want is a fairly lightweight bootstrap and POST utility to get the machine far enough along that you can fetch the hypervisor or OS from the disk. This means you have to standardize the API for functions like read sector, print text on screen, read data from keyboard, etc. You also have to provide the ability to extend or override those functions from a firmware blob sitting on a SAS adapter or video card.
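As an illustration of that kind of standardized, overridable service API (purely hypothetical names, sketched in Python rather than real firmware code):

```python
# Hypothetical firmware service table: the BIOS publishes a handful of
# standardized primitives, and a firmware blob on an add-in card may
# override them with its own implementations.
services = {
    "read_sector": lambda lba: b"\x00" * 512,  # stub motherboard default
    "print_text":  lambda s: None,
    "read_key":    lambda: "x",
}

def install_option_rom(overrides):
    # e.g. a SAS adapter hooking the disk primitive with its own routine
    services.update(overrides)

# A SAS controller replaces the sector-read primitive; callers don't change.
install_option_rom({"read_sector": lambda lba: b"\xaa" * 512})
boot_block = services["read_sector"](0)
```

The OS loader only ever calls the named entry points, so it neither knows nor cares whose code answers.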

This is not an argument against service processors (an entirely separate topic that people often get confused about), but rather an argument that I don't want my motherboard to try to standardize a hypervisor or OS. I want that decision left up to me. Generally the poor dumb customer doesn't want it either; they just want a machine that runs Windows, Linux, OS X or whatever, if they are even that detailed. The OS-in-the-firmware people forget that Firefox has been sending me weekly (daily?) patches lately, and it's likely that over a few years' timeframe the later versions of FF won't even run on some older firmware-restricted OS without the original vendor providing upgrades. This puts the motherboard vendor in the position of being the support infrastructure for the _WHOLE_ computer. Something I'm sure the majority of them are unable to provide, even though they may have a couple people who can port coreboot/linux/etc to run on their hardware.

HyperSpace is an extremely fast-booting (approx. 4 seconds) Linux-based mini OS. It is available in two flavors. On PCs without Intel's VT extensions it is just a fast-booting OS, but you can only dual boot it.

On PCs with VT, the BIOS loads a hypervisor which then boots both HyperSpace and Windows. (It may defer starting Windows until HyperSpace has loaded.) The result is that within four seconds you can begin using the computer, doing things like browsing the web while Windows boots. Once Windows is up, users can instantly switch back and forth.

In theory there should be little reason why other OSes could not be used instead of Windows, although the system may be installing special drivers in Windows to help mitigate some issues.

Not really; all decent systems have two separate BIOS flash areas and will only update the second one after a successful startup from the primary. Heck, some systems have that AND a minimal BIOS in ROM so they can always recover even if the flash is scrambled (HP workstations and servers do this: stick a floppy in and hit a special key during powerup, or flip a DIP switch, and they will read the flash file from the floppy and write it to BIOS flash).

Sure it's a good thing. That's the environment I'll boot into to fix the boot block of Windows or Linux, whenever they become unbootable. Hope it has room for fsck, mkfs, a partitioner, and most of the common filesystem types.

Even the absolute worst flash memory can be written hundreds of times without any issues.

At a reasonable update schedule of once a month, that would be no less than 10 years. You would almost certainly be able to update once a week for 3-4 years. And this is the worst case... I would be surprised if you would really even want to use the computer anymore (due to performance issues) by the time the flash wore out 15-20 years down the road.
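The arithmetic behind those figures, assuming a pessimistic 200 erase cycles for the "hundreds of times" worst case:

```python
# Back-of-envelope flash endurance check (200 cycles is an assumed worst case).
cycles = 200

monthly_years = cycles / 12   # one reflash per month
weekly_years  = cycles / 52   # one reflash per week

print(round(monthly_years, 1))  # 16.7 -> comfortably over 10 years
print(round(weekly_years, 1))   # 3.8  -> the 3-4 year figure above
```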

It's not flash. EEPROMs can be rewritten millions of times, so it would take decades of continuously flashing your BIOS before you hit the limit. Not only is that absolutely pointless, but other components would probably fail before the EEPROM wore out.

I had most of this in the 70s. It was called the Tandy Model I, and the entire OS was on a chip. There were never any driver problems because you couldn't install drivers. It was instant on (and by instant I mean faster than the CRT/TV it was connected to).

I'll forgive your lack of experience on this matter but I have to answer your implication that driver absence is a Linux problem.

There is a problem with manufacturers who decide to keep their hardware specs secret and so make it difficult to have device driver support under Linux. It is true. It is a lot less common, but still true.

But this is not a problem that is exclusive to Linux. There are many devices that are older and will never have support for WindowsXP or Vista or Windows 7. The devices are considered old and outdated by these same manufacturers and do not want people using them any longer and so they don't pay to have people write drivers for more current versions of Windows. It happens. This problem also happens with Mac OS X. Recently, I upgraded my wife's machine to OS X 10.5.x and her Canon scanner does not and will not have drivers for 10.5.x even though 10.4.x and prior are still supported. All I could get were weak apologies from support but there is no intention to change from their position. They recommended that I buy some software from a 3rd party that costs twice what the scanner costs today in stores. (It is pretty weak that they actually display the MacOSX compatible logo on the package and it is no longer completely true...)

My point is that when drivers are not open sourced and/or the hardware specs are not openly available, your hardware is limited by the willingness of the hardware maker to support it. This is true of Windows, Mac OSX and Linux alike. This is NOT a Linux problem. It is a Manufacturer-with-their-heads-up-their-asses problem.

A driver missing on an OS isn't the OS developers' fault, but it is their problem. There is a difference. They're not responsible for making the drivers, so it's not their fault. Users still don't want to use an OS where they can't use their electronics, though, so it is a problem for the OS developers.

The solution to that problem may be intractable in some cases (a manufacturer refuses to divulge drivers under any circumstances, and no-one is willing to put in the effort to reverse engineer). However, Linux has done remarkably well, and things are only getting better driver-side.

But you're right, it's not a Linux-exclusive problem. My current printer doesn't work with my Mac, and older equipment may not work with newer versions of Windows.

That point of view is all well and good if you don't aim to improve marketshare of your OS. If you want people to actually use your OS, then yes, it becomes your problem. You simply can't expect users to jump through hoops in order to be able to use your OS.

How many FOSS drivers must I mention before you admit Linux does have a problem?

More specifically: how many FOSS drivers *which are not maintained in the kernel tree* must I list?

1. MTP008 temperature sensor was removed from 2.6 (was in 2.4).
2. Peracomm USB ethernet (stopped working while in the kernel tree).
3. DIB0700 (and many, many other) based DVB cards - the manufacturer helped make the driver but it still (after over 3 years, in 8.10) is not up-to-date/maintained in the kernel tree.
4. Numerous WiFi cards, some of which partially work and some not.
5. Webcams (gspca).

Need I go on?

6. EeePCs... most came with Linux, most drivers still do not work even in 8.10.

Nobody claims this is exclusive to Linux, it is just a lot more pronounced in Linux.

My point is that even when drivers are FOSS and the manufacturer has willingness Linux *users* can and do have problems.

I leave it as an exercise to the reader to find out why and who is to blame.

My experience over the past 5 years has been that Linux has much better driver support than Windows. Most of the time when I plug something into Linux, it just works. When I plug something into Windows, it will work if I have the driver disk but fail otherwise.

Latest example is a webcam that I pulled out of my spare parts box for a project. Windows demanded the driver disk (which I didn't have) and couldn't find anything when I told it to go searching on the web. Ubuntu recognized it immediately and the driver was already on the system... instant joy. Gave up on Windows... another reason to delete Windows on my last remaining Windows computer.

I also hear lots of stories about WiFi not working but I have installed Linux on about 15 laptops (internal and external WiFi adapters) over the past few years and WiFi has "just worked" on all of them.

Which is why our landfills are filling up with e-waste faster than they should be. Great example of attitudes in a disposable society.

I'm all for progress and new technology, but why discard something because it just needs a new set of drivers? The reason why manufacturers can get away with this crap is because people don't get pissed off enough and light up their call centers with complaints.

If the manufacturers would release the damn specs, the geeks would write the drivers for them, and those drivers would get included with every distribution by default.

While that is an interesting argument, there are a few fundamental problems that bother me:

a) The incentive of manufacturers to release said specifications is low. Regardless of money made on the acquisition of a wider user base (often through more hardware sales), such specifications create issues for intellectual property and often serve as an opportunity for any competing manufacturers to digest a well-prepared buffet of the inner workings of hardware and the software that supports it.

b) The incentive of said 'geek' to actually sit down and not only write but actively maintain said drivers is based on demand and free time. This leads to the parent post "now you see it, now you don't" support syndrome.

c) The incentive of a manufacturer to release quality specifications is next to non-existent. In many cases, only the most determined OSS master-mind is capable of both understanding what are often meant as 'internal use only' documents and actually creating a driver. While I have little doubt such people exist, there is only so much time, sweat, blood, and tears that many people are willing to give for results.

Note that I actively contribute to the open source community and use Linux on a regular basis. That said, I don't believe manufacturers are (entirely) to blame.

This brings up an important point. There is plenty of incentive for someone to write a web server, a database manager, an OS kernel or thousands of other generic bits of software. There is almost NO incentive for someone to write a driver for an obsolete device. The former can be a source of consulting and employment. The latter can actually work against you.

I mean, if a hardware manufacturer finds out you like to write device drivers for obsolete hardware, they're not going to be pleased. All those people keeping their old printers will prevent the manufacturer from profiting by making new ones. And if you really get creative by making the hardware do all sorts of new tricks it never did before, they're probably going to look for some excuse to get rid of you.

They want to sell new product, not keep the old stuff going. I know it's wrong to say this, but that's how the world's economy is configured right now.

But that problem is solved by the same reason many manufacturers have ignored Linux up till now: the size of the market. The Linux market by itself just really isn't all that significant, so let them keep their old stuff. Let them hack away at the hardware, and if they come up with something so fantastic that someone will switch to Linux to do it, then all the better, because they will need to buy your product to do the hack.

The Linux market is not important for sales in the Linux market; it's important because

All your points touch the same subject: incentives. So there's really only one problem: money. Not the loss of it, but the absence of profit. The truth is that business is about relationships, and hugging papa Microsoft tightly tends to help vendors get their products onto the market. Microsoft has no interest in helping vendors that explicitly support its rivals. Call it FUD or whatever made-up internet-forum word you want, but the bottom line is that MS and its affiliates have great impact on the IT world, and if papa says no, then no it is.

a) Point (a) is a bogus concern. A specification amounts to an interface and really doesn't reveal much of anything about the internal workings of the hardware. With or without a specification, you can bet a competitor with a multi-million-dollar interest in how your hardware works will acquire that information anyway. So while selling hardware to the technically elite crowd that makes the major hardware recommendations on big-ticket accounts might not be a significant incentive to hardware manufacturers, there really is no downside either.

b) You could make that argument, except that there is no shortage of manufacturers that DO make their specs available, and the result is that Linux has dramatically superior driver support for that hardware than any other operating system. Take a system with 10-year-old hardware and load up Ubuntu on it: everything will work out of the box. The popularity issue is self-solving. If something isn't popular, it's because not many people use it or need it; if it was once popular but no longer is, then the driver will have stabilized while it was.

c) I fail to see the motivation NOT to release quality specifications. Again, specifications describe how to communicate with the hardware, not how the hardware actually works. The only reason to misrepresent a spec is that the company is doing something shady, like maladjusting drivers to gain on gaming benchmarks at the expense of overall performance. If they really want to do that, they can just release specs that declare those maladjusted configurations the optimal settings for the hardware. Problem solved. Otherwise, why wouldn't you want your hardware to perform as well as it can on a given system?

Actually, since Linux remains a tech-heavy system, it seems to me that even hardware that is being underdriven in software, perhaps to enable selling the same silicon at different price points, would be best run at full unlocked specs in the Linux driver anyway. This will give Linux users a very favorable view of the hardware. While Linux users may be a small percentage of the market, they are the geeks whose recommendations are listened to by purchasing managers and by the early adopters who spend the real bucks.

If, say, Nvidia graphics cards give screaming performance on my Linux box and ATI cards suck, and both have drivers... guess which cards I'm going to have a high opinion of and recommend to my clients?

The "Apple Tax" is more than worth it. OS X is virtually 100% secure, and it's worth paying the cost difference to ensure my stuff stays on my Mac and Time Machine drive rather than being sold off to the highest bidder from a server in Tehran.

***Sane supports most of the more common brands of scanner, provided they don't rely on funky things like parallel ports.***
No, unfortunately, it doesn't. It supports some devices well, many after a fashion, and many not at all. The list of supported devices is here: http://www3.sane-project.org/sane-supported-devices.html
I use Linux almost exclusively because a decade of supporting Windows PCs left me with a deep and abiding disgust with that once promising OS gone sour. In my experience, most peri

Even mass storage devices can be a pain these days in Windows (U3 tools, anyone?), and XP doesn't like multiple partitions on a USB stick. I had to hack the drivers to make Windows think the stick was a hard drive in order to access the second partition, even though both partitions were FAT32.
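As an aside, that two-partition limitation lives in XP's removable-media handling, not in the on-disk format: a standard MBR partition table happily describes up to four primary partitions on any device. A minimal Python sketch (toy byte values, not a real stick) that builds and parses such a table with two FAT32 entries:

```python
import struct

def parse_mbr_partitions(mbr: bytes):
    """Parse the four primary partition entries from a 512-byte MBR sector."""
    assert len(mbr) == 512 and mbr[510:512] == b"\x55\xaa", "not a valid MBR"
    parts = []
    for i in range(4):
        entry = mbr[446 + i * 16 : 446 + (i + 1) * 16]  # table starts at offset 446
        ptype = entry[4]                                 # partition type byte
        lba_start, sectors = struct.unpack("<II", entry[8:16])
        if ptype != 0:                                   # type 0x00 marks an unused slot
            parts.append({"type": ptype, "start": lba_start, "sectors": sectors})
    return parts

# Build a toy MBR describing two FAT32 (type 0x0B) partitions on one stick.
mbr = bytearray(512)
mbr[446:454] = bytes([0x80, 0, 0, 0, 0x0B, 0, 0, 0])     # entry 1: bootable, FAT32
mbr[454:462] = struct.pack("<II", 2048, 102400)          # starts at LBA 2048, 50 MiB
mbr[462:470] = bytes([0x00, 0, 0, 0, 0x0B, 0, 0, 0])     # entry 2: FAT32
mbr[470:478] = struct.pack("<II", 104448, 102400)
mbr[510:512] = b"\x55\xaa"                               # MBR boot signature

print(parse_mbr_partitions(bytes(mbr)))  # both partitions are plainly visible
```

Linux mounts both partitions without fuss for exactly this reason; the filter that hid the second one was purely a Windows driver policy.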