As we now all know, Oracle is planning their own Linux. This is causing general
hand-wringing and dire predictions of doom. For example, according
to iTnews.com (link dead, sorry):

Oracle's Unbreakable Linux 2.0 likely won't fragment the Linux distribution business out of the gate, but the long-term impact on Red Hat, Novell and the operating system market remains unclear, industry observers say.

Market fragmentation? The Linux market is already fragmented and has
been for some time. Sure, the big boys have the lion's share, but
even if you ignore all the little distros the fragmentation is no
worse than that which supposedly plagued the kindred Unix market. If
fragmentation was dangerous for Unix, it's just as dangerous for
Linux, and Oracle's move doesn't change anything significantly.

What I think is being missed here is the importance of
virtualisation. The iTnews.com article does mention that:

"ISVs need to control their application destiny as virtualisation accelerates. They have to control what OS components are installed with their application in the virtual container because of efficiency and supportability," said one high-tech venture capitalist, who declined to be named. "ISVs need to control their own stack, and it becomes increasingly important as virtualisation comes to fruition."

However, I think they (and a lot of other people) aren't
seeing the long-term picture. The pressure toward VMs comes
from all sides: application vendors want them so they can control
their environment, and users want them for security, for
hardware deployment flexibility, and to cut hardware costs. There's
no stopping it: the VM world is coming.

Right now, that world is VMware or Xen on Linux or Windows, or
Microsoft's Virtual Server. The virtualisation is separate from
the OS. Tightly coupled in the kernel, sure, but the point
is that the host OS flavor still matters: you are
running the other VMs under Windows or Linux. That is
going to change: the real OS (the controlling OS) will be a
hypervisor like VMware's ESX Server: a smallish, very tight,
special-purpose OS whose only role is to provide the framework
for virtualising the OSes that users interact with.

That's why Linux, Windows, and Mac OS X are going to
become unimportant. Not irrelevant, but more and more
focused on running specific software inside VM containers: in
other words, appliances. From an evolutionary perspective,
consider this a major change of environment. The flora and
fauna will adapt, losing features they no longer need.

Who has the most to gain here? Why, Linux, of course. Not
Oracle Linux, not Red Hat Linux, not Suse, not any specific
vendor, but generic Linux, modifiable Linux, malleable Linux.
If anything, Linux becomes more fragmented in this environment:
a Linux distro for every app isn't hard to imagine. But they
won't be distros in the traditional sense. Sure, the "vendors"
will release their mods back into the mainstream, but a lot
of those will be so specialized to their needs that no one will
care: they'll never be part of the mainstream kernel. In
fact, that mainstream kernel becomes almost totally unimportant.
It remains the starting point from which apps build their
container appliances, but that's all: by itself it will be
unimportant.

There will be holdouts, of course: people so tied to
Microsoft habits that they can't move. But they'll virtualise
too; they'll have to. The only difference is that they'll be
paying a Microsoft tax, so they'll have an incentive to lose
that dependence. Eventually they'll move to generic Linux
too.

I hear you: "But that makes Linux even MORE important!", you
protest. OK, yes, you can look at it that way. But remember:
it's not Red Hat, not Suse, and not anyone else
making a general-purpose Linux. That environment is going
to go away. Linux as a general-purpose OS won't matter. Linux
as part of a container application will matter.

In a larger sense, operating systems never have been important.
It's apps that matter, it's apps that we use. The OS is just
the scaffolding that lets our apps run. The VM world just
reinforces that reality and makes it more obvious to us.

So iTnews is wrong, I think: the long-term effect
on the operating system market isn't unclear. The OS market
is going to be destroyed outright. The operating systems
themselves will die if they cannot adapt to the new environment,
and Linux obviously has the most adaptability here. Remember,
though, that still means the Linux OS market dies, or more accurately
is subsumed by the application markets. Linux won't be
important to computer users, only to the application vendors
who use it to power their wares.

Of course, I could be completely wrong. If you think so, I'd love to
hear why.

"This doesn't solve all the problems of virtualisation - and there are many, including legacy hardware that will never run Linux and legacy applications that will never run on Linux. But this doesn't actually matter. In the short run they'll get excluded from virtualisation and in the long run, they cease to exist."

But their take is that there will be a standardised Linux that everyone will
run VMs on. I don't see that; I see app vendors tweaking Linux more
and more toward their specific needs, tearing out anything and everything
they don't need.

Wed Nov 1 02:58:16 2006: 2573 drag

This stuff opens up all sorts of interesting possibilities.

For instance, look at Linuxbios, an effort to strip the traditional x86 BIOS off of motherboards and replace it with a Linux kernel.

It's commonly used in massive cluster projects where having flexible and configurable hardware is paramount. It also reduces boot time considerably, among other benefits.

So it's possible not only to provide a Xen/Linux environment, but to embed it directly into your hardware. No royalties, no extra fees, nothing else like that is required. It's actually cheaper financially than a traditional BIOS.

Of course, a traditional BIOS is still necessary for backward compatibility, but plenty of open source BIOS implementations exist for emulators and such, so that shouldn't be a problem in a VM environment.

That gets you many benefits. One of the big traditional criticisms of x86 commodity hardware versus big iron is the lack of low-level hardware monitoring: all those subsystems designed to test and retest hardware as it's being used, watch your system for any issues that come up, and alert you before they become big problems.

For instance, where I work we have a mainframe. If the hardware detects a problem, the stupid thing will actually dial IBM tech support in Texas by itself and send them a bug report.

Then, probably a half hour or so later, IBM tech support will call whoever is supposed to be in charge here at that time of day, and they can discuss what to do about the problem and arrange downtime for the local IBM contractor to come in and replace the questionable piece of hardware.

I've only seen that happen once: the battery that maintained the RAM cache for the disk I/O controller was giving a low voltage reading, so it needed to be replaced.

(Of course, in that system it's all hardware. But we are still running an older operating system hosted on a newer operating system in a VM, and it all communicates fine with the legacy hardware we have for tapes and such. Recently we got rid of some tape drives that had been in production since the mid-80s.)

OK, so you can't do something like that with a modern x86 server motherboard.

But you could imagine that if manufacturers shipped Linuxbios on their boards, they could include enough flash RAM (probably about 10 megs or so) to provide almost the same level of service.

Replace expensive RAID controllers with Linux software RAID, dedicate a CPU core from one of the multicore processors to handle that plus network I/O for the operating systems hosted in the VM, then use logical volume management.

To the end user it's just plug in the drives and go. Maybe even give them a web interface to control it.
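A web interface like that would mostly be generating the same commands an admin would type by hand. A toy sketch of that idea in Python, which only builds and prints the mdadm/LVM command lines rather than running them (the device names, array name, and volume group name are invented for illustration):

```python
# Toy command builder of the sort a storage web UI might use behind the
# scenes. Nothing here executes; it just assembles the command strings.
def raid10_commands(drives, vg="vmpool"):
    """Return shell commands to build a RAID 10 array over `drives`,
    then layer LVM on top so space can be carved up per guest OS."""
    md = "/dev/md0"  # hypothetical array device name
    return [
        # Assemble the drives into one RAID 10 array.
        f"mdadm --create {md} --level=10 --raid-devices={len(drives)} "
        + " ".join(drives),
        f"pvcreate {md}",       # make the array an LVM physical volume
        f"vgcreate {vg} {md}",  # pool it into a volume group
    ]

for cmd in raid10_commands(["/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde"]):
    print(cmd)
```

From there, one `lvcreate` per hosted OS would carve the pool into per-VM volumes.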

So imagine this:

Say you have a cluster of server machines running various services and operating environments: a legacy Windows machine here and there, some old SCO system running ancient accounting software somebody lost the source code to years ago, a few Linux machines, an Oracle database or a couple of MySQL instances floating around. Maybe even some 'Mosquito secure execution environment' things banging around inside the various legacy systems to allow for remote management.
www.ephemeralsecurity.com/mosref/ (link dead, sorry)
whatever.

So you buy a new server. No 'operating system', at least not in the conventional sense: it has a firmware hypervisor, or Xen with Linuxbios. Otherwise it's just a box stuffed full of memory, drives, and CPU cores, with a couple of network interfaces. You plug it into the wall, plug it into the network, it grabs a DHCP lease, and you open your web browser to it.

You set up your RAID environment how you like it, say a nice RAID 10 array. Divide it up how you want to support the legacy operating systems, and for the database systems you add disk space to the existing clustered file systems: maybe Lustre or GFS or Oracle's OCFS2 or ZFS, whatever you're using.

Then you add the new box's CPU and RAM resources to the system pool (maybe using something like OpenMosix or Kerrighed). Now your system is part of a full-fledged server cluster environment with shared RAM, disk, and CPU resources, everything load balancing. (link)(link)
or something like that.

Built-in hardware monitoring that will be happy to call your cell phone and text you about any problems that crop up.
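The alerting itself is mostly simple glue logic around threshold checks. A minimal sketch of that decision step in Python, with sensor names and limits invented for illustration (a real monitor would read them from SMART, IPMI, or lm-sensors):

```python
# Toy threshold check of the sort a built-in hardware monitor might run.
# Sensor names and limits are made up for this example.
THRESHOLDS = {
    "cpu_temp_c": 85.0,     # degrees Celsius; alarm when HIGH
    "raid_battery_v": 3.0,  # volts; alarm when LOW (replace the battery)
    "fan_rpm": 1000.0,      # minimum acceptable fan speed; alarm when LOW
}

def check_readings(readings):
    """Return a list of alert strings for any out-of-range reading."""
    alerts = []
    for sensor, value in readings.items():
        limit = THRESHOLDS.get(sensor)
        if limit is None:
            continue  # unknown sensor; a real monitor might log this
        if sensor == "cpu_temp_c":
            if value > limit:
                alerts.append(f"{sensor}={value} exceeds {limit}")
        elif value < limit:
            alerts.append(f"{sensor}={value} below {limit}")
    return alerts
```

For example, `check_readings({"raid_battery_v": 2.4, "cpu_temp_c": 60.0})` flags only the low battery voltage, the same kind of failure as in the mainframe story above; the monitor would then hand the alert strings to whatever sends the text message.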

This sort of thing isn't even really limited by CPU architecture. For instance, with IBM's OpenPower you can get an optional firmware-based hypervisor for your system to manage multiple partitions (IBM parlance for a slice of CPU time and memory). Even though it's a POWER processor environment, you can add support for Windows by purchasing a Xeon processor card that plugs into a PCI slot.

It's amazing what the possibilities are.

Wed Nov 1 07:45:58 2006: 2574 TonyLawrence

Yep, it sure is amazing. Going to be an interesting decade, isn't it? Thanks as always for the quality comment, Drag: your insights always add a lot.

As I do, he thinks app vendors will provide their own specialized OSes designed to run under VMs.

Quote: "[The situation] has gotten turned on its ear where the application is in charge of picking the best operating system. "

Fri Nov 17 02:07:46 2006: 2625 drag

Oh ya.

VMware sees this as very important for them. It's how they are going to stay ahead of threats like Xen or Microsoft's Virtual Server (can't Microsoft come up with a name that isn't incredibly freaking generic??).

There are a couple of other interesting things.

rPath is a Linux virtual-appliance builder. (link)
They have an automated system, 'rBuilder', for whipping up customers' custom VM appliances. They also have a somewhat amusing Flash animation explaining all of it. They support a variety of VM systems, I believe, but the obvious choice for that sort of thing is VMware.

Also, it appears the Linuxbios project has attracted more attention. (link)

Apparently Google has decided to donate enough money to Linuxbios that they can now do widespread, automated testing of their BIOS replacement on a wide variety of hardware, which isn't something they've been able to do before.