
Lally Singh writes "The Linux-friendly OpenSolaris Indiana has been released! A new, modern package manager and all the goodies of Solaris: ZFS, DTrace, SMF, and Xen on a LiveCD that was designed for Linux users. 'Why use the OpenSolaris OS you ask? It's pretty simple, you'll find it full of unique features like the new Image Packaging System (IPS), ZFS as the default filesystem, DTrace-enabled packages for extreme observability and performance tuning, and many many more. We think you'll be quite happy to come by and take a look!'"

The big plus of Nexenta for me is that it is based on APT, whereas OpenSolaris (the distro) has invented yet another new package system (IPS). APT just works so well on Debian and Ubuntu that I don't want to use anything else, and for end users there are nice tools like Synaptic and Ubuntu's Add/Remove tool (which shows popularity ratings for packages as well). At least PCLinuxOS adopted APT while still using RPM as the package format... My only real interest in Solaris is to use ZFS on a home NAS - having

apt-get is not perfect. In fact, you might call it "a hack." I don't think there's any real "theory" behind it. apt-get may even remove a user's kernel package, as one of the 600 traces in this study reveals: OPIUM: Optimal Package Install/Uninstall Manager

It's not Debian. Debian has had the ability to fully encrypt the root partition during installation since Sarge I think. Etch for sure. Ubuntu can do it too with the alternate installer. OpenSuse and Slackware have excellent docs on how to set up encryption of the / filesystem.
Disk Encryption is essential for laptops and removable media in 2008.
If Solaris wants to get adopted by government and financial sectors for use on laptops it will need to have some form of serious disk encryption.
To be fair to the OpenSolaris

I assert that it's too little, too late. If Solaris had been freed in the early part of the century, it might have made some headway against Linux. As it is, it'll be stripped of anything useful and portable and will be as irrelevant as HP/UX or OpenVMS for all but locked-in legacy users.

I assert that it's too little, too late. If Solaris had been freed in the early part of the century, it might have made some headway against Linux. As it is, it'll be stripped of anything useful and portable and will be as irrelevant as HP/UX or OpenVMS for all but locked-in legacy users.

This is an idiotic statement and I can't believe anyone modded you up. The source for OpenSolaris has been available for years. When will the stripping start? Where is ZFS for Linux? Where is DTrace, Zones, or any of the other cool new stuff?

Those are just some of the big items that get mentioned. Solaris' resource management and auditing tools are very impressive and I haven't seen anything comparable in Linux that can give as much control for as little overhead.

They're not there for licensing reasons. If Solaris had good enough drivers I would run it on my laptop --- but again, for licensing reasons that's not going to happen either. Both the GPL and Solaris's licence have advantages and disadvantages, but this is the reason why all free software should use compatible licences.

Code is flowing freely between FreeBSD and Solaris. FreeBSD has adopted ZFS and there's no legal reason not to port *BSD drivers to Solaris.

but this is the reason why all free software should use compatible licences

Which excludes the GPL. Linux's GPLv2 isn't even compatible with LGPLv3 due to some of the extra requirements placed on it (a problem we encountered just after moving a large library to GPLv3 and getting complaints from developers of GPL applications that include code from places like xpdf that didn't have the 'or later' clause).

Anyone here know what's so special about the Image Packaging System? I found the homepage [opensolaris.org], but it didn't really explain how it differed from traditional packaging methods. (More annoyingly, it didn't even explain that intriguing name!) A quick check of Wikipedia doesn't offer much help, either. Anyone know the scoop on this (new?) system?

Nothing. It's a piece of shit actually. Sun is all about Java so many of the tools like IPS are written in it. It eats memory like no tomorrow and performance suffers. Don't even think of running this stuff on a machine with less than 1GB of RAM. And the stuff that isn't newly written in Java is like a throwback to the early '90s: cryptic and hard to use. Sun uses a lot of GNU software but it's a big mix of bastardized custom stuff, stuff from the old Solaris, and GNU tools. It's difficult to get stuf

Sun is all about Java so many of the tools like IPS are written in it.

Except that IPS is written in Python, not Java. See the FAQ [opensolaris.org]:

"The Image Packaging System (IPS) software is a network-centric packaging system written in Python."

That much is easy enough to find. What Sun isn't saying is how this differs from existing packaging systems, i.e. the rationale for creating a new packaging system rather than adopting an existing one. And why is it called the "Image Packaging System"? Using the term "i

The high level parts of the system may be written in Python but the underlying tools it uses are Java. You can actually run some of the command line tools to save memory.

It doesn't help much but it does help. It only took 48 hours to run the updates on a fresh install on my Blade (LOL, it's ridiculously slow, using the GUI version probably would have taken a solid week to finish running).

The high level parts of the system may be written in Python but the underlying tools it uses are Java. You can actually run some of the command line tools to save memory.

You use the term "underlying", but then refer to the ability to run command-line tools directly. I think you're confused. You're probably thinking of the Sun Management Center [sun.com], a graphical tool that allows you to manage your Solaris-based system. It is based on Java, but it's also sitting ABOVE the command-line tools, not below them as you

I stand by my original statements 100% (I'm a certified SAP basis engineer on Sun equipment).

I might believe you if I wasn't a professional Software Engineer with over a decade of experience with Java and access to the IPS source code on the OpenSolaris site. Alas, however, I am a professional Software Engineer with a decade of Java experience and I can read the source code [opensolaris.org]. There is no Java visible in these tools. It's a completely Python-based system. I seriously doubt you'll find an OpenSolaris developer who will tell you otherwise.

You may believe what you're saying, but you're probably just confused. Don't worry about it. It happens to the best of us.

"Image" in the name refers to the ability of the packaging system to install to a chroot-like environment. The Distribution Constructor (what actually builds the iso) basically creates an "image" area, installs the packages to this area, compresses it, and converts it to an iso.

Apart from that, you can also create partial images, which is a space you as a normal user can install packages to. These link back to the libraries already installed.

I'm sure some of these features are available in existing Linux packaging systems. But these are things the OpenSolaris community has wanted for a long time.

Apart from these features IPS also has automatic snapshotting (using ZFS in the background), so you can revert your system back to earlier snapshots.
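For what it's worth, the day-to-day workflow looks roughly like this. This is a sketch assuming the pkg(1) tooling in the current preview builds and the beadm boot-environment tool, whose names have shifted between builds; the package name is just an example:

```shell
# Install a package into the current image; IPS pulls it from the
# configured network repository.
pkg install SUNWgnome-terminal

# Update the whole image. With ZFS as root, the updater can clone a
# snapshot into a fresh boot environment before touching anything...
pkg image-update

# ...so you can list boot environments and fall back if it misbehaves.
beadm list
```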

For someone who has been using OpenSolaris (SXCE) as a server platform for Apache, ZFS, etc. for a while now, I welcome an easy-to-upgrade Solaris with an improved userspace. I will try this one out.
Solaris has had a relatively poor userspace experience for someone used to Linux machines. The kernel is top-notch, though.

I'm still using Solaris 10 for a project I'm working on but am looking to move it to OpenSolaris before release. Project Crossbow [opensolaris.org] is one of the projects I wish were available now. It looks like the easiest way to set up virtual switches and networks, which is a great feature to use along with zones. Right now I'm using a hack I found online to do this. Crossbow is a lot easier and integrated with SMF. I haven't really had time to focus on making a management script for the hack yet. It's

The primary difficulty with OpenSolaris is that it is part of a new breed of corporate-controlled Open Source. Much as they might trumpet that it is, it isn't actually proper open source. I can't take it, rip out any bits I want and use them elsewhere. No matter what the license says, if I can't do that, it isn't 'Open', and as you point out, some bits you can't.

Also, it has hardly any developers not already on Suns payroll, and those that are independent are shackled by a lack of proper tools.

is that ZFS, despite all its goodness, lacks some incredibly basic features compared to 99% of the hardware and software RAID and LVM systems out there. You can't grow (please pay attention here) a ZFS pool except by adding similarly-redundant vdevs, and there is no way to remove a vdev from a pool, unlike LVM2.

So. Got a 4-drive RAID-Z2 array, and you want to add more space by buying another drive to add in to your 5-bay hot-swap cage? You're shit outta luck. If you have a zpool with a vdev that consists of a pair of mirrored drives, you CAN add another vdev of two drives, then another, etc. You also CAN replace the drives in a vdev with larger drives. That's kind of half-okay, but still not on par with RAID cards of a DECADE ago. Even Linux's MD can grow RAID5/6 across more devices!
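Concretely, the asymmetry looks like this (device names below are placeholders):

```shell
# ZFS: growing a pool means adding a whole new top-level vdev...
zpool add tank mirror c2t0d0 c2t1d0

# ...or replacing each member of an existing vdev with a bigger drive
# (capacity grows only after every member is replaced and resilvered):
zpool replace tank c1t0d0 c3t0d0

# What you cannot do is restripe a raidz onto one extra disk.
# Linux MD has done exactly that for RAID5/6 for a while:
mdadm --add /dev/md0 /dev/sde1
mdadm --grow /dev/md0 --raid-devices=5
```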

Someone suggested the ability to grow redundant pools by single devices, and the reaction amongst Solaris ZFS developers (!!!) was "now why would you want to do that?", and then when THAT was explained, "well shucks, I wonder how they do that" (they = almost every hardware and software RAID solution on the planet.)

Absolutely astounding that a Solaris filesystem developer would not be able to at least guess as to how a RAID5 array would be re-striped to add a new drive.

Far as I know, they've been working on the grow capability for more than a year and we have yet to see it.

It's apparently on their radar, but at a frustratingly low priority. I agree that the omission of this seemingly simple feature was a major oversight on their part. Here's a link to a blog post by one of the developers at Sun:

I'm not sure if this is the case, but I got the impression that RAID-Z isn't the way they'd like you to use ZFS, because you'd get better reliability and performance from just adding multiple mirrored sets to the pool. You can add multiple RAID-Z sets to a pool and that will give you better performance than adding one big RAID-Z. I can't find the link but there was a blog posting comparing IOPS in different setups and the recommendation was to use a max of 4-5 drives per RAID-Z vdev. I haven't played around

This is not exactly true. No matter what your pool config is, you can always grow it by adding any sort of top-level vdev to it. For example, if you have an N-drive raidz, you can add to it a 1-drive "mirror" (no redundancy, not recommended), or a 2-drive mirror, or a 3-drive raidz, or a 4-drive raidz2, etc.

I think what you tried to say is that it is not possible to convert an N-drive raidz/raidz2 array into an (N+1)-drive array. The r

The darn thing never even boots successfully on almost all of my machines, and on the one machine where it does, the (wired) network card is not detected, making it unusable. OpenSolaris seriously needs a bunch of smart driver developers contributing drivers and general x86 workarounds. It's just not suitable for x86 hardware as of today (unless the hardware happens to be Sun's).

I ran into a similar problem. In a lot of cases, the drivers for the network cards are actually available. The problem seems to be that there is no mapping of the PCI id in /etc/driver_aliases. I've found that in many cases you can just add a line to that file with the appropriate PCI vendor and product id and the NIC will work. You can find the PCI vendor and product id using prtconf -v and searching for the Ethernet Adapter section. There are also a bunch of free network drivers for Solaris that can be found here [nifty.com]
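For the record, the workaround looks like this; the PCI id below is a made-up example, so substitute whatever ids prtconf reports for your card:

```shell
# Find the PCI vendor/device id of the undetected NIC (look for the
# Ethernet Adapter / network node in the output).
prtconf -pv | more

# Bind that id to an existing driver, e.g. Intel's e1000g. update_drv -a
# adds the alias (the inner quoting ends up in /etc/driver_aliases).
update_drv -a -i '"pci8086,10b9"' e1000g
```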

If I remember correctly, they swapped the Linux kernel for the Sun kernel and added some tools. Since Debian (the foundation of Ubuntu) is kernel-agnostic (though Linux is the working kernel), they basically ported Ubuntu to Solaris. More on it: http://en.wikipedia.org/wiki/Nexenta_OS [wikipedia.org]

If I remember correctly, they swapped the Linux kernel for the Sun kernel and added some tools. Since Debian (the foundation of Ubuntu) is kernel-agnostic (though Linux is the working kernel), they basically ported Ubuntu to Solaris. More on it: http://en.wikipedia.org/wiki/Nexenta_OS [wikipedia.org]

What you said relates to Nexenta, which is a distribution built on OpenSolaris. Indiana is the distribution from the OpenSolaris project itself.

Sun had a lot of rights under previous licensing agreements before Novell even purchased the rights to Unix. The SCO deal seemed to be for some additional licensing and some drivers. Novell has claimed they won't be suing anybody over Unix anyway.

While ZFS is cool, it will someday be ported to Linux (the market forces are such). The advantages over ext3 etc. are simply not compelling enough for me to abandon an entire universe of software and hardware I have gotten used to with Linux distributions. I see no use for DTrace as I use nothing more fancy than Matlab for analyzing my data. No fancy number crunching or developing here. I used to do a lot of heavy-duty Fortran 95 programming, but that is history (which will not be repeated).

Actually, this is incorrect. Ext3 can support up to 16TB (there were some bugs for kernels older than 2.6.18 for really big filesystems, but even back then 8TB was no problem). The file size limit is 2TB, and with ext4 the limits for the filesystem and individual files will be 1024 petabytes, or 1 exabyte.

As far as requiring an fsck every X mounts, that's basically due to paranoia because PC-class hardware, i

I believe your evaluation to be incorrect on several levels. Firstly, the issue you point out is true for RAID-anything, as the filesystem has to be able to survive the loss of one of the disks for RAIDZ. RAID5 is no different in this regard.

Secondly, with RAIDZ (or RAID5) and 4x500GB, you wouldn't end up with 2TB of disk space -- you'd end up with 1.5TB due to the overhead of the parity data.

Thirdly, you don't have to replace all of the disk drives with RAIDZ to increase the amount of disk space dramatically. You seem to be thinking of RAID5, not RAIDZ. With RAIDZ replacing one of your 500GB disk drives with a new 2TB disk drive would indeed still leave you with only 1.5TB of disk space, due to the requirement for redundancy, but if you bought a pair of 2TB disk drives to replace two of your 500GB disk drives, you would increase your disk capacity from 1.5TB to 3TB, and if you just added the pair of 2TB disk drives to the pool as a mirror, as opposed to replacing existing drives, then you'd increase your disk capacity to 3.5TB.

Fourth, no one is forcing you to use redundancy with ZFS if you don't want to suffer the redundancy/reliability overhead. You can add non-redundant disk drives to a ZFS pool.

With ZFS, a pool is a collection of "vdevs", and you can add new vdevs to the pool at any time to increase the capacity of the pool. A vdev is either a RAIDZ (which is kind of like RAID5), a RAIDZ2 (which is kind of like RAID6), a mirror (which is kind of like RAID1), or a bare disk. The pool is then kind of like a RAID0 over all the vdevs.
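That capacity model is simple enough to sketch in a few lines. This is an illustration of the taxonomy above, not zfs itself; real pools lose a little space to metadata, and the helper names are made up:

```python
# Idealized usable-capacity model for a ZFS pool: the pool stripes
# (RAID0-style) across top-level vdevs, and each vdev contributes
# capacity according to its type. Sizes are arbitrary units (say, TB).

def vdev_usable(kind, drives):
    smallest = min(drives)
    if kind == "raidz":       # like RAID5: one drive's worth of parity
        return (len(drives) - 1) * smallest
    if kind == "raidz2":      # like RAID6: two drives' worth of parity
        return (len(drives) - 2) * smallest
    if kind == "mirror":      # like RAID1: one copy's worth of space
        return smallest
    if kind == "disk":        # bare disk, no redundancy
        return drives[0]
    raise ValueError(kind)

def pool_usable(vdevs):
    # "The pool is then kind of like a RAID0 over all the vdevs":
    # capacities of the top-level vdevs simply add up.
    return sum(vdev_usable(kind, drives) for kind, drives in vdevs)

# A 4 x 0.5 TB raidz plus a 2 x 2 TB mirror: 1.5 + 2.0 usable.
print(pool_usable([("raidz", [0.5] * 4), ("mirror", [2.0, 2.0])]))  # 3.5
```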

As a proud LDD touting, LWN gazing, MSc wielding geek; the Solaris kernel is a heck of a lot better coded, structured and organised than the Linux kernel. But alas, it lacks the many new features that have truly driven linux over the last decade.

Naturally my opinions lie with the ease of code readability and ease of initial development - these are not the same as a lkml hardened pro

You misuse the semicolon. A semicolon is not used in the same contexts as a colon. Instead, it is used to join two sentences (which would otherwise be complete), or to separate items in a list when the use of a comma would be ambiguous. Therefore:

In no circumstance can you write "As a proud LDD touting, LWN gazing, MSc wielding geek; the Solaris kernel is a heck of a lot better coded..." without looking like a semiliterate try-hard. In general, the best advice for using a semicolon is "don't, unless you know you're sure".

As a self-confessed geek, you should know the importance of correct punctuation. It's not just helpful to compilers.

They have also forcibly crashed it over a million times and it has never lost data even once. Try doing that with your home PC.

And what... you don't care about your photos, docs and music???

Nowadays you lose data because the *disk* dies, almost never because the filesystem gets corrupted (at least not on modern systems). Although the risk does statistically grow with the number of systems.

Last time I lost data to a filesystem problem must have been on a FAT disk, which means it must have been 10 or 15 years ago. I did lose data to hardware failures, though. Several times. Recovered most of it through backups. Not all. :-/

Nowadays you lose data because the *disk* dies, almost never because the filesystem gets corrupted (at least not on modern systems). Although the risk does statistically grow with the number of systems.

I wish that were the case. Friday, I was doing file backups to a fairly small (4TB) software RAID5 array (md+lvm+ext3) on a commercial file server running a vendor supported Linux distribution prior to dumping them to tape. The system hung hard during one period of high I/O.

Upon reboot two devices in the array came up with bad magic in the superblock and all was lost. The consensus seems to be that filesystem corruption caused enough confusion that the md driver decided to overwrite the superblocks.

ZFS doesnt offer me anything as im not managing servers
Don't want easy RAID/storage expansion on your desktop? You don't want efficient storage?
Dtrace doesnt offer me anything as im not a developer
You don't want to know how your system is performing [opensolaris.org] in a way like never before? I'm not a developer but a sysadmin, and I use DTrace every day to tell those pesky developers that yes, it's actually THEIR CODE that's at fault and not the server I set up for them. It's also neat to be able to easily see which process is using how much network bandwidth in real time. That was difficult before.
SMF doesnt offer me anything i cant do with startup
I don't like the complexity of SMF, but its self-healing for the stuff that's already built for it is cool, as is its dependency checking.
IPS doesnt seam any better than deb or rpm
It's better than plain RPM, but it's about the same as deb or yum. It's a big step forward for what was a commercial OS.
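For anyone curious what that observability actually looks like, the classic DTrace one-liner counts system calls per process, live, on a production box:

```shell
# Count system calls by process name. Runs live with negligible
# overhead; press Ctrl-C to stop and print the aggregation.
dtrace -n 'syscall:::entry { @num[execname] = count(); }'
```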

I can tell you haven't even tried Solaris 10, but give it a swig. Before Solaris 10 I often (rightly) wrote off Sun: why would I pay a premium for something FreeBSD can do for free while outperforming it? The hardware is cool (see the CoolThreads processors; it's hyperthreading done right), it's affordable, and it's innovative.
It may not be compelling enough to switch from Linux or whatever if all you use from a desktop is Firefox and Thunderbird, but there is actually some VERY cool stuff in there. Don't write it off. There's a reason FreeBSD is taking in a lot of these features.

Well, the only special thing I could find on sun.com is that thanks to ZFS I can now hook up $59,889,696,578,085,169,569,553,930,907,991,205,216.26 worth of hard disks to my desktop instead of the puny $3,246,626,956,972,881,084.41 I can spend on a 64-bit filesystem.

Yeah, and you have to fsck that with a traditional filesystem. Plus, ZFS takes care of bit rot (which is becoming a problem as HD sizes get larger) and of volume management (and makes it extremely easy). You can make fun of the theoretical limits, but when your modern 1TB hard drive or 1.5-terabyte array crashes you'll be happy you can boot without having to wait for the filesystem to be checked. Have you ever had to deal with volume management before? It was a pain in the ass.

I disagree. I guess you haven't seen one of the common types of data corruption that can happen with raided disks.
It's a common misconception that raid "prevents" data corruption.

RAID only protects you against (complete) hardware failures, and "noisy" IO errors.
Consider:
You have bad data on disk, but the hard drive reads the bad data without error.
With parity (even assuming the parity is read on each read request, which would be a faulty assumption), RAID5 has no way of telling which disk is bad, or whether the parity is bad.

Unlike RAID, ZFS has end-to-end checksumming, so it knows when the data on disk is bad, and it knows which copy is bad, too.
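The difference can be shown with a toy XOR-parity stripe. This is a sketch only; real RAID5 rotates parity across disks and ZFS uses Fletcher/SHA checksums stored in parent metadata, but the ambiguity is the same:

```python
# With plain XOR parity you can detect that a stripe is inconsistent,
# but not which device returned bad data. A per-block checksum stored
# apart from the data (ZFS-style) identifies the corrupt copy.
import hashlib

def xor_parity(blocks):
    return bytes(b0 ^ b1 ^ b2 for b0, b1, b2 in zip(*blocks))

data = [b"disk0-block", b"disk1-block", b"disk2-block"]   # 11 bytes each
parity = xor_parity(data)

# Silent corruption: disk1 returns garbage with no I/O error reported.
read_back = [data[0], b"garbage!!!!", data[2]]

print(xor_parity(read_back) != parity)   # True: the stripe is inconsistent...
# ...but parity alone cannot say which of the blocks (or the parity) is bad.

# End-to-end checksums: compare each block against its stored checksum.
stored = [hashlib.sha256(b).digest() for b in data]
bad = [i for i, b in enumerate(read_back)
       if hashlib.sha256(b).digest() != stored[i]]
print(bad)   # [1]: the corrupt disk is identified and can be reconstructed
```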

UPDATE: See our follow-up story [datacenterknowledge.com] for more. Joyent was using an older version of ZFS, and the bug in question was fixed nearly a year ago.

From that article it seems that patching/updating OpenSolaris isn't the same as patching/updating Solaris. I have no personal experience in updating OpenSolaris though. OpenSolaris does seem to have the smpatch utility.

Most RAID environments don't do checksumming at every step of the data write/read process. Most RAID environments cannot detect silent corruption (bad cache, bad sector, flipped bit, etc.) once the data has been read or written. Most RAID environments don't offer double parity. Most RAID environments require that the entire array be initialized at once, wasting potentially hours waiting for the formatting/initializing to complete. And most RAID environments using off-the-shelf SATA/PATA drives can go bad even with parity: on a RAID5 array of TB-size drives, there's a real chance of hitting an unrecoverable read error while regenerating data on a replaced volume from parity, causing the entire array to be toasted.
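That last failure mode is easy to quantify. The figures below are illustrative assumptions, not vendor specs: consumer drives quoted at one unrecoverable read error per 10^14 bits, 1 TB drives, a 4-drive array:

```python
# Chance of hitting at least one unrecoverable read error (URE) while
# rebuilding a degraded RAID5. With 4 drives, the 3 survivors must be
# read end to end during the rebuild.

URE_PER_BIT = 1e-14            # assumed drive spec: 1 URE per 1e14 bits
DRIVE_BITS = 1e12 * 8          # 1 TB drive, in bits
SURVIVORS = 3

bits_read = DRIVE_BITS * SURVIVORS
p_clean_rebuild = (1 - URE_PER_BIT) ** bits_read
print(round(1 - p_clean_rebuild, 3))   # ~0.213: roughly a 1-in-5 chance of a failed rebuild
```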

All of these things are not issues with ZFS....

ZFS is easily expandable, automatically realigns the data as you expand the pool, and can have multiple sub-mount points (mounted anywhere) with different attributes, like compression, sharing, extended permissions, iSCSI, and more on the way, like encryption, multiple compression algorithms, etc.

I've played/worked with ZFS now for over 2 years and have never lost a single bit of data - even though I've tried...

Build your RAIDZ pool on 20 drives, in 2 disk expansion units attached to 2 channels of a single SCSI card (10 drives per channel)... now shut the box down, remove all the drives, move them around between units, add an additional SCSI card to the box, split the disks up between the SCSI cards so they are now split 5 per channel, take one drive back out, and erase it... hold onto it for later...

Bring the box back up... the pool will come back online without problems, running degraded as one drive is missing. Now put the erased drive back in and issue a resilver command. Wait a while (not as long as a standard RAID controller would take) and voila: all data that was stored on that erased drive is back in place, and the pool is no longer running in degraded mode.

try any of that with a standard raid controller and your data is f0rked!

Sun has a video out, which I'm too lazy to search for here, where they run ZFS on a bunch of pen drives plugged into a USB 2.0 hub. Faster, and fault tolerant. Pretty amazing. ZFS is not just for servers. Think of Apple's "Time Machine" software. Also, ZFS includes lots of metadata and checksums to prevent bit rot of your files.

ZFS doesnt offer me anything as im not managing servers
Are you using content of any sort (images, documents, mp3s...)? Do you care about the longevity or integrity of any of your data? Have you ever lost data? Slap a GUI on ZFS, call it "time machine" and you don't have to be "managing servers" to appreciate what ZFS can provide to Joe user.
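A flavor of what that would expose to Joe user, using today's command line (dataset names are placeholders):

```shell
# Take a cheap, instant snapshot of a home dataset...
zfs snapshot tank/home/joe@before-cleanup

# ...browse old versions of files read-only under .zfs/snapshot...
ls /tank/home/joe/.zfs/snapshot/before-cleanup

# ...and roll the whole dataset back if the cleanup went wrong.
zfs rollback tank/home/joe@before-cleanup
```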

Dtrace doesnt offer me anything as im not a developer
If you think dtrace is just for developers, you don't understand dtrace. Developers have always been able

Sad but true. And the documentation is indeed severely lacking when compared to a commercial system. The code has apparently gotten a bit cleaner, although BSD still remains more legible.

Still it doesn't change the fact that for the time being Linux is *it* (whatever that is). It's the system that has the mind share (apart from Windows of course). And for the most part it works just fine.

So while there certainly are other more advanced solutions, I don't see them taking Linux's place in the sun (ha ha) any

Things are not looking so good after reading the transcripts of the Novell/SCO trial: Novell wasn't properly involved in, nor paid for, SCO's grant to Sun of the right to open-source System V technology in OpenSolaris. Uh oh: what SCO lied about regarding Linux might be 100% dead-on with OpenSolaris. Stolen Unix IP!!!

Novell taking on SCO is one thing; Novell taking on Sun is quite another. Sun is a much bigger company than Novell and has a lot more money. It's not worth the fight.

It seems like SCO stiffed Novell by not giving them their cut of the licenses, but that doesn't mean the licenses they granted were invalid. If that were the case, the issue would have come up already.

Novell gets some good publicity in their fight against SCO, but in reality, they're not much of a player in anything. SuSE isn't that popular, at some point their revenues for their legacy products will dry up, and then what's left? Their revenue has been declining for years and their profits have been iffy. All they're going to get out of the SCO trial is some pats on the back, since SCO doesn't have any more money.

While there's no arguing that what SCO did was messed up, I don't really see Novell in a good light either. Novell purchased the rights to Unix for $300 million. The transaction between Novell and SCO was for about $120-150 million. So SCO paid about half of what Novell paid, yet only gets 5% in licensing fees and no patents or copyrights, according to Novell.

This just doesn't seem right to me. Either Novell seriously screwed over SCO and they were too stupid to know it, or something else is going on. Ray Noorda, who was CEO of Novell, left to start Caldera. Noorda is undeniably the reason Novell was who they were. From what I could gather they did have a good relationship.

Bottom line, I don't understand how Novell can claim they pretty much just sold a 5% commission deal for 50% of what they paid and act like their shit doesn't stink either.

Up to his death, Noorda owned the Canopy Group. One of its holdings, Caldera Systems, purchased the Unix assets in 1995 from the Santa Cruz Operation, which had acquired them from Novell. In 1996 it also acquired the Digital Research assets from Novell and immediately brought a lawsuit against Microsoft that largely duplicated the claims that the FTC and Department of Justice had pursued in the early 1990s. The lawsuit was ultimately settled in 2000 with a $275 million payment to Caldera.

Every time one of Noorda's companies purchases something that used to belong to Novell, they sue. Usually Microsoft (Noorda hated MS).

Sorry but it just seems fishy to me. How would Novell not expect that SCO/Caldera would ultimately sue. Maybe Novell was aware of a possible lawsuit to attack RedHat while they were making moves with SuSE?

Read the transcripts. Novell sent Sun a letter before they open sourced Solaris to warn them that their license from SCO was invalid. Now they're asking the court to rule that this is the case, and Judge Kimball has given every indication that he's willing to do so.

I imagine that the folks at Sun have been pretty nervous since last August. Imagine, paying millions of dollars to put your product in exactly the position you've been (erroneously) proclaiming your competition is in. Not smart.

Um, what repos do I need to enable to get ZFS or DTrace functionality? Perhaps the ones powered by pony magic, because last time I checked Linux has neither of these very very cool (and useful) technologies available (and ZFS-Fuse most assuredly does not qualify as 'available' yet).

You, sir, are unmitigatedly fuckin' retahdid. "Conditioned upon Your compliance with Section 3.1 below and subject to third party intellectual property claims, the Initial Developer hereby grants You a world-wide, royalty-free, non-exclusive license... under [patent claim(s), now owned or hereafter acquired, including without limitation, method, process, and apparatus claims, in any patent Licensable by grantor]... to make, have made, use, practice, sell, and offer for sale, and/or otherwise dispose of the

Ok, you're going to find better explanations elsewhere but this is my understanding of it.

OpenSolaris is not necessarily a "distribution". Nexenta, SchilliX, etc. are "distributions" built on OpenSolaris. Project Indiana, as I understand it, is a distribution coming directly from the OpenSolaris project.

At first OpenSolaris wasn't supposed to come up with its own distribution, and now that it has, some people don't like it. Or they didn't like that it would be called OpenSolaris instead of Indiana, or something like that. I'm not clear on all the details.

Since Solaris will be built using OpenSolaris, Project Indiana is also kind of like an early access release of Solaris 11, without JDS.

OpenSolaris = Bleeding-Edge Test Version of Solaris 11 (Think "Alpha")
Solaris Express = Snapshot of OpenSolaris found to be "relatively stable" (Think "Beta")
Solaris 10 = The full "retail" version, often updated with features seeping up from OpenSolaris, that needs to run fine and be perfectly stable on Big Iron.

Well, I tried ZFS on FreeBSD and after a few severe crashes (the last tries were 3 weeks ago on FreeBSD 7-STABLE), this is a combo that I will never put any production data on. At least not until a few years of stabilization.

Yes, FreeBSD has ZFS, but it's experimental for a reason. So no need to advocate this yet.

The only serious platform for ZFS yet is still Solaris, and Indiana is a welcome release.

They had the rights to SVR4 that Solaris is based on to use it, to develop their own OS based on it, to sell it under trade secret and copyright protection but not to make it open. They then bought that right for a song from SCO because at that moment the latter needed a cash infusion to continue their jihad against Linux.

The judge in the SCO v. Novell case ruled last August that SCO does not own the c