Posted
by
samzenpus
on Thursday April 22, 2010 @04:59AM
from the how-about-a-test-drive dept.

An anonymous reader writes "It was way back on 2006-09-07 when Red Hat released its first public beta of Enterprise Linux 5. Today, after more than three years, Red Hat finally releases its first public beta of its next-generation OS: RHEL 6 public beta 1. From the news release: 'We are excited to share with you news of our first public step toward our next major Red Hat Enterprise Linux platform release with today's Beta availability of Red Hat Enterprise Linux 6. Beginning today, we are inviting our customers, partners, and members of the public to install, test, and provide feedback for what we expect will be one of our most ambitious and important operating platform releases to date. This blog is the first in a series of upcoming posts that will cover different aspects of the new platform.'"

We have an environment with AMD Opteron 270 based servers where we use virtualization heavily. We either have to give up on the servers or on RHEL 6. I think that we'll stick with EL5 until we go into a server refresh cycle.

Don't really care about Xen vs. KVM from a product perspective, but for the Opteron 270, Xen is the only one that works, since that Opteron doesn't have hardware virtualization extensions. KVM doesn't (to my knowledge) support software-based paravirtualization the way Xen does.
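For what it's worth, you can check whether a box has the extensions KVM needs by looking at the CPU flags Linux exposes ("vmx" for Intel VT-x, "svm" for AMD-V). A quick sketch (the helper name is my own invention):

```python
import os

def has_hw_virt(cpuinfo_text):
    """Return True if any 'flags' line in /proc/cpuinfo output
    lists vmx (Intel VT-x) or svm (AMD-V)."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            if "vmx" in flags or "svm" in flags:
                return True
    return False

if __name__ == "__main__" and os.path.exists("/proc/cpuinfo"):
    with open("/proc/cpuinfo") as f:
        ok = has_hw_virt(f.read())
    print("KVM-capable" if ok else "no hardware virt; Xen PV is your option")
```

An Opteron 270 shows neither flag, which is exactly why KVM is a non-starter there.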

While I've been a fan of VirtualBox for a while too, with the Oracle acquisition I wonder if adopting it now isn't just asking to take a ride on another abandoned VM platform. Oracle already has Oracle VM [wikipedia.org], which is Xen based. At this point it looks like Oracle is going to turn VirtualBox into a gateway product [virtualization.info] used to hook people in and then upsell them onto Oracle VM. I'm not sure what that bodes for the future of VirtualBox development. I'm guessing that Oracle will shift development focus toward Oracle products.

You also must notice that VirtualBox has a couple of proprietary features that are only available if you pay: support for USB and RDP. This is the typical Sun open-source business model: open source it, but require copyright assignment for all external code contributions, so that Sun can release an alternative version with proprietary add-ons (which even the external contributors have to pay for).

There are still a few apps out there that either require USB keys for licensing, or that you want to have interact with some sort of physical device that doesn't have its own IP stack. Thankfully, these cases are fairly uncommon these days, but they do still exist.

I wasn't the one who suggested VirtualBox as a Xen replacement. If your position is that "VirtualBox = desktop", that's just further evidence that it's probably not appropriate for the FP here to adopt, which is in line with my suggestion to tread carefully in that direction.

While primarily targeting the desktop, VirtualBox was becoming increasingly useful as a server virtualization solution. My main point was that such improvements are less likely to continue now, because Oracle already has a Xen-based solution.

1. VirtualBox doesn't support 'server' guest operating systems -- This would be incorrect, as VirtualBox does support server guest operating systems. In fact, if your guest OS is Linux, it doesn't matter whether the distro is a 'server' distro or a 'desktop' distro: the OS packages are the same, except for their versions and distro-specific patches.

OR

2. VirtualBox doesn't have features typically used by admins who deploy server operating systems -- While this may have been correct years ago, it's much less true today.

It's a good example of a glaring barrier to open source growth: programmer man-hours. Corporations fill in those man-hours and buy the product, essentially for the technical features already reached and the marketing effect, with no commitment whatsoever to open source. Then nobody can quite fork the product and keep maintaining it in the open, simply from a man-hours shortage. More options for getting people to work on open-source projects, and keeping them open, are needed. I sort of lean toward feature pledges.

Do you mean that your virtualization hosts are bleeding for no explained reason, or are you trying to say that RedHat carries a social stigma because of their acquisition of Qumranet and support for their KVM platform?

You can see that there's a three-phase cycle for release support. Major versions are supported for 7 years, with the first 4 years being "primary support", i.e. new features, hardware support, and bug / security patches; after that they move into a maintenance cycle in which they first stop pushing new features, and finally only push bug fixes / security patches that are marked as "critical".

Too bad, except that the Xen they shipped in RHEL 5 has been nothing but a headache for me. VMs set to auto-start don't. Sometimes. Rarely, they hang on the way down and have to be killed. Trying to put a different version of RHEL or Fedora on one often results in failure (conflicting paravirt support from the KVM switch = no dice).

I don't know the politics, but as someone who has to support two-too-many Xen hosts, I really can't fault Red Hat for ditching that bastard system. It had great potential until Citrix plastered their cursed name all over it, along with a nerfed GUI that doesn't even have a Linux port. Fast-forward to 2010, and the only people who don't retch at the sound of Xen are the people who have already thrown gobs of money at Citrix to throw broken solutions at their non-problems.

The Citrix stuff had little to do with it. The Linux kernel developers favor code that is easy for them to integrate and maintain, and KVM fit that model better than Xen. There are some situations where it performs quite a bit better too, and frankly, few people care about those stuck with processors that don't have the right extensions for KVM. Some good reading on the background here includes Discover the Linux Kernel Virtual Machine [ibm.com], Linux: KVM Paravirtualization [kerneltrap.org], and The truth about KVM and Xen [codemonkey.ws].

A lot of system management utilities had to treat execution under dom0 quite differently than on a normal Linux system. Much of the industry would rather have a hypervisor platform that behaves like a 'normal' OS.

The CentOS project is serving the beta ISOs from their tracker, but I'll be damned if I can find the .torrent files served via CentOS. $random_blog_guy is serving some which link you up to the CentOS tracker.

The packages mostly match those in Fedora 12, which makes sense as that came out in November and FC13 isn't released yet. However, they have bumped some things. Most notably, the FC12 kernel was 2.6.31, while RHEL6 uses 2.6.32. That's not surprising given a fair number of virtualization and performance features, as well as bug fixes, happened for 2.6.32 [kernelnewbies.org].

It's not even quite that simple unfortunately. I highlighted the kernel example because FC12 is based on 2.6.31, RHEL6 on 2.6.32, and FC13 on 2.6.33. So in that particular case, they're picking a version that doesn't match any Fedora release.

FC12 was released with 2.6.31 but is now running 2.6.32, so I guess RHEL6 is closest to FC12.

It's a very fast moving tree. For example, I'm running 2.6.32 on F12 right now even though it shipped with .31. The .32 kernel just happens to be the release that balances the need for enough testing against tracking the latest release out of kernel.org.

It would be quite wonderful if someone could figure out a way to make packages installable easily on all linux distros, or at least create a few "compatibility profiles". This whole repository ubuntu-vs-debian-vs-redhat-vs-mandriva-vs-older-versions-of-same is a nightmare for newbie users.

This has existed for a long time. It's called 'linux standard base' or LSB.

This has existed for a long time. It's called ./configure && make && make install...

But the whole "configure" thing could bring in some ideas from the apt/deb/rpm world. Dependency tracking in tarballs. The ability to maintain a repo of sources and platform-specific patches, then cache binary versions for each requested architecture. Imagine serving the exact same repo to all your different Linuxes, BSD (including OS X), Windows, and Solaris. Wouldn't that be grand?

Specifying the RPM file format is not enough. Without a detailed spec of how packages are installed and managed, LSB is of little use. It also doesn't say much about which default settings are considered reasonable. Nor does it deal much with issues of vertical integration (without which a Linux distro can look like a pile of non-cooperating, user-hostile pieces).

Stating, in effect, "insert Gnome or KDE here" doesn't cut it. It leaves a design vacuum (esp. about device-UI and service-UI behaviors) that a desktop distro still has to fill.

It would be quite wonderful if someone could figure out a way to make packages installable easily on all linux distros,

It's called building all the libraries and bundling them together. Include them all in the package, then use a script to craft an LD_LIBRARY_PATH that places the bundled library location at the end of the path, so the OS's libraries are used if they are present. You need only link against the proper versions of the libraries to make this work (most projects just link against the major version; link against the minor as well). That avoids a lot of incompatibility problems, at the expense of being more likely to drift out of date.
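A minimal sketch of that wrapper scheme (the paths and function names here are made up for illustration): append the bundled lib directory to the end of LD_LIBRARY_PATH so the OS's own copies win whenever they exist, then exec the real binary.

```python
import os

def build_ld_path(current, bundle_dir):
    # Bundled libs go at the END of the search path, so the OS's own
    # libraries are preferred whenever they are present.
    return current + ":" + bundle_dir if current else bundle_dir

def run_bundled(real_binary, bundle_dir, argv):
    """Launcher: set up the library path, then replace this process."""
    env = dict(os.environ)
    env["LD_LIBRARY_PATH"] = build_ld_path(env.get("LD_LIBRARY_PATH", ""),
                                           bundle_dir)
    os.execve(real_binary, [real_binary] + argv, env)
```

A package would install the real executable as, say, /opt/myapp/bin/myapp.real and ship a launcher that calls run_bundled("/opt/myapp/bin/myapp.real", "/opt/myapp/lib", sys.argv[1:]).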

Uh, no. OS X provides a rich set of libraries as part of the base OS. Apple goes to great lengths to ensure compatibility between OS versions (libSystem is compatible to version 1). The only time any software includes a library inside their app bundle is if they wrote it or it is an OSS library that isn't in the base OS. Most apps don't need to.

Exactly. There was little-to-no help with compiling Firefox 3 for RHEL 4 until Red Hat released it. Even now, you won't find anything anywhere about compiling Firefox 3.6 for RHEL (at least, that was true last time I checked).

Why do newbie users even need to care about that? If you pick a distribution that has a good set of packages, they should rarely have to leave the ones provided with it. Run whatever front-end for package management you've got, make sure all the optional repositories are enabled, and there should be so many packages there the hard part is sorting through them all--not finding even more. Particularly given that so many things that used to be run as local apps have moved onto web applications nowadays, the main headaches for Linux newbies I see is getting their hardware working and making Flash work.

Download the latest VMware player [vmware.com] e.g. VMware-Player-2.5.1-126130.i386.bundle (download the bundle version, not the rpm one) and run it as root using gksudo. You'll get a graphical installer that installs VMware player for you.

Those hugely important features like "more colours in your OS icon" and "a name that doesn't include 'Enterprise' so directly" (yes, I realise CentOS is still based off "enterprise", but RHEL is shorthand for a full name whereas CentOS is its name).

(yes, I realise CentOS is still based off "enterprise", but RHEL is shorthand for a full name whereas CentOS is its name)

That'd be what I said - I've never seen anyone call CentOS anything but CentOS. As in "CentOS is called CentOS but no-one ever uses 'enterprise' in its name, even if the 'ent' comes from Enterprise, but RHEL is not just RHEL but Red Hat Enterprise Linux, which people shorten to RHEL because it is too long to say in full".

Right... and unless Red Hat made a serious change to the way they do business, the support contracts aren't free.

CentOS, on the other hand, does not have this limitation. The public yum repositories available by default in CentOS allow you to install and update packages, whereas in RHEL you have to be a paying customer to use their private yum repos.

Oracle's approach with Oracle Unbreakable Linux (which is essentially a re-hash of CentOS, which in turn is a re-hash of RHEL) is that if you're not a paying subscriber, you're on your own.

Right... and unless Red Hat made a serious change to the way they do business, the support contracts aren't free.

But the GP is already paying for it. It's not like I suggested they go out and buy a support contract so they can get the upgrades. They've paid for a contract that they're not using.

However, if you plan on setting up a box to run Oracle on...

If you're setting up a box to run Oracle, just buy the RedHat or Oracle OS support contract. It's a pittance compared to the database support contract. If you can't afford the Oracle support and OS support, you can't really afford Oracle in the first place. Which is what Oracle told you when they listed the requirement.

1. Releases: Please compare the release date of, say, RHEL 4.8 (19/5/09) to CentOS 4.8 (21/8/09). Or better yet, compare RHEL 5.5 (30/3/10) to CentOS 5.5 (will be ready when it's ready). Now, CentOS devs tend to follow RedHat security updates fairly closely, and I usually see the CentOS updates ~12-48h after their RHEL parents. However: A. In a production environment, I'd rather not wait 12-48h. B. Given the complexity of major updates (e.g. RHEL 5.5), CentOS can lag by months.

True. But Redhat put a lot of work into Linux, and I'm happy for my company to help fund those coders, so I buy RHEL licences.

Well...if you don't actually need the support and are only purchasing it as a way to support RedHat, wouldn't it make more sense to just make a donation to them and continue using your distro of choice?

This is the attitude that makes commercial open source so difficult. Until Redhat employ every developer whose code is used in their distro, you can accuse them of freeloading. Redhat contribute to a variety of core packages, including the kernel. That's enough to keep me happy. I'm not saying they're perfect, but they're not bad. The very existence of CentOS should show that they're sticking to the GPL. You also have to remember all those patches that go back upstream and appear in Debian, SuSE, and the rest.

Thank you, backwardMechanic. Calling RedHat freeloaders completely ignores all the contributions they have made to open source. They did not write 100% of the code that RHEL runs on, but they did fix a lot of issues that would never be taken care of by the upstream projects for lack of coolness.
The reality today is that the kernel is mostly developed by programmers paid by large corporations such as RedHat. The same goes for Novell, which employs a lot of open-source hackers.

Realistically, how long-term is "long term"? They've been playing by the rules for what, 15 years now? Is it still possible that they could totally sell out and go back on what they've done? Sure. It's also "possible" that RMS might spend a weekend playing with an iPhone and the App Store and realize he's been wrong all these years. (Which will no doubt lead to yet another complaint about the pain and suffering which is his life.) Red Hat has done a good job of balancing corporate health with Open Source values.

I don't use RHEL, but I occasionally get complaints from people who do, because it ships with a really ancient glibc that is missing features that I use in my code (you know, really new stuff from the 1999 version of the POSIX spec). For Linux-specific features, I don't believe the glibc included with RHEL includes timerfd support, which means that implementing an efficient event-driven application is difficult (you have to mess around with timeouts on epoll() and keep track of them yourself, rather than letting the kernel do it for you).
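That epoll workaround, keeping your own timer heap and handing the nearest deadline to epoll as its timeout, looks roughly like this sketch (class and method names are mine; the epoll loop itself is only shown in a comment since it's Linux-only):

```python
import heapq
import itertools

class TimerQueue:
    """Manual timer bookkeeping to pair with epoll.poll(timeout),
    standing in for what timerfd would otherwise do in the kernel."""
    def __init__(self):
        self._heap = []                # entries: (deadline, seq, callback)
        self._seq = itertools.count()  # tie-breaker so callbacks never compare

    def add(self, deadline, callback):
        heapq.heappush(self._heap, (deadline, next(self._seq), callback))

    def next_timeout(self, now):
        """Seconds until the nearest deadline, or -1 to let epoll block forever."""
        if not self._heap:
            return -1
        return max(0.0, self._heap[0][0] - now)

    def fire_expired(self, now):
        """Run every callback whose deadline has passed."""
        while self._heap and self._heap[0][0] <= now:
            _, _, cb = heapq.heappop(self._heap)
            cb()

# Event-loop skeleton (Linux only):
#   ep, timers = select.epoll(), TimerQueue()
#   while True:
#       events = ep.poll(timers.next_timeout(time.monotonic()))
#       timers.fire_expired(time.monotonic())
#       ... handle fd events ...
```

With timerfd you'd instead register the timer as just another fd in the epoll set, and all of this bookkeeping disappears.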

Yup, that's what led me away from CentOS too, after several years of fighting with packages that wouldn't compile due to unmet dependencies. I managed to survive for a while by packaging my own sets of PHP/MySQL and friends, but that only covered a tiny part of the spectrum. Trailing a few versions behind everything got really annoying; maybe not a big deal for big business, but my work is always on the more experimental side of things. I'm fine with building stuff from source, but the glibc issue cripples even that.

I develop on FreeBSD, so I only come across these issues when someone tries running my code on GNU/Linux. Glibc is a nightmare to work with, so I generally leave that to other people where possible (you need horrible combinations of -D directives to make it conform to recent versions of POSIX or SUS, and often these hide other things). I generally target GCC 4.2, because that was the last one released under GPLv2 and is the one that clang aims to be compatible with. There aren't many things in 4.2 that a

To be honest, your software sounds cutting edge and uses features that haven't made it into the mainstream long-term-supported server market that RHEL is in.

I'm not sure about his software, but the specific feature (timerfd) he's talking about is not "cutting edge" by any measure - other platforms have had it for decades. I'm surprised that it is still considered newish in Linux land.

Be nice; Linux got it within two decades of Windows NT. Okay, so Windows NT, Solaris, most other commercial UNIX variants, Symbian, QNX, and *BSD also all had unified event notification before Linux, but Linux is still the best OS in the world ever! Or so people keep telling me.

For the desktop it's not the best choice. For workstations, it's exemplary. Supports a wide range of workstation class hardware. Rock solid. I've had some run ins with support, but they come through in the end.

The most obvious distro, Ubuntu, also isn't the best choice for the desktop in a homogeneous environment. The best single choice for a desktop distro is SLED. It's well rounded, has excellent support, and the vendor is in the habit of actually touting it as a desktop OS.

This is a good point, and one that comes up each time we have to renew our licenses. "We're paying how much for a free OS?"

You're not paying for the OS. You're paying for support.

However, Red Hat's pricing model plays on their position as market leader. Novell charges much less (you can buy SLE{S,D} through Microsoft licensing agreements for pennies on the pound compared to RHEL's pricing). Canonical charges much less.

Still, per seat of Workstation, it's more in line with Apple's pricing than Microsoft's for an OS.

Don't know the answer to that, but the first mainline kernel to have it is 2.6.33, and it looks like RH6 is using 2.6.32. However, Red Hat has a history of backporting features and bug fixes to their kernel without changing the version, so it's possible. Considering that it takes as long as it does for a major version change (kinda reminds me of Debian there) it would make sense for an Enterprise distro to make sure TRIM support is there.