
Given that early benchmarks of the Lucid Lynx were less than encouraging, Phoronix decided to take the latest alpha out for a spin and has set it side-by-side with an early look at Fedora 13. "Overall, there are both positive and negative performance changes for Ubuntu 10.04 LTS Alpha 2 in relation to Ubuntu 9.10. Most of the negative regressions are attributed to the EXT4 file-system losing some of its performance charm. With using a pre-alpha snapshot of Fedora 13 and the benchmark results just being provided for reference purposes, we will hold off on looking into greater detail at this next Red Hat Linux update until it matures."

The problem isn't ext4 - it's an ext4 flag that gives you better data reliability in case of a power failure. If you're willing to risk it (or have a good UPS), you can change the flag and get all that performance back.
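
If anyone wants to try it, a minimal sketch, assuming the flag in question is ext4's write-barrier mount option (disabling barriers trades power-failure safety for speed):

    # risky without a UPS -- see above
    sudo mount -o remount,barrier=0 /
    # or permanently, in /etc/fstab:
    # UUID=...  /  ext4  errors=remount-ro,barrier=0  0  1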

I also have to say that for a site that does so much benchmarking, Phoronix is incredibly unprofessional. How about error bars on those bar graphs? Are caches cleared before each benchmark run? Etc.

Catering to niche users at the expense of the majority.
Removing functionality from X.
Deleting the ability to restore a feature.
Making it damn near impossible to troubleshoot X crashes.
Ppppppp-p p p ulseaudio.

Even though PA has had its share of compatibility problems, it is working much better now. Sound devices that never worked before actually work now, and switching between them on the fly is possible. It's so great to be able to use high-quality audio for music/games, and a USB headset for phone calls.
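
For the curious, the on-the-fly switching is scriptable too. A rough sketch with pacmd (the sink name below is made up; pacmd list-sinks shows your real ones):

    pacmd list-sinks                                 # find the sink's name or index
    pacmd set-default-sink alsa_output.usb_headset   # new streams now go to the headset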

Windows 7 or Vista instructions: Right-click on the little speaker icon in the bottom right. Click "Playback devices". Right-click on the device you want to use instead of your current device. Click "Set as Default Device". The audio output will instantly switch to that device.

I assumed the GP was referring to the ability to move sound between output devices on different computers, in the middle of playing. (Both machines running PulseAudio, of course.) This is what makes PulseAudio worth the growing pains it has had.
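
A sketch of how the cross-machine move looks, using PulseAudio's tunnel module (the hostname and stream/sink identifiers here are made up):

    # on the playing machine: create a local sink that forwards to the other box
    pacmd load-module module-tunnel-sink server=othermachine.local
    # then move the playing stream (say, sink input #3) onto that tunnel sink
    pacmd move-sink-input 3 <tunnel-sink-index-from-list-sinks>

The receiving machine has to accept network clients, e.g. by loading module-native-protocol-tcp.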

No one has gone through the effort because, despite what you may believe, the great masses don't care whether their sound system can play to another computer, and PulseAudio still makes doing so tricky. It's infinitely easier to just turn the speakers up louder so whatever is playing can be heard elsewhere.

AirPort Express does other things besides stream music. The assumption behind PulseAudio is that a great many people have multiple devices (all running PulseAudio) and would want to play music to and from them while still remembering which one is which. The key sticking point is that multiple computers are required, and that is an expensive investment. All of the other options that you mentioned are much cheaper than a full-blown second computer and are special-purpose devices. The use case for s

PA's growing pains should not have been forced upon users of the most popular Linux distro. They should have switched to it only when it was actually ready, and it still doesn't seem to be. "Linux" still suffers from horrible sound lag and crackling issues. To get rid of that stereotype, most distros need to move to what works, or get it fixed before a final release.

I'm going to agree with the sibling here. I always hear people bashing PulseAudio, but I've never had any issues with it. I also particularly enjoy the low-latency networked audio features: I can play the audio from a movie on my laptop through the speakers at my desktop when the laptop's connected to the TV. Which is great, because the desktop's real close and my TV speakers suck.

Your mileage may vary. I did have slight p-problems with PulseAudio in its earlier versions; I don't have them anymore, they were fixed for me. Anyway, PulseAudio is very handy for my Bluetooth headset. Rerouting audio streams is also very convenient.

Pulseaudio should be taken outside and shot. I too thought we'd put the troubles behind us, but on upgrading to 9.10 I found everything had gone completely to pot again, with no audio at all. ALSA at least plays sound, though the start-up sounds don't quite chime correctly. I know at some point I'll want to get the thing working again because Pulseaudio has some useful features. However, I do have to wonder if Ubuntu's priorities are right at all; I shouldn't have to dive into config files and the command line just to get sound working.
Please Canonical just get sound working for everyone, once that's done you can worry about the positioning and colour of the notification dialogue.

I concur. The last two upgrades left me without sound until I purged pulseaudio with fire.
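
For anyone wanting to do the same, "purged with fire" in apt terms is roughly this (exact package names vary by release):

    sudo apt-get purge pulseaudio pulseaudio-utils
    sudo apt-get autoremove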

I understand what it's supposed to do. If it would do that, I'd absolutely love it. I'm psyched about the possibilities. But at the moment, I can't get it to function.

I have ALSA, and it works. I need to be able to "sudo apt-get install pulseaudio", reboot, and have that work. I've spent years of my life fucking around with audio under linux. I'm at the point where if it works, I'm not going to

Its virtues are ease of installation and the convenience of adding useful software that isn't included in "purist" distros, but Ubuntu is the "AOL" of the Linux world. AOL was once very useful to masses of users. They don't need it any more...

Given the indifference of Ubuntu management to release quality, Ubuntu won't be useful much longer. The beauty of Linux is that there are, and will remain, many alternatives.

I have not found one. After Ubuntu shat pablum residue on my system I tried, in order:

Kubuntu: wow, KDE is shit when it used to blow GNOME away.
Madrivel: it just didn't work and is alien enough to be unconfigurable.
SUSE: it had serious X issues.

Fedora 10 sort of worked and I could deal with it. Upgraded to 11 and have so far managed to keep sound working for the few hours a day I need it. Via one of the alternative yum repository efforts, 3D works for ATI. I even managed to get Doom3 to talk to pppulseaud

The X issue is Ctrl-Alt-Backspace: it's gone, and all I got were snide remarks and derision when I asked how to get it back. It's gone from Fedora as well, but they had a method to get it restored. It's tedious and pedantic.
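
For the record, the restore method boils down to re-enabling the XKB "zap" option; a sketch of both the one-shot and permanent variants:

    setxkbmap -option terminate:ctrl_alt_bksp
    # or permanently, in xorg.conf:
    # Section "ServerFlags"
    #     Option "DontZap" "false"
    # EndSection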

I wish they'd stop focusing on increasing performance by a few milliseconds here and there and work out why my upgrades never work, or why Flash objects turn grey and I have to restart Firefox, or why my audio is choppy, why the nvidia drivers make Xorg fail randomly, or why I have to press the power button on my PC to turn it off after everything is unloaded.

It also doesn't (always) require a Firefox restart. Open a task viewer (top, htop, system monitor, etc.), find npviewer.bin (the Flash plugin process), then kill it. Do a full reload (Ctrl+F5) of the page. Firefox will restart the Flash plugin and it should work, so long as it doesn't crash again. Note that it may be better to restart Firefox anyway; given how poorly written the Flash plugin is, I wouldn't trust it to die cleanly.
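
The same dance from a terminal, for anyone who prefers it (npviewer.bin is the Flash wrapper process on 64-bit Ubuntu; the name may differ elsewhere):

    pkill npviewer.bin   # kill the wedged Flash process
    # then Ctrl+F5 in Firefox to respawn the plugin on reload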

LOL! Flash works just fine without all of this nonsense in Firefox on Windows 7. Never crashes. Just works.

And I didn't pay a dime for my copy of Windows 7. It was just like downloading a Linux distro that works perfectly.

WTF are you guys torturing yourselves with this stuff for? Go download a copy of Windows 7 and enjoy. Install Virtual PC (free) or VMware (free) and muck with your toy OS there when you feel bored.

But my GOD! Stop torturing yourself with this crashing Flash Player nonsense!

Funny, the 64-bit Flash Player 10 works just fine in my Debian Linux machine.
I'm probably doing something wrong, perhaps I should mess around and try to break things (I've simply installed the plug-in and it worked - silly me).

Well, perhaps it's the damned 64-bit Google Chrome for Linux I'm running...
Ohmygosh, I've just realised... ALL my OS and its applications are 64-bit! And it's all working, and fast! :(
Bad, bad Linux!

I've noticed there are some audio boards (I found only one, actually), or their drivers, which are incapable of simultaneous playback. Not my case though; I'm using a SB Live 5.1.

Another possibility is that the other software is using OSS (instead of ALSA) for sound output, while Adobe's Flash uses ALSA (OSS is deprecated, as you may know) and that is causing conflicts.
AFAIR Adobe's Flash used to use OSS befo

I am guessing it used to work before the OS upgrade, so that is a completely valid question.
The attitude "we don't care if software which is not distributed by us breaks on OS upgrade" is not going to fly for long if the OS is to get some real mass usage.

The attitude "we don't care if software which is not distributed by us breaks on OS upgrade" is not going to fly for long

That wouldn't be a problem if third-party software developers would share their source code with Canonical. Someone could debug into it and easily discover the problem. But apparently, source access would break some third-party software developers' business models.

What confuses me is why they're doing performance tests on alpha releases. Obviously the answer is to get page views, but how long will it take people to realize that performance isn't what they're aiming for in an alpha?

Or worse, comparing an alpha of one distro to a pre-alpha of another, as if the numbers are at all useful. It tells us nothing about how fast either will be by the time we're actually using them. WTF?

As I understand it, the issue is that the default time between cache dumps to disk is 4 seconds. This is much longer than ext2/3. So, if you yank the power cable during this time, on the next reboot ext4 will have no record of the event ever having occurred and will use the previously journaled data instead.
If this is actually the case, then I don't really consider this a bug. It's just a larger caching window.
Someone please correct me if I'm wrong.
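
If the window really is the journal commit interval, it's tunable per mount; commit= is a real ext3/ext4 mount option, and the value below just echoes the number above:

    sudo mount -o remount,commit=4 /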

They resolved an issue which led to files being overwritten ending up empty after a crash. The problem was that they were optimising the write order to improve performance; this led to metadata being updated too early in some cases, so you could get a corrupted file. The fix lowered performance, although I think there may be an option you can turn on. So if an application is updating a file, you will get either the old version or the new version (assuming the program is written in a half-decent way), which is good enough. If you want anything better than that, you should be running a UPS configured to safely shut the system down (unlike one system I experienced that had a UPS but then everything crashed when the UPS ran out of battery, because the sysadmins were appalling).

That's not the point. The expectations developers and users had were carried over from ext3, and the people in charge of ext4 adamantly and arrogantly claimed the same things you are claiming.

But most people don't have UPSes and expect the filesystem to Do The Right Thing(tm) and that is, try to keep their data intact. Users should not be expected to have to tweak arcane settings, and user programs should not even have the right to alter those settings.

My beef is that the ext4 gods decided that performance was more important than users' data, changed some settings, and caused a shitstorm when people en masse disagreed with them. Their answer was essentially: to the programmers, change your code; to the users, use a UPS or turn off write caching.

This was beyond arrogant. Apparently the problem is solved only because the developers backed down in the face of the controversy.

The issue was that there were certain operations that behaved differently before and after the "upgrade" to EXT4, and I specifically said:

If it breaks user's expectations... it is a bug.

In this case, ext4 changed some expectations that users (and programmers, who are ultimately users too) had. Now supposedly some of these issues have been resolved. That's good, but I recall some significant discussion on Slashdot in the past where the same naysayers came along saying "Why try to anticipate a power outage?"

* Write the new config file to a temporary file
* Rename the temporary file over the top of the old config file
This way, applications expect the config file to always be valid even if the computer crashes; i.e., they expect the data to have been written to disk, completely, before the rename happens.
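
A shell-level sketch of that pattern (filenames made up; real applications do the equivalent with write() + fsync() + rename()):

    cat > config.tmp <<'EOF'
    option=value
    EOF
    sync                  # make sure the data hits the disk first -- the ordering is the whole point
    mv config.tmp config  # rename() atomically replaces the old file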

I would say they expect the data to be written to the VFS layer with intact happens-before relations, so the later rename renames the complete new file contents. Whether or not anything is physically written to the d

Wanna know why Ubuntu is the Linux flavor to beat? It's fun to use. No messy compiling of the kernel, no conf files to edit to get it up and running; it just works. Especially with the latest revamp of the ALSA interface, not to mention the snazzy layout of the repo browser. The track record of the last few releases has been good.

There's no 'compiling of the kernel, no conf files to edit' in Debian or any other mainstream distro either. Hell, even Arch Linux does well without custom kernel compilations, and without most editing of config files as well, IIRC (depends on usage of course; not that I'd recommend it to any newbie anyway).

The 'it just works' factor isn't something unique to Ubuntu: almost all the others have it as well (LFS being an exception). The only thing Ubuntu gives you is a package set that will mostly fit the average desktop user in the default install. Pretty much like Mandriva and others. Kernel compilation is not, and has not been, necessary for any of the mainstream distros for more than ten years.

You forgot unnecessary default bloatware (mono) and a propensity to cover up useful text screens with useless graphical screens.
But still, I'd take Ubuntu over Fedora any day; at least there's some kind of real attitude about support compared to Fedora's ambivalence.

Let me air my grievances too: I recently set up a VPS with Ubuntu and was just horrified by the package management. There are aptitude, apt-*, and dpkg-*; all of them are verbose, and none of them seem to do what I tell them to do.

I wanted to remove Apache: "aptitude remove apache" didn't do anything useful, and "aptitude remove apache2.2-common" wanted to install something else. Finally I just put Arch Linux in a chroot and was done with it.
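
For what it's worth, the incantation that usually works (exact package names vary by release, so look first):

    dpkg -l | grep apache                                  # see what's actually installed
    sudo apt-get remove --purge apache2 apache2.2-common   # purge config files too
    sudo apt-get autoremove                                # sweep up orphaned dependencies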

That may be an exaggeration, but I kind of agree. I've been using ubuntu since Edgy, steadily upgrading, and am now using Karmic. Starting with Jaunty, and now continuing with Karmic, I've been having multiple serious problems with sound. Karmic is also causing me several problems where they changed something and made sure it worked with Gnome, but it doesn't work properly with other WMs: 1 [launchpad.net], 2 [launchpad.net].

What often really matters are the upstream apps. Often, other than reporting an upstream bug in an application to the developer, there is not much one can really do about bugs in upstream applications like KDE. I am seeing that now with KDE and X.org. Currently, there is a bug in evdev and dga in X that prevents X from working right with a Wiimote. It can't really be fixed by the distributor. Only X.org can fix it.

So far I have:

Broken sound effects in Stratagus. (Mandriva 2010.0)
Broken GLX support in QuakeForge. (Mandriva 2010.0) But DarkPlaces Quake still works.
Broken Wiimote support in the evdev driver.

These are just a few examples of applications that don't work because of a problem upstream.

If your distribution closely follows upstream and has a good policy on dealing with upstream, it can help to report bugs.
The keys to this are:
1. Patching the distribution's instance of a package as little as possible, so it's as much like upstream as possible.
2. Having packagers work closely with upstream to ensure that bugs filed against the distribution are filed against the upstream project.
3. If a fix is made in the distribution, getting that patch offered upstream.

This 'review' is complete rubbish: alpha and beta builds are always much slower than the production versions. They have all types of debug options turned on. I don't see how you can compare them. If one OS has more debug options turned on than the other, it will be slower. Surely...

First off, let me say that I use Ubuntu 9.10 on my system at work. I am also running CentOS on servers, various Ubuntu on servers and a couple of Fedora systems. As you can see, I have experience with all of them.

So why is this review useless? Because they are testing development systems, which are not optimized, have loads of debugging flags set, and essentially are not ready for prime time. Of course it may be running slower!

IMHO, you should ignore benchmarks until the release candidates, at least. I generally ignore benchmarks on unreleased systems. I do, however, like to read and learn about new features which may be present in early releases.

The PostMark disk performance between Ubuntu 9.10 and 10.04 Alpha 2 was close while Fedora 13 was behind, but again given the debugging options used during the development cycle and its pre-alpha state we aren't worrying too much.

By pulling a computer from a dumpster, outfitting it with a $100 hard disk, and installing Linux, I get a giant file server, saving me $200 on an easy backup solution (vs. Apple's Time Capsule). That makes me $200 richer than I would be otherwise, meaning I can use that money elsewhere. With the money I've saved over the years thanks to Linux and other open-source packages, I will soon be taking a Caribbean cruise. Has your "real" Mac ever paid for your vacation?

Little do you realize that your "dumpster" computer pulls quite a bit more power than the power-miser Time Capsule (30 W maximum). Considering the cost of electricity, a 100 W device costs around $100 a year to run 24/7. So over two years, you are at a net negative with your dumpster computer, not to mention the extra time spent setting it up.
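
For reference, the arithmetic behind that estimate, assuming roughly $0.11/kWh (your rate will vary): 100 W x 8,760 h/yr = 876 kWh/yr, or about $96/yr. The Time Capsule at its 30 W maximum is at most 263 kWh/yr, around $29/yr.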

Well, all the data (Google) I can find on the Time Capsule says it draws approximately 12-13 watts. So the payoff period is more like 8-10 years instead of 2.

While Linux is nice (I develop on it for a living), I find that too many people blindly say it's better. Even just considering power management, I find Windows or even Mac can save you a little money with better-made drivers. You need to make sure you have the right device for the job, and the upfront cost of Linux doesn't always justify its use.

In many years, about the time it breaks even, it'll be up for replacement anyway. By that time, I've earned a couple dollars for free in interest on that initially-saved $200.

I'm not blindly stating anything. Rather, I'm countering a blind statement against it. There are also too many people who accept the marketing spin about corporate products being better on some arbitrary level.

If you're going to claim that running Windows will save me money in the long run, I want proof. Are there any reputable tests s

To chime in, I particularly like how the easy backup solution involves setting up a file server. My easy backup solution involves getting a TB disk from NewEgg and clicking twice on the Time Machine prompt when I first plug it in.

Congrats on your cruise... What backup software did you use? I use backintime for my father, who works from his home as a translator and really hates re-doing his work. Albeit, I don't use NAS, just a 500 GB external USB disk drive and a cheap UPS -- power outages were somewhat common in the area. It will still take him about 10 years to fill the disk with hourly incremental backups.
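
For anyone wondering how hourly snapshots can take a decade to fill a disk: backintime-style tools hard-link unchanged files against the previous snapshot, so each hour only costs what actually changed. Roughly this rsync idiom (paths made up):

    rsync -a --delete \
        --link-dest=/media/backup/2010-02-01_13-00 \
        /home/dad/ /media/backup/2010-02-01_14-00/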

I put a £50 hard disk into an old Mac (I didn't pull it out of a dumpster but I did get it for free) that now serves as my Time Machine backup disk.

Apart from paying pounds for my hard disk, since I live in the UK, how is that different?

It was clear from the start that the AC troll doesn't represent any sort of starting point for discussion of Apple vs. totally open-source solutions, and it should also be clear to you that the Time Capsule isn't the only way to use Time Machine and is generally unsuitable for mo

Not being much of a Mac person myself, I only initially compared my setup to the Time Capsule because of the AC's apparent love for Macs.

My fiancée has the only Mac in our house (triple-booting), and at one point I set up my server to act as a Time Capsule for her. It still got the fancy interface.
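
For anyone wanting to replicate that: a Linux box can advertise a Time Machine-capable AFP share with netatalk. A sketch (the path and share name are made up, and newer OS X releases may need extra coaxing on the Mac side):

    # /etc/netatalk/AppleVolumes.default
    /srv/timemachine "TimeMachine" options:tm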

Really, it's just a matter of preference. I prefer the feel of a keyboard to a mouse, and would rather type out a few commands than drag and drop icons. My initial comment was more against the AC's financial

Money does not concern me. I'm far more concerned with getting the most quality for the price. In cruises, quality really depends on what line you go with. Sure, you can go with the bottom-dollar line, offering little more comfort than steerage on a freighter, but a better line is still quite luxurious. My preferred line is pretty nice. It's not the most expensive line, but offers quite enough amenities to suit me.

Dell and a few other companies have been bitten by the bug. Who you source your parts from determines the quality (and longevity) of your computers (and your reputation). I have no doubt that the parts going into a $120 Intel-brand motherboard cost a few cents more each than a similar ASRock or ECS Elitegroup board that costs half as much. You get what you pay for. Intel stuff generally doesn't break in the same decade you buy it. You're lucky to

That's pretty much the experience I've had. My dad bought a Gateway desktop in 2006 (S-939 Athlon 64) and just outside of the [short] warranty period the motherboard started dying. The on-board video started going, so I got him a cheap video card to circumvent the problem. That worked for almost a year until the south-bridge died too. All in all I think the box lasted about 3 years. In comparison, I built my own system with similar specs in 2005, but with decent brand-name parts, and I've had very very few

You're lucky to make it to the end of the warranty period with noname crap from newegg or Frys.

If it breaks before the warranty period expires, you get a new one. If you make it to the end of the warranty period before it breaks, you have to buy a new one. So you may wish to rethink that statement ;).

You still have to mail it back to the manufacturer, wait for them to confirm it's broke, and then have them mail you a new one from China. Plus crack open the computer and deal with all the MS "you've installed new hardware" BS. For a $40 piece of equipment. Getting a warranty replacement from a bargain-bin manufacturer might take longer than the warranty is good for (three months). Gigabyte and ASUS are better about their parts, but you have to ask yourself "how much BS am I willing to put up

If only I didn't have to write a script for Ubuntu to reload my wireless modules periodically. At least the brightness control actually controls my backlight, so I didn't have to script a hack like I did for 7.04. There's more bullshit, I just don't remember it now. I only wish I didn't have to use the CLI in Ubuntu.
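
The script, for the curious, doesn't need to be anything fancy; something like this does it (the module name is a placeholder -- use whatever lsmod shows for your card, and run it as root):

    #!/bin/sh
    # reload the flaky wireless driver; run from cron or bind to a key
    modprobe -r iwlagn
    sleep 2
    modprobe iwlagn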

Ubuntu has gotten easy enough to work for the technically capable person who doesn't want to bother too much with details. It's still not something I'd recommend to an average user, unfortunately.

The wireless thing... well, maybe it is. But since Intel is apparently cooperating to get decent Linux drivers developed, it's hard to believe that I could do much better except by getting lucky. (http://intellinuxwireless.org/)

If there is no systematic approach by which I can get good, up-to-date hardware that's Linux-compatible, then claiming "hardware incompatibility" is kind of a weak excuse.