Posted by timothy on Sunday June 10, 2012 @10:16AM
from the nostalgia-for-moderns dept.

First time accepted submitter Jizzbug writes "The X Window System made release X11 7.7 last night (June 9th): 'This release incorporates both new features and stability and correctness fixes, including support for reporting multi-touch events from touchpads and touchscreens which can report input from more than one finger at a time, smoother scrolling from scroll wheels, better cross referencing and formatting of the documentation, pointer barriers to control cursor movement, and synchronization fences to coordinate between X and other rendering engines such as OpenGL.'"

THIS is a very important idea. Sound is blocked from the X protocol by an Italian-Apple conspiracy funded by the Vatican and the CIA to keep X from its rightful place as the Queen of Multimedia Web 3.0 Compurtrterizining.

It used to be very crash prone, laggy, and with a whole load of audio glitch issues. It has not caused any serious issues for me for about 2 years now, and complaints from new users have dropped dramatically, so it seems to have improved quite a bit. You do not want to use it for low latency audio, and there are a few specific pieces of hardware that do not work, but most complainers either oppose its design principles or still hate it due to long memories rather than current issues.

I recently updated from debian squeeze to wheezy, and in the process it reinstalled pulseaudio. Surprise, surprise: my computer crashed frequently until I got rid of it again.

Sound on Linux has been very problematic the entire time I've been using it -- since the late '90s. It's turned into this weird tinker-toy arrangement where nothing quite works right, and debugging problems when you have them is extraordinarily painful.

Right now, the best solution I've found is to nuke all of the ALSA, pulseaudio, and other userland crap and go with OSSv4 -- it's been very stable for me over the last few years, and since it's self-contained at least solving problems doesn't take finding a needle in a haystack. The biggest downside is that (at least, AFAIK), it's not supported by mainstream distros, so if you're not comfortable recompiling your kernel and modules it's not usable.

Maybe, maybe not. Depends on what someone means when they say "crash". For instance, suppose your desktop environment suddenly crashes, leaving you back at a login prompt and losing all your work. To most users, that's a "crash", even though there may be no problem at all with the kernel or drivers, only with the desktop environment which runs in userspace.

So, what are you supposed to use it for? Given that gaming isn't exactly Linux's forte, and audio production is a more likely niche, which requires low latency audio, what problem is pulse trying to solve? Having multiple audio stacks because the primary one can't do low latency makes no sense.

Meanwhile, FreeBSD has done multi-channel audio out of the box without any grief since at least 2005.

What makes it so bad is that it sits between the stuff that actually does something and the application you're using, replicates functionality of pre-existing software whilst increasing latency, and when things do go wrong, purging it from the system has so far done wonders.

YMMV, so enjoy. I haven't found a compelling reason for it yet other than that it's widely adopted and getting harder to avoid. That does not make me smile.

Compelling reason for it? It makes my microphone work right in TF2 via Wine (likely because of resampling - Audigy 2 sound card). For everything else on that system, ALSA is fine with defaults due to the hardware mixer on that card.

On other systems, however... I've had great luck with it on my laptop and its crummy DAC, and especially being able to hotplug a USB DAC and have the sound come out of there automatically. And I didn't even have to edit config files! Lag is still an issue, though.

I realize you don't have any, but if you try getting it to work you will discover that pulse audio is the only solution on linux.

Pulse audio exists because OSS, ESD, ALSA, jack, and whatever else there is cannot provide universal plug and play audio for linux. Pulse Audio has had a long road to travel, and it is not truly finished, but it is getting close to plug and play audio in a lot of situations.

I think the reason people don't like it is that it introduces a fair number of problems, and it only has benefits in some rare, specific circumstances.

Pulse seems to have been introduced at the wrong level, for what it is. Because it works on top of ALSA, it relies heavily on some little-used functions, such as getting the true decibel level of the volume controls. (This causes the PA volume controls to fail for some hardware, such as muting the audio at 25%.) On the other hand, it doesn't make use of all ALSA functions, so it does resampling and mixing in software, instead of relying on (possibly superior) hardware. It also doesn't expose all functionality of the underlying devices, and I think it was difficult to get passthrough of digital audio to work about 6 months ago. So it's a rich API built on top of another rich API, offering little benefit, and introducing some bugs.

I should concede that it's a cross platform API, while Advanced Linux Sound Architecture doesn't work on things like BSD. Still, BSD people look at you strangely if you try to get Pulse working, and tell you to use OSS or OSS4.

That's because Pulseaudio was designed to solve issues that for the most part have never existed on BSD systems. The BSDs evolved their existing OSS-based audio subsystems to fix the few issues they had, whereas Linux chose to adopt a poorly implemented new system. I speak from experience, having tried to write an OSS shim for NetBSD that emulated the ALSA MIDI API, and becoming frustrated by the incomplete, inaccurate documentation. I was also bemused by the ALSA API itself, which looked like it was designed to be object oriented, but was actually implemented by people with no real understanding of good OO principles.

Exactly. I've had multi-channel audio working almost out of the box on BSD since at least 2005. It was a breath of fresh air after trying to get audio working *properly* on Linux in a multi-channel, multi application aware manner, dealing with enlightenment sound daemon for some stuff (but games wouldn't work with it), etc.

To be honest, I agree with its choice to mix/resample in software - most cards (by volume) are just dumb DACs, and the few that aren't (like my Audigy 2) have enough bugs to make it useless to try - just use ALSA straight on those cards, or only use Pulse for that application (which works perfectly well when you have a hardware mixer).

Pulse killed the possibility of a multitrack recording studio on Linux. You can still look back and witness the dramatic die-off of multitrack Linux users and discussion that coincides with the introduction of Pulse. It's like going back and looking at MySpace, or Friendster.

Right before Ubuntu brought in Pulse, I'd finally hit a sweet spot with Linux audio. With ALSA + qjackctl I was able to manage low-latency multitrack audio recording, and simultaneously have discrete control over the audio of all media players. Before pulse, I was able to use my terminal as a giant mixing board, managing recording and various media playback simultaneously. Different mixes and levels for different apps -- I was able to discretely control the audio levels and mixes for *each channel* in surround sound.

Pulse completely destroyed these capabilities: it eliminated the low-latency capability necessary for multitrack recording, replaced it with frequent crashes and inconsistent behavior, and was tied in so deeply that Ubuntu has never since been capable of the audio layout I'd been using about five years ago.

Pulse is the single worst Linux move I've ever seen. In the interest of removing audio from kernel space (necessary for low latency), it simply eliminated what used to be advanced capabilities. Lennart Poettering, the author of Pulse, simply disregarded these concerns, waved his hands and said "those aren't the concerns Pulse was designed to address!"

Pulseaudio is possibly the best thing that has ever happened to linux. The introduction was unquestionably problematic, but dmix, which it replaced, will not be missed. It also contains features that decrease the number of interrupts needed, and it didn't just work around problems in hardware but fixed them. If you feel like it you can STILL use jack; in fact, with an -rt kernel I ran pulseaudio on top of jack at a 2.5ms maximum latency.

Why not use jackd directly on top of alsa and pulseaudio as a client for jack? You could use pulseaudio for all desktop stuff that doesn't need low latency, which I believe would only use one jack slot (or whatever it is called), and all latency-critical things could connect directly to jack.

I removed it via a hack... by doing a forced mv on the pulsecrap bin... hackish, very hackish.

Now I just remove the packages in synaptic and let ALSA take over in Phonon like it should.

So just remove it and go back to doing what you were doing.

Yes, it's unneeded and annoying to have to do this, but just like that other PITA project, WAYLAND, those too young to have been alive when X or ALSA came about have their heads buried in what is the "better path."

I started learning *nix w/ FreeBSD back towards the end of the 4.x series, and stuck with FreeBSD through the beginning of 6.x. As fond as I am of the BSDs, they are fundamentally incompatible with multitrack *recording*.

Multitrack recording is where you record a track or tracks while listening to and playing along with a prerecorded track. This requires extremely low latency -- you need to be able to play something and hear it back as close to instantaneously as possible.

So you really think a human can perceive a sound desynchronization of six MILLIONTHS of a second? That's a period in which sound travels two MILLIMETERS. You could move the position of your head imperceptibly and incur a delay of that much. Do you really think moving the violins 2 mm with respect to the horns is going to be perceptible in a concert hall?

Weird... I wanted that feature, and that's exactly why I was installing PulseAudio for a year before Ubuntu picked it up as a standard. PulseAudio makes per-app mixing just work, whereas before Pulse came around I had never seen any OS do that since the BeOS.

It lies at the root of the majority of the problems I have with my WebOS phone. Why they chose pulse on a device that DEPENDS on doing audio with multiple sources and outputs simultaneously is beyond me. Buggy and unreliable as hell.

Hmm. This sounds a bit like the KDE 4.0 fiasco, except there the KDE guys stupidly said it was ready for mass adoption, even though it clearly wasn't. And then the GNOME guys did the exact same thing with GNOME 3.0.

It seems the Linux distro maintainers really don't bother to test their builds very much.

Some years back it was quite tricky to get working in distributions that didn't pre-configure it; it required equal proportions of skill and dumb luck. If you stuck to the more user-friendly stuff like ubuntu, mint, etc., it's likely you never would have experienced issues. I remember a great deal of frustration before finally just going back to alsa on arch. Damn thing was impossible...

But, like the former difficulty of wifi, it's mostly a distant memory. Mostly...

I've been using Ubuntu/Xubuntu for the last 6 years, and I like playing games in emulators like MAME and Mednafen. Pulseaudio causes delays, sometimes of several seconds, with the libsdl pulseaudio package installed.

I've never had a problem with it either. I also quite like Gnome 3, systemd, etc. Satisfied people are less likely to make a song and dance about their experiences though, so the moaning always dominates.

They don't whine and bitch and moan. So they are far less visible. That's the problem with stuff in general. Most people that aren't having problems simply aren't motivated to declare that things are fine.

I use pulseaudio on two different computers, on Linux Mint and Kubuntu. Never had any problems with it.

That said, I don't know that it's the greatest solution to the problem. It seems like OSS4, where multiple programs can write to /dev/dsp simultaneously (as I understand it), is an architecturally superior solution to doing this in userspace. Then again, it seems like it'd make even more sense to build sound into X (or its successor), so that people running remotely will have both video and audio redirected.

Sound works fine on FreeBSD, no need for ugly hacks like PulseAudio, just in-kernel low-latency sound mixing and a full OSS4 implementation, complete with per-application volume controls, surround sound, and all of the features you'd expect of a modern operating system.

That's one thing the BSDs got right. However, Linux had an old and unmaintainable version of OSS, so ALSA had to kill it off in the kernel. PulseAudio is a nasty bloated buggy piece of crap; even old ESD works far better IMHO.

So the same guy that made system boot configuration and init scripts a huge pain in the ass with systemd is also responsible for screwing up our sound support? Somehow I am not surprised by this....

It was called 'NAS': Network Audio System. AFAIK it was output only, supported stereo audio, and had more or less died out by 2005. I used it a bunch prior to that as a much more reliable replacement for ESD/pulseaudio, however. It just worked, allowed me to stream audio to remote X sessions, and did it with pretty low network overhead. As a bonus, it used whatever your DISPLAY variable was set to as the remote end.

Coordinating with other engines? Isn't that the kind of thing that lets one use Wayland partially as a standalone server, side by side with X? Is this the 'feature' of stepping down and letting other servers or engines develop?
Maybe we'll evolve this way past Xorg.conf and its documentation; good riddance, moving from a wrinkled legacy to a more sane and friendly approach. I love X when it works; it's unbearable when it doesn't.

Except one of the biggest contributors to Wayland is Keith Packard, the guy who forked X.org from XFree86. I'm pretty sure he has a good grasp on windowing issues. Not sure he's the person to write something from scratch, though, especially when all the low-level I/O details were already worked out decades ago; they don't have any experience with viable alternatives. The RPC mechanism in Wayland is truly horrendous.

I don't know about problems with the X windowing system, but I bet some of them will stick with Wayland even though it's supposed to be new stuff. Still, I really hope I'll never have to dig trenches just to get it back running, or at least running in some kind of fallback/safe/minimal mode. This is my only hope concerning vaporware and (may I add) pink unicorn glittering p(h)ony features. Even though my sarcasm didn't climb this tree of comments, I'm happy I got to read a comment to the point. Thank you.

It's so not complicated to export a window to another display. Just set an environment variable to tell it where to display and authorize the machine on the remote display. Nothing to it. Despite that, the companies I've worked at that have needed to do this just set up VNC and work on the machine remotely instead.

If you want to do something like Sun, where you authenticate and it finds your session out there and pops it up on your machine, that's a bit more complicated. Pretty cool, but more complicated.

'remote windowing feature'? That's like saying http has a 'remote web page download feature' because you can connect to an http server from another machine. The whole point of X is that it is a network protocol from the ground up. It's designed for environments where applications are run over networks; unfortunately nowadays the PC model of computing has won, which is why 'remote windowing' looks like an extra 'feature'.

It MAY LOOK this way, but "cloud" computing is nothing more than the resurrection of the time-share systems of the past on mainframes and minis.

XDMCP and X via SSH will play an even more important role in this "new cloud" world.

The fact that the developers of certain software, cough wayland cough, were not even alive when X and thin computing came about, or when people used a TTY60 on a dial-up modem, shows that forgetting history is just as applicable to computers/technology as it is to the real world.

It MAY LOOK this way, but "cloud" computing is nothing more than the resurrection of the time-share systems of the past on mainframes and minis.

Only if you define "cloud computing" as "running jobs on shared infrastructure with quotas". Mainframes and minis of the past didn't have replicated filesystems, didn't scale to multiple sites and certainly didn't let you run potentially untrusted code in a sandbox.

Cloud computing is to car pooling as a mainframe is to mass transit. It's certainly cheaper and more efficient to send 1000 people from A to B by bus or train than to use the equivalent number of cars, but it only works if a critical mass of people are making the same trip.

Did Linux ever get an equivalent to DirectDraw? I know there is svgalib, but I thought that was equivalent to full screen DOS programs on Windows 98, since it could not share the screen. The news about coordinating rendering engines sounds neat, like you could safely get access to video memory and bypass any windowing systems, but still cooperate with a windowing system.

Not exactly, but the concept of direct access to video memory became unimportant. There's really no use for it at this point. Graphics hardware is sufficiently complicated that there is no useful way to "just get a pointer to video memory." The concept really no longer exists. If it did exist, it would be completely different on different hardware. And, it would be horribly slow.

Instead, you have OpenGL. You can make a texture on the CPU, upload it to the GPU, and draw it while "cooperating with the windowing system."

"The Linux framebuffer is a way to write directly to the video framebuffer"

No, the linux framebuffer is essentially abstract video hardware, so the video drivers can be moved into the kernel instead of X windows having to have them. Unfortunately this means every write to the display now has to go via the kernel instead of going direct. X using its own specific video card graphics drivers avoids this and is in consequence a damn sight faster. And the programs I tested were using base Xlib on a raw X server.

Unfortunately this means every write to the display now has to go via the kernel instead of going direct.

No, if you mmap() the framebuffer you (as in the application) write directly to video memory (of course, what you say is true if the application uses lseek() and write() instead, but why would it?). The reason it's slower is because drivers make the GPU do all the heavy lifting. The only abstract thing about the Linux framebuffer is that you don't have to map the memory yourself, the kernel drivers do it for you.

I remember a few years ago hearing that there were plans to incorporate NX technology into X; what's the status of that?
I run NX sessions over a slow internet connection to a remote machine and it works well; standard remote X is unusable for me.