Even after reading the FAQs and descriptions on the related project pages, I'm still not sure I really understand what those software packages are good for.

Well, ALSA is a hardware abstraction layer for audio devices, so, if you want your application to output sound to a sound device, it should talk to ALSA (e.g. via an ALSA output plugin).

Ok.

And JACK, aRts, esd etc. are some kind of "sound servers", making it possible to direct sound to them and possibly do some crazy stuff with the audio streams, like adding an echo or simply mixing them together, right?

So, without a sound server, it would be like:
app -> ALSA -> sound card

and with sound server:
app -> sound server -> ALSA -> sound card

Right?

Well, basically, I have disabled all those sound servers in my make.profile, since I'm thinking "What do I need those for if my favourite apps have an ALSA output plugin?".
Even all of my KDE apps work without aRts.

So, from a pure user point of view, my questions:
What benefits could I get from one of those sound servers, e.g. if I switched my bmp ALSA output to one of them?
Where's the difference between them?

Is it possible to solve the following problem with one of those sound servers, and if so, which one would be the best to choose?
The basic problem is that I would like to adjust the volume of my multimedia apps independently of each other, for example:
- mute xine while still being able to listen to the Skype buddy I'm currently talking to
- increase only the Skype volume, or only my favourite media player's
etc.

If it's not possible to do that with one of the sound servers currently available, how else could I achieve those goals? Or is it already possible with ALSA? Oh, and it should be easily configurable via a GUI, of course :)

If you have a sound card with no hardware mixing, and don't have ALSA set up with dmix correctly, sound servers will let you use two programs that generate sound at once. As long as they both use the same sound server for output, of course. Some of them also add some network capability.
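To make the dmix route concrete: software mixing without any sound server can be set up in `~/.asoundrc`. This is only a sketch; the `hw:0,0` device name, the sample rate, and the `ipc_key` value are assumptions — check your own card with `aplay -l`.

```
# ~/.asoundrc sketch: route the default PCM through dmix
pcm.!default {
    type plug
    slave.pcm "dmixer"
}

pcm.dmixer {
    type dmix
    ipc_key 1024          # any unique key; shared by all apps using this PCM
    slave {
        pcm "hw:0,0"      # assumed first card/device -- see `aplay -l`
        rate 48000        # all streams are resampled to this rate before mixing
    }
}
```

With that in place, every plain ALSA app that opens `default` gets mixed in software, no server needed.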

Software mixing is the big one as far as I can see.
However my desktop has a vaguely decent (i.e. not built-in) sound card, so it doesn't need it: ALSA works just fine.
My notebook could use software mixing, but I don't bother, because I don't often have even one audio stream on it, let alone two.

I won't touch aRts since it got installed when I first installed Linux; it was a total disaster.

In the programs that you run, just set the audio mixer control setting to "software".

Yes, I know about that option, but with software mixing enabled it's only possible to adjust the volume up to the current hardware mixer level, e.g. if my PCM control is set to 50%, then increasing the volume in my app to 100% won't exceed the 50% of my PCM.

Is that behaviour just a design decision, or is it generally impossible to exceed the system volume with a software mixer?

Another problem is that there are apps out there that don't offer you a choice between hardware and software mixing. amaroK, for example, does software mixing by default with no option to switch to hardware mixing, the xine GUI crackles when adjusting the volume with software mixing enabled, and LICQ doesn't offer a volume adjustment at all.
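On the "can software volume exceed the hardware level" question: ALSA's softvol plugin can apply digital gain above 0 dB, i.e. beyond what the mixer level alone allows. A sketch only — the PCM name, control name, and gain limit below are assumptions, and pushing `max_dB` high will clip:

```
# ~/.asoundrc sketch: a PCM with its own software volume control
pcm.boosted {
    type softvol
    slave.pcm "default"
    control {
        name "Boost"      # shows up as a new control in alsamixer
        card 0
    }
    max_dB 20.0           # allow digital gain above 0 dB (at the cost of clipping)
}
```

An app pointed at the `boosted` PCM (e.g. `aplay -D boosted file.wav`) then gets its own "Boost" slider, which also hints at one answer to the per-app volume question: one softvol PCM per application.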

mnxAlpha wrote:

If you have a sound card with no hardware mixing, and don't have ALSA set up with dmix correctly, sound servers will let you use two programs that generate sound at once. As long as they both use the same sound server for output, of course. Some of them also add some network capability.

OK, as far as I understand the ALSA documentation, dmix is enabled by default in versions 1.0.9_rc2 and above. And as far as I remember, it has always been possible to have multiple apps play sound on my system, so either my sound card supports it or dmix was and is set up correctly on my system.

btw:
How could I reliably determine if my sound card supports hardware mixing of streams?
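One practical way to answer that: open the raw hardware device twice at the same time, bypassing dmix. If the card mixes in hardware, both streams play; if not, the second open fails. The device name and file names below are assumptions — list your cards with `aplay -l`.

```shell
# Address the raw hardware device directly so dmix can't help out
aplay -D hw:0,0 one.wav &
aplay -D hw:0,0 two.wav
# If the second aplay aborts with "Device or resource busy",
# the card cannot mix multiple streams in hardware.
```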

Besides the network capability, I still can't see what a sound server could be good for in my case. As far as I understand it at the moment, ALSA can do everything a sound server can do.

Archangel1 wrote:

I won't touch aRts since it got installed when I first installed Linux; it was a total disaster.

What kind of problems did you have with aRts? The only thing I noticed when I tried it out was that sound output was disturbed under heavy load, dramatically more so than with direct ALSA output.

Well, I don't have any experience with audio development, neither under Linux nor under any other OS, but if every app is using ALSA to output its sound, then, in theory, it should be possible to adjust the volume of different streams in a generic way with ALSA, or not?

JACK is designed for low-latency (professional) audio applications and allows applications to share (audio) data easily. I plan to use JACK for my synthesizer. It also has an easier API than ALSA imho.

JACK is designed for low-latency (professional) audio applications and allows applications to share (audio) data easily. I plan to use JACK for my synthesizer. It also has an easier API than Alsa imho.

JACK is pretty good, actually. It is, like you said, aimed at professionals, but its low latency makes it attractive for normal users as well. JACK will very likely become the standard sound server for Linux; Ardour, for example, doesn't even run without JACK.
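For anyone who wants to try JACK as a plain user, a minimal session looks roughly like this. The device, sample rate, and the client/port names in the last line are assumptions; real port names come from `jack_lsp` once clients are running.

```shell
# Start the JACK daemon against the first ALSA card (hw:0 is an assumption)
jackd -d alsa -d hw:0 -r 44100 &

# List the ports JACK-aware applications have registered
jack_lsp

# Patch an application's output into the sound card by hand
# (client:port names below are placeholders, not real defaults)
jack_connect "some_app:out_1" "system:playback_1"
```

The ability to re-patch any app's output into any other client at runtime is the part no plain ALSA setup gives you.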