Why asoundrc?

What is it good for, why do I want one?

Neither the .asoundrc nor the asound.conf file is required for ALSA to work properly. Most applications will work without them. They are used to allow extra functionality, such as routing and sample-rate conversion, through the alsa-lib layer.

The .asoundrc file

This file allows you to have more advanced control over your card/device. The .asoundrc file consists of definitions of the various cards available in your system. It also gives you access to the pcm plugins in alsa-lib. These allow you to do tricky things like combine your cards into one or access multiple I/Os on your multichannel card.

Where does asoundrc live?

The asoundrc file is typically installed in a user's home directory

$HOME/.asoundrc

and is called from

/usr/share/alsa/alsa.conf

It is also possible to install a system wide configuration file as

/etc/asound.conf

When an ALSA application starts, both configuration files are read.

Below is the most basic definition.

The default plugin

Make a file called .asoundrc in your home and/or root directory.

vi /home/xxx/.asoundrc

Copy and paste the following into the file, then save it.

pcm.!default {
    type hw
    card 0
}
ctl.!default {
    type hw
    card 0
}

The keyword default is defined in the ALSA lib API and will always access hw:0,0 — the default device on the default soundcard. Specifying the !default name supersedes the one defined in the ALSA lib API.

Now you can test:

aplay -D default test.wav

The naming of PCM devices

A typical asoundrc starts with a 'PCM hw type'. This gives an ALSA application the ability to start a virtual soundcard (plugin, or slave) by a given name. Without this, the soundcard(s)/device(s) must be accessed with names like hw:0,0 or default. For example:

aplay -D hw:0,0 test.wav

or with ecasound

ecasound -i test.wav -o alsa,hw:0,0

The numbers after hw: stand for the soundcard number and device number. This can get confusing, as some sound "cards" are better represented by calling them sound "devices", for example USB sound devices. However, they are still "cards" in the sense that they have a specific driver controlling a specific piece of hardware. They also correspond to the index shown in

/proc/asound/cards

As with most arrays, the first item usually starts at 0, not 1. This is true for the way pcm devices (physical I/O channels) are represented in ALSA, starting at pcm0c (capture) and pcm0p (playback).
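To see what your system actually provides, you can look at the proc interface (the card number 0 below is just an example; the output depends on your hardware):

cat /proc/asound/cards
ls /proc/asound/card0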

We use subdevices mainly for hardware which can mix several streams together. It is impractical to have 32 devices with exactly the same capabilities. The subdevices can be opened without a specific address, so the first free subdevice is opened. We also temporarily use subdevices for hardware with a lot of streams (I/O connectors), for example MIDI. There are several limits imposed by the minor numbers used (8 PCM devices per card, 8 MIDI devices per card, etc.).

For example, to access the first device on the first soundcard/device, you would use

hw:0,0

to access the first device on the second soundcard/device, you would use

hw:1,0

to access the second device on the third soundcard/device, you would use

hw:2,1

The Control device

The control device for a card is the way that programs modify various "controls" on the card. For many cards this includes the mixer (but some cards, for example the rme9652, have no mixer). However, they do still have a number of other controls and some programs like JACK need to be able to access them. Examples include the digital I/O sync indicators, sample clock source switch and so on.
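To get an idea of what controls a card exposes, you can list them with amixer (card 0 here is only an example):

amixer -c 0 controls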

Aliases

With the 'PCM hw type' you are able to define aliases for your devices. The syntax for this definition is:
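A minimal sketch of such an alias might look like this (the name primary is just an example; pick card and device numbers to match your hardware):

pcm.primary {
    type hw
    card 0
    device 0
}
ctl.primary {
    type hw
    card 0
}

You can then address the device by its alias, for example:

aplay -D primary test.wav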

Plugins

Q: What are plugins?

A: In ALSA, PCM plugins extend functionality and features of PCM devices. The plugins deal automagically with jobs like naming devices, sample rate conversions, sample copying among channels, writing to a file, joining sound cards/devices for multiple inputs/outputs (not sample synced), using multi channel sound cards/devices and other possibilities left for you to explore. To make use of them, you need to create a virtual slave device.

To see a full list of plugins and options, go to the alsa-lib documentation. The following is a brief introduction.

A very simple slave could be defined as follows:

pcm_slave.sltest {
    pcm "hw:1,0"
}

This defines a slave without any parameters. It's nothing more than another alias for your sound device. The slightly more complicated thing to understand is that parameters for 'pcm types' must be defined in the slave definition block. Let's set up a rate converter which shows this behaviour, as sketched below.
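For example (the slave name sl2 is just an example; the card hw:1,0 matches the slave defined above):

pcm_slave.sl2 {
    pcm "hw:1,0"
    rate 44100
}

pcm.rate_convert {
    type rate
    slave sl2
}

You can then play through it with:

aplay -D rate_convert test.wav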

This automatically converts your samples to a 44.1 kHz sample rate while playing. It's not very useful, because most players and ALSA itself convert samples to a rate your soundcard is capable of, but you can use it, for example, to force conversion to a lower fixed sample rate.

For conciseness, this can also be rewritten using nested device definitions:

pcm.rate_convert {
    type rate
    slave {
        pcm "hw:1,0"
        rate 44100
    }
}

A more complex tool for conversion is the pcm type plug, which can convert the sample format, the number of channels and the rate at once.
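A minimal sketch of a plug definition might look like this (the name complex_convert and the card hw:0,0 are only examples):

pcm.complex_convert {
    type plug
    slave {
        pcm "hw:0,0"
        format S16_LE
        channels 1
        rate 16000
    }
}

Test it with:

aplay -vD complex_convert test.wav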

While playing, the samples are converted to the sample format S16_LE, one channel and a 16 kHz sample rate. Because aplay was called with the verbose option -v, you see these settings as they come out of the plug plugin. With:

aplay -v test.wav

you see the original settings of the file.

Software mixing

Software mixing is the ability to play multiple sound files or applications at the same time through the same device. There are many ways to have software mixing in the Linux environment. Usually it requires a server application such as ARTSD, ESD, JACK... The list is large and the apps can often be confusing to use.

dmix

These days we have a native plugin for ALSA called the dmix (direct mixing) plugin. It allows software mixing with an easy-to-use syntax and without the hassle of installing and understanding a new application first.

A very interesting and potentially extremely useful aspect of this plugin is using it combined with the default plugin name. In theory, this means all applications that have native ALSA support will share the sound device. In practice, not many applications are able to take advantage of this functionality yet. However, if you have time to test and report your findings to the application developers, it is worth a try:
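A sketch along these lines should work (the name dmixer, the card hw:0,0, the rate and the period/buffer sizes are only examples; the ipc_key just needs to be unique on your system):

pcm.!default {
    type plug
    slave.pcm "dmixer"
}

pcm.dmixer {
    type dmix
    ipc_key 1024
    slave {
        pcm "hw:0,0"
        period_time 0
        period_size 1024
        buffer_size 4096
        rate 44100
    }
}

With this in place, every ALSA-native application that opens the default device is mixed through dmix.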

Virtual multi channel devices

It is possible to link two or more ALSA devices together so that you have one virtual multi-channel device, as in the example below. However, this will not create the mythical "multi-channel soundcard out of el-cheapo consumer cards"; the real devices will drift out of sync over time. It is sometimes helpful to make applications see, for example, one 4-channel card to allow for flexible routing if they can't easily be made to talk to multiple cards (making use of JACK being one example).
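A sketch of such a definition, using the multi plugin (the name virtual4, the card numbers and the 2+2 channel layout are only examples):

pcm.virtual4 {
    type multi
    slaves.a.pcm "hw:0,0"
    slaves.a.channels 2
    slaves.b.pcm "hw:1,0"
    slaves.b.channels 2
    bindings.0.slave a
    bindings.0.channel 0
    bindings.1.slave a
    bindings.1.channel 1
    bindings.2.slave b
    bindings.2.channel 0
    bindings.3.slave b
    bindings.3.channel 1
}

ctl.virtual4 {
    type hw
    card 0
}

This presents channels 0-1 of the first card and channels 0-1 of the second card as a single 4-channel device.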

Bindings explained

The above example for a virtual multi-channel device uses bindings to make the connections work. The following is a more advanced asoundrc for two RME Hammerfalls, which are professional multichannel sound devices. Below is a full explanation of how bindings work.

There are two sound cards which are linked with a word clock cable. That allows them to keep sample sync with each other, which is very important for multichannel work. If the sample clocks are not in sync, your sounds go out of time with each other.

Each sound card has a number of physical channels (26 + 26). They are represented in /proc/asound/cardN as pcmXc (capture) and pcmXp (playback), where X is the number of the physical input/output (I/O) device, starting at 0.
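An abbreviated sketch of that configuration (the aliases rme9652_0 and rme9652_1 and the name multi_hammerfall are assumptions; a complete file would continue the bindings up to channel 51):

pcm.rme9652_0 {
    type hw;
    card 0;
}

pcm.rme9652_1 {
    type hw;
    card 1;
}

pcm.multi_hammerfall {
    type multi;
    slaves.a.pcm rme9652_0;
    slaves.a.channels 26;
    slaves.b.pcm rme9652_1;
    slaves.b.channels 26;
    bindings.0.slave a;
    bindings.0.channel 0;
    bindings.1.slave a;
    bindings.1.channel 1;
    # ... bindings 2 to 25 continue on slave "a" in the same way
    # ... bindings 26 to 51 map to channels 0 to 25 of slave "b"
}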

If you look at the lines:

type multi;
slaves.a.pcm rme9652_0;
slaves.a.channels 26;

You can see that the card has been nicknamed "a" and given a range of 26 channels. You can assign the card any number of channels you want, but you can only use as many channels as the card has physically available. The bindings start at the first available pcm device for the card, i.e. pcm0c and pcm0p, and move upwards sequentially from there.

The first binding points to the first available pcm device on the card represented as "a". The second binding points to the second available pcm device on "a" and so on up to the last one available. We then assign a channel number to the binding so that the channels on the new virtual "soundcard" we have created are easy for us to access.

Another way of saying it is:

address of.the first channel on my new soundcard.using my real soundcard called "a";
make this address of.the first channel on my new soundcard.be the first pcm device on my new soundcard;

address of.the second channel on my new soundcard.using my real soundcard called "a";
make this address of.the second channel on my new soundcard.be the second pcm device on my new soundcard;

Referenced applications

aRTsd - the aRTs sound server is the basis of desktop sound for KDE.

ESD - the Enlightened Sound Daemon mixes several audio streams for playback by a single audio device.

Ecasound - a commandline multitrack recorder and editor with various GUI apps.