minivosc ALSA driver

Introduction

This is a brief documentation/tutorial on the creation of the snd-minivosc ALSA (Advanced Linux Sound Architecture) driver. The name minivosc stands for minimal virtual oscillator; the driver aims to be an example of a minimal ALSA driver that simply represents a soundcard with a single capture interface, which streams a predefined waveform (and thus behaves as an oscillator in music technology terms). Note that playback is not handled in this driver (nor is any sort of realtime control of the oscillator, such as pitch).

While all these documents certainly provide valuable introductory points, they are not especially forthcoming about basic problems inherent in programming soundcard drivers, and they do not provide a full working code example of a driver. Iwai's document discusses an example of a hypothetical PCI device, whereas Collins' tutorial works with a real, though undisclosed, device.

Minivosc, on the other hand, is a 'virtual' device driver, in the sense that it does not communicate with real external hardware - and can therefore be used to illustrate problems in soundcard driver writing that exist entirely on the PC side.

This tutorial/write-up aims to serve as documentation of the development of minivosc and, in doing so, to introduce basic problems in soundcard drivers in as simple terms as possible - and thus to serve as an addition to already existing ALSA driver resources.

Starting points

Initially, the search for source code suitable as a starting point began by looking at the ALSA source files integrated into the current Linux kernel source (2.6.32 on the development machine at the time). There, the most obvious place to start is sound/drivers/dummy.c, which produces the snd-dummy driver. It should be a good place to start, because snd-dummy is also a virtual driver (in the sense that it doesn't need external hardware); however, in spite of the name, this example is not at all trivial for a beginner to understand (see below for further discussion of dummy.c).

Additionally, one can follow Ben Collins: Writing an ALSA driver, and produce minimal driver code that will compile and load. However, such a driver will not do anything in particular when it is 'captured' (read from) or 'played' (written to), and as such it is difficult to use as an example for gaining further insight into the internals of ALSA. In spite of this, minivosc copies its snd_pcm_hardware structure (and some other code portions) from it.

The alsa-devel mailing list helpfully supplied a pointer in the post: "(alsa-devel) Help with dummy.c (where/how to write?)" to a file present in ALSA sources (but not kernel sources), drivers/aloop-kernel.c, representing a virtual 'loopback soundcard' device. It is this file that is taken as a base for minivosc - in fact, it can be said that minivosc.c is a somewhat simplified version of aloop-kernel.c.

Source files

As mentioned above, the minivosc source can be browsed here, or checked out from svn through:

It simply consists of a Makefile and the minivosc.c source file. Follow the instructions in 'Building and running' in order to work with it; note that by default it is a debug build, with debug statements (viewable in /var/log/syslog) enabled - see the 'Debugging' section for more.

In addition, ALSA driver beginners may want to take a look at the bencol source, that can be browsed here, or checked out from svn through:

The Makefile contains entries to build any of these source files as snd-bencol.ko; (un)comment the relevant lines before building. Also, note that of the three, only bencol-alsa-timer.c can produce some sort of a waveform (the others build and can be insmod-ded, but fail at capturing).

Building and running

One does not need to rebuild the entire Linux kernel (which can take up to several hours) in order to build the kernel module for the minivosc driver. Only the build dependencies needed for building the Linux kernel need to be installed (see Kernel/Compile - Community Ubuntu Documentation); after this, the files minivosc.c and the Makefile can be placed in a folder, and then in a terminal, after cd-ing to that folder, the following can be issued:

make clean && make

which should result in a kernel module file in the same folder, snd-minivosc.ko (which follows the ALSA naming convention, where related kernel modules are prefixed with 'snd-')

If this module had been built as part of the Linux kernel, then one could have used 'modprobe snd_minivosc' to load the module, and 'modprobe -r snd_minivosc' to unload it. However, since in the above example the module is built separately, we should instead use:

sudo insmod ./snd-minivosc.ko
sudo rmmod snd_minivosc

Note that snd_minivosc does not show up as a playback device at all - it is shown strictly as a capture device. Also, note that if you run alsamixer, and try to select the minivosc card, the program will respond with "This sound device does not have any controls.", which it indeed doesn't (however, the corresponding mixer controls code sections could be easily copied from aloop-kernel.c). Note that another great way to inspect audio devices is by using the alsa-info.sh script.

In the device list above, snd_minivosc is the second soundcard (card 1), and it has one capture interface (subdevice #0). Therefore, in order to capture from it, we can issue:

arecord -D hw:1,0 -d 2 foo.wav

where -D hw:1,0 refers to the choice of the second card, first capture interface. Note that if we do not specify any format parameters, man arecord states that:

The default is one channel.
...
If no format is given U8 is used.
...
The default rate is 8000 Hertz.

which means the above command will ask for 2 seconds of an 8 kHz mono stream with 8-bit resolution (that is, each sample will be represented by 8 bits - a byte, or an unsigned char) - and that is pretty much the only format that snd-minivosc will accept, as well. If the arecord command executes successfully, then we can also use an audio editor like Audacity to record (capture) from the minivosc soundcard.

Also, note that in Ubuntu 10.04, PulseAudio is started by default; when it's running, it allows access to the Sound Preferences mixer in Ubuntu. While PulseAudio is running, one can insert the minivosc driver module without a problem; however, trying to rmmod the module afterwards will fail. In that case, one can try to shut down PulseAudio with

pulseaudio --kill

and then try to remove the module afterwards. To get back access to the Gnome mixer, start pulseaudio again by using:

pulseaudio --start

Understanding ALSA driver architecture

Understanding the ALSA driver architecture can be quite a mouthful, as there are plenty of functions and structs that need to be present in order for a driver to function. Before we review those, let's first revisit the context of use of an ALSA driver, and consider the following diagram:

Soundcard driver context

The diagram represents a simplified abstraction of a mono in, mono out soundcard, connected to a PC through some sort of a bus (PCI, USB, ISA...). Obviously, for each input or output, we need an ADC or a DAC device (respectively) present on the soundcard; all the rest of the digital circuitry needed to interface these convertors to the bus (and the rest of the PC) is abstracted in the diagram as "Controller". As the CPU is, ultimately, in control of the bus, we can consider the soundcard driver to be a piece of software running on the CPU, which handles the transfer of data in each direction (playback or capture) between the soundcard and the rest of the PC (meaning CPU and memory).

In this case, the minivosc driver will - without actual hardware - present a soundcard with a single mono input to the rest of the system (and thus, the whole playback direction as on the diagram above would not exist for it).

At this point, it is important to mention a few words about the Linux driver model (see Documentation/driver-model/). We can inspect /sys/devices in a bash shell:

This tells us that the system recognizes 'isa', 'pci', 'platform' etc. devices (note that USB devices show up under the 'pci' bus). This is important, because an ALSA driver must receive a pointer to a corresponding driver structure in the _init and _exit functions:

hence, for a USB soundcard, struct usb_driver usb_audio_driver is used, along with usb_register(&usb_audio_driver) in _init

However, since dummy.c and aloop-kernel.c (as well as minivosc) do not represent any real hardware, they instead utilize the platform driver model (see /driver-model/platform.txt); that is:

struct platform_driver XYZ_driver is used, along with platform_driver_register(&XYZ_driver) in _init

Driver / device initialization

Assume now, that the diagram above represents a soundcard connected to the USB bus. Since USB devices are meant to support hot-plugging, the driver should be able to handle the situations where the device is plugged or unplugged while the computer is still on. Hence, the driver must differentiate between the moments when the driver is loaded or unloaded (in our case, that is when insmod and rmmod commands are executed); and the moments when the device itself is connected to, or disconnected from, the bus. The Linux kernel (see Anatomy of a kernel module object) and ALSA driver architectures provide several such predefined functions, which in the case of minivosc are:

Note that this is not the full scope of predefined functions (for more, see Iwai's documents); however, they are the necessary minimum needed for minivosc to perform. Here is a brief rundown of these functions - first the driver and device initialization functions:

In this case, once the minivosc driver is loaded via insmod, it always runs the alsa_card_minivosc_init and minivosc_probe functions one after another; and these two functions are enough to get the ALSA system to recognize and list a soundcard.

In addition, the following structures should be defined for the driver and device initialization functions - in minivosc.c:

Hardware parameters and PCM Interface functions

Now we need to define what happens when the driver gets used by userland audio software (such as arecord or audacity). Typically, audio software will request the driver to play back (or capture) at a given format (number of streams as in mono or stereo, choice of sampling rate and sampling resolution); the driver should then transfer data from userspace memory to the soundcard (in the case of playback), or from the soundcard to userspace memory (in the case of capture), at the requested format. These types of operations are handled by the so-called PCM operations ALSA functions. Note that in the case of ALSA, 'PCM' doesn't specifically mean pulse-code modulation, as noted in "ALSA project - the C library reference: PCM (digital audio) interface":

Although abbreviation PCM stands for Pulse Code Modulation, we are understanding it as general digital audio processing with volume samples generated in continuous time periods.

The allowed audio formats that the driver will accept, as well as the PCM operations functions, should be defined as structures, which in the case of minivosc are:

_pcm_open - runs each time a (sub)stream is opened (i.e. when you execute arecord; or press 'Record' in audacity).

_pcm_close - runs each time (sub)stream is closed (i.e. a couple of seconds after: arecord finishes executing; or pressing 'Stop' in audacity during active recording).

ioctl - special communication with hardware - since we have no actual hardware, we simply specify ALSA's snd_pcm_lib_ioctl

_hw_params - allocates kernel memory for a substream, according to requested format, through use of snd_pcm_lib_malloc_pages

_hw_free - frees kernel memory for a substream

_pcm_prepare - "This callback is called when the pcm is 'prepared'. You can set the format type, sample rate, etc. here. The difference from hw_params is that the prepare callback will be called each time snd_pcm_prepare() is called, i.e. when recovering after underruns, etc." (writing-an-alsa-driver.pdf)

_pcm_trigger - "This is called when the pcm is started, stopped or paused. ... At least, the START and STOP commands must be defined in this callback." (writing-an-alsa-driver.pdf)

_pcm_pointer - "This callback is called when the PCM middle layer inquires the current hardware position on the buffer. The position must be returned in frames, ranging from 0 to buffer_size - 1. This is called usually from the buffer-update routine in the pcm middle layer, which is invoked when snd_pcm_period_elapsed() is called in the interrupt routine. Then the pcm middle layer updates the position and calculates the available space, and wakes up the sleeping poll threads, etc." (writing-an-alsa-driver.pdf)

A typical sequence of the basic PCM operations steps, that can be seen in minivosc debug messages, is:

... which means that _pcm_prepare, _pcm_trigger and _pcm_pointer cannot be left empty, if we expect the driver to work :)

Device structure

As different functions of the driver may need access to information at different times, we must provide a structure that can be accessed and modified by these functions. In the case of minivosc, we use a single structure to represent both the device and the only available substream:

Note that for the most part, these variables are taken from aloop-kernel.c; however, both aloop-kernel.c and dummy.c are capable of handling multiple capture and playback substreams - and thus in those drivers, several structs (instead of a single one) are used, because arrays of structs must be implemented so as to represent multiple substreams.

Note also that the "_open" PCM operation is the first time the ALSA system makes a real pointer to a snd_pcm_substream available; hence, it is in this callback that we must ourselves set the pointer minivosc_device->substream to the real pointer passed by the system; otherwise, the rest of the PCM functions will not have the right pointer to work with (and hence kernel oopses and crashes can be expected).

Timing and memory (buffer) management

We now arrive at a slightly more complex part of an ALSA driver. We have already mentioned that minivosc corresponds to (or simulates the context in) the diagram above - except with only a single ADC (and no DAC), and thus with only the 'Capture' data transfer direction present. Even though this driver can thus, by definition, only support a single direction of data transfer (from the card to the PC), there are several possible strategies for it:

The PC repeatedly asks the card whether it has data to supply (polling); if it does, the PC handles the data transfer (copies data from the card to PC memory).

The soundcard generates a signal when it has data ready for the PC; upon this signal, the PC stops whatever it's doing, and handles the data transfer (copies data from the card to PC memory) (interrupt).

In principle, either of these approaches could be used so that the PC receives data one sample (which in the minivosc case is 8 bits, or a byte) at a time - however, that would be an inefficient use of computer resources. That is why within ALSA, data transfer encompasses multiple samples - chunks - at a time.

Going back to the Ben Collins tutorial, where capture is discussed, we can already start guessing why the minimal example built from that tutorial will not do anything: "The buffer I've shown we assume to have been filled during interrupt." (Ben Collins: Writing an ALSA driver: PCM handler callbacks) - seemingly, an interrupt generated by a device; however the interrupt function is in any case not provided.

So we can take a look again at aloop-kernel.c and dummy.c, where we can find that:

In other words: if there isn't actual hardware to generate interrupts, then we must set up some sort of a timer that will repeatedly trigger a function (which would correspond to a polling function) in our virtual soundcard driver. In this case, minivosc copies the timer API approach from aloop-kernel.c.

At this point, let's take a look at which PCM functions get called in minivosc after it had been triggered for start:

When the time set for the timer expires, the _timer_function callback runs, which calls _pos_update.

If _pos_update detects a difference in jiffies, it calls _xfer_buf which in turn calls _fill_capture_buf

in this case, when all is finished, _timer_start is called again

_pos_update can also be called independently by _pcm_pointer.

in this case, usually there is no difference in jiffies (delta is 0), in which case _pos_update quickly exits, not calling any other function.

At this point, let's include an excerpt from Jiffy (time):
In computing, a jiffy is the duration of one tick of the system timer interrupt. It is not an absolute time interval unit, since its duration depends on the clock interrupt frequency of the particular hardware platform.
...
Within the Linux 2.6 operating system kernel, since release 2.6.13, on the Intel i386 platform a jiffy is by default 4 ms, or 1/250 of a second. The jiffy values for other Linux versions and platforms have typically varied between about 1 ms and 10 ms.

So, to be more precise - _pos_update actually measures the time (in jiffies) elapsed since the last call to _timer_start, as the variable delta; only if delta is more than 0 jiffies is a call to _xfer_buf made, requesting a transfer of the amount of samples (data) that corresponds to the elapsed time in jiffies, according to the requested sampling rate, resolution and number of streams. Let's use this for the log above:

A sampling rate of 8000 Hz means we have to transfer data for 8000 samples each second for a single (mono) stream.

A sampling resolution of 8 bits = 1 byte means we have to transfer 8000 bytes each second.

And 8000 Bps is equivalent to (8000 / 250) = 32 bytes per jiffy (given that a jiffy is 4 ms for a 2.6 kernel)

Thus, when delta==2, then 2*32 = 64 bytes will be requested for transfer; when delta==1, then 32 bytes will be requested.

On the development machine, a typical pattern of changes of delta looked like:

which means the requests for byte transfers will repeatedly change between 32 and 64 bytes.

Now, how does the rest of ALSA know that such a requested transfer has been executed successfully? It does so by asking the driver what its current position in the buffer is, by calling its _pcm_pointer function; _pcm_pointer should return the buffer position in frames (and in our minivosc case, since we use a mono 8-bit stream, a frame is equivalent to the size of a single sample, which is a byte). The important thing to remember is that here 'buffer' does not refer to the sizes of these 'individual' transfers of 32 and 64 bytes - it refers to the size of the PCM buffer of the substream, which is determined in _prepare!

This is why we need to keep a variable for the position within this PCM buffer, minivosc_device->buf_pos, in our device struct; we can then update this variable for each 'individual' transfer, and return it whenever the ALSA middle layer asks for it through _pcm_pointer. (This becomes obvious if we comment out all commands that update minivosc_device->buf_pos - in that case, running a capture from Audacity will visibly show the record cursor being unable to move, and the process will eventually fail.) Note that in aloop-kernel.c, the main calculation of buf_pos occurs in _xfer_buf (in minivosc, it depends on the choice of copying algorithm).

Also note that how long after _timer_start the timer expires (so that _timer_function runs) is calculated in _timer_start as:

While we're here, let's also mention snd_pcm_period_elapsed. The _timer_function, after calling _pos_update and _timer_start, checks whether mydev->period_update_pending is 1 - if so, it calls snd_pcm_period_elapsed. period_update_pending is set when the condition if (mydev->irq_pos >= mydev->period_size_frac) holds in _pos_update, where:

This tells us that we must differentiate between the size of the PCM buffer, pcm_buffer_size (1536 bytes) - and pcm_period_size (48 bytes). In simple terms, we could understand this as: as soon as a new batch of 48 bytes has been written into the PCM buffer, the ALSA middle layer should be informed by calling snd_pcm_period_elapsed; and it is this call that finally, after all the buffer operations performed within the driver, makes the data available to audio software like arecord, which can then proceed with, say, recording this data to disk.

Interrupt Handler
The rest of pcm stuff is the PCM interrupt handler. The role of PCM interrupt handler in the sound driver
is to update the buffer position and to tell the PCM middle layer when the buffer position goes across the
prescribed period size. To inform this, call the snd_pcm_period_elapsed() function.
...
Interrupts at the period (fragment) boundary
This is the most frequently found type: the hardware generates an interrupt at each period boundary. In
this case, you can call snd_pcm_period_elapsed() at each interrupt.
...
High frequency timer interrupts
This happens when the hardware doesn't generate interrupts at the period boundary but issues timer
interrupts at a fixed timer rate (e.g. es1968 or ymfpci drivers). In this case, you need to check the current
hardware position and accumulate the processed sample length at each interrupt. When the accumulated
size exceeds the period size, call snd_pcm_period_elapsed() and reset the accumulator.
...
On calling snd_pcm_period_elapsed()
In both cases, even if more than one period are elapsed, you don't have to call
snd_pcm_period_elapsed() many times. Call only once. And the pcm layer will check the current
hardware pointer and update to the latest status.

More on memory (buffer) management

At this point, let us recall that minivosc simply repeats a short waveform, in order to generate a continuous tone. This waveform is specified as the array wvfdat within the driver code. Additionally, at each repetition, the waveform can be 'lifted' - that is, a constant value can be added to it - which is controlled by the minivosc_device->wvf_lift variable.

In the case of actual capture hardware, the driver would have to first collect the data from the card in some intermediate buffer (array) - and in the case of minivosc, that intermediate buffer is in fact wvfdat; the only difference from the hardware case being that it is pre-filled with data (in a real soundcard driver, it would have to be continuously updated with data from the soundcard).

Thus, we can state the following: regardless of whether we talk about a virtual or a real hardware driver, a key part of the driver's job is to transfer data from an intermediate buffer/array (here wvfdat) to the PCM buffer for that substream (here minivosc_device->substream->runtime->dma_area) - in 'individual' transfers of chunks, whose size is determined by the time elapsed since the last 'individual' transfer (or in other words, the time between two consecutive _timer_function runs).

And, since it turns out that, in this case, the intermediate buffer size (21) is less than the 'individual' transfer chunk size (32 or 64), we come to an interesting situation, not accounted for in the original aloop-kernel.c - displayed on the diagram below (the colors on the diagram match the colors used in the list above).

Buffer visualisation diagram.

As shown in the buffer visualisation diagram, due to intermediate (waveform) buffer/array size (wvfsz) being smaller than 'individual' transfer chunk size (count), we need to loop through the waveform buffer/array in order to fill a chunk request - and the waveform piece will not end at the end of the chunk request. In other words, the data will not be aligned at boundary.

That is why, although the 'individual' transfer chunk is not a real array, we have to treat it as such, because we need to keep a pointer for it (here dpos). In other words, if we want seamless looping of the waveform buffer, we need to keep three buffer pointers:

dev->wvfpos - where are we in the wvfdat array

dpos - where are we in the current chunk request

dev->buf_pos - where are we in the PCM substream buffer (...->dma_area)

To illustrate this, there are three copying algorithms one can choose from in minivosc's function _fill_capture_buf - simply by uncommenting the corresponding #define (and commenting the others):

COPYALG_V1 - here, chunks of wvfdat are copied to dma_area using memcpy in a loop

COPYALG_V2 - here, bytes are copied one by one from wvfdat to dma_area through assignment in a loop

COPYALG_V3 - a copy of copy_play_buf function's algorithm from aloop-kernel.c

Note that V1 and V2 calculate their own buf_pos in _fill_capture_buf, and they can both demonstrate seamless looping of the waveform:

Seamless loop example (V1 or V2) - the large spikes represent different buffer sizes of audacity and arecord.

Seamless loop V1.

Seamless loop V2.

V3 uses the same calculation of buf_pos as originally in aloop-kernel.c, and it does show that the waveform looping in that case is not seamless:

Problems in looping between audacity and arecord, due to differing buffer sizes, when the buf_pos is calculated in _xfer_buf as in aloop-kernel.c.

Problems in looping...

Also note that by uncommenting the #define BUFFERMARKS, we can insert bytes marking specific 'edges' of the buffer, which visibly illustrates the differences in PCM buffer sizes between arecord and audacity (note, you can use these buffer marks, even if no algorithm for copying is used, that is, without a waveform - however, buf_pos still has to be handled):

Illustration of buffer marks - and difference of buffer sizes between arecord and audacity.

buffer marks - 'zoomed' out.

buffer marks - 'zoomed' in.

Note that while audacity seems to 'speed up' its buffers after some periods, it will still report the same 32 and 64 bytes requests in the logs as arecord (?!)

One digital value is called sample. More samples are collected to frames (frame is terminology for ALSA) depending on count of converters used at one specific time. One frame might contain one sample (when only one converter is used - mono) or more samples (for example: stereo has signals from two converters recorded at same time). Digital audio stream contains collection of frames recorded at boundaries of continuous time periods.
General overview
ALSA uses the ring buffer to store outgoing (playback) and incoming (capture, record) samples. There are two pointers being maintained to allow a precise communication between application and device pointing to current processed sample by hardware and last processed sample by application. The modern audio chips allow to program the transfer time periods. It means that the stream of samples is divided to small chunks. Device acknowledges to application when the transfer of a chunk is complete.
Transfer methods in UNIX environments
In the UNIX environment, data chunk acknowledges are received via standard I/O calls or event waiting routines (poll or select function). To accomplish this list, the asynchronous notification of acknowledges should be listed here. The ALSA implementation for these methods is described in the ALSA transfers section.
...
ALSA transfers
There are two methods to transfer samples in application. The first method is the standard read / write one. The second method, uses the direct audio buffer to communicate with the device while ALSA library manages this space itself. You can find examples of all communication schemes for playback in Sine-wave generator example. To complete the list, we should note that snd_pcm_wait() function contains embedded poll waiting implementation.

snd_pcm_hw_params_set_access is used to set the transfer mode I've been talking about at the start of this document. There are two types of transfer modes:
* Regular - using the snd_pcm_write* functions
* Mmap'd - writing directly to a memory pointer
Besides this, there are also two ways to represent the data transfered, interleaved and non-interleaved. If the stream you're playing is mono, this won't make a difference. In all other cases, interleaved means the data is transfered in individual frames, where each frame is composed of a single sample from each channel. Non-interleaved means data is transfered in periods, where each period is composed of a chunk of samples from each channel.
To visualize the case above, where we have a 16-bit stereo sound stream:
* interleaved would look like: LL RR LL RR LL RR LL RR LL RR LL RR LL RR LL RR LL RR LL RR ...
* non-interleaved might look like: LL LL LL LL LL RR RR RR RR RR LL LL LL LL LL RR RR RR RR RR ...
where each character represents a byte in the buffer, and padding should of course be ignored (it's just for clarity).
Note that I emphasized 'might' in the non-interleaved case. The size of the chunks depends on the period size hardware parameter, which you can adjust using snd_pcm_hw_params_set_period_size. But in most cases, you want interleaved access.

So, given that we have SNDRV_PCM_INFO_MMAP_VALID in our _pcm_hw struct, and we never use snd_pcm_write* functions (but instead we use say memcpy to transfer data) - it would be safe to say that in minivosc, the mmap transfer mode is being used.

> I am working on the dummy driver provided with the ALSA. I took it from
> the linux kernel 2.6.20.1
> I build the module and load it. The XMMS seems to play ok (doesn't hang
> and all) but none of
> the recording application seem to record from the driver.
>
> Can the driver in current state work as the loop back cable between
> applications?
No, it's really dummy driver which eats playback samples and returns zero
samples for capture. Try use the snd-aloop driver.

The great thing is: you don't need a supported sound card anymore, as ALSA now has a dummy driver that does nothing! (No, it really does nothing, but some programs will work now that they believe there is a sound card available).

If we look again at the dummy.c driver, and we try to apply the same approach as in minivosc - that is, we simply try to copy bytes into substream->runtime->dma_area right before snd_pcm_period_elapsed is called - we will experience SEVERE crashes/freezes. The reason for this is a variable fake_buffers being set to 1 - in which case, the dma_area is, in fact, not allocated at all!

Therefore, the patch below shows the minimal changes that need to be made to dummy.c so that a few bytes at the beginning of the PCM buffer are written during capturing (which results in pulses at PCM buffer boundaries in the captured audio):

Let's just note that in this case, we use memset to write 4 bytes at the beginning of the PCM buffers; if we request (say via arecord) an 8-bit mono stream, then we will see four samples in the captured audio; if we ask for a floating point (32-bit) mono stream, then we will see a single sample in the captured audio (which makes sense, since a float is usually encoded using 4 bytes, that is - sizeof(float) is 4).

Debugging

One of the most problematic things in driver development is debugging them as kernel modules - and especially problematic are errors such as memsetting a null pointer (which is what happens if we try to write to dma_area in dummy.c while fake_buffers = 1). In such a case, the computer freezes, without time to generate printk kernel debug messages in /var/log/syslog - and the only way out of such a freeze is a hard reboot (power off and on).

In such a case, pretty much the first thing that pops to mind is to step the code in a debugger and identify the offending line. However, since in case of drivers we are talking about kernel modules (not userspace programs), the procedure for debugging them is not trivial.

Fortunately, there is a kernel debugger built into the Linux kernel since version 2.6.26, known as kgdb light. What this means is that, while for earlier kernels this functionality required recompiling the kernel - for kernels newer than 2.6.26, we can simply add arguments like

kgdboc=ttyS0 kgdbwait

to the GRUB boot entry for the operating system - and then when the OS boots, instead of loading the desktop etc., the boot process will in fact halt, and wait for a signal from the GNU debugger gdb. This signal needs to be delivered through a serial connection - and so, debugging a kernel using kgdb assumes having a second machine that will run gdb for debugging, connected to the machine that runs the kernel / module to be debugged via serial cable.

Notably, since newer PCs often don't even have a real RS-232 serial port, the only remaining approach to debugging with kgdb on a single PC is through the use of a virtual machine. Here, VirtualBox OSE was used (in principle Qemu or KVM could also be used; since the Intel Atom processor used here does not support hardware virtualization, VirtualBox is as good as any); a virtual hard drive was created in it, and the Ubuntu 10.04 command line version was installed using the Ubuntu minimal CD image. VirtualBox can then be set up in its Settings / Serial Ports to 'Enable Serial Port' and 'Create Pipe', where 'Port/File Path' would be a file like /tmp/vboxpipe.

Then, after adding 'kgdboc=ttyS0 kgdbwait' as GRUB2 boot options to the virtual image OS installation, we can boot the virtual image; after a while the booting process starts, and should show:
kgdb: Waiting for connection from remote gdb.

Then, in the host OS environment, in one terminal we can run
$ socat UNIX-CONNECT:/tmp/vboxpipe TCP-LISTEN:8040

after which control is passed from gdb to the kernel running in the virtual image, and the virtual kernel image completes booting. After that, breakpoints can be made by running:

$ echo g | sudo tee /proc/sysrq-trigger

at the virtual image bash prompt; or by using

#define BREAKPOINT() asm(" int $3");

in the driver kernel module code, and then calling BREAKPOINT(); wherever in the driver code we want. Obviously, if we have a breakpoint in, say, "_prepare" function, we first insmod the driver module in the VM image OS, and then should call arecord in the VM so that the driver is activated there.

Note that the vmlinux file used in the gdb call above is, in fact, the symbol file for the kernel; and the only way to obtain it is to rebuild the kernel. So although you don't need to rebuild the kernel simply to be able to break into gdb, you must rebuild the kernel in order to obtain the symbol file, and be able to step through source - without a symbol file, the gdb session above would look like:

Of course, after you rebuild your kernel (it should automatically be set to generate debug symbols), you should also install your new debug kernel in the VM OS - and set it to boot by default, with the kgdboc=ttyS0 kgdbwait options appended to its boot entry.

Finally, since the driver modules we're working with here are not built as part of the kernel, gdb will need their symbol files as well. As such, it is best to build the driver modules within the virtual machine OS, and then copy the .o file to the normal file system so it is available to gdb. Here's how a sample session might look (assuming 192.168.1.15 is the 'real' IP address of the host OS):

Note that for correct stepping within gdb, the source files should be at the same path in both the virtual image OS and the host OS - so, if the source files for the kernel module are in /path/to/minivosc-src in the VM filesystem, the same directory should exist (and have the same source files) in the host filesystem as well. Also, a debug build should be enabled in the Makefile for the driver module (and it is so already for minivosc).

Finally, once freezes are not an issue anymore, one can simply use printk command throughout the driver module code - no VM image needed; the output of printk can be found in /var/log/syslog or /var/log/messages under Ubuntu 10.04. The minivosc driver code is by default set with these messages enabled, and they can be followed by running, say,