linux-audio « WordPress.com Tag Feed
https://en.wordpress.com/tag/linux-audio/
Feed of posts on WordPress.com tagged "linux-audio" | Tue, 20 Mar 2018 02:35:51 +0000
Tue, 06 Feb 2018 22:45:22 +0000 | Brainiarc7
https://dennismungai.wordpress.com/go/linux/setting-up-analog-surround-sound-on-ubuntu-linux-with-a-3-3-5mm-capable-sound-card/

A while back, I received the Logitech Z506 speaker system, and on Windows, setting it up was a pretty plug-and-play experience. On Linux, however, it’s a wholly different ballgame. For one, there’s no Realtek HD Audio control panel here, so what gives? How do you get around this problem?

Introducing the tools of the trade:

You’ll want to use hdajackretask for the pin re-assignments, and pavucontrol and pavumeter for audio output monitoring afterwards. The tools are installed by running:

sudo apt-get install alsa-tools-gui pavumeter pavucontrol

When done, launch the tool with administrative privileges as shown:

gksudo hdajackretask

From here, you’ll then need to re-assign each required pin. Note that, depending on your sound card, this tool will most likely identify the pins by the color panel layout (check the back of your card to confirm whether its pins are color coded) or by the jack designator.

Either way, when you’re done, select “Apply” and reboot; the settings will take effect on the next startup.

Of note is that in /etc/pulse/daemon.conf, the following changes must be made (with your preferred text editor):

(a). For 5.1 channel sound, set: default-sample-channels = 6

(b). Ensure that enable-lfe-remixing is set to yes.

(c). The default channel map option for 5.1 audio should be set as:

front-left,front-right,lfe,front-center,rear-left,rear-right
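Taken together, the edits above amount to these lines in /etc/pulse/daemon.conf (remember to remove the leading semicolon from any line that ships commented out, then restart PulseAudio or reboot):

```
; /etc/pulse/daemon.conf – 5.1 analog surround settings from the steps above
default-sample-channels = 6
enable-lfe-remixing = yes
default-channel-map = front-left,front-right,lfe,front-center,rear-left,rear-right
```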

How the tool works:

The tool generates a firmware patch (under /lib/firmware/hda-jack-retask.fw) that is called up by a module configuration file (under /etc/modprobe.d/hda-jack-retask.conf or similar), whose settings are applied on every boot. That’s what the “boot override” option does: it overrides the sound card’s pin assignments on every boot. To undo this when the configuration is no longer needed, just delete both files after purging hdajackretask.
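Under the hood, the .fw file uses the kernel’s HD-audio patch syntax: a [codec] line identifying the codec (vendor ID, subsystem ID, address), then a [pincfg] section listing pin NIDs and their new default configuration values. A made-up example follows; the IDs and values below are placeholders, and the tool writes out the real ones for your specific codec:

```
[codec]
0x10ec0899 0x15580000 0

[pincfg]
0x1b 0x01016012
0x18 0x01012014
```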

An example:

To get the Clevo P751DM2-G’s audio jacks to work with the Logitech Z506 surround sound speaker system, which uses three 3.5mm jacks as input for 5.1 surround sound audio, I had to override the pins as shown in the generated configuration file below (confirm against the screenshots attached at the bottom for my use case; your mileage may vary depending on your exact sound card):

Contents of the /etc/modprobe.d/hda-jack-retask.conf file after setup:

# This file was added by the program 'hda-jack-retask'.
# If you want to revert the changes made by this program, you can simply erase this file and reboot your computer.
options snd-hda-intel patch=hda-jack-retask.fw,hda-jack-retask.fw,hda-jack-retask.fw,hda-jack-retask.fw

Then I rebooted the system and confirmed the successful override by running grep against the dmesg output on boot:

Confirming the 3.5mm audio jack connections to the sound card on the laptop/motherboard setup:

On the rear of the Logitech system, all the I/Os are color coded. In my case, I swapped the GREEN line with the YELLOW line such that the GREEN line feed would correspond to the Center/LFE feed, as it does on Windows under the Realtek HD Audio manager panel. Then, on the computer, I connected the feeds in the order, top to bottom: Yellow, Green then Black at the very end.

Final step after reboot to use the new setup:

Use pavucontrol (search for it in the app launcher or launch from terminal) and under the configuration tab, select the "Analog Surround 5.1 Output" profile. This is important, because apps won’t use your speaker layout UNTIL this is selected.

When done, you can verify your setup (as shown below) with the sound settings applet on Ubuntu by running the audio tests. Confirm that audio is routed correctly to each speaker. If not, remap the pin layout with hdajackretask and retest.

Screen shots of success:

As attached:

Now enjoy great surround sound from your sound card.

Tue, 21 Mar 2017 02:34:16 +0000 | Paul Jacob Evans
https://pauljacobevans.wordpress.com/go/linux/pisound-the-audio-card-for-the-raspberry-pi/

Kids today are being loud with their ‘drum machines’ and ‘EDM’. Throw some Raspberry Pis at them, and there’s a need for a low-latency sound card with MIDI and all the other accouterments of the modern, Skrillex-haired rocker. That’s where PiSound comes in.

Of course, the Pi already comes with audio out, but that’s not enough if you want to do some real audio processing. You need audio in as well, and while you’re messing around with that, adding some high-quality opamps, ADCs, DACs, and some MIDI would be a good idea. This is what the PiSound is all about. …read more http://pje.fyi/NgtZJv

After a few evenings of half-hearted attempts to port my Windows code and make the changes needed to run on Linux, I finally got my head around what was needed, and it works! Unfortunately I’m not at the house where the amp and speakers are so I can’t try it ‘in anger’ but at least I can tell that I’m getting what sounds like correctly-filtered Spotify or CD from the three stereo outputs.

On a ten-year-old Dell GX520 it’s using about 16% of the CPU, and when you add in Spotify at about another 16%, plus the snd-aloop driver and all the other stuff going on in an internet-connected PC, it comes to about 40% CPU, which is a bit higher than I had hoped – there’s a tiny amount of fan noise. Maybe there is scope to improve the efficiency of the crossover software: at the moment I am reading and writing 32 bit integers to/from the sound cards (one is a dummy sound card of course) but doing all the processing in floating point, which therefore involves converting each sample twice with a potentially expensive operation. Maybe this can be sped up. And I can always find a faster, cooler PC of course.
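The double conversion described above looks roughly like this. This is a pure-Python sketch with hypothetical helper names, not the author's code; a real implementation would do the conversion vectorised, or keep the whole pipeline in one format:

```python
# Sketch of the per-sample cost described above: samples arrive as signed
# 32-bit integer PCM, are processed as floats, then converted back for output.

INT32_FULL_SCALE = 2 ** 31  # 32-bit PCM full scale

def int32_to_float(samples):
    # First conversion: integer PCM -> normalised float in [-1.0, 1.0)
    return [s / INT32_FULL_SCALE for s in samples]

def float_to_int32(samples):
    # Second conversion: clip to the representable range, scale back to PCM
    out = []
    for x in samples:
        x = max(-1.0, min(x, 1.0 - 1.0 / INT32_FULL_SCALE))
        out.append(int(x * INT32_FULL_SCALE))
    return out

pcm_in = [0, 2 ** 30, -(2 ** 31)]       # a few raw samples
as_float = int32_to_float(pcm_in)       # [0.0, 0.5, -1.0]
pcm_out = float_to_int32(as_float)      # round-trips exactly for these values
print(pcm_out)
```

Every sample crosses that boundary twice per pass, which is where the suspected overhead comes from.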

[13/07/15] In response to a comment, the point of all this is not just to implement basic crossover filtering, but to correct the drivers’ individual responses based on measurements, producing zero phase shift for each driver, and therefore perfect (or as close as possible) acoustic crossovers and zero overall phase shift. EQ such as baffle step correction is overlaid onto the filters’ responses without costing anything extra in CPU power. Individual driver delays are also added. I am not claiming this is unique, but nor is it commonplace. In terms of an active crossover it is the no-compromises version.

I have had this system working for a couple of years on a Windows PC, but Linux will be a cheaper and more elegant solution.

[UPDATE 18/07/15] I have it running with the speakers with a choice of two sound cards: Asus Xonar DS and Creative X-Fi. It’s just a case of changing a few characters in the xover config file.

The control loop algorithm for maintaining the average sample rate at input and output (and avoiding any resampling) is an interesting problem to solve and I have had fun trying different algorithms based on PID loops and plotting the result out as a graph. The output sample rate is fixed, set by the card, and has to be inferred from the time between calls to send chunks of data to the output card but there will be a level of jitter on this due to the other things that the multi-threaded program is doing. We know the precise sample rate at the input (the snd-aloop loopback driver) because we are setting it. The aim is to keep the difference between number of samples read and number of samples output to the DACs at a constant level, but as we are sending and receiving chunks of data the instantaneous figure is fluctuating all the time. I presume that similar calculations are being performed in the adaptive resampling that would be usual when connecting together digital audio systems with differing sample rates – the difference being that this would affect the audio (subtly, but it undeniably would), while the aim of my scheme is that the timing adjustments merely affect the fill level of a FIFO, the sample rate being rigidly fixed and defined by the DAC.
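A toy simulation of that control idea (my own sketch, not the author's algorithm): a PI controller nudges the adjustable input rate until the FIFO fill level holds at its target, despite an unknown DAC rate and jittered callback timing. All the constants below are made up for illustration:

```python
import random

TARGET_FILL = 4096        # desired FIFO occupancy, in samples
NOMINAL = 44_100.0        # nominal rate we would program into the loopback
DAC_RATE = 44_100.37      # true DAC rate; the controller never sees this
KP, KI = 0.5, 0.05        # PI gains, hand-tuned for this toy model

random.seed(1)
fill = float(TARGET_FILL) # FIFO between the loopback source and the DAC
error, integral = 0.0, 0.0
in_rate = NOMINAL

for _ in range(2000):                                 # ~20 s of 10 ms chunks
    dt = 0.01 * (1 + random.uniform(-0.05, 0.05))     # jittered callback timing
    in_rate = NOMINAL + KP * error + KI * integral    # nudge the input rate
    fill += (in_rate - DAC_RATE) * dt                 # produced minus consumed
    error = TARGET_FILL - fill                        # samples short of target
    integral += error * dt                            # accumulated error

print(round(in_rate, 2), round(fill, 1))
```

The integral term is what lets the loop discover the DAC's true rate: at equilibrium the input rate matches the DAC and the fill level sits at its target, exactly the behaviour described above, with the timing jitter only rippling the FIFO fill slightly.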

[UPDATE 31/07/15]

Feeling confident, I bought an Asus Xonar U7 USB 7.1 sound card. This is based on the CM6632A chipset. I got it working but… trying to set the format to signed 32 bit within my program failed when addressing the device as “hw”. It also failed with S24_3LE and various other sample formats. However, 16 bit was accepted. Consulting the web, people commonly seem to have this issue with both CM6631A and CM6632A on Linux, and their workaround is simply to use “plughw” instead. However, if the “hw” device rejects a format, then, supposedly, the hardware cannot support it. All the “plughw” device does is automatically allow the OS to convert samples from the format you are using into one that the card can use. So I have a feeling that the card is only running in 16 bit mode, regardless of what my code is sending it.

If an application chooses a PCM parameter (sampling rate, channel count or sample format) which the hardware does not support, the hw plugin returns an error. Therefore the next most important plugin is the plug plugin which performs channel duplication, sample value conversion and resampling when necessary.

[03/08/15 UPDATE] Got back to the house where my system lives after the weekend, and was able to try my Asus Xonar U7 again. This time it accepted S24_3LE! Could this be the issue with hot-plugging versus not hot-plugging that other people on the web have seen? I have a feeling that my previous tests were with the U7 hot-plugged into a PC that was already on. Anyway, I now seem to be in business with the U7 and it sounds good.

Mon, 09 Mar 2015 09:44:07 +0000 | therationalaudiophile
https://therationalaudiophile.wordpress.com/go/audio/trying-linux/

UPDATED 16/03/15: Approximately every two years I find myself inspired to have a go with Linux. I install Ubuntu on an old PC and congratulate myself on having finally made the right choice. Everything works fine: all the devices are auto-detected correctly, and although the graphics and text are a bit lumpy, it looks as though it can do everything Windows can do. It never lasts. Within a short time I try to do something beyond the basic web surfing and word processing and it doesn’t quite work. So I go to the web, and of course there’s usually a solution buried in a forum somewhere, and it invariably involves editing a config file. But along the way I may have found several other ‘solutions’ that didn’t work, and for each I maybe edited a different file or changed something using some little app I’ve installed. At the end, even though the system may be working, I am never quite sure how I got there, nor confident I could reproduce the same working system on another PC.

Well, the time has come again, and I am typing this using the latest version of Ubuntu. Everything is wonderful so far, and even Spotify is running flawlessly. Specifically, though, I want to get my active crossover system working on Linux, not Windows. My experience with Windows 7 running on slightly older PCs is not good. I have a laptop approximately 5 years old which will grind almost to a halt for several minutes every day, performing some sort of scan of itself, and I don’t know enough to do anything about it. The desktop PC that I use for the active crossover is slightly better, but it, too, takes quite a while to ‘warm up’ and is also prone to the occasional glitch while playing music, due to deciding to update its anti-virus database – I am sure it was not a problem with Windows XP.

In contrast, running Ubuntu on an older desktop PC without much RAM, the experience is one of ‘solidity’. I am not experiencing the operating system going AWOL for several seconds at a time. But it comes at a price. I really, really don’t want to have to understand the details of any operating system, and Windows is good for the person who maybe wants to dip into a bit of programming (a distinctly different activity from IT) without having to worry too much about the really low level details. Windows feels as though it is ‘self-healing’. Every time the PC is turned on it starts scanning itself, checking for inconsistencies, downloading updates. New hardware is detected automatically and the user never edits configuration files.

Ubuntu feels a little different. By all means correct me if I am wrong, but the impression I get is of a system that is dependent on lots of configuration files that are not hidden from the user. Of course these files get changed by the operating system itself (just as Windows must change its hidden configuration files) and there are little applications that you can install that simplify changing the parameters of various sound cards, say (more on this later). But occasionally the configuration files must be edited by the user using a text editor. One typo, and the PC may refuse to boot!

As I mentioned, I am hoping to run my active crossover stuff on Linux, not Windows. In order to achieve this I must loop continuously doing the following:

1. Extract a chunk of stereo audio from an ‘input port’ that receives data from my application of choice (media player, Spotify etc.).

2. Assemble the data into fixed-size buffers to be FFT-ed.

3. Process with FIR filters to produce a separate, filtered output for each driver.

4. Inverse FFT.

5. Squirt the results out to six or eight analogue channels, or, if feeling ambitious, HDMI (that would be the dream!).

It’s a very specific, self-contained requirement. I can handle numbers 2 to 4, no problem. 1 and 5 are the tricky ones, and seem to be a lot trickier than they, perhaps, might be. They weren’t all that easy in Windows, either, but I eventually came up with a scheme that kind of worked.
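The middle of that loop can be sketched in a few lines. This is a toy model, not the author's code: the filter taps below are made up, and the per-block filtering is done with plain convolution here, where the real system would go via FFT and inverse FFT for speed. The part that matters is the block/tail bookkeeping that lets fixed-size chunks be filtered continuously:

```python
BLOCK = 8  # chunk size pulled from the input port each loop iteration

def convolve_block(block, taps, tail):
    """FIR-filter one block, carrying the overlap 'tail' between blocks."""
    out = [0.0] * (len(block) + len(taps) - 1)
    for i, x in enumerate(block):
        for j, h in enumerate(taps):
            out[i + j] += x * h
    for i, t in enumerate(tail):            # overlap-add the previous tail
        out[i] += t
    return out[:len(block)], out[len(block):]   # (output, new tail)

def crossover(chunks, bands):
    """Per chunk, produce one filtered output stream per driver."""
    tails = [[0.0] * (len(taps) - 1) for taps in bands]
    outputs = [[] for _ in bands]
    for chunk in chunks:                         # grab a chunk of audio
        for k, taps in enumerate(bands):         # filter once per driver
            y, tails[k] = convolve_block(chunk, taps, tails[k])
            outputs[k].extend(y)                 # ship it to that driver
    return outputs

# Two made-up complementary 'bands': a crude low-pass and its complement.
lows = [0.25, 0.25, 0.25, 0.25]
highs = [0.75, -0.25, -0.25, -0.25]
signal = [1.0] + [0.0] * 15                      # an impulse, two blocks
lo, hi = crossover([signal[:BLOCK], signal[BLOCK:]], [lows, highs])
print([a + b for a, b in zip(lo, hi)])           # bands sum back to the impulse
```

Because the two made-up filters are complementary, summing the band outputs reconstructs the input impulse, which is the acoustic goal the post describes (crossovers that sum flat with zero overall phase shift).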

Here’s where it gets very specific: under XP I was able to use a single Creative X-Fi surround sound card as both the ‘receptacle’ for PC audio which I could then access with my application, and also as the multichannel DAC that my application could squirt its output to. Under Windows 7 the driver for the sound card was ‘updated’ and I could no longer access it as the receiver for general PC audio – I could still have used it for S/PDIF, analogue Line In etc., however. In the ideal world, the ‘receptacle’ would just be some software slaved to the output sample rate, I think, but I don’t know how to create such a piece of software – it would appear to Windows to be a driver I would guess. I could buy a piece of software called Virtual Audio Cable but I could never be sure whether that would always be re-sampling the data, and I’d rather avoid that. In the end, I used a method that I knew would work: I slaved a ‘professional’ audio card to the X-Fi using S/PDIF from the X-Fi. The M Audio 2496 can slave its sample rate to the S/PDIF (using settings in the M Audio-supplied configuration application) so I was able to send PC audio to the M Audio and my application could extract data from its ‘mixer’ at the same sample rate. Keeping the input and output on separate cards like this has some advantages when it comes to making measurements of the system while it is working, I think.

As a start I will probably try to do the same thing under Linux. I am attempting to use an Asus Xonar as the multichannel DAC, and another M Audio card I had lying around as the slaved source. It’s almost certain that I could achieve the objective without a second sound card, but I really don’t know how to do it [update 30/06/15: maybe I do know how to do it now]. Linux audio seems to have several ‘layers’ that I don’t understand (but as yet I have no view of them as layers, more as spaghetti). Really, I would like not to have to know anything about them at all, but this seems unrealistic. I have established the following:

I can do lowish-level audio stuff using the Alsa API. I can refer to specific cards by names that I can bring up with certain command line (shell) queries. Are these names guaranteed to stay the same in between boots? I don’t think so, but there are ways of editing the config files to associate names I choose to specific cards – I think.
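On keeping card names stable: one common technique (my sketch, not from the post) is to pin each driver's card index in a modprobe options file so the ordering survives reboots, e.g. assuming an Asus Xonar on the snd-virtuoso driver and an M Audio Delta on snd-ice1712; the filename is arbitrary:

```
# /etc/modprobe.d/sound-order.conf (hypothetical filename)
# Pin card numbering across boots: Xonar first, M Audio Delta second
options snd-virtuoso index=0
options snd-ice1712 index=1
```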

There is a highly comprehensive system called JACK that allows “JACK-aware” programs to have their audio routed via a user-configurable patchbay. It can handle re-sampling between separate cards transparently. Brilliant, but I don’t think Spotify is “JACK-aware” for example so I’m not bothering with it. [Update 30/06/15: I want to avoid any form of re-sampling anyway]

Ubuntu has PulseAudio installed already (I think) and using an application (that I had to install) called Pavucontrol I can direct Spotify, and presumably other apps, to send their outputs to any of the sound cards in the system. Does this get written to a file and saved when I exit it? I think so. PulseAudio may be the thing I need, possibly being capable of creating software “sources” and “sinks”. But is it always resampling the audio to match sample rates even when that is not needed? More investigation needed. [Update 30/06/15: Pulseaudio cannot be guaranteed not to resample. I have removed it from the machine].

I installed a little program called Mudita24 that gives me most of the functionality of the app that is supplied for M Audio cards under Windows. It will let me slave the M Audio to S/PDIF. But without a lot of rummaging around on the web, finding this solution was not obvious. Will the results be saved to a file so I don’t have to call this up every time? I don’t know. [Update 30/06/15: the M Audio-compatible drivers don’t seem to work properly. I have abandoned this idea].

I found a “minimal” example program that can send a sine wave to an output via Alsa. The program is anything but minimal and allows the user to select from a large number of alternative sample rates, bit depths etc. etc. and has copious error reporting. My version of “minimal” is much shorter! I adapted the program for eight channels, and am sending a separate frequency to each of the Xonar’s outputs. It seems to be working quite solidly. I can’t be absolutely sure that the Xonar isn’t applying surround sound processing to the signals yet, though. Question: should I be programming using Alsa or PulseAudio? [Update 30/06/15: answer is most definitely ALSA only].

I don’t mind if everything is low level, nor do I mind if the operating system handles everything for me. What I am not keen on is a hybrid between the operating system doing some things automatically, and yet having to manually edit files (I haven’t done that yet, though) or having to install little apps myself. How are they all tied together? I don’t know.

UPDATE 10/03/15 Installed Ubuntu on my erratic Windows 7 laptop. On the hard drive I had to delete the ‘HP Tools’ partition to do it, as a PC can only have four primary partitions, apparently, and HP had used all four to install Windows – the things you learn, eh?

For the things I use the laptop for mainly, Ubuntu is knocking Windows 7 into a cocked hat. It actually responds instantly and doesn’t hang for tens of seconds with the disk light on constantly and the mouse pointer frozen. It’s taking some getting used to!

UPDATE 15/03/15 It is becoming clear to me that there is only one sensible solution for what I am trying to achieve (an active crossover / general DSP system under my control that can be applied to any source including streaming) that is guaranteed not to resample the data, nor is dependent on sound card-specific features, or needs two sound cards. Let me run this by you:

Media player apps need something that looks like a sound card to play into. Some apps will only play into whichever card is set as the default audio device.

If it’s a real sound card that’s being played into, I need to extract the data before it reaches the analogue outputs. This just may not be possible with many sound cards, and it is impossible to know without trying the card – no one cares about this issue normally.

I process the data into six or eight channels and then I need to squirt the results out to, effectively, some DACs (or HDMI). This is most likely a real, physical multi-channel sound card.

I believe that the media player’s sample rate is defined by the sound card it is playing into. If so, this is akin to asynchronous USB mode i.e. the media app is slaved to the sound card’s sample rate.

I would like to avoid sample rate conversion (and this would still be needed to convert between 44.09999 kHz and 44.10001 kHz, i.e. there is no such thing as “the same sample rate” unless both clocks are derived from the same crystal oscillator).

There is a Linux driver called snd-aloop which can act as a virtual audio node, recognisable by media player apps as a sink, but also recognisable by other apps as a recording source. I could send media player output into this virtual device, recognise it as a source for my application, process the data and send the multi-channel audio to a consumer-level DAC card without it needing any special features. However, there is a subtle problem: aloop’s sample rate is derived from the system-wide “jiffies” count. It will not match the sample rate of the DAC card even if they are both nominally 44.1 kHz.
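To put numbers on that mismatch (using made-up but plausible clock offsets): even a 0.02 Hz difference between two nominally 44.1 kHz clocks slips steadily, so without rate matching a FIFO between them must eventually over- or under-run.

```python
# Drift between two 'identical' 44.1 kHz clocks that differ by 0.02 Hz.
rate_a = 44_099.99     # Hz: e.g. the jiffies-derived snd-aloop clock
rate_b = 44_100.01     # Hz: e.g. the DAC's crystal
drift_per_second = abs(rate_b - rate_a)    # samples of slip per second
seconds_to_slip = 1.0 / drift_per_second   # time to drift one whole sample
samples_per_hour = drift_per_second * 3600.0

# One sample of slip every ~50 s, i.e. ~72 samples per hour.
print(round(drift_per_second, 6), round(seconds_to_slip, 1), round(samples_per_hour, 1))
```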

I see just one sensible solution: I have to modify the aloop code so that, when the information is available, it gets its sample rate synchronisation from the DAC card. I could either modify aloop and send it this synchronisation information via a ‘pipe’ or shared memory (if that’s possible) from my active crossover application, or I can make my active crossover application a virtual sound card driver itself. Either way, I would need to register the driver with the system so that it can be set up as the default audio device (using the usual GUI-based sound preferences).

To any Linux programmers out there: does this sound sensible and do-able?

More later.

Update 30/06/15: It seems that there is an updated version of the snd-aloop driver which incorporates a dynamically-adjustable sample rate via the Alsa PCM interface. This could be precisely what I need.

I’d like to say it is partially full-time work that has kept music in the background. I think another reason is the extended bed rest in 2011-2012 destroying the momentum of playing almost every day. That has come back in flashes though.

I think it is something more that is keeping me from “pulling the trigger” and working on this project. No, not the money, although that is certainly a factor. I think it’s just a belief in myself and completely taking responsibility for the project from start to finish. There is a hesitancy with recording. I’m not 100 percent sure of what I am saying or… admitting, but I think it is maybe something deeper.

Well, where do I go from here?

Keep assessing whether what I am saying is really true and try some new methods. I’ll save the assessing for another time on the virtual counselor’s couch and tell you about the plans in the works to get things going.

First, I need a computer at home so I can work seamlessly between the VAMS studio and home. At VAMS they use a Mac DAW (Digital Audio Workstation), and I used to have a 5,1 iMac at home, but I sold it to good ol’ Joe Durgo. I want to write a song about him one day called No Ordinary Joe, but that is another matter. Another lotta gray matter, actually!

So I sold the comp to Joe so we could work together on some video projects for him, and I bought another Mac for my wife, a MacBook from around 2010. The idea was that we were going to share the MacBook between her video editing and my music recording, but it is not feasible. She is very passionate about her DIYs on her channel (not to mention talented!), and I don’t want to start “booking time” around spontaneous creativity. So I need my own Mac.

I went through a spell of using Linux for music editing over the last few months, but it was a fail. I love Linux, I mean love it, but for music recording, I just don’t have the capabilities. I have made friends with a very bright young fellow who could help do some IT for me on Ubuntu Studio or maybe even Studio 1337, but that would require having him ready to repair at a moment’s notice. Too much to ask of a new friend.

So I’ve been combing Craigslist, Free Geek, and a few other online sources for a Mac, and had a good bite the other day on one. I just need to raise another 100 and I’m safe to get it. So that is one obstacle that will be solved soon.

The second obstacle is getting into “playing shape” and my plan is to bring the guitar into work and play for a half hour at lunch. I’m working eight hours a day but they are mixed between all kinds of times so I often find I’m working in the evening a bit, some on weekends and it is bleeding into my plan of getting into shape at night. That will eventually happen when I wrestle the work dog to the ground and pin him, but until then, it’s finding times that are a bit more unconventional. I think the acoustics at work will be good…it is a storage facility!

I’ve also decided that I need to do this myself. I guess that is heavily implied by saying I’m looking for a Mac at home, but it needs to be stated. The best I can do with having someone else help me is either pay 40 dollars per hour, which is a great deal for professional sound engineering, get a buddy to work for nothing (not acceptable), or to do it myself. I just am a bit daunted by the learning curve. I seem to be on many learning curves these days. So many I’d say I’m bending dizzy. But, what’s another one but a good opportunity to learn. Plus, there are always people who will help out a bit here and there.

So it’s part time at VAMS on my own (with the odd little bit of help from Dave or Graham possibly), and time at home on my own. That’s the plan!

Now, to the “not believing in myself” area..well, I wish I never wrote that and since I have a no edit policy going right now on this blog, I won’t take it out. But I wonder about that part…

Here is a screen capture of Ardour (my Linux DAW), Guitarix (effects/amp modeling), and the JACK audio connection system (signal routing/MIDI). My current task is getting the right “wires” crossed so that my guitar output is sent into Ardour, through a Guitarix plugin, and back out through my hardware interface in an audible form. This is usually a pretty straightforward thing but since it’s Linux, of course not.

I have been able to play around with Guitarix a bit outside the Ardour bubble (as I will call it for lack of a better word), and when I used less complex routing through JACK it worked well, but that setup doesn’t provide the functionality I’m looking for. Having watched my friends manipulate waveforms with ease in Pro Tools, I think I have about figured it out in Ardour. The difference is the application of effects to the signal as a preceding or following step in recording; in one case, the recorded audio file contains the effect and there is no changing it. The other case is being able to effect a pre-fx signal in real-time, and to add and alter the effects and processing later without touching the underlying audio file. This is an important work environment that allows for endless tweaking and nudging come mixing time. The trick then is making good decisions and that comes down to hearing what works instinctively.

I intend to utilize this space to document my process as I produce and mix songs for the long-gestating Indignados record. Among other things I want to keep track of the current problems that I am working to solve and provide a resource for anyone interested in looping music and producing with free software in Linux.

HYDROGEN drum machine

To Do:

Study and experiment with Yoshimi synthesizer, Hydrogen drum machine, and Calf LV2 plugins. Calf instruments are missing presets due to a known bug; may need to build the Calf suite from source.

Experiment with recording interface and Calf processing plugins.

Record guitars for songs A and B.

Sequence drums for A and B.

_______

Yoshimi will get its own write up if I figure it out and like it. Seems to be well-regarded in the LA (Linux Audio) community.

Hydrogen is the go-to drum machine for LA and I have worked with it a bit.

I am implementing Calf in Ardour, within the Ubuntu ecosystem. Yesterday I installed the Lubuntu environment, although I haven’t noted many differences. Ardour was showing unusually high DSP load (I’m not exactly sure what that means but I think it was misallocating memory resources) and I thought the Unity environment with all its bells and whistles, combined with a fairly comprehensive DAW, running virtual instruments might be a bit much for my little Dell Vostro 1220 laptop. Lubuntu is supposed to be “lighter” hence the preceding “L”.

I am often asked, “Why Linux?”

It is precisely because I can do something like record my music on a less-than-ideal bit of hardware like my Vostro that I have endeavored to engineer using these tools. I feel it is a frugal approach to computing, and because I am a musician I see it also as having the potential to finally and for always liberate music for the masses. Just as the web has transformed our way of transacting and sharing, open-source free software is reaching the point where the means of digital creation can be put to use by all.

One thing I recall again and again is how astounded even the most competent sound engineers of the 20th century would be at what tools are now available. With a cheap laptop, an internet connection, and a reasonable amount of gumption, I have cobbled together a unique, peculiar, if not altogether user-friendly toolkit. Aside from the cost of the laptop, internet access, electricity, sometimes maddening frustration, and about eight months of working with Linux and FOSS software, I have created this “suite” for close to nothing. I paid a few bucks for Ardour, my Linux DAW (Digital Audio Workstation), but all other software (including, of course, the Linux kernel and Ubuntu ecosystem) came without strings attached. The time commitment required to learn Linux and to pick up some general computer programming knowledge has been substantial, which had a lot to do with my lack of experience with the inner workings of computers.

In the Linux world it is expected only that you will take the time to learn about the tools provided before pestering others to fix your problems. Of course, these people are often very well versed with all sorts of computing problems and can be a great ally in making sense of things. Part of the reason I chose Ubuntu was because it has a history and therefore things are well documented and for the most part organized. The lack of spoon-feeding is fortunately balanced by this documentation and the experience of others who have paved the way. As a result of working in this domain, I have accumulated a broad general understanding of computer architecture and design that I intend to apply to my musical workings.

One last thing worth mentioning as regards the intersection of computer programming and music:

About a year ago I discovered a project called Overtone. It is a live music coding library implemented in a Lisp dialect called Clojure, and I have been slowly building understanding of programming languages and design while also writing music in more conventional ways (if you can call looping conventional!) This has been at the root of many of my studies over the past year, and I am pleased to say I am well on my way to being able to use some new and very interesting tools as part of my project. As I become more proficient with Clojure, Overtone, and Shadertone — the latter being a live graphics programming library — I expect some surprising new sights and sounds to enter the fray.

]]>https://apeinprogress.wordpress.com/go/tech/linux-audio/midi-multicast-over-ethernet-from-raspberry-pi/
Fri, 03 Jan 2014 20:49:29 +0000apeinprogresshttps://apeinprogress.wordpress.com/go/tech/linux-audio/midi-multicast-over-ethernet-from-raspberry-pi/I’ve been plugging away slowly on my Raspberry Pi project of the moment: a MIDI sequencer/controller that doesn’t quite know what it wants to be yet (think I’ll grace it with its own post). After getting rid of my hardware synth a few years back, I don’t want to buy a new one just to test a project in its early stages.

As I’d previously used TiMidity++ as a MIDI server, I decided to install that on the Pi. It installed with no issue, and I quickly got the Pi producing sound with some Java code I appropriated from somewhere on the Internet. Alas, a problem arose when I tried to play a MIDI file; the Pi struggled under the load and shrieked with pain as it dragged out a single note over the space of what seemed like hours. It was at this point that I gave up on TiMidity on the Pi.

A week or two later, on feeling compelled to move the project along, I decided to send MIDI over the network to a server running on the desktop. After failing to use netjack2 over wifi, I decided to get a cheap ethernet hub for more efficient data transfer. At the same time, I happened upon a great little program which ended up doing exactly what I needed.

After stumbling on QmidiNet, I quickly found its inspiration multimidicast. This neat little tool allows you to send/receive MIDI on ALSA sequencers on your network. After using aconnect to route midish to multimidicast on the Pi, and multimidicast to TiMidity on the desktop, I was good to go. When I started playing the MIDI file on the Pi, the desktop churned out the sound delightfully. I was most pleased!
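For reference, the wiring itself is just aconnect commands; a minimal sketch, assuming hypothetical client and port numbers (your own will differ, so list them first):

```shell
# A sketch of the routing described above. The client:port numbers are
# hypothetical -- list the real ones with `aconnect -l` first. Guarded so
# the sketch is a harmless no-op on a machine without an ALSA sequencer.
if command -v aconnect >/dev/null 2>&1; then
    aconnect -l 2>/dev/null || echo "no ALSA sequencer available"
    # On the Pi:      aconnect 129:0 130:0   # midish -> multimidicast
    # On the desktop: aconnect 130:0 128:0   # multimidicast -> TiMidity
else
    echo "aconnect not available (install alsa-utils)"
fi
```

Once both hops are connected, anything midish emits on the Pi lands on TiMidity's input on the desktop.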

After my brief rejoicing, I soon careered into a new obstacle – midish doesn’t support real-time editing of MIDI tracks, which I intend to be a feature of my project. I’m not quite sure what my next steps are yet, but I’ll hack through it somehow!

If anybody’s interested in a more step-by-step guide, let me know. I’ll try and whip something together when I’ve got some spare time.

]]>https://stommager.wordpress.com/go/gear/the-perfect-transport-pt-3-software/
Fri, 23 Aug 2013 17:31:37 +0000stommagerhttps://stommager.wordpress.com/go/gear/the-perfect-transport-pt-3-software/Laptops, or any personal computers for that matter, are not designed to be used as high-end audio transports. It just so happens that they beat probably every dedicated audio transport in terms of price. Therefore it’s worth putting some effort into proper configuration of the former to get the quality of the latter for a fraction of the price. That’s exactly what I did when my old CD player failed. Instead of buying a new one, I invested in a used netbook. This post summarizes my experiences with the software setup.

When I decided on buying the Dell Mini 9, I wasn’t thinking much about the software. I just thought that I would install the bundled Windows XP and foobar2000 and listen happily ever after. But when I finally got the netbook I started to wonder… is Windows the best OS for my music player? Of course there are a number of articles on the Internet describing how to get bit-perfect audio out of Windows into a USB DAC, but I just couldn’t shake off this feeling of scepticism. You know how it is: Windows is designed for lazy users, for the ones who want everything to work automatically, without any input on their part. This inevitably results in a very complicated structure and limited configuration capabilities. In particular, I was afraid that even if I installed and configured the proper software, Windows would still use some internal resampler or other treacherous process and ruin my efforts. I don’t trust the folks at Microsoft, who always seem to think they know what’s best for their users.

Then I discovered that ALSA had finally implemented support for my DAC: the E-MU 0404 USB audio interface. I was even more pleasantly surprised to find out that my external sound card worked flawlessly right out of the box with the Ubuntu 12.04 LiveCD (Windows 7 still requires additional driver setup in order to use the E-MU). I would never have guessed that Ubuntu would have better hardware support than Windows… times change. Everything looked really promising until I got into the details of audio configuration. It turned out to be highly complicated, with many different modules communicating back and forth, doing all sorts of crazy conversions and mixes. I was really disappointed, because that’s exactly what we must avoid in order to get a bit-perfect stream into the DAC. The only comfort was that, unlike Windows, Ubuntu gives reliable manual control and that knowledge of its inner workings is widely available online. Therefore I decided to stick with Ubuntu.

I quickly identified that the main source of trouble is a component called PulseAudio. Its main purpose, as I understand it, is to take sound streams from all sources, convert them to a common format, mix them together and pass the result to a component responsible for hardware (soundcard) access. This is reasonable, since a typical user uses different sound inputs, sometimes simultaneously. But, as already mentioned, I’m no typical user and I don’t intend to use my Dell Mini as a typical computer. In my case there will be only one source of sound, and so no server/mixer functionality is needed.

The question is: is PulseAudio really affecting the sound quality? Or is it perhaps transparent to the audio stream if it’s coming from one source only? Well, I found some incriminating evidence in the system. First of all, PulseAudio has a configuration file with settings suggesting that it has only one output format. This doesn’t seem right. My audio interface supports many different sound formats, compatible with the vast majority of audio files. When I play a 24-bit 48 kHz file I expect it to be transparently sent to my E-MU, which supports this format amongst many others. Unfortunately PulseAudio has this single fixed output format and converts my file into the default 16-bit 44.1 kHz before sending it further. I inspected the E-MU’s running mode and found that it is indeed always working in PulseAudio’s output format, regardless of the played file’s format. So, the software conversion is a fact. Whether it affects the sound quality I cannot say, as I haven’t done proper testing, but the point is it should not be there.
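For the curious, the fixed format lives in /etc/pulse/daemon.conf. A sketch of the relevant lines (the values shown are PulseAudio's stock defaults, not necessarily my exact file):

```
; /etc/pulse/daemon.conf -- illustrative excerpt.
; PulseAudio resamples every stream to this one target format
; before sending it to the hardware:
default-sample-format = s16le
default-sample-rate = 44100
; Raising these (e.g. s24le / 48000) moves the target, but playback
; is still converted whenever the file doesn't match it exactly.
```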

There are two main ways to remove PulseAudio from the processing chain. The first is to uninstall it from the system, or to use a simpler version of the system that just doesn’t include it by default. The second is to find a player capable of communicating directly with the lower segments of the audio processing chain, bypassing the default, evil mixer. I decided to try out the latter first. My favourite forum, head-fi.org, has numerous threads on Linux audio players. I tried some of the proposed names and found one to be particularly good: DeaDBeeF.

It’s very simple, which is an asset for me. I’d say that it is well written. It doesn’t hang, it responds quickly, it handles gapless playback flawlessly, but most importantly it has the right configuration capabilities; for instance we can (and should!):

select the default output, in particular select ALSA instead of the default PulseAudio

set the output device

remove all DSP plugins

turn off ALSA’s resampling

The output devices come in many variants as ALSA recognizes many ways in which it can interact with the same hardware. In my case, the best option was “E-MU 0404 USB: Direct hardware device without any conversions”. This already sounds like music to my ears.
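As an aside, the same direct-hardware idea can be pinned system-wide in ~/.asoundrc; a minimal sketch, assuming the E-MU shows up as card 1 (check yours with aplay -l):

```
# ~/.asoundrc sketch: make the "default" ALSA device the raw hardware,
# bypassing the plug layer's format conversions.
# The card index 1 is hypothetical -- verify it with `aplay -l`.
pcm.!default {
    type hw
    card 1
}
ctl.!default {
    type hw
    card 1
}
```

With this in place, even applications that just open "default" talk straight to the hardware, at the cost of losing automatic rate/format conversion for streams the card can't handle natively.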

After setting everything up I repeated my test and found that this time the E-MU’s running mode corresponds to the audio file’s format. Additional confirmation comes from the fact that the music’s volume is now totally independent of the system’s volume control, and that the PulseAudio control panel shows no playback when in fact music is playing. At this point I might very well uninstall the damn thing, but since that is not recommended by the Ubuntu team I decided not to.

At the moment of writing this post I’ve already spent many hours listening on this setup, and I must say that I’m glad of all the decisions I’ve made along the way. I made a comparison: I played the same CD through my CD player (fortunately it’s not entirely broken) and the Dell Mini, switching the input on my E-MU. The result was that I had a feeling the new system sounded a bit more detailed, but I’m not sure I would tell them apart in a blind test. Theoretically the sound quality should be better (the DAC is now synchronized with its internal clock, and we have eliminated the CD player’s potentially lossy error correction), but I’m not sure I can hear the difference. Nevertheless I’m very glad that I have managed to find a high-quality transport, a worthy replacement, and that I stayed within my budget of $150 (the actual unit cost was merely $85, but I had to buy additional storage for the FLAC files for $47). Functionally there are many advantages to this setup, with only one small disadvantage: I have to rip every new CD. Other than that, all is convenient, reliable and fast. A real pleasure to listen to.

]]>https://thorwil.wordpress.com/go/icons/new-ardour-logo/
Mon, 04 Mar 2013 21:00:45 +0000thorwilhttps://thorwil.wordpress.com/go/icons/new-ardour-logo/Ardour is an application for recording, editing and mixing music. It is licensed under the terms of the GPL 2.

The upcoming 3.0 release seemed like a good opportunity to take another look at the logo I designed in 2006. A selection of drafts from back then, ending with the final design:

I had to ask myself: Is this logo (still) appropriate for Ardour?

The upcoming 3.0 release will be a digital audio and MIDI production application, available for Linux and Mac OS X. It is designed for frequent and prolonged use, able to deal with huge amounts of material, complex signal pathways, and precise, intense editing. Reliability, correctness and precision are of utmost importance.

The logo should take a matching stance, be sharp and have a strong presence. I think the old version does a fine job in this regard. It also happens to be well established and liked by the community (of course not by everyone). Back then I decided to use a free-form wave shape, less stylized, more realistic. Now I think a shape with even subdivisions will make the logo appear more precise.

I worked my way through variations of the curves that describe top and bottom of the wave, the number of teeth, their shape, relative height of the type and its consequences on letter spacing:

Now that the Audio Creation SIG has our own spin (whether or not it’s official yet), I’d like to try moving forward with revisions to improve the Musicians’ Guide. I have virtually no spare time, and that’s my reality for the foreseeable future, so instead I’d like to encourage everybody who uses Fedora’s music/audio software to contribute!

When you contribute to documentation for free software, you’re making it easier for somebody else to take their first steps into unknown territory. Comprehensive, accessible documentation is, in my opinion, one of the most important tools we have when trying to spread free software to new users.

If you see something that you can do (or do part of), just add your proposed changes as a comment on the Bugzilla issue. Maybe you can rewrite an entire chapter or section, one paragraph a day. Later on, I’ll pick up your revised version, add the Docs Project-specific markup, and publish your changes with Fedora 19!

The biggest challenge I have in maintaining this 270-page document is that, because I know there are *tons* of different areas for improvement, it’s difficult to know where to start. Feedback from real users is invaluable in helping to know where to spend my time. Even if it’s as simple as fixing a typo, clarifying a sentence, or including/excluding additional information, your feedback is extremely important.

And heck, let us know when things are going well, too!

]]>https://itsecworks.com/go/security/linux/linux-audio-troubleshooting/
Thu, 01 Nov 2012 16:33:44 +0000itsecworkshttps://itsecworks.com/go/security/linux/linux-audio-troubleshooting/Imagine: I had Linux Mint 12, I connected my headphones to my Linux machine, and after disconnecting them I lost my sound!?! I have alarm sounds in my monitoring system for many companies, and there was just silence, no alarms…
I had to check the settings with the alsamixer GUI (alsamixer command), and something had been muted automatically.
I read many forums and wiki sites before I found the easy solution: unmute everything. I made a script to collect my current working audio settings for troubleshooting purposes. If things go wrong again later, I run the script once more and compare the two outputs. If there is a difference, I will at least know what has changed and which way to go forward. The script collects the following:

output of the /etc/modprobe.d/alsa-base.conf file. For more info see http://alsa.opensrc.org/MultipleCards
output of the inxi -SAxc 0 command. inxi is a full featured system information script. It gives information about the audio drivers, cards. More info on http://code.google.com/p/inxi/
output of dpkg -l *pulse* and dpkg -l *alsa* commands to see what and which version is installed.
output of ps axfu | grep pulse and ps axfu | grep alsa command to see what runs actually.
output of the aplay -l command. aplay is a command-line sound recorder and player for the ALSA soundcard driver. It can list the soundcards and digital audio devices (here only the playback hardware devices).
output of lspci -v command. It lists all PCI devices inclusive the audio devices.
output of cat /proc/asound/version command. Shows the version of the audio driver.
output of head -n 1 /proc/asound/card0/codec* command.
output of the amixer command. amixer is a command-line mixer for the ALSA soundcard driver. With no arguments it displays the current mixer settings for the default soundcard and device.
output of amixer controls command. It shows a complete list of card controls.

This project started as a test of Ardour 3’s new MIDI and synth-plugin features (still in beta). In that role it served to uncover and fix a number of issues, and it grew into something a little more ambitious over time.

When thinking about what I could draw as a cover image, Driddee jumped into my mind. Like the music, creating this image was a test run, this time with Krita. It took a bit to get comfortable with it, but now I’m rather pleased. I need more practice, obviously :)

]]>https://thorwil.wordpress.com/go/music/get-on-board-the-blues-guicussion-remix/
Tue, 17 Jul 2012 08:13:05 +0000thorwilhttps://thorwil.wordpress.com/go/music/get-on-board-the-blues-guicussion-remix/Dave Phillips recently published a great blues track in a not-so-great mix and made the material available on request. Since others had covered the gentle “just bring out what’s there” approach (Fons Adriaensen) as well as the tasteful addition of drums (Jason Jones), I just had to do something a little different.

]]>https://linuxaudiolive.wordpress.com/go/hardware/m-audio-transit-on-ubuntu-12-04/
Fri, 04 May 2012 17:51:52 +0000linuxaudiolivehttps://linuxaudiolive.wordpress.com/go/hardware/m-audio-transit-on-ubuntu-12-04/I freshly installed Ubuntu 12.04 and the ubuntustudio meta package. To get the M-Audio Transit card working I followed my older post (m-audio transit and ubuntu linux), i.e. installing madfuload, which comes in the repository, and adjusting the udev rules. They do not work the way they come installed.

So I used the content from corrected-madfu-rules and saved it as /etc/udev/rules.d/41-madfuload.rules. The corresponding file in /lib/udev/rules.d I left untouched. The “41” instead of “42” is there because /etc/udev/rules.d/README reads “…Pick a number higher than the rules you want to override, and yours will be used. …”

Something hung on my machine when I tried the result, so I rebooted, although this might not be necessary for everybody. After that, the sound card works as it used to.

]]>https://zeboks.wordpress.com/go/real-time/linux-audio/testing/second-step-f15-audio/
Sun, 11 Sep 2011 16:11:05 +0000myshiphttps://zeboks.wordpress.com/go/real-time/linux-audio/testing/second-step-f15-audio/Reconsidering what I said in the first post, I should warn that using the CCRMA repos, even for just a kernel install, can be misleading, considering the state of the whole update process. See this post.

It seems that the latency results for a normal 2.6.40.4-5.fc15.i686 kernel are a little worse than those of my Debian Sid install with a Pengutronix 2.6.33.7.2-rt30-1-686, but they may improve in time. Most importantly, they are far better than those of the CCRMA kernel for F15 without the init=upstart tag.

So I just stopped hacking this way for now, peace.

]]>https://zeboks.wordpress.com/go/real-time/linux-audio/testing/fedora-15-install/
Wed, 07 Sep 2011 16:41:45 +0000myshiphttps://zeboks.wordpress.com/go/real-time/linux-audio/testing/fedora-15-install/I am a Debian user with not-so-short experience of Linux systems. Not especially deep into the core of their secrets, like many others, I still like to experiment with the huge possibilities of the different versions. Being a musician, I finally got around to installing Fedora 15 on a 500 GB USB disk.

I am now listening to Sun Ra’s “Calls for All Demons”, after having set up the daemons of the system, and yes, it’s quality audio. But before getting here I had to pay my dues, so I decided to offer my contribution for others who may be interested in starting with Fedora as a system with a slant towards music.

———-
First of all, if you install alongside an existing Linux grub2-booting machine, do this (Link) and you’ll avoid having to boot manually at the grub prompt, or trying fancy custom_40 ‘voicings’.

You may encounter PulseAudio problems. I did, and I decided to use the axe: I uninstalled it (pulseaudio and pulseaudio-alsa-plugins). It’s up to you and the card you have. I did it at the end, after reading this ALSA wiki, which assured me that the current ALSA system does not need config files to work at the basic level. True.
Subsequently, I think I’ll prefer to play with ALSA rather than with Pulse. Some time before, KDE System Settings in Debian Sid asked me to get rid of my ALSA devices and stick with Pulse; I answered no, suspicious of the proposal.

Finally, read the Fedora Musicians’ Guide carefully and be careful with the CCRMA repos. For me the path has been the following: get their kernel working, but not their core package, to start. If you need low latency, the normal Fedora kernel is not bad at all; still, with the Planet CCRMA PAE kernel and the music software from the normal repositories, your deal can be made. Install your music software after the real-time kernel has booted successfully, perhaps without the “core” CCRMA package if you do not yet know too well what you are doing.

If you do not have PulseAudio, each application’s config files will let you point to ALSA instead of Pulse, which is becoming more and more the default user-facing layer.

If your machine is not a dedicated machine and you need all your stuff, internet connection, etc., you will want to skip the init=upstart tag in your grub.cfg and tweak the rest with the help of this page. Rosegarden is a good application for testing the state of your system timer; maybe you’ll have to use force=hpet in your grub.cfg.
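For reference, a sketch of how those flags sit on the kernel line in grub.cfg (the file path comes from the kernel version mentioned earlier; the root device shown is illustrative, not taken from my machine):

```
# grub.cfg kernel line sketch -- root device is hypothetical.
# init=upstart: only on a dedicated audio machine, as noted above.
# force=hpet:   only if rosegarden shows the system timer misbehaving.
linux /boot/vmlinuz-2.6.40.4-5.fc15.i686 ro root=/dev/sda2 init=upstart force=hpet
```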

It took me three days to tame the new personality at home, and it was not easy. Now Debian and Fedora are friends, and the Sun Ra orchestra is on “There Will Never Be Another You”. Cool.