
PulseAudio Ported To Android, Compared To AudioFlinger

01-16-2012, 11:20 AM

Phoronix: PulseAudio Ported To Android, Compared To AudioFlinger

A developer at Collabora has brought PulseAudio to Google's Android operating system. In the process of this port he has closely compared the performance and features of the once-notorious PulseAudio stack to that of Google's AudioFlinger...

Not everything Google writes is the smartest or best implementation of whatever they take on. I think Google was targeting much simpler (less capable) devices with the original design of AudioFlinger, and they didn't really have the software mixing, low-power, time-based scheduling type functionality in mind -- partly because there might be CPU bottlenecks or software bugs introduced with more complicated code such as PA's.

Also consider that Google "just needed to get something out the door that works" when they were in a hurry to get Android shipped and sold to device manufacturers. We did something similar with ACCESS Linux Platform at ACCESS Systems several years ago (I've since moved on); the main reason we didn't use PA is that PA was in its early, immature phase at the time and wasn't really much better than what we had. But if we'd had a build of PA from the future -- say, early 2011 -- back in 2007, I think it would've been a no-brainer to use it.

As for me, I'm very excited to use PA in place of AudioFlinger on my Thunderbolt, so I'll be watching this work closely. Of course if PA on Android depends on Icecream Sandwich then I won't be able to use it until there's ICS for Thunderbolt...


Does PulseAudio help to solve Android's #1 audio issue, its latency?

I don't think there's much latency in the core of AudioFlinger, but there's definitely a lot of latency in audio over Bluetooth. If you only use Bluetooth headsets (A2DP) then yeah, you're going to experience the delay inherent in that protocol, and there's not much PA can do to help with that.


I don't think there's much latency in the core of AudioFlinger, but there's definitely a lot of latency in audio over Bluetooth. If you only use bluetooth headsets (A2DP) then yeah you're going to experience the delay inherent in that protocol and there's not much PA can do to help that.

The main problem with Android is actually its lack of a low-latency audio API.
Developers can't target Android for serious audio/music-production apps (like those that exist for iOS), as there seems to be no way to reduce latency to an acceptable level (below 20 ms).


Indeed, that's a big win in this case; I hope they can reduce it a bit further still.

176 ms isn't all THAT bad in the whole scheme of things, especially if a lot of DSP and processing is going on for the sound, which eats a lot of CPU time on that little CPU in your phone.

20 ms is pretty freakin' fantastic. You aren't going to get much better than 20 ms. Even if the DSP is on the SoC, you need some kind of non-zero buffer for software mixing, and PA software-mixes *everything* (whereas AudioFlinger usually does NOT have the software mixer on, for performance/CPU reasons).

I think the minimum latency the hardware is capable of on a completely idle system is something like 6 ms from the time the audio is pushed by the decoder to the time you hear it in your headphones (assuming there is negligible latency in the output hardware, which is the case for analog out). That PA is able to achieve 20 ms -- only slightly more than three times the core pipeline -- is pretty awesome.
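To make the numbers above concrete, here's a back-of-the-envelope sketch of how a latency target maps to PCM buffer size. The 44.1 kHz / stereo / 16-bit parameters are my own assumption (CD-quality PCM), not figures from the port itself:

```python
# Illustrative arithmetic only: how a latency target maps to the amount
# of buffered PCM the pipeline must hold. Assumes CD-quality audio
# (44.1 kHz, stereo, 16-bit) -- these parameters are assumptions, not
# numbers taken from the PulseAudio/Android port.

RATE = 44100          # frames per second
CHANNELS = 2
BYTES_PER_SAMPLE = 2  # 16-bit PCM
BYTES_PER_FRAME = CHANNELS * BYTES_PER_SAMPLE  # 4 bytes per frame

def buffer_bytes_for_latency(latency_ms):
    """Bytes of PCM needed to add roughly latency_ms of buffering delay."""
    frames = RATE * latency_ms / 1000
    return int(frames * BYTES_PER_FRAME)

print(buffer_bytes_for_latency(6))   # ~6 ms hardware floor -> 1058 bytes
print(buffer_bytes_for_latency(20))  # ~20 ms PA figure     -> 3528 bytes
```

So the difference between the 6 ms floor and PA's 20 ms is only a couple of kilobytes of extra buffering at these rates.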

Also keep in mind that PA in time-based scheduling mode will produce as much latency as the applications will let it get away with, basically. For a music player, a latency of 2 seconds saves a LOT of CPU time, because the CPU can go into deep sleep states between buffer pushes, then wake up, push a huge buffer, and go to sleep again. And if you decode into RAM before starting playback you can save even more, because you can just do a zero-copy map from the decoded PCM buffer into PA's streaming output DMA buffer.
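The power argument above can be sketched numerically: fewer, larger buffer refills mean fewer CPU wakeups per second, so the CPU spends more time in deep sleep. The refill-at-half-drained model and all of the numbers below are assumptions for illustration, not measurements of PA's scheduler:

```python
# Hedged sketch of the latency-vs-power tradeoff: how often the CPU
# must wake to refill the playback buffer at a given target latency.
# The "refill when half drained" model is an assumption for
# illustration, not a description of PA's actual tsched policy.

def wakeups_per_second(latency_s, refill_fraction=0.5):
    """Buffer refills per second if we top up after refill_fraction drains."""
    return 1.0 / (latency_s * refill_fraction)

# DAW-ish target, the 176 ms figure, and a lazy music-player buffer.
for latency in (0.020, 0.176, 2.0):
    print(f"{latency * 1000:6.0f} ms -> {wakeups_per_second(latency):6.1f} wakeups/s")
```

Under this model a 20 ms target costs about 100 wakeups per second, while a 2-second music-player buffer needs only one, which is where the deep-sleep savings come from.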

But for DAW-like programs on a phone (which sounds crazy in the first place, but whatever), tsched can tune the latency lower (and increase power consumption accordingly) if the app requests it. But 15-20 ms is about as low as you can get, and if you start to multi-task at all you'll get clicks and pops on a single-core CPU (or maybe even a dual-core CPU if you're doing heavy 3D compositing while multi-tasking).

I still think I'd rather have a pipeline that introduces a very small amount of latency but is as flexible and power-saving as PA, over a pipeline that has zero added latency but doesn't have PA's features or power savings. It's hard to have both (indeed, the JACK2 maintainers acknowledged as much by recognizing the separate usefulness of PA and JACK).