> Hi,
>
>
> On Tue, 16 Feb 2010, Klaus Schulz wrote:
>
>> I think 64bit (double precision) throughout the chain guarantees the
>> lowest losses. The more you calculate, the worse it gets. What does it
>> help to run your plugins (e.g. libsamplerate) at 64 bit if the results
>> get cut off later on?
>>
>
> but I'm not convinced this is in practice such a leap forward. I mean,
> already *ten* years ago we were discussing 64bit sample formats for
> LADSPA:
>
>
"*ten* years ago..." Is this supposed to be an argument? Ten years ago
people were living in caves. ;)

> "LADSPA 64bit FP support?", Mar/2000
>
> <http://www.mail-archive.com/linux-audio-dev@email-addr-hidden/msg00342.html>
> ... LADSPA ended up standardizing on 32bit and so far this hasn't proven to
> be a problem.
>
> Basically 32bit floats provide 24bit of precision, and for passing sample
> data around (including to audio files and hardware interfaces), 24bit
> suits most needs. Processing is a different matter, but there 64bit
> floats can already be used.
>
>
I think it is only about the math and the rounding. This has nothing to do
with the final bit depth. If you run several plugins and go through several
32/64 bit conversions, you'll face losses.

> And it's not just us Linux folks. OS X Core Audio defaults to 32bit floats
> as well:
>
> <http://developer.apple.com/audio/xaudiooverview.html>
>
> And also our friends at MS are following suit (with Windows 7):
>
> <http://blog.szynalski.com/2009/11/17/an-audiophiles-look-at-the-audio-stack-in-windows-vista-and-7/>
> And while "all-64bit" might provide theoretical improvements, it has a
> potentially very real performance impact. One thing that is still in
> short supply on modern machines is processor cache (or more precisely,
> fast access to big enough chunks of memory). By doubling the sample size
> you are also doubling the working set size. And if this pushes your inner
> loop working set out of L1 cache (which is a very real possibility with
> common audio params), your performance will plummet. And with the memory
> architectures of today's machines (with long pipelines), the performance
> difference will be very noticeable!
>
>
>> Come on. You're always striving for the best. You provide the best audio
>> engine I came across, and then you come up with "and this would be
>> harder to implement" and "it should do for most of the people".
>>
>
> Well, this is actually a fairly easy change to make.
> libecasound/sample-spec.h defines the sample type, and with a simple
> change you can replace s/float/double/. A few places in the codebase need
> fixing (notably LADSPA support will break, as one then needs to convert
> the buffer every time going into, and coming out of, the LADSPA domain;
> and similarly for JACK). But basically the majority of the codebase is
> already ready.
>
> But I still see very little gains from doing this, and lots of possible
> drawbacks (especially to performance).
>
>
I run FIR filters in double precision - this makes a difference. Most SRC is
done in 64bit. This is not only marketing.

When it comes to performance, the user should decide. In BruteFIR I can
choose "double" or "single" precision - it is that easy. If you run offline
processing, performance is not an issue anyhow.

>
>> 64bit throughout the chain is IMO the way to go to bring losses down to
>> the absolute minimum. OSX/MS DAWs are ahead of us here - that's bugging
>> me at least.
>>
>
> I do agree it's picking up in popularity as a marketing tool ("full 64bit
> audio path"), but I'd like to see some independent sources indicating
> it's really worthwhile in terms of quality.

That's a good point. From a theoretical standpoint it is more than obvious,
isn't it? 64bit processing throughout just causes lower errors. Proving it
by measurements will be pretty tricky, I'd guess. A test on several
high-resolution audio systems would be feasible, though. (How about
bringing this up at one of the annual Linux Audio Conferences?)