I suppose that in practice, users just want their music collection to fit on their old 2TB drive rather than having to buy a new 3TB drive, right?

Let's not forget the plus one for the pot... ahem, for the backup: I always budget (at least) twice the space when I think about my laboriously ripped lossless collection, and cloud storage services still aren't a feasible alternative to doubling physical disks.


(By the way, I use a 2nd lossless format to distinguish those CDs which came with pre-emphasis. As I don't convert them, merely tag them (and decode on-the-fly), I want insurance against accidentally deleting the tags.)

I'd agree with skamp. There's a saving of 10GB between FLAC -8 and LA -HIGH, which is nothing. I think it comes down to whether you're using older hardware or not, as even at these levels encoding takes only a fraction of the ripping time anyway.

I'll stick with FLAC

Actually the difference is 9GB between LA -HIGH and FLAC -5, and 8.33GB between LA -HIGH and FLAC -8. Considering the speed difference between -8 and -5, I'll stick with -5.

Apple did a good thing with ALAC: it didn't create confusion. They compared different algorithms, speeds and codecs and left only one choice for the user. I'm sure whoever uses ALAC got over the headache of which "compression" to use, just to save a few MBs, on the first day.

I suppose that in practice, users just want their music collection to fit on their old 2TB drive rather than having to buy a new 3TB drive, right? (And all the other features of a modern lossless codec, like tagging capabilities.)

Yikes! A compression increase of 33% above the average of these codecs' results?

Precisely. There's a “what's your compression ratio?” thread at http://www.hydrogenaudio.org/forums/index....showtopic=97125 . My FLACs clock in at slightly above 900. While space certainly was more of an issue back when you had to build a computer with enough room and a motherboard that could take all the drives (anyone need an IDE card or two?), it kinda still is: even if I were to buy drives now, audio compression would save me a couple of hundred dollars, plus the video part of the DVDs.

I actually use flac -8 because even at that setting encoding is still so much faster than my CD drive can read that it doesn't matter. Also, encoding is something I only do once, so even IF it made a difference (within reason) I'd still use -8. Decoding speed and efficiency are the big issues.

Lots of people around here seem to be interested in turning on all the slowest brute-force options. But you're not using anywhere near enough brute force here! Clearly, since you aren't looking at encode or decode time, the right thing to do is run a program that, using an enumeration of all Turing machines, simulates more and more of them for increasing periods of time, checking whether each TM halted with the entirety of your music collection as its output. The compressed format of your music is then just a description of the shortest TM you discover that does this.*

Since a significant amount of data in your lossless files is below the instantaneous noise floor and therefore effectively random, doing this may still not give you tremendous savings over FLAC. (If FLAC compresses to 50% and 1/4 of the bits of the original are effectively random noise, it's entirely impossible for any lossless compressor, no matter how miraculous, to beat FLAC by a factor of 2; this is kinda like Amdahl's Law, with "incompressible part" taking the place of "nonparallelizable part.")
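That incompressible-floor argument is easy to demonstrate with any general-purpose compressor. A toy sketch of my own (not from the thread): zlib stands in for a lossless audio codec, and the 3:1 split between redundant "signal" and random "noise" is an arbitrary choice mirroring the 1/4 figure above.

```python
import os
import zlib

# A stream where 3/4 is trivially compressible and 1/4 is random noise.
# The random quarter cannot be compressed, so it sets a floor on the
# achievable ratio, just as the Amdahl-style argument predicts.
compressible = b"\x00" * 750_000   # highly redundant "signal"
noise = os.urandom(250_000)        # effectively incompressible bits
stream = compressible + noise

ratio = len(zlib.compress(stream, 9)) / len(stream)
print(round(ratio, 2))  # close to 0.25: the noise fraction is the floor
```

Even at maximum effort the ratio can't drop meaningfully below the random fraction, no matter how cleverly the redundant part is modeled.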

Also, of course, decode time might be rather long and encode time is likely to be many times longer than the lifespan of the universe. But hey, you saved a few gigs! Surely that's worth it!

*See Wikipedia's article on Kolmogorov complexity. Note that even then you are unlikely to be able to guarantee you've found the shortest description, since some shorter TMs which seem like they're never going to halt could eventually output your music collection and halt. To prove you've got the shortest, you would have to prove that all shorter programs either halt without reproducing your music collection or never halt, and the halting problem is in general uncomputable.

Lots of people around here seem to be interested in turning on all the slowest brute-force options. But you're not using anywhere near enough brute force here! Clearly, since you aren't looking at encode or decode time ...

I see only one contributor to this thread to whom your statements above apply. I, for example, don't consider FLAC -8 a brute-force option - if it were, it would try to find e.g. the optimal LPC order by brute force... which it doesn't. And I completely agree with yourlord.

Since a significant amount of data in your lossless files is below the instantaneous noise floor and therefore effectively random, doing this may still not give you tremendous savings over FLAC. (If FLAC compresses to 50% and 1/4 of the bits of the original are effectively random noise, it's entirely impossible for any lossless compressor, no matter how miraculous, to beat FLAC by a factor of 2; this is kinda like Amdahl's Law, with "incompressible part" taking the place of "nonparallelizable part.")

How do you measure the noise floor? Because I suspect you don't, and if you don't, the quote above is rubbish.

QUOTE (jensend @ Oct 6 2012, 16:14)

Lots of people around here seem to be interested in turning on all the slowest brute-force options.

I partially agree with this... partially, because testing codec speed is something that anybody can do easily. It's imperfect, but much faster than large-scale tests.
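For a self-contained feel of how level vs. speed trades off, here's a stand-in sketch (my own illustration, using Python's zlib rather than an actual audio codec, since everyone's files and hardware differ; the data mix and levels are arbitrary):

```python
import os
import time
import zlib

# Stand-in experiment: zlib levels 1/5/9 play the role of flac -1/-5/-8.
# Higher levels typically buy small size gains for disproportionately
# more encode time; real results depend on your music and machine.
data = (b"\x01\x02\x03\x04" * 100_000) + os.urandom(100_000)

for level in (1, 5, 9):
    start = time.perf_counter()
    out = zlib.compress(data, level)
    elapsed = time.perf_counter() - start
    print(f"level {level}: {len(out):,} bytes in {elapsed * 1000:.1f} ms")
```

The same shape of experiment with `time flac -5` vs. `time flac -8` on one of your own rips tells you all you need to know about whether the slower setting is worth it to you.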

To A_Man_Eating_Duck: Could you please update the first post? Not necessarily after each new codec tested, but I guess many people won't look further than that.

Personally I don't care too much if encoding time gets longer for insane settings, as long as the speed is still acceptable, like flac's -8 or TAK's -p4m. But if it gets too crazy, like WavPack's -hh -x6, or the decoding speed drops to a too costly rate, like falling from 15x to 3x, I would like to avoid those settings.

libFlake and FlaCuda are tuned differently, so libFlake -5 might in fact compress better than libFLAC -8. They also support additional compression levels 9-11, but their use is not recommended, because those levels produce so-called non-subset files, which might not be supported by some implementations, e.g. hardware decoders.

During 2003 the prices of DVD drives and discs went down, and people moved from Monkey's Audio to FLAC. Fast decoding speed is appreciated much more than 4-5% better compression.

Just wanted to mention my experience with lossless files on disc: usually the max read speed of the CD/DVD drive is the limiting factor. To clarify, when converting files made with TAK -p4m from CD/DVD to LAME MP3, the CPU is not hitting 100%.

I agree that I would spend the extra time on the encode side for some savings, as long as decode speed is not terribly affected, but I doubt I would notice any decoding-speed difference between TAK -p4m and FLAC -0 with files on a disc.

Although if you're into purism, -high -noseek will squeeze out another few bytes by sacrificing seekability. For .la files that's worth doing, since the format, if anything, is good for archiving, without an immediate need for playability (i.e. fast decoding). The latter is what you'd be using FLAC, TAK, ALAC or WavPack for.