Is that the reason why material at 96 kHz is currently reduced to the same file size as material at 48 kHz? Are higher sample rates currently treated as if they were played back at a lower speed/rate, thus lowering the frequency spectrum accordingly, which results in a wrong calculation of how many bits can be removed?

Wrong might be a little strong - the sample rate is used correctly where it needs to be, but the codec block size equates to less time, and the granularity of the spreading function will be higher and possibly a bit too coarse.

QUOTE (Hancoque @ Aug 3 2008, 11:59)

Does that mean that if noise shaping is disabled less bits are removed? That would reassure me.

Exactly the same bits-to-remove values will be achieved for noise shaping 1 and 0 (except where noise shaping causes clipping).

lossyWAV 1.1.0b attached to post #1 in this thread:

implementation of increasing fft length for increasing sample rate;

improved logfile output and --detail output;

reference threshold constants for rectangular dither and triangular dither have been calculated so added noise should be the same for dither off and any dither level between 0 and 1 - the number of bits-to-remove will however reduce with "increasing" dither.
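A quick sketch of why the threshold constants must depend on the dither level: assuming dither of the blended form d = new * m - old (uniform random values in +/-0.5, as described later in this thread), the added noise variance grows with the multiplier m, so bits-to-remove must shrink to keep the total added noise constant. This is my own illustration, not lossyWAV's actual constants:

```python
def dither_variance(m):
    # Per-sample variance of d = new * m - old, where new and old are
    # independent uniform values in +/-0.5 (each with variance 1/12):
    #   Var(a*X - Y) = a^2 * Var(X) + Var(Y)
    # m = 0 gives 1/12 (rectangular), m = 1 gives 1/6 (standard TPDF).
    return (m * m + 1.0) / 12.0
```

Since the variance doubles between m = 0 and m = 1, a reference threshold calibrated for dither off would remove too many bits once dither is added - hence the compensating constants mentioned above.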

Is it intended that the dithering noise is also shaped? A frequency analysis reveals that it's also mildly shaped. I always thought that rectangular or triangular dither corresponds to white noise.

Which version of lossyWAV are you using? I ask as the fix to ensure that added noise remains the same when dither is applied has only been introduced at 1.1.0b.

Also, with respect to the triangular dither, I have used a method (discussed on these forums previously) which uses 1 new random number per sample and recycles the previous random number, i.e. tpdf_random = new_random - old_random; (random values in the range +/-0.5). In my implementation and to achieve intermediate dither types I have used: tpdf_random = new_random * dither_type_multiplier - old_random; (dither_type_multiplier in the range 0..1). This may have had an effect on the shape of the dither.
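As a minimal Python sketch of the recycled-random scheme just described (illustrative only - the function name, seeding and RNG are my own, not lossyWAV's Pascal source):

```python
import random

def dither_samples(n, dither_type=1.0, seed=42):
    # One new random value per sample; the previous one is recycled:
    #   d = new * dither_type - old       (random values in +/-0.5)
    # dither_type = 0 -> rectangular PDF, 1 -> triangular PDF (TPDF),
    # intermediate values blend between the two, as in the post above.
    rng = random.Random(seed)
    old = rng.random() - 0.5
    out = []
    for _ in range(n):
        new = rng.random() - 0.5
        out.append(new * dither_type - old)
        old = new
    return out
```

Because each sample's dither is the difference of two adjacent random values, the resulting spectrum is mildly high-passed rather than perfectly white - which may explain the shaping seen in the frequency analysis.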

[edit] One other thing: --dither 0 is not the same as no dither. If the dither parameter is used then dither will be applied, somewhere between rectangular and triangular depending on the parameter value after --dither, i.e. --dither 0 = rectangular dither; --dither 1 = triangular dither; --dither 0.5 = something in between. [/edit]

I don't know if it's good to reuse old random values. Why not just use two independent random values? Is the performance impact too high? As it's not used by default I don't think that would be much of an issue.

I have redone the above graph and now differentiated correctly between dither off and dither 0. I also included ReplayGain values to illustrate the theoretically perceived loudness of the noise.


I'll find my pest control gear and go bug-hunting - the -s 0 -D 1 curve looks suspiciously like it has been noise-shaped.

There is no real performance hit in using two independent random values (especially as I have changed from a Fibonacci-based RNG to a multiply-with-carry RNG) - I'll have a look at that too.

Of course the dither should/will be noise shaped. It should/will sit next to the quantisation stage within the noise shaping loop.
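To illustrate how dither "sits next to the quantisation stage within the noise shaping loop", here is a hedged first-order error-feedback sketch in Python - a generic textbook loop, not lossyWAV's actual adaptive shaper; the step size, seed and the recycled-random TPDF dither are assumptions drawn from this thread:

```python
import random

def quantise_with_shaping(samples, step=1.0, use_dither=True, seed=1):
    # First-order error-feedback noise shaping: the previous total error
    # (quantisation + dither) is subtracted from the next input, so the
    # dither is shaped along with the quantisation noise.
    rng = random.Random(seed)
    old = rng.random() - 0.5
    err = 0.0
    out = []
    for x in samples:
        v = x - err                       # feed back previous error
        d = 0.0
        if use_dither:
            new = rng.random() - 0.5
            d = (new - old) * step        # TPDF dither, added pre-quantiser
            old = new
        y = round((v + d) / step) * step  # quantise to the coarser grid
        err = y - v                       # error includes the dither
        out.append(y)
    return out
```

Since err includes the dither's contribution, both noise sources pass through the same shaping filter - which is why a shaped spectrum for the dither is expected behaviour, not a bug.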

Current random number minus previous random number is a well accepted way of getting slightly high passed dither.

Hancoque, no one is suggesting that you should use dither. It's there for people who incorrectly think that they need it - not for those who know that they don't (nor for those that don't know or care!).


Cheers, David.

1) Dither is added pre noise shaping;
2) Thanks - it will remain so;
3) I'll leave the option to dither up to the user.

Now that higher sample rates are officially supported, do I understand it right that they require a higher codec block size?

QUOTE (Nick.C @ Jul 28 2008, 21:14)

I will modify 1.1.0 to increase the FFT lengths at 69.08kHz, 138.15kHz and 276.3kHz, i.e. 64 to 128 to 256 and 512 samples respectively and correspondingly for the other lengths with a similar increase in codec-block length, 512 to 1024 to 2048 to 4096 samples.

So, does this equate to the following sample rate to codec block size analogy?
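For what it's worth, the scheme quoted above can be expressed as a small helper - this is just my reading of Nick.C's stated thresholds, not lossyWAV's actual source:

```python
def block_sizes(sample_rate):
    # Per Nick.C's post above: FFT lengths double at 69.08 kHz,
    # 138.15 kHz and 276.3 kHz (shortest FFT 64 -> 128 -> 256 -> 512
    # samples), and the codec block length doubles with them
    # (512 -> 1024 -> 2048 -> 4096 samples).
    thresholds = [69080, 138150, 276300]
    doublings = sum(1 for t in thresholds if sample_rate >= t)
    return 64 << doublings, 512 << doublings  # (shortest FFT, codec block)

# e.g. 44.1/48 kHz -> (64, 512); 96 kHz -> (128, 1024); 192 kHz -> (256, 2048)
```

So on this reading, common 96 kHz material would use a 1024-sample codec block, and only rates at or above roughly 138 kHz would move up to 2048 samples.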

I started doing some personal listening and file size tests with 1.1.0b and I must say I am very impressed. The idea behind it all was quite something and now we have it implemented with TAK, FLAC and Wavpack. I use TAK on my computers because that is meant for foobar2000 playback and I use mp3 for my DAP because it is compatible and can be mp3gain'd.

So far, I have been using --extreme and I am in love with the results. It looks like I might switch my all-lossless library to a lossyTAK library; if my calculation is correct, the lossyTAK library should be 61% of the size of the current library at -p5 - so roughly 305 GB down to 186 GB - and I have yet to notice anything that could be a problem sample. Of course, I still have the original image files of the CDs ripped using EAC safely tucked away on DVDs, so if I do replace the entire library and a problem is discovered it is quite easy to recover from with the DVDs, all done to increase disc longevity and to maintain perfect rips.

Good job, keep it up.

On a theoretical note,

This might move the lossless formats even more into the mainstream. The end file is not technically lossless, but the output is of such high quality that the problem samples that exist in the lossy formats do not exist here.

I am all for increasing the overall usage of lossless over lossy, and with lossyWAV a new field of battle has been opened up that might sway those who think lossless is TOO big for everyday use over to the lossless side.

An error occurred opening the input file; it is likely that it does not exist or is not readable.
ERROR: for encoding a raw file you must specify a value for --endian, --sign, --channels, --bps, and --sample-rate
Type "flac" for a usage summary or "flac --help" for all options

That one works great when you pass one or more filenames, but when I tried with *.flac I got a crash of LossyWav.exe again. It seems that we need a construction with a FOR %%I IN (%1) DO loop here to handle wild cards.


This command line does the job:

CODE

FOR %X in (*.flac) DO lossyflac.bat %X

But this one skips filenames that include spaces.

If you enclose the second %X in double quotes it will handle the files properly.

QUOTE (Nick.C)

My intention is to understand and implement SebastianG's new noise shaping method, but for that I will also have to introduce / find a PSY model of some kind.

Do you have any clue how long this will take? 3 months, 6 months or a year? Is SebastianG giving you any accelerated private lessons? It's not that I want to hurry you or be rude in any way, but I care a lot for lossyWAV ... it's already my favorite lossy codec ... and I plan to convert terabytes of lossless to Lossy|Tak -P|-p2e ... (without lossless backup), so I care a lot about this new noise shaping method if it can save me some kbps (and also about the new special TAK setting for lossyWAV).

Without a 1.2.0 development thread I am asking myself every day:
1: is it a TODO thing that is already being actively worked on in the shadows?
2: is it a TODO thing that is just an idea?
3: are you on vacation with wife & kids?
So I'd rather simply ask ...

As I have been disappointed by vaporware features from Christopher 'Monty' Montgomery in the past ... I am very suspicious about open source developers' claims ... so excuse me if I sound rude; I just want to test your determination to get lossyWAV to the maximum of its possible efficiency. (I don't want the "new noise shaping method" to become the "bitrate peeling" of lossyWAV.)

I know you just released v1.1.0 a month ago and that I shouldn't already be longing for more ... but now that you are a developer you'll have to learn that end-users are relentless vampires!!! LOL