21 April, 2017, 07:19:11 AM

A novice here - I’m using Adobe Audition 3.0 to produce various things for a radio station that I volunteer at.

My source files are WAV (44100Hz, 16-bit). The finished mixes also need to be the same due to the playout system that the station uses.

There’s an option in the settings to “auto-convert all data to 32-bit upon opening”. Is there any real benefit in enabling this? Will doing this improve how Audition applies effects (like reverb, compression etc.) or will it make no difference? My understanding is that AA's audio engine works in 32-bit float anyway (regardless of what file you feed it) so there's no point.
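For what it's worth, that reasoning can be checked numerically. Here's a little sketch (my own simulation, nothing to do with Audition's actual code) showing why pre-converting gains nothing by itself: every 16-bit PCM value fits exactly in a 32-bit float's 24-bit mantissa, so the conversion on open is lossless either way, and the effects engine promotes samples to float internally regardless:

```python
import struct

def int16_to_float32_roundtrip(sample):
    """Scale a 16-bit sample to [-1, 1), store it as a real float32, read it back."""
    f = sample / 32768.0
    f32 = struct.unpack("<f", struct.pack("<f", f))[0]  # force float32 precision
    return round(f32 * 32768.0)

# Every value in the full 16-bit range survives the roundtrip bit-exactly.
assert all(int16_to_float32_roundtrip(s) == s for s in range(-32768, 32768))
print("all 65536 values roundtrip exactly")
```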

Also, what I tend to do with effects is apply them in the edit view first as I find it easier than using the effect rack in the multitrack area. Is there any benefit to applying them in the multitrack view instead...in terms of quality? The reason I ask is I believe multitrack does 32-bit mixing (at least it says that at the bottom of the screen).

"Dither Transform Results (increases dynamic range)" - disable it. This feature is described as a good thing... really it means that dither is applied every time you make any edit: if you edit/process only a small segment, that segment gets dither too... and if you edit/process something two or more times, dither is applied two or more times as well. Very nasty idea!

A simple experiment: In the edit view (non-multitrack view), open a 16-bit file, then apply a lot of gain destructively (Effects -> Amplitude -> Amplify/Fade), for example +20dB, then do the same thing again with -20dB.

Now, if the file is clipped, then the editor is not working in float 32 mode.

Alternatively, do the steps above with -70dB first, then +70dB; if the file is noisy after these steps, then the editor is not working in float 32 mode.
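Both experiments can be simulated in a few lines (again, my own sketch of the arithmetic, not Audition code): destructive 16-bit editing rounds and clamps after every operation, while float editing does not, so the gain roundtrips behave very differently:

```python
import math

def db_to_lin(db):
    return 10.0 ** (db / 20.0)

def apply_gain_16bit(sample, db):
    """Destructive 16-bit edit: scale, round, clamp to the int16 range."""
    v = int(round(sample * db_to_lin(db)))
    return max(-32768, min(32767, v))

def apply_gain_float(sample, db):
    return sample * db_to_lin(db)

s = 20000
# Experiment 1: +20 dB then -20 dB.
clipped = apply_gain_16bit(apply_gain_16bit(s, +20.0), -20.0)
print(clipped)   # far below 20000: the +20 dB step clipped at 32767
# Experiment 2: -70 dB then +70 dB.
noisy = apply_gain_16bit(apply_gain_16bit(s, -70.0), +70.0)
print(noisy)     # off by hundreds: the -70 dB step threw away low-order bits
# In float, the same roundtrip is transparent.
print(round(apply_gain_float(apply_gain_float(s, +20.0), -20.0)))
```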

This option only applies to non-float-32 editing mode. If you edit in float 32 mode, dither can only happen when you explicitly convert your file to a lower bit depth; the default shortcut key for this action in Audition 1.5 is F11. See the attached screenshots in this post.

Disclaimer: The experiments above are aimed at explaining the operating modes of Audition. You need to conduct your own double-blind listening tests to check whether audible differences exist in practical usage, for example with reverb and compression, as you mentioned.

There is no downside to working with floating point unless your computer resources are very limited: the files are considerably larger, requiring more temporary disk space; there is more data to operate upon, requiring more CPU cycles and longer read/write times. Neither of these is likely to be relevant on any decent modern computer.

The benefits are the mentioned clipping protection and the fact that the quantization errors are much smaller. In 16 bit you can, however, always avoid clipping by being mindful of what you are working on and what you want to do with it. The extra noise from the larger quantization errors, and the additional dithering of transforms, while completely real, will probably never be audible.
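To put a number on "probably never be audible": the standard quantization-noise formula (textbook result, nothing Audition-specific) gives the theoretical SNR of an N-bit quantizer for a full-scale sine as 6.02N + 1.76 dB:

```python
import math

def quant_snr_db(bits):
    """Theoretical SNR of an N-bit quantizer for a full-scale sine: 6.02N + 1.76 dB."""
    return 20 * math.log10(2 ** bits * math.sqrt(1.5))

print(round(quant_snr_db(16), 1))  # ~98.1 dB
print(round(quant_snr_db(24), 1))  # ~146.3 dB
```

At 16 bits the quantization noise already sits roughly 98 dB below full scale, which is under the ambient noise floor of any realistic listening room at normal playback levels.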

you edit/process something twice or more times and you have dither applied twice or more times too. Very nasty idea!

Any truth in that?

Only if the dither wasn't random, but guess what, it is random. The math behind it is not trivial, but IIRC you'd have to perform dozens of edits before the noise level from the dither rises by any significant amount. And that's discounting the noise that is likely already present in the content prior to editing, which none of the ">16 bits" and "always dither or die" advocates ever seem willing to accept.
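The accumulation is easy to simulate. In this sketch (my own, and deliberately harsher than real editing) every "edit" requantizes the whole signal to 16-bit with TPDF dither; because the dither in each pass is independent, the noise powers add, so 16 passes cost only about 10*log10(16) = 12 dB over a single pass, on a floor that started out tiny:

```python
import math
import random

random.seed(1)

def quantize_tpdf(samples):
    """Requantize float samples (in 16-bit LSB units) with triangular-PDF dither."""
    return [round(s + random.random() - random.random()) for s in samples]

# One second of a 997 Hz sine at 44.1 kHz, well below full scale.
signal = [10000.0 * math.sin(2 * math.pi * 997 * n / 44100) for n in range(44100)]

def noise_rms_after(passes):
    out = signal[:]
    for _ in range(passes):
        out = quantize_tpdf(out)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(out, signal)) / len(signal))

one, many = noise_rms_after(1), noise_rms_after(16)
print(20 * math.log10(many / one))  # roughly 12 dB of growth over 16 passes
```

The RMS error after one dithered pass is about half an LSB; sixteen passes only quadruple it, which is consistent with "dozens of edits before it matters."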

It is possible to distinguish some edits like fades and reverb, but it will require riding the gain well beyond realistic playback levels of the finished content.

Arny once called this paranoid obsession "prophylactic dither," which I found quite amusing.

Is 24-bit/192kHz good enough for your lo-fi vinyl, or do you need 32/384?