I've seen the emails from iZotope about the new version of RX, RX 7, so I downloaded the demo. The big new feature, as far as I can see, is Music Rebalance, where you can increase or decrease the level of different parts of a mix - most notably vocals, bass and percussion. From the video demo it looks like you could completely isolate these parts on the right material, so I was interested to see how well it worked in practice.

I tried a couple of recordings that had given me problems in the past - the first was a single-mic live band recording where the mic had been placed too close to the organ player, while the second was a song where the vocal was too low in places.

Music Rebalance doesn't have a dedicated keyboards fader, so I had to push the vocal, bass and percussion faders up while reducing the remaining fader. While this worked to a certain extent, I felt that simple but careful EQ worked just as well, if not better, on this particular material. However, this led me to wonder whether iZotope might extend the number of faders in a future version to cater for a wider variety of sounds - the fairly pure tones of an organ are easy to see on a spectrogram, and I found that I could remove the organ completely with manual spectral editing (though this would have been very time-consuming over a whole song).

The second recording was more successful: raising or lowering the vocal fader worked as expected, with very few artefacts for changes of just a few dB. If you want to completely isolate the vocals then artefacts become much more obvious, but it still feels amazing to have vocals appear from silence on a track that previously had a pretty full backing. The one issue I found with this recording was that the vocals really needed more compression which could be difficult to do without artefacts as I guess I would need to create one track with only vocals and a second track with no vocals. I'll have a go at doing this and report back.

The big question for me is whether I can justify the upgrade cost, which was almost as much as I paid for the full RX software in the first place. I'm not sure whether any of the other new features are particularly useful for the work I do, and I suspect that Music Rebalance will gain quite a few more useful features in future versions - so should I wait for RX 8?

James Perrett wrote:The one issue I found with this recording was that the vocals really needed more compression which could be difficult to do without artefacts as I guess I would need to create one track with only vocals and a second track with no vocals. I'll have a go at doing this and report back.

Unfortunately I've drawn a blank on this. I was thinking that Music Rebalance was a plug-in, but it turns out that it only works in the RX Editor, and there is no way to export the audio from the demo version (apart from using two audio interfaces and connecting one to the other).

I suspect the fact that you needed to compress the isolated track means the isolated track was probably noise gated to mask the limits of the tool's effectiveness - it sounded like it on the demo I heard.

For me the question "compared to what?" comes to mind. There is no need to compare the isolated track only against the inadequate mixdown. To demonstrate the tool's effectiveness and limits, we can work from a mixed track where we DO have the original stems, and so compare the isolated vocal with the original vocal stem - the gold standard.
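If anyone does get hold of the pre-mix stem, that comparison can even be made quantitative. One rough measure is the signal-to-distortion ratio between the reference stem and the isolated track. Here's a minimal numpy sketch with synthetic stand-in signals (we obviously don't have iZotope's stems, so the "stem" and "isolated" arrays below are just illustrative):

```python
import numpy as np

def sdr_db(reference, estimate):
    # Signal-to-distortion ratio in dB: higher means the isolated
    # track is closer to the reference stem.
    error = reference - estimate
    return 10.0 * np.log10(np.sum(reference ** 2) / (np.sum(error ** 2) + 1e-12))

# Stand-ins: a "stem" and an "isolated" version carrying some residual backing.
rng = np.random.default_rng(0)
stem = rng.standard_normal(44100)
isolated = stem + 0.1 * rng.standard_normal(44100)

print(f"SDR: {sdr_db(stem, isolated):.1f} dB")
```

A perfect separation would give a very large SDR; audible residue or watery artefacts would pull the figure down, which is exactly the sort of objective comparison I'm suggesting.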

I've found the effectiveness of Music Rebalance to be highly dependent on the nature of the source material. I don't expect it to be perfect, but I have to admit I was floored by the results on one piece, a badly recorded stereo track with a strummed acoustic guitar and a male tenor vocal -- a rescue/restoration job; the vocalist is deceased.

I have revisited this particular job several times over the years and have never been able to get anything usable, despite many hours spent with WaveLab, SpectraLayers and even Melodyne. RX 7 and Music Rebalance separated the vocal with very few artifacts first time around, using the first preset I tried!

With the described demo limitations I can't process the results further, but I am confident that minimal additional spectral editing will yield near-perfect final results.

On a second sample, a full band with a baritone vocal, it was less successful but still impressive, so I think it needs to be seen as a time-saver as well as providing the occasional truly magical fix. To me, that alone makes it worth the upgrade price.

Tim Gillett wrote:I suspect the fact that you needed to compress the isolated track means the isolated track was probably noise gated to mask the limits of the tool's effectiveness - it sounded like it on the demo I heard.

From listening to the original mix of my test track, it is obvious that the original vocal needed more compression, as some syllables are too loud while others are almost inaudible. Simply raising or lowering the overall vocal level wouldn't fix that (although Music Rebalance seems to do it remarkably effectively). That's why I would have liked to work on the results further - just to see if I could end up with a more compressed vocal line without too many objectionable artefacts.
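To make the workflow I have in mind concrete: isolate the vocal to one track, everything else to another, compress only the vocal, then sum them again. Here's a rough numpy sketch - the crude static compressor and the sine-wave signals are just stand-ins for illustration, not anything exported from RX:

```python
import numpy as np

def compress(x, threshold_db=-24.0, ratio=4.0):
    # Very crude static compressor: attenuate any sample whose level
    # exceeds the threshold (no attack/release smoothing).
    eps = 1e-12
    level_db = 20.0 * np.log10(np.abs(x) + eps)
    over = np.maximum(level_db - threshold_db, 0.0)
    gain_db = -over * (1.0 - 1.0 / ratio)
    return x * 10.0 ** (gain_db / 20.0)

# Stand-in signals: in practice these would be the vocal-only and
# vocal-removed exports from the separation tool.
sr = 44100
t = np.linspace(0.0, 1.0, sr, endpoint=False)
vocals = 0.8 * np.sin(2 * np.pi * 220.0 * t)
backing = 0.2 * np.sin(2 * np.pi * 110.0 * t)

remix = compress(vocals) + backing
```

A real job would obviously use a proper compressor with attack and release times, but the principle - process only the separated vocal, then remix - is the same.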

The isolated track in the demo video sounds noise gated because they've told it not to output anything when it can't detect any vocals - there's nothing suspicious there. I'd suggest you give it a try yourself before commenting further, as the results are very interesting.
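Just to illustrate the behaviour I mean (my own crude sketch, not iZotope's actual processing): a frame-based gate that outputs silence wherever it detects nothing looks something like this in numpy:

```python
import numpy as np

def gate(x, frame=512, threshold=0.01):
    # Zero any frame whose RMS falls below the threshold, mimicking a
    # separator that outputs silence when it detects no vocal energy.
    y = x.copy()
    for start in range(0, len(y), frame):
        seg = y[start:start + frame]  # view into y, so edits stick
        if np.sqrt(np.mean(seg ** 2)) < threshold:
            seg[:] = 0.0
    return y

# Stand-in signal: half a second of silence followed by a tone.
sr = 8000
t = np.linspace(0.0, 1.0, sr, endpoint=False)
x = np.sin(2 * np.pi * 440.0 * t) * (t > 0.5)
gated = gate(x)
```

The result is exactly the "silence between phrases" effect heard in the demo - nothing more sinister than the tool declining to output anything it isn't confident about.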

James Perrett wrote:The isolated track in the demo video sounds noise gated because they've told it to not output anything when it can't detect any vocals - there's nothing suspicious there.

No, by "sounds noise gated" I didn't just mean a silent background. I hear phasey, watery artifacts in the voice, as often heard when denoising is applied vigorously. I don't hear this in the original stereo track, i.e. when the effect is bypassed.

Unlike any privately done example that you or I might try and then discuss here, this is iZotope's own expert, publicised YouTube demo, which I for one wouldn't expect to be bettered.

Maybe the phasey, watery sounds that I hear are actually in the processed stem before mixing, but masked when mixed with the backing. Again, all the more reason to publish that pre-mix track so that the public can compare the two vocal lines objectively.

I think you are probably expecting miracles, Tim! Yes, I'd guess that if you were to manually fine-tune every syllable you might be able to reduce the artefacts, but for a largely automated tool that level of separation seems very good to me. My attempts at producing isolated vocals gave similar results to those in the video, and changing the level of the vocal by a few dB gave very convincing results with few audible artefacts.

James, I endorse a presentation of the tool's strengths and limitations - a balanced, objective presentation, rather than one that on the one hand fosters expectations of miracles, or on the other downplays what the tool can achieve in the most conducive circumstances.

As for expectations, even with the online examples I was surprised that it worked as well as it did.