Friday, February 27, 2015

I like to use a lot of analogue modelling plug-ins when I’m mixing, but while I like the sounds I’m getting in general, I always seem to end up with too much noise and it’s driving me crazy! What’s the best way to tackle this problem?

Daniel Jones via email

SOS Reviews Editor Matt Houghton replies: The noise could be coming from more than one place, and may be amplified by more than one processor, so the first task is to find out where the noise is actually emanating from. Step one is to turn off the noise generators! These seem to be on by default in too many modelling plug-ins. (Really, who wants them to sound that authentic? They’ll be making them break down during sessions and charging a virtual repair fee next!) Step two is to go back and look at the sources you’re processing, to see if there’s any low-level noise there that’s being amplified by the compression make-up gain applied in a lot of these plug-ins. I’m not just talking about compressors and limiters here — anything that features some sort of saturation or distortion will reduce the dynamic range, and when make-up gain is applied it will bring up the noise along with everything else.

A common cause of unwanted noise is analogue-modelling plug-ins, which are often too authentic! This UAD Studer model, for example, features a noise control which, by default, is hidden beneath a panel. Several Waves plug-ins also have noise switched on by default, and in some mixes compression and limiting further down the signal chain can raise this to annoying levels.

Then it’s time to consider how to tackle the remaining noise. If noise is prominent while the wanted signal is playing, consider using a dedicated noise-removal tool such as iZotope RX or Waves X-Noise. These can be highly effective, but they’ll sometimes leave unwanted artifacts. If the noise only bothers you between sections of wanted sound, then level automation on the individual sources is an obvious solution — if the noise isn’t there, it can’t be amplified by any plug-in.

Gates do this automatically, of course, but they just don’t sound right to me unless they have a variable ‘floor’ control, or whatever you prefer to call it (the bundled one in Cubase, for example, doesn’t have this feature), as the abrupt cutting off of the noise just serves to draw attention to the fact it was there in the first place. I’d rather have noise right the way through than hear that! But even then, level automation is a more precise option.

A more natural-sounding technique than a gate is to automate the frequency of a low-pass filter so that the filter rolls down the spectrum in sections between the wanted bits of sound. The sound remains, but it is less noticeable, and the transition between sections is less glaring too. This is how the old Symetrix noise gates worked, and where sophisticated noise-removal tools such as iZotope RX aren’t called for, or aren’t working for you for whatever reason, it’s a useful technique to try on hissy guitar sounds; I’ll bet it will work for you too.

I haven’t yet found a plug-in that does this automatically for you, but you can do this in Cockos Reaper using its dynamic automation system (it’s called Parameter Modulation; see http://sosm.ag/reaper-parametermod for details). This can be set up to make the filter frequency move dynamically in relation to the amplitude of the source signal — so as the vocal phrase finishes, the filter rolls off the more noticeable high-frequency hiss. It takes a bit of finessing to get it right, but if you’re already using Reaper, it could be just the ticket!
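Outside Reaper, the same idea (an envelope follower steering a low-pass filter's cutoff so that the filter closes down in the gaps) can be sketched in a few lines. This is only a rough illustration of the concept, not Reaper's Parameter Modulation engine, and the attack/release times and cutoff range below are arbitrary assumptions:

```python
import numpy as np

def envelope_follower(x, sr, attack_ms=5.0, release_ms=200.0):
    """Track the amplitude envelope with separate attack/release times."""
    a_att = np.exp(-1.0 / (sr * attack_ms / 1000.0))
    a_rel = np.exp(-1.0 / (sr * release_ms / 1000.0))
    env = np.zeros_like(x)
    level = 0.0
    for i, s in enumerate(np.abs(x)):
        coeff = a_att if s > level else a_rel
        level = coeff * level + (1.0 - coeff) * s
        env[i] = level
    return env

def envelope_tracked_lowpass(x, sr, min_hz=500.0, max_hz=3000.0):
    """One-pole low-pass whose cutoff follows the signal envelope:
    loud passages open the filter out; quiet (noise-only) gaps
    pull the cutoff down, rolling off the audible hiss."""
    env = envelope_follower(x, sr)
    norm = env / (env.max() + 1e-12)             # 0..1 control signal
    cutoff = min_hz * (max_hz / min_hz) ** norm  # exponential sweep
    y = np.zeros_like(x)
    state = 0.0
    for i in range(len(x)):
        g = 1.0 - np.exp(-2.0 * np.pi * cutoff[i] / sr)
        state += g * (x[i] - state)
        y[i] = state
    return y
```

In practice you would also want an offset and some slew control on the mapping from envelope to cutoff, which is broadly what Reaper's Parameter Modulation dialogue exposes.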

Tuesday, February 24, 2015

I’m looking to get some acoustic panels of the rigid fibreglass/Rockwool type for my bedroom studio. I’ve read a few things online about possible dangers — in particular about respiratory issues (though I think carcinogenic impact was proven negative). Now, I know you wouldn’t use them if you thought there were a danger, but there’s a lot of guff online about potential issues. As my daughter has slight respiratory issues already, I definitely don’t want to make things worse. Can you offer me any advice on this?

SOS Forum post

We often advocate the use of mineral wool in DIY acoustic treatment — it’s safe to use provided you take sensible precautions.

SOS Technical Editor Hugh Robjohns replies: They’re not inherently carcinogenic, as far as I am aware, but loose fibres can certainly cause irritation. For that reason, mineral wool should always be covered with a breathable but tightly woven fabric that will prevent the release of fibres. If you’re making DIY panels, then spraying the mineral-wool slabs with a diluted PVA glue helps to keep fibre shedding down, and make sure you wear a mask while handling the stuff.

Commercial mineral-wool-based panels can smell unpleasant at first, due to the glues used, so I always unwrap them and leave them in the garage for a week or so, to let the fishy glue smell dissipate before installation!

Saturday, February 21, 2015

I’m a sound engineer planning to have a surround-sound setup. The problem is, I already have a pair of Yamaha MSP5s and I don’t really want to spend the money required to have three more of them for surround use! So, can I have Yamaha HS5s for the centre and rear speakers, and also a different sub (something like KRK 10S)? Will it really make such a difference?

SOS Forum post

SOS Reviews Editor Matt Houghton replies: Well, depending on what sort of surround material you’re mixing on it could be workable — but it will never be ideal, and if you’re serious about doing commercial surround work in the longer term you’re going to want speakers designed to be used together. In the meantime, the trick in this sort of setup is to do all of your critical EQ/balance work in mono or stereo in the first place on your best pair of speakers, and then to pan things out to do your surround mix — safe in the knowledge that your EQ and relative levels already work, and that you’ll have more space for separation when mixing in surround. Your speakers might not match perfectly, but you’ll already have done most of the tonal work, and you’re just trying to get the right idea of positioning. A few years ago when I interviewed Kevin Paul (who, as then Head Engineer at Mute, had just re-mixed the entire Depeche Mode back catalogue for surround sound), I asked about budget setups for home-studio owners who wanted to dip their toes in the world of surround-sound. For similar reasons to those I gave above, he suggested that you could probably get by for a while with a home-cinema surround setup as a secondary monitoring system, with your critical work being done on your usual higher-quality stereo pair.

Cheap home-cinema surround systems like this might help you get a feel for surround-sound mixing, but they’re far from the best tool for the job.

SOS Technical Editor Hugh Robjohns adds: Plenty of professional surround monitoring systems use different, typically smaller, speakers in the rear channels. However, the critical aspect is that they are voiced to sound the same as the front speakers, so that the tonality remains consistent regardless of where any sound is panned. Most high-end monitor manufacturers pay a great deal of attention to this aspect, specifically so that their monitors can be mixed and matched in the way you describe. However, tonal consistency is likely to be less well maintained at the budget end of the market, so it’s something you’ll need to assess first hand.

As Matt has said, the work-around is to make all your critical EQ and balance judgements on the higher-quality front L/R speakers first, and to think about re-panning things for surround only after you have the stereo track working well. You will probably then notice distracting tonality changes as sounds move onto the other speakers, but you’ll have to restrain yourself from reaching for the EQ controls, as you’ll otherwise be equalising for the speakers rather than the content! You may well need to tweak the relative balance of things after panning to compensate for the inherent panning-law effects, but be careful to tweak only because of panning offsets, not because a speaker’s response is over- or under-emphasising the signal.

I’d add a small word of caution about using a domestic home-theatre system. Yes, the five mains speakers will all be identical and will have the same tonality, which is helpful. However, they’ll be compact and most of the bass will be diverted to the subwoofer via in-built bass-management arrangements. The potential problem is that home-theatre subwoofers are generally designed with the emphasis on delivering impressive explosions, not tuneful bass. Most seem to have a one-note bass quality so, once again, make all your bass EQ and balance decisions on your good-quality stereo speakers, not the home theatre system!

Thursday, February 19, 2015

I’ve found your Mix Rescue articles really useful, but I was struck by how much detail Mike Senior goes into. I get why he does it, but I don’t have a feel for how long it takes. If I’m mixing like this, should I be spending hours, days, weeks, or what?

Dave Jackson, via email

SOS contributor Mike Senior replies: I get asked this question frequently, but it’s difficult to answer in the abstract because the time required varies tremendously between projects. Sometimes I’ve spent more than two weeks mixing one song, but at other times I’ve finished three or four mixes in one day. Mix Rescue projects typically take between three and five days.

SOS’s Mix Rescue features often go into huge amounts of detail, but a good proportion of the time and effort involved is spent correcting issues that could have been fixed more quickly when writing or tracking. A pure mixing job should not take you more than a couple of days — and can often be completed much more quickly.

Why the wide variation? My job is to turn the supplied multitracks into a finished product, and as the production values of ‘finished-sounding’ vary enormously depending on the style of music and its projected market, the amount of time required to reach that quality threshold inevitably varies too. In terms of pure mixing activities (processing, effects and fader moves), a sensibly recorded small acoustic session might need little more than simple balancing and panning to give a nicely representative organic sound, and therefore require only a few hours per song to complete. Large-scale chart-targeted productions, on the other hand, might require two or three days to make sense of a blizzard of different programmed and overdubbed sonic elements, while keeping the overall sonics within extremely tight mass-market stylistic tolerances in terms of mix tonality, short- and long-term dynamics, and vocal/hook intelligibility.

The reason most Mix Rescue projects take longer is that they almost always involve more than just mixing work. For example, I usually end up spending a day or so editing, simply because most Mix Rescuees haven’t realised how carefully the leading releases in their target style manage timing and tuning issues. In many cases, the recordings demanded to implement a given style simply aren’t there either. I might get DI’d acoustic guitars where the genre calls for miked–up sounds, say, or there may be no appropriate double tracks/layers, or the electric guitars may be too distorted, or the drums may have had ancient worn–out heads... The list is endless, even before you add a truly inventive catalogue of inadvisable recording methods into the equation! Every one of these tracking–stage misjudgments costs the mix engineer time, either in trying to salvage something useful from what’s provided, or in working around crucial omissions.

The time requirements really balloon where the project needs more creative production input. On a sonic level, if no-one has really committed to decisions during tracking about what the record should sound like, that leaves the mix engineer having to take a (more or less educated) guess, and that frequently involves a good deal of trial and error. The same applies if the arrangement or structure isn’t serving the music well, or if the artist is concerned that there simply isn’t enough melodic/harmonic interest in the parts as recorded, such that additional MIDI parts, recorded overdubs, or editing/mixing stunts become necessary to increase the amount of ear-candy.

Hopefully that gives you some idea of how long my Mix Rescue projects take. But, as I said, much of this time is spent on things other than actual mixing. In my view, a mix that takes me more than two days in any style raises serious questions about the recording and production techniques used prior to mixdown. While there are clearly many things that can be ‘fixed in the mix’, that’s an extraordinarily inefficient way of working if you have any alternative! I can’t tell you how often I’ve wished, while doing the Mix Rescue column, that I could travel back in time and ask the reader to spend just 10 more minutes doing something while tracking that would have saved me hours (quite literally) of remedial work. To quote Trevor Horn, “the mix is the worst time to do anything”!

Tuesday, February 17, 2015

I'm currently involved in composing music tracks for a self-help medical audio program and I would like to be able to create tracks that combine subliminal messages with music. Subliminal messages work best when hardly heard — just a very little bit — so I'm looking for some automatic way of keeping the vocal signal just audible as the music rises and falls in level. I read your article about automatic ducking techniques in Cubase, and was hoping to use this information to achieve my goal, but I still haven't been able to achieve what I'm after, namely to get the vocal level to follow the music level. I read the instructions several times, and played around with the settings, but it still doesn't work. Can you advise?

Via SOS web site

SOS contributor Mike Senior replies: The Cubase Notes article (SOS May 2009: /sos/may09/articles/cubasetech_0509.htm) you're referring to won't directly help you achieve what you want, because it gives instructions on how to reduce the level of one signal (the electric guitars in the article's example) in response to another signal's level increase (the lead vocal). A better alternative would be to follow the instructions in my June 2010 Cubase column (/sos/jun10/articles/cubase_0610.htm), where I describe how to simulate the effects of Waves' Vocal Rider using a side-chain-enabled Expander plug-in. This technique automates the vocal level to correspond with the level of the backing track: particularly useful for our reader, who is creating tracks that incorporate subliminal messages.

The down side of that approach, though, is that expanders with external side-chain access aren't particularly common, so here's an alternative scheme that uses a triggered compressor instead (these are more common):

Create a parallel channel fed from the vocal. (You could also just duplicate the vocal track, but this is a little less elegant because it makes later processing of the overall vocal tone less convenient.)

Compress the parallel channel.

Invert its polarity.

Now trigger its gain-reduction from the music channel. (If you have several music channels, then perhaps send them first to a group bus, so you can easily feed the compressor side-chain from there.)

Assuming that your plug-in delay compensation is working, this setup should give you the automatic level riding you're after. When the mix is quiet, the parallel channel is less compressed and will cancel the lead vocal more (reducing its level), whereas when the mix is loud, the parallel channel will be more compressed and will cancel the lead vocal less (increasing its level).
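The level logic of this parallel trick can be sketched numerically. The function below is a static model only (no attack/release ballistics), and the threshold, ratio and parallel-gain figures are illustrative assumptions, but it shows why the net vocal gain rises and falls with the music level:

```python
import numpy as np

def db_to_lin(db):
    return 10.0 ** (db / 20.0)

def net_vocal_gain(music_level_db, threshold_db=-30.0, ratio=4.0,
                   parallel_gain_db=-6.0):
    """Net gain on the vocal after summing it with a polarity-inverted,
    compressed parallel copy whose gain reduction is triggered by the
    music bus. Loud music -> more gain reduction on the parallel copy
    -> less cancellation -> louder vocal, and vice versa."""
    over = max(0.0, music_level_db - threshold_db)  # dB over threshold
    gain_reduction_db = over - over / ratio         # simple static curve
    parallel = db_to_lin(parallel_gain_db - gain_reduction_db)
    return 1.0 - parallel                           # inverted copy subtracts
```

With these assumed settings, music at -40dB leaves the vocal pulled down by roughly 6dB, while music at -10dB lets it recover to about 0.9 of full level.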

Friday, February 13, 2015

I recently came across a plug-in that incorporates both VU and PPM metering, and it got me thinking: what exactly is the difference between the two?

Via SOS web site

SOS Technical Editor Hugh Robjohns replies: These are both, strictly speaking, obsolete analogue metering formats! In short, the VU meter shows an averaged signal level and gives an impression of perceived loudness, while a PPM indicates something closer to the peak amplitude of the input signal. However, in our modern digital world, neither meter really performs adequately, and the current state of the art is enshrined in the new ITU-R BS1770 standard, which is being adopted very rapidly around the world in the broadcast sector and elsewhere. This is an excellent metering system that provides a new and very accurate Loudness Meter scaled in LUFS — which does a much better job than the VU — along with an oversampled True Peak Meter scaled in dBTP, which does a much better job than the PPM. I urge everyone to use these meters in preference to everything else!
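To see why an oversampled True Peak meter matters, consider that the reconstructed analogue waveform can peak between the stored samples. The sketch below estimates the inter-sample peak by FFT zero-padding upsampling, which is a crude stand-in for the interpolation filter a real BS.1770 meter would use:

```python
import numpy as np

def sample_peak_db(x):
    """Peak of the stored sample values, in dBFS."""
    return 20.0 * np.log10(np.max(np.abs(x)))

def true_peak_db(x, oversample=4):
    """Approximate the inter-sample (true) peak by upsampling via
    FFT zero-padding, then taking the peak of the denser waveform."""
    spectrum = np.fft.rfft(x)
    padded = np.zeros(len(x) * oversample // 2 + 1, dtype=complex)
    padded[: len(spectrum)] = spectrum
    upsampled = np.fft.irfft(padded, n=len(x) * oversample) * oversample
    return 20.0 * np.log10(np.max(np.abs(upsampled)))

# A sine at a quarter of the sample rate, phase-shifted by 45 degrees:
# every stored sample lands at +/-0.707, but the waveform between the
# samples swings all the way to +/-1.0.
n = np.arange(64)
x = np.sin(2.0 * np.pi * 0.25 * n + np.pi / 4.0)
```

For this test signal the sample-peak reading sits around -3dBFS while the true-peak estimate sits near 0dBFS: a 3dB inter-sample overshoot that a plain sample-peak meter never sees.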

However, for historians, the VU or Volume Unit meter was conceived in 1939 and originally called the SVI or Standard Volume Indicator. It was developed as a collaborative project by CBS, NBC and Bell Labs in America and, since the meter scale was calibrated in 'volume units', that's the name that stuck! The SVI/VU meter is amongst the simplest of all audio meter designs and essentially behaves as a simple averaging voltmeter, with a moderate attack (or 'integration') time of about 300ms. The needle fall-back time is roughly the same, and the full meter specification is enshrined in the IEC 60268-17 (1990) standard.

A VU meter's display is influenced by both the amplitude and duration of the applied signal. With a steady sine-wave signal applied to the input, a VU meter gives an accurate reading of the RMS (root-mean-square, or average) signal voltage. However, with more complex musical or speech signals the meter will typically under-read, and a sustained sound will produce a significantly higher indication than a brief transient signal, even if both have the same peak voltage. In theory, a VU meter should respond to both the positive and negative halves of the input audio signal, but the cheapest implementations sometimes only measure one half of the waveform, and so can provide different readings with asymmetrical signals compared to full VU meters.
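Those ballistics are easy to mimic crudely in code. The sketch below is a rough averaging meter with a ~300ms integration time, not the full IEC 60268-17 response, but it reproduces the behaviour described above: a sustained tone reads far higher than a brief burst of the same peak level.

```python
import numpy as np

def vu_reading_db(x, sr, integration_ms=300.0):
    """Crude VU-style ballistics: full-wave rectified average with a
    ~300 ms integration time (rise and fall roughly equal). Returns
    the final reading in dB relative to full scale (1.0)."""
    coeff = np.exp(-1.0 / (sr * integration_ms / 1000.0))
    level = 0.0
    for s in np.abs(x):              # full-wave: both half-cycles count
        level = coeff * level + (1.0 - coeff) * s
    return 20.0 * np.log10(max(level, 1e-12))
```

A long full-scale sine settles close to the rectified average of a sine (about 3.9dB below its peak), while a 10ms burst of the same amplitude reads tens of dB lower, which is exactly the under-reading of transients described above.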

The simplicity of the VU meter design makes it relatively cheap to implement, and so VU meters tend to be employed in equipment that requires a lot of meters — such as multitrack recorders or mixers — or where accurate level indication is not essential.

The reference level indication is 0VU, but the audio level required to achieve that could be whatever the user wished. The original SVI implementation included an adjustable attenuator to accommodate any standard operating level up to +24dBu (US broadcasters still use nominal reference levels of +8dBu). Modern VU meters usually omit the user-adjustable attenuator and are typically set to give a 0VU indication for an input level of either 0dBu or +4dBu. The latter is the most common 'pro standard', but a lot of manufacturers use the former alignment, including Mackie. In general, then, the SVI or VU meter tends to show the average signal voltage, and gives a reasonable indication of perceived loudness.

The Peak Programme Meter or PPM is a much more elaborate design and pre-dates the VU, as its development started in 1932, with the meter we know today appearing in 1938. Despite the name, PPMs don't actually indicate the true peak of the signal voltage. Early units employed a 10ms integration time (Type II meters), while later units reduced the integration time to 4ms (Type I meters). These short integration times were selected specifically to ignore the fastest transient peaks, and as a result the PPM is often referred to as a 'quasi-peak' meter to differentiate it from true-peak meters. Typically, very brief transient signals will be under-read by about 4dB. The reason for ignoring brief transients was to encourage operators to set slightly higher levels than would otherwise be the case, on the assumption that any transient overloads in recording or transmitting equipment would be inaudible, which is generally the case for analogue overloads of less than 1ms.

Whereas the VU meter has fairly equal attack and release times, the PPM is characterised by having a very slow fall-back time, taking over 1.5 seconds to fall back 20dB (the specifications vary slightly for Type I and II meters). The reasoning for the slow fall-back was to reduce eye-fatigue and make the peak indication easier to assimilate. The specifications of all types of PPM are detailed in IEC 60268-10 (1991), and the scale used by the BBC comprises the numbers 1-7 in white on a black background. There are 4dB between each mark, and PPM 4 is the reference level (0dBu). EBU, DIN and Nordic variants of the PPM exist with different scales. The EBU version replaces the BBC numbers with the equivalent dBu values, while both the Nordic and DIN versions accommodate a much wider dynamic range.
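A similarly rough sketch of quasi-peak ballistics (fast rise, then a slow fall of 20dB per 1.5 seconds) shows why a PPM holds peaks for the eye yet still under-reads very brief transients. The one-pole integrator here is a simplification of the IEC 60268-10 dynamics, so treat the exact readings as illustrative:

```python
import numpy as np

def ppm_reading(x, sr, integration_ms=10.0, fall_db_per_s=20.0 / 1.5):
    """Crude quasi-peak ballistics: a ~10 ms one-pole integrator on the
    rise, then a slow fall of 20 dB per 1.5 s. Returns the highest
    level reached (linear, full scale = 1.0)."""
    rise = np.exp(-1.0 / (sr * integration_ms / 1000.0))
    fall = 10.0 ** (-(fall_db_per_s / 20.0) / sr)  # per-sample dB decay
    level, highest = 0.0, 0.0
    for s in np.abs(x):
        if s > level:
            level = rise * level + (1.0 - rise) * s  # fast-ish attack
        else:
            level *= fall                            # very slow release
        highest = max(highest, level)
    return highest
```

Feed it a sustained tone and the reading climbs close to the tone's peak; feed it a 1ms click of the same amplitude and the short integration time means the reading barely registers.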

Wednesday, February 11, 2015

I have all the acoustic treatment I can fit into my home studio already, and have a decent amp and good-quality (PMC) passive monitors, but I have trouble judging low frequencies — so I’m thinking of adding a subwoofer. What should I consider when selecting one, and is it a good idea in my situation?

Bill Gambles, via email

SOS Reviews Editor Matt Houghton replies: You don’t state which model of speaker you have, but PMC claim that their TB2 model, for example, offers a ‘useful’ frequency response down as low as 40Hz. I can think of very few music-making scenarios where you should need particularly accurate monitoring lower than that — and in those few cases you’d need a room that could cope. If your room can’t cope, and you really do need to judge the level of a 30-50Hz sine wave, then it’s a pretty trivial matter to check what’s going on using a modern frequency-analyser plug-in.

Subwoofers aren’t necessarily the right answer to your bass-monitoring woes, particularly in studios that are set up in domestic spaces — no matter how high the quality of the subwoofer.

With this in mind, I’d suggest that you start not by thinking about subwoofers, but by attempting to check what level of bass your speakers are actually putting out into your room: play some bass-rich material over them and walk around the room boundary, starting in a corner, where the bass build-up is likely to be greatest. If you can hear an increase in very low frequencies, then lack of bass from your speakers isn’t your main problem — and adding a sub will probably just prove to be an expensive way to make matters worse.

If your speakers are doing their job, you need to do something about the room. You say you’ve already installed as much acoustic treatment as you can, but perhaps you can reconsider the nature of the treatment you’ve installed. To achieve remotely accurate low-frequency monitoring in a domestic space, the room must be treated with ample bass trapping. The idea is to absorb low-frequency waves so that they don’t bounce around the room causing all those nasty peaks and nulls. It’s pretty much impossible to install too much bass trapping, but often impossible to install enough! We’ve covered this subject many times over the years, but for ideas on relatively compact bass traps check out our Studio SOS feature from July 2006 (http://sosm.ag/studiosos-0706).

Of course, you may live in a rather grander residence than the one I pictured from your description, and perhaps have a large room or double garage at your disposal, with plenty of room for adequate bass trapping. In this case a sub might be worth considering — but even then, only once you’ve made efforts to treat the room properly. If you decide that you really do need a sub, then there’s a whole host of questions you need to answer, not just which model is best. Thankfully, our Technical Editor wrote an in-depth article on this very subject back in April 2007 (http://sosm.ag/all-about-subwoofers). I’d suggest reading that before you reach for your credit card!

I thought I had the whole M/S thing down, until I listened to a commercial record and realised its stereo image was wider than mine and yet still perfectly mono compatible!

In order to convert a mono source to an M/S pair, I bus the source audio to two separate tracks. On one track, the audio is unchanged and routed to the centre of the stereo bus (I label this track 'Mid'). The other I delay by around 10ms, then split it to the left and right of the stereo bus, with the right side inverted (I label this track 'Side'). The 'Side' track cancels when I sum to mono.

The problem I'm having is that the stereo image is not very wide. While it is clearly in stereo, it does not reach the extremes of the stereo field as it would by utilising the Haas effect. When I use a simple Haas trick, I achieve the width I desire (ie. a hole in the middle), but it is not acceptably mono compatible. Is there a trick I am missing to achieve the width that I desire in my pseudo-M/S setup, yet also maintain mono compatibility?

Via SOS web site

SOS Technical Editor Hugh Robjohns replies: When you use this method of creating a fake stereo signal from a mono source, the apparent width is determined entirely by the amount of Side signal relative to the amount of Mid signal. No Side means mono. Loads of Side means wide perceived stereo image. Too much Side signal means a hole in the middle and quiet mono!

So you should be able to make the track as wide as you want — even to the level of a hole in the middle — just by further pushing the level of the Side signal. Changing delay time also affects perceived width and size. Larger delays (30-70 ms) create more hall-like effects, while shorter delays (5-30 ms) are more subtle and less 'roomy'.

However, this kind of M/S-based fake stereo is never as convincing as real stereo. You inherently end up with a mush of frequencies spread across the sound stage; your original mono source is spread across the image like butter on bread. There is no discrete spatial positioning, and no coherent imaging. Basically, it isn't real stereo, it never can be real stereo, and comparison with a real stereo recording is pretty pointless and always disappointing!

Moreover, created in the way you describe, the stereo image will tend to be bass-heavy on the left-hand side, because the relatively short delay you are using will tend to allow low frequencies to sum in phase on the left and out of phase on the right. This can be cured by inserting a high-pass filter before (or after) the delay line to remove bass from the Side signal. Set it to about 100-150 Hz to ensure the bass content stays central.

Top: The arrangement our reader is currently using to produce a wider stereo image, which results in reduced bass on the right-hand side. The fake stereo image arises because some frequencies are stronger in one side than the other, due to the offset comb-filtering resulting from combining the original and delayed signals with the same and opposite polarities. More 'fake Side' ('S') level results in a wider perceived image. In the lower diagram, the fake 'S' signal has been high-pass filtered, avoiding bass cancellation in the right-hand side.
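The filtered-Side arrangement described here can be sketched as follows. The delay time, Side level and high-pass cutoff are illustrative assumptions, and the one-pole high-pass is a stand-in for whatever filter you would actually insert, but note that the left and right outputs always sum back to exactly twice the mono source, which is the mono-compatibility property under discussion:

```python
import numpy as np

def pseudo_stereo(mono, sr, delay_ms=10.0, side_gain=0.5, hpf_hz=120.0):
    """Fake stereo from mono: a delayed copy is added on the left and
    subtracted (polarity-inverted) on the right, so it cancels exactly
    on a mono sum. High-pass filtering that 'Side' copy keeps the bass
    central instead of piling up on one side."""
    d = int(sr * delay_ms / 1000.0)
    side = np.concatenate([np.zeros(d), mono[:-d]]) if d else mono.copy()
    # One-pole high-pass on the Side signal (cutoff is an assumption).
    a = np.exp(-2.0 * np.pi * hpf_hz / sr)
    hp = np.zeros_like(side)
    lp_state = 0.0
    for i, s in enumerate(side):
        lp_state = a * lp_state + (1.0 - a) * s  # low-pass...
        hp[i] = s - lp_state                     # ...subtracted = high-pass
    left = mono + side_gain * hp
    right = mono - side_gain * hp
    return left, right
```

Raising side_gain widens the perceived image relative to the mono content, exactly as described above, while the mono sum is unaffected by any of the settings.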

The Haas effect — which you can employ by panning the original source to one side and a short-delayed version of it to the other — can sound very wide indeed, but usually isn't very mono compatible. The only way to make a single source fill a stereo sound-stage in any kind of convincing way — in my humble opinion — is to record it in stereo in a decent-sounding acoustic space, or use a really good reverb processor to achieve a similar thing. There are other techniques that can be used to create pseudo-stereo effects using dedicated stereo-width enhancers and variations on the distortion and chorusing theme, but they all tend to change the tonality to some extent, which may not be what you're after.

I'm looking to set up another computer alongside my main computer to slave all my CPU-draining VST instruments and effects. I'm currently using Cubase 5, but looking to upgrade to 6 soon, and would like to utilise the VST System Link option. After doing some Internet research, I know System Link will only work with certain audio interfaces. I have a budget of around $500 to buy the two interfaces, and I'm wondering what you could advise me to get for that? It would be great if you could give me some information on setting up and using System Link, as there is very little on the Internet to help!

George Morton via email

SOS Reviews Editor Matt Houghton replies: Personally, I wouldn't recommend using System Link, as there are alternatives around that allow you to link two machines without requiring a second audio interface.

The one I have most experience with is FX Teleport, by FX-Max (www.fx-max.com/fxt). There's a free demo that you can try using any type of network connection, including USB and Firewire, but if you decide to use it you'll get better results from a faster network connection, such as Gigabit Ethernet. The one additional piece of hardware I'd recommend investing in is a KVM box, which allows you to use a screen, mouse and keyboard with multiple computers. That's pretty much essential when working in this way. I'd thoroughly recommend FX Teleport if another computer is the answer to your problems.

Before you invest, though, do make sure that it is more CPU power that you need. Availability of memory, or hard-drive loading, could also be a cause of problems. It could be that, for example, results are limited by your hard-disk performance, particularly if you are running an operating system, audio files and streaming sample instruments all from the same disk, in which case running those three things from separate drives might help.

Memory is often not a huge problem area, although it can be an issue on some 32-bit systems, particularly where you have a lot of hardware installed. First, there's a maximum of 4GB available in Windows XP 32-bit, of which only 2 or 3 GB is available to each application — and all of the plug-ins running within Cubase count as one program! On my old XP system, I had 4GB of memory installed, but had only 2.3GB available to applications, due to the way in which Windows allocated memory address space to my various DSP cards.

In 64-bit versions of Windows, this limitation is removed. You might run into problems with older 32-bit plug-ins if you try to run the 64-bit version of Cubase, but in my current system I'm running 32-bit Cubase on Windows 7 64-bit, with the JBridge utility allowing me to run 64-bit plug-ins (such as Kontakt) in their own address space.

If you haven't tried it yet, I'd also suggest experimenting with Cubase's Freeze facility, which allows you to 'freeze' audio and instrument tracks. This essentially performs a temporary render of those tracks and unloads any plug-ins to free up precious computer resources. You can unfreeze at any time if you need to go back and tweak, and even when frozen you still have access to features such as level and pan automation.

Finally, another option is to consider upgrading to a modern multi-core PC. That might not be quite within your budget, but if you've not upgraded for a few years, you'll be amazed at how much more you can do in a single system. One advantage is that you'll only have the noise of one machine to put up with, because remember that the more computers you have running, the greater the sound of whirring fans will be in your studio!

Monday, February 9, 2015

I’ve put together a group of commercial reference mixes. I chose them because I liked the way they sounded, but I noticed that two of them had been clipped. Are these mixes still worth using as references? And how can they clip without distorting in an ugly fashion? Finally, why would the engineer do that, especially with folk music?

Via SOS forum

Is a file really clipping? The only way to tell is by using a true–peak meter, such as this one in Steinberg’s Cubase.

SOS Technical Editor Hugh Robjohns replies: First, let me highlight a small but critically important difference: hitting 0dBFS is not synonymous with ‘clipping’ — it’s a perfectly legitimate situation to have a sample reaching 0dBFS, and a signal is only ‘clipped’ if a sample should have been allocated a higher quantisation value than was available. The only way to check the real situation is to use a ‘True Peak’ meter, as specified in the BS.1770 loudness metering recommendations — there are plenty of those from various plug–in developers. Other forms of ‘clip meter’ in your DAW may illuminate before the onset of clipping, or when there’s one or more samples at 0dBFS, and some may be user–calibrated — you need to be sure what your meter is actually telling you before you pay too much attention to the flashing red lights!

The fact that the material doesn’t sound clipped or distorted would suggest that it is, in fact, reaching 0dBFS by design (through a normalisation process or very precise limiter), but not actually being clipped. However, it’s also worth noting that while with some material you can hear even a single clipped sample, with other material you won’t hear clipping even if it lasts more than 16 samples. It’s partly frequency dependent, but also dependent on the reconstructed waveform, which ordinary peak sample meters make no attempt to analyse (hence the need for a true-peak meter).
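The gap between sample peaks and reconstructed (true) peaks is easy to demonstrate. In this hedged, standard-library-only Python sketch, a crude truncated-sinc interpolation stands in for a real BS.1770-style oversampling meter: a sine at a quarter of the sample rate is phased so every sample lands at about 0.707, while the reconstructed waveform peaks at 1.0 between samples.

```python
import math

def sinc(x):
    # normalised sinc, the ideal reconstruction kernel
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

fs = 48000   # assumed sample rate
n = 64       # analysis length in samples
# a sine at fs/4 whose phase puts every sample at +/-0.707, even though
# the reconstructed waveform reaches 1.0 between the samples
x = [math.sin(2 * math.pi * (fs / 4) * i / fs + math.pi / 4) for i in range(n)]

sample_peak = max(abs(v) for v in x)

# crude 'true peak' estimate: evaluate a truncated sinc reconstruction
# at 4x oversampling (a real meter uses a proper polyphase filter)
oversample = 4
true_peak = 0.0
for k in range(n * oversample):
    t = k / oversample
    v = sum(x[i] * sinc(t - i) for i in range(n))
    true_peak = max(true_peak, abs(v))
```

A plain sample-peak meter would report this signal roughly 3dB below full scale, while the reconstructed waveform actually touches 0dBFS, which is exactly the discrepancy a true-peak meter exists to catch.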

Are ‘peak–level’ mixes useful as references? Yes, of course they are: it’s the sound character of the mix that you’re referencing, so if the mix sounds good to you that’s all that matters. Note, though, that playing things back at the same loudness is essential if you’re to make meaningful comparisons.

There are, unfortunately, plenty of mixes that are genuinely clipped, which therefore distort to some degree. “Why would ‘they’ do that?” is a question I’ve been asking for decades! It’s technically unnecessary and ultimately destructive, but ‘art’ is often illogical, and pressure from the misguided ‘loudness wars’ lobby has encouraged or forced people to do daft things for decades, sadly.

Can you explain a few things about creating MP3s? I'm currently converting WAVs to 24-bit, 44.1kHz and then converting them to MP3, but I'm not entirely sure what effect this kind of conversion has on the sound. Will my method have a higher-quality outcome than 16-bit WAVs converted to MP3?

Via SOS web site

SOS contributor Martin Walker replies: I doubt that you'll hear any difference in practice by increasing the bit depth from 16 to 24. As long as you leave a few dB of headroom to give your MP3 encoder some 'space' to produce a clean result, the main decision to be made with MP3s is the target bit-rate. MP3 files can be created at CBR (Constant Bit Rate) values from 8Kbps to 320Kbps. Spoken word is still perfectly intelligible down to about 24Kbps, which is usually sufficient for podcasts, talk radio, and so on. Solo acoustic music performances could be acceptable at 48Kbps, although 64Kbps is probably more in line with AM radio quality.

For reasonable-quality ensemble music, many people consider 128Kbps a good baseline, especially if the intended destination is computer speakers or in-car audio systems. However, when listening on a hi-fi or on studio playback gear, many musicians find 128Kbps difficult to listen to, especially since the frequency response falls off rapidly above 16kHz, high‑frequency sounds such as cymbals sound distinctly harsh, and you can often hear a low-level background 'warbling' sound, which is the main reason that some people dislike this rate.

If you're looking for the best compromise between compression ratio and audio quality for your MP3 files, bit-rates of 160Kbps or 192Kbps are generally recommended, with 192Kbps in particular — often classed as 'near CD' quality — suitable for complex music or tracks with lots of bass content. Only on expensive playback systems can most people tell the difference between 192Kbps and CD quality.

Further up the scale, if you want some compression but minimal degradation in sound, 256Kbps is a good compromise compared with CD audio, since the frequency response is generally identical to the original up to about 18kHz, and the difference between the two is barely discernible by most people, even on high-end systems. For ultimate MP3 quality, you could choose 320Kbps, but so few people can hear the difference between this and 256Kbps (or real CDs, for that matter) that it's generally a waste of disk space.
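Since the main point of these conversions is file size, the arithmetic is worth spelling out: a CBR file occupies bitrate (bits per second) x duration (seconds) / 8 bytes. A quick sketch for a hypothetical four-minute track:

```python
# Back-of-envelope CBR MP3 sizes: bitrate (bits/s) x duration (s) / 8 bytes.
# Ignores the few bytes of ID3 tags and header overhead in a real file.
def mp3_size_mb(bitrate_kbps, seconds):
    return bitrate_kbps * 1000 * seconds / 8 / 1_000_000  # decimal megabytes

four_minutes = 4 * 60
sizes = {rate: mp3_size_mb(rate, four_minutes) for rate in (128, 192, 256, 320)}
# roughly 3.84 MB at 128Kbps, 5.76 MB at 192Kbps, 9.6 MB at 320Kbps
```

So stepping up from 128Kbps to 320Kbps multiplies the file size by 2.5, which is why the audibility thresholds discussed above matter when choosing a rate.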

Of course, the main point of all these conversions is to reduce file size, and most MP3 encoders also offer a choice of VBR (Variable Bit Rate), in which the bit rate is altered dynamically during your track. Because VBR can rise during complex passages and drop during simpler sections, with some material it can sound significantly better when compared to a similarly sized CBR file, and instead of numeric values you may be offered a quality setting anywhere from 'highest' to 'lowest'. However, VBR is rarely used for online audio streaming because its constantly changing data stream encourages glitches and errors, as it does on some older MP3 players, particularly when fast forward or rewind controls are used.

The above guidelines are fine for the average punter, but as musicians, what should we really be listening for when deciding on bit-rate? Well, because of the way MP3 encoding relies on one frequency 'masking' another nearby at a lower level, any instrument that glides from one frequency to another (such as fretless or acoustic bass, guitar whammy-bar excursions, Theremin or trombone solos) may result in audible artifacts, so listen out for these and use a higher setting if required.

MP3 encoding reduces file size partly by Frequency Masking (discarding information that is unlikely to be heard because of nearby louder tones), but it can be fooled by some types of music, such as gliding or pure tones.

Another killer combination for the MP3 encoder is a pure solo tone, such as a long, high vocal or flute note, or guitar feedback tone, with complex but quiet instrumentation behind it. Listen out for distortion or general fuzziness where the encoder has decided that parts of the background instrumentation are redundant: they might be with a rock band and screaming guitar solo, but not with a quartet featuring a flute solo.

Ultimately, though, all these choices pale into insignificance if your MP3 files are intended for online streaming. Many sites, such as Soundcloud, YouTube, and so on, convert incoming audio to their own chosen format so, unless you're offering downloadable MP3 files, you're often stuck with whatever quality choices these other delivery sites choose for you (typically around 128Kbps).

I've heard a lot about high-pass filtering tracks to reduce clutter at mixdown, but not as much about low-pass filtering in this context. Would mixes suffer or benefit from doing the same at the opposite end? For example, would it be easier to bring out 'air' in a vocal if other parts were low-passed?

Via SOS web site

SOS contributor Mike Senior replies: Particularly in small-studio environments where the low-frequency monitoring fidelity is questionable, there's a lot to be said for high-pass filtering in a fairly systematic way to head off problems at mixdown. However, widespread low-pass filtering offers fewer benefits, simply because so many instruments in a mix will have harmonics and noise components that extend right up the spectrum. In practice, I find peaking/shelving cuts are, therefore, more appropriate for dealing with typical mixdown tasks, such as frequency-masking problems. Yes, in theory you could make your lead vocal sound airier by low-pass filtering the other parts, but you'd still have to consider how the mix as a whole will sound during moments when the vocal isn't active, so achieving an airy vocal in practice isn't usually as simple as this.

Although fairly systematic high-pass filtering is very sensible in home-studio mixing, as you can see in this screenshot from a recent Mix Rescue project, it's rarely beneficial to apply low-pass filtering in a similar way.

Having said that, there's nothing wrong with low-pass filtering if you really want to kill the high frequencies of an instrument for balancing reasons. I would most commonly do this with amped instruments, such as electric guitars, which are capable of contributing a lot of undesirable amplifier noise in the top two octaves of the audible spectrum. However, this has to be evaluated on a case-by-case basis, because it's very easy to dull the overall mix if you're not careful.
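For illustration only, here is a low-pass filter in its simplest possible form: a one-pole design with a gentle 6dB/octave roll-off (nothing like a surgical mixing EQ, and the cutoff and sample-rate values are just assumptions) sketched in Python:

```python
import math

# One-pole low-pass: a gentle 6 dB/octave roll-off, enough to illustrate
# taming an amped guitar's hissy top end (not a surgical mix EQ).
def one_pole_lowpass(samples, cutoff_hz, fs=44100.0):
    a = math.exp(-2.0 * math.pi * cutoff_hz / fs)  # feedback coefficient
    state, out = 0.0, []
    for v in samples:
        state = (1.0 - a) * v + a * state  # smooth towards the input
        out.append(state)
    return out

# steady low-frequency content passes almost untouched...
dc = one_pole_lowpass([1.0] * 2000, 100.0)
# ...while fast sample-to-sample alternation is heavily attenuated
buzz = one_pole_lowpass([1.0, -1.0] * 1000, 100.0)
```

Real mixing filters cascade several such poles for steeper slopes, but the principle, trading high-frequency content for a smoother top end, is the same.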

Having recently started making and recording my own music, I need to start thinking about backing it up. At the moment, I'm just keeping everything on my hard drive, which I'm somewhat nervous about (I've often heard people say that digital data doesn't exist at all unless it exists in at least three places!), so I need to sort out a system quickly. What procedure/system would you recommend?

Julia Webber via email

SOS contributor Martin Walker replies: It's very refreshing to find a musician who even thinks about backing up data at such an early stage: often people only consider the options having dried their eyes after losing a lot of irreplaceable songs. Hard drives can and do go wrong, and catastrophic failures can happen in a microsecond, leaving you unable to retrieve any of your previous files (companies do exist that specialise in bringing data back from the dead, but they tend to be expensive).

So it pays all of us to make regular backups, then we can laugh when disaster strikes and restore our most recent backup rather than lose any data: even if the very worst happens and the entire hard drive goes belly-up, it's entirely possible to plug in a replacement drive and be up and running again within a couple of hours.

First, you need to decide how often you need to back up. To answer this question, just decide how much work you are prepared to lose. Many hobbyists and some professionals are happy to back up once a week, but always back up immediately you've finished an important session as well, just in case. Second, decide how best to organise your data to make each backup as easy as possible: after all, the easier it is, the more likely you are to do it, and consequently the less data you are likely to lose if anything does go wrong.

As long as your hard drives are well organised, even a freeware utility like Paragon's Backup & Recovery can be ideal for backup purposes.

I prefer to organise my hard drives by dividing them into various partitions, each devoted to a specific subject such as Operating System + Applications, Audio Projects, Samples, Updates, My Data and so on. Most modern operating systems let you partition your drives in any way you wish. Although this takes a little more effort at the start of your backup regime, for me the huge advantage of separating your data from the operating system and applications is that you can take global backups of entire partitions using a drive-imaging utility such as Acronis True Image or Norton Ghost. This way, you'll know that absolutely everything on that partition will be contained within each backup file (even those plug-in presets you create that get tucked away somewhere safe and then forgotten!).

The alternative is to leave all your data spread across the one huge default partition for each drive, and use backup utilities that let you specify which files to back up and which to ignore, such as Mac OS X's Time Machine and Windows 7's Backup. Some audio applications, such as WaveLab, also offer dedicated backup functions. Once again, this takes time to set up initially, and this approach also relies on you specifying a comprehensive list of files to save, so if you forget something vital, you may come a cropper later on.

Whether you choose drive imaging or a dedicated backup utility, you can create a global backup file but, to save time and storage space later on, both may also offer the subsequent option of much smaller incremental backup files that only contain files that have been added or changed since your most recent backup.
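The incremental idea is simple enough to sketch: copy only files that are missing from, or newer than, the backup. This hypothetical Python helper (not any particular utility's behaviour; it handles no deletions and keeps no version history) shows the principle:

```python
import os
import shutil
import tempfile

def incremental_backup(src, dst):
    """Copy files from src into dst only when missing or newer: a minimal
    incremental pass. Returns how many files were actually copied."""
    copied = 0
    for root, _dirs, files in os.walk(src):
        rel = os.path.relpath(root, src)
        target_dir = os.path.join(dst, rel)
        os.makedirs(target_dir, exist_ok=True)
        for name in files:
            s = os.path.join(root, name)
            d = os.path.join(target_dir, name)
            # copy if the backup copy is absent or older than the original
            if not os.path.exists(d) or os.path.getmtime(s) > os.path.getmtime(d):
                shutil.copy2(s, d)  # copy2 preserves timestamps
                copied += 1
    return copied

# demo: back up a scratch folder twice; the second pass copies nothing,
# because copy2 preserved the modification times on the first pass
src, dst = tempfile.mkdtemp(), tempfile.mkdtemp()
with open(os.path.join(src, 'song.txt'), 'w') as f:
    f.write('take 1')
first = incremental_backup(src, dst)
second = incremental_backup(src, dst)
```

Real backup tools add versioning, verification and deletion handling on top, but the time-saving core, skipping unchanged files, is just this comparison.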

The final choice is where to store your backups. The most important thing is to store them separately from the original data, so that they are unlikely to be damaged with the originals. If your computer has multiple hard drives, a very quick and easy regime is to store backups of one drive onto the other: this protects you if one drive becomes faulty, but not if your entire computer goes up in a puff of smoke.

For greater security, another set of backup data should be stored away from your computer, either on removable media such as USB sticks, CD-Rs, DVD-Rs, or removable or Firewire/USB hard drives. It also makes more sense to store these backups in a completely different location, so that even if your house burns down your data remains intact. Cloud-based online backups, such as Dropbox or Amazon S3 (Simple Storage Service), are very handy if you have a fast connection, although uploading speeds can be cripplingly slow compared to downloads. A much quicker and easier alternative may be to swap backups with local friends or family: you keep a regular copy of their backups and they keep a copy of yours.

Thursday, February 5, 2015

Sound Advice : Mixing
I have recently purchased a Golden Age Project Pre 73 MkII and Comp 54 on the recommendation of someone from the SOS forums, and I am so pleased. I use an RME Babyface and wondered, with my limited hardware, would it be possible to output my final mix one channel at a time through the Comp 54? The reason I ask is that the hardware adds something that no VST seems to be able to do. If someone knows how I could do this it would be great. If it matters, the DAW I am using is Reaper.

Via SOS web site

SOS Technical Editor Hugh Robjohns replies: The answer is yes, but it's not as straightforward as it might appear and you need to be careful.

The basic problem is that when you're working with a stereo mix the stereo imaging is determined by the subtle level differences of individual instruments in the two channels. A compressor exists to dynamically alter the level of whatever you pass through it, depending on that signal's level.

Imagine an extreme situation where you have some gentle acoustic guitar in the centre of your mix image, and some occasional heavy percussion panned hard left. If you process those two channels with separate unlinked compressors, the right channel compressor only sees a gentle guitar and does nothing, while the left channel compressor will feel obliged to wind the level back every time the mad drummer breaks out.

While you may like the effect a certain piece of gear (like this Golden Age Project Comp 54) has on your recordings, passing your left and right channels through it separately is not a good idea. The reason for this is that the compressor can only react to what it is fed at any given time. So when the left and right channels are heard together — after being run through the Comp 54 — the sound will be very uneven. You can get around this by setting up an external side-chain input, which will cause the compressor to react to what it gets from the other channel, but with the Comp 54 this is not possible, so another approach altogether might be in order.

Listen to the two compressed channels afterwards in stereo and the result will be a very unsettled guitarist who shuffles rapidly over to the right every time the percussionist breaks out (probably a wise thing to do in the real world, of course, but not very helpful for our stereo mix).

If you process your stereo mix one channel at a time through your single outboard compressor, that's exactly what will happen. The compressor will only react to whatever it sees in its own channel during each pass, and when you marry the two compressed recordings together again you will find you have an unstable stereo image. The audibility of this, and how objectionable you find it, will depend on the specific material (the imaging and dynamics of your mix), but the problem will definitely be there.

Stereo compressors avoid this problem by linking the side chains of the two channels, so that whenever one channel decides it has to reduce the gain, the other does too, and by the same amount. In that way it maintains the correct level balance between the two channels and so avoids any stereo image shifts.

You can achieve the same end result if your single outboard compressor has an external side-chain input, but sadly I don't think the Golden Age Project model does. If it did, what you'd need to do is create a mono version of the stereo mix in your DAW and feed that mono track out to the compressor's external side-chain input, along with one of the individual stereo mix channels (followed by the other). That way, the compressor will be controlled only by the complete mono mix when processing the separate left and right mix channels, so it will always react in the same way, regardless of what is happening on an individual channel, and there won't be any image shifting.
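The effect of side-chain linking can be sketched numerically. In this deliberately simplified Python model (an instantaneous hard-knee gain law with no attack or release, and made-up threshold and ratio values), unlinked detection changes the left/right balance of a hard-panned hit, while a shared detector preserves it:

```python
def comp_gain(level, threshold=0.5, ratio=4.0):
    # hard-knee gain law, no time constants (illustrative values only)
    if level <= threshold:
        return 1.0
    compressed = threshold + (level - threshold) / ratio
    return compressed / level

# a loud percussion hit panned hard left, a quiet guitar near the centre
left, right = 0.9, 0.1

# unlinked: each channel uses its own detector, so only the left ducks
unlinked_l = left * comp_gain(abs(left))
unlinked_r = right * comp_gain(abs(right))

# linked: both channels share one detector (the louder of the two),
# so they duck together and the left/right ratio is preserved
shared = comp_gain(max(abs(left), abs(right)))
linked_l, linked_r = left * shared, right * shared
```

In the unlinked case the left channel drops while the right stays put, shifting the image; in the linked case the 9:1 level ratio between the channels survives intact.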

That's no help to you with this setup, of course, but don't give up yet, as there is another possibility. You could take an entirely different approach, and that's to compress the mix in a Mid/Side format instead of left-right. It involves a bit more work, obviously, as you'll need to convert your stereo track from left-right to Mid/Side, then pass each of the new Mid and Side channels separately through the compressor, and then convert the resulting compressed Mid/Side channels back into left-right stereo. Using an M/S plug-in makes the task a lot easier than fiddling around with mixer routing and grouping, and there are several good free ones around.

The advantage of this Mid/Side technique is that, although the Mid and Side signals are being processed separately and independently, the resulting image shifts will be much less obvious. The reason for this is that instead of blatant left-right shifts, they will now be variations in overall image width instead, and that is very much less noticeable to the average listener.
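The conversion itself is just a sum/difference matrix, which is what those M/S plug-ins implement. A minimal sketch:

```python
# The sum/difference matrix behind M/S processing: Mid is what the two
# channels share, Side is what differs between them.
def lr_to_ms(left, right):
    return (left + right) / 2.0, (left - right) / 2.0  # mid, side

def ms_to_lr(mid, side):
    return mid + side, mid - side                      # left, right

# round trip: encode, process each channel separately, then decode
l, r = 0.8, 0.2
m, s = lr_to_ms(l, r)
l2, r2 = ms_to_lr(m, s)  # recovers the original left/right pair
```

Compressing the Mid and Side signals separately between the encode and decode steps is exactly the workflow described above, and the matrix guarantees that an unprocessed round trip is lossless.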

Sorry for the long-winded answer, but I hope that has pointed you in the right direction.

SOS Reviews Editor Matt Houghton adds: I agree with Hugh's suggestion of M/S compression. I regularly use that when I want to deploy two otherwise unlinkable mono compressors, and there's no reason why you can't process the Mid and Side components one at a time. The only issue here will be your inability to preview what you're doing to a stereo source, so be careful not to overwrite your original audio files! However, I sense that it's the effect of running through the compressor's transformers that you're hoping to achieve. In that case, just set it to unity gain and set the threshold so that the unit isn't compressing, and then run the signal through it. If it is standard L/R compression you want, you could always get another Comp 54, as although they're mono processors they're stereo-linkable with a single jack cable.

In Cubase, I find that the best approach to incorporating such outboard devices into my setup is to create an External FX plug-in for each device, and then insert that on each channel and print the result. In Reaper, the equivalent tool is the excellent ReaInsert plug-in. This approach not only makes the process less labour intensive in the long run, but means that you can drag and drop the processor to different points in the channel's signal chain, should you want to.

I'm considering AVI's Pro Nine Plus system for my main nearfield monitoring, partly based on the great review they got from Paul White back in September 2005 (/sos/sep05/articles/avipronine.htm). Should I be worried about the fact that the tweeter mounting and porting are both asymmetrical in relation to the main driver? Will this cause slight delay/phasing issues when placed in the classic equilateral-triangle stereo setup?

AVI's Pro Nine Plus monitors (left) have their tweeter mounting and porting asymmetrical to the main driver. This is actually relatively common, as you can see from the Acoustic Energy AE22s and the Dynaudio BM15As (below).

Via SOS web site

SOS contributor Mike Senior replies: Opinions differ about a lot of aspects of speaker design, as you can easily see even just from comparing the external appearance of a selection of similarly priced monitors. One such moot point is how important symmetrical driver placement is, and the Pro Nines are by no means the only speakers that have their tweeters skewed to one side like this. The Acoustic Energy AE22s and Dynaudio BM15As both feature this kind of setup, and are both nonetheless well-regarded. Although I've not tried these specific speakers myself, the main thing I'd be wary of in principle is that the size of the stereo sweet spot may be reduced. No matter which way you move your head (forward/back, side to side, or up/down), the potential for inter-driver phasing in the mid-range appears to me to be greater than with a more traditional vertically stacked driver configuration. Even if this theoretical concern is borne out in practice, though, the real question is how much it'll matter to you. If you're happy to stay in the sweet spot most of the time, and can check the mid-range balance with a single-driver speaker such as an Auratone (or similar), then it may not be a huge practical concern. Personally, I'd say that if you like the speakers otherwise, don't let the asymmetry be a deal-breaker.

As for the ports, again I don't think their asymmetry should really put you off, and although I find that porting in budget-level monitors can cause all sorts of low-end monitoring problems, I would imagine that these speakers are probably getting into the kind of price range where the potential problematic side-effects of the porting are kept well enough under control that you can work with them for mixing purposes. Certainly, the 90Hz low-end boundary on the published frequency-response figure leads me to suspect that the port hasn't been overhyped, as it seems to be on many budget models, and that counts for a lot in terms of accuracy.

My daughter managed to play a tough piece she's been practising on the keyboard this weekend. She played it so well that we clapped our hands... then we noticed how strange the clapping sounded. It rang on but died very quickly, and for the time it rang on, it sounded very metallic and almost robotic. That was close to the middle of the room.

The room is partially treated at the moment, with panels at the side-wall reflection points, one on the ceiling, and three corner superchunks. I tried clapping again with some further panels on the side walls directly to the left and right of where I was sitting, and the noise disappeared. I understand enough to realise the sound is the clap bouncing back and forth between the two walls, and I'm guessing that this is what folk refer to as flutter echo. What I'm a little less sure about is whether it is a problem, and what — generally — a hand clap should sound like in a well-treated room.

Via SOS web site

SOS Technical Editor Hugh Robjohns replies: If we're talking about the sound in a control room, the point is what the room sounds like when listening to sound from the monitor speakers. It is conceivable that, by design (or coincidence), the acoustics could well sound spot on for sounds from the speakers, but less accurate or flattering for sources elsewhere. And, unless you're planning on recording sources in the control room at the position you were clapping your hands, those flutter echoes might not represent a problem or require 'fixing'.

However, in general, strong flutter echoes are rarely a good thing to have in a control room and I'd certainly be thinking about putting up some absorption or diffusion on those bare walls to prevent such blatant flutter echoes.

Flutter echoes in a studio can be distracting and fatiguing, so it's often worth putting up some absorbent foam on bare walls to reduce them. Don't overdo it, though: you need to maintain a balanced acoustic.
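If you're curious about the numbers behind that metallic ringing, a reflection making a full round trip between two parallel walls travels twice the wall spacing, so the echoes repeat at roughly c / 2d per second. A back-of-envelope Python check, assuming a hypothetical 3m wall spacing:

```python
# Flutter-echo timing between two parallel walls (back-of-envelope).
# A full wall-to-wall round trip covers twice the spacing, so the
# reflections repeat roughly every 2d/c seconds.
speed_of_sound = 343.0  # m/s at around 20 degrees C
wall_spacing = 3.0      # metres -- an assumed room width

round_trip_s = 2 * wall_spacing / speed_of_sound   # about 17.5 ms
repeats_per_second = 1.0 / round_trip_s            # about 57 per second
```

At fifty-odd repeats per second the individual echoes fuse into a pitched, metallic ring rather than discrete slap-backs, which matches the description above.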

You shouldn't go overboard with the room treatment, though: while working in a control room that has 'ringy' flutter echoes or an ultra-live acoustic is very distracting and fatiguing, trying to work in a room that sounds nearly as dead as an anechoic chamber is just as bad!

Of course, traditional control rooms are pretty dead, acoustically speaking, and that is necessary so that you can hear what you are doing in a mix without the room effects dominating things. But the key is to maintain a balanced acoustic character across the entire frequency spectrum. The temptation in your situation might simply be to stick a load of acoustic absorbers on the walls, and that would almost certainly kill the flutter echoes, but in doing so there is also a risk that you'd end up with too much HF and mid-range absorption in the room (relative to the bass-end absorption).

That situation would tend to make the room sound boxy, coloured and unbalanced, and that's why a better alternative, sometimes, is to use diffusion rather than absorption; to scatter the reflections rather than absorb them. The end result is the same, in that the flutter echoes are removed, but the diffusion approach keeps more mid-range and HF sound energy in the room.

The question of which approach to use — diffusion or absorption (or even a bit of both) — depends on how the rest of the room sounds, but from your description I'd say you still had quite a way to go with absorption before you've gone too far.

To sum up, I'd suggest that you're not worrying unnecessarily, and that it would help to put up some treatment to reduce those flutter echoes.

Tuesday, February 3, 2015

I have been making music for years now, and although I have a set of Genelec 8040s that I use during the day (when I'm home), I have been using a set of Audio-Technica M50 headphones for writing at night, when I usually have the ideas and desire to write, but am unable to, due to neighbours and a sleeping wife.

However, lately I have been unable to use the cans, as I've been experiencing discomfort and what I believe is the onset or warning signs of tinnitus. It's been a nightmare trying to adapt to not using cans at night, and I find it almost impossible to get anything other than sequencing done at this low volume!

I'm wondering whether there are any miracle headphones or bits of kit that would minimise hearing damage or discomfort while still being (relatively) accurate and enjoyable to use.

Via SOS web site

SOS Technical Editor Hugh Robjohns replies: Firstly, regarding the tinnitus: it's very common, often temporary and may be nothing to worry about. It can be brought on by something as simple as drinking too much coffee or suffering a mild ear infection, but don't ignore or neglect it. Go and see a medical professional and get checked out! If there is a problem, early intervention could make all the difference.

I don't think there are any 'miracle' solutions in headphones. Basically, it comes down to self-control in establishing the most appropriate maximum level for those particular headphones and sticking to it. The simplest solution is to put a mark on the headphone volume control and exercise enough self-discipline to never turn it up past that. If you reach a stage in your mixing when you're finding that maximum level is too quiet, take a break. Give the ears a little time to relax and reset, and then start again.

More volume is not the answer, though. It might seem more exciting and involving, but it doesn't really help to make better mixes — in fact, it usually makes them worse! The reason is that greater volume allows you to hear through a bad mix more easily, and poor balances aren't perceived as such. Working at more moderate levels — the kind of volume that most end listeners will use — encourages a far more critical approach to the mix, as poor balances sound obviously awful! Mixing becomes much harder, certainly, but also much more accurate and with far better end results. This is true of both speakers and headphones.

By all means turn the volume up if you need to check low-level background noises and so on, but do so only briefly. Try to mix at a modest level, and keep that level fixed. If you continually change your monitoring level, your mix will change continually too!

However, the fatigue you're experiencing may involve more than just sheer volume. The M50s are pretty good for the money, but I think you might find it easier to work with a pair of good open-backed headphones that are more revealing. You might find it helpful to read the comments and suggestions for different models in a headphone comparison article we ran in the January 2010 issue (/sos/jan10/articles/studioheadphones.htm). If possible, try different models before buying, to make sure the weight, headband pressure and size of the ear cups suit your head and are comfortable. Open-back headphones do 'leak' more sound than closed headphones, though, and that may be an issue for your wife!

The M50, being a closed-back design, tends to be less revealing of mid-range detail than a good open-backed headphone, and a consequence of this is a natural tendency to keep cranking the level to try to hear further into the mix, but more volume still doesn't quite reveal what you want to hear! Headphones that exert a strong pressure on the sides of the head can also add to the sense of physical fatigue, and the sealed nature of the earpieces quickly makes your ears hot and uncomfortable, which also doesn't help.

I'd recommend trying some good open-back headphones, like the AKG 702s, Sennheiser HD650s or the Beyerdynamic DT880 Pros. They are expensive, but I think you'll find it far easier to mix with them and you'll be much less tempted to wind the level up, although it is still very important to take frequent breaks to allow your perception of volume to reset! Headphones of this calibre provide a top-notch monitoring system that will last for decades if well looked after, and you'll probably hear all sorts of details that your Genelecs don't reveal, too.

If you find that your closed-back headphones are quite fatiguing, it may be a good idea to try some open-back headphones, such as these AKG 702s. Decent open-back headphones are often more revealing than closed-back models, and may therefore reduce the temptation to increase the volume.

Obviously, though, there is no physical sensation from the low frequencies when using headphones, as there is when using speakers, and that can also be a factor in the continual desire to turn the level up, especially if you're producing music that demands strong bass content. The only way around that is self-discipline and learning to trust your headphones.

As a last resort, if you don't think you have the self-discipline to leave the volume control alone, it might be wise to consider investing in a suitably calibrated headphone limiter. Again, it's an expensive option, but I'd suggest that it's well worth it to protect your priceless ears! There's some useful background information here: www.tonywoolf.co.uk/hp-limiters.htm. Also, Canford Audio offer various types of headphone level limiter that can be installed inside headphones or wired into the cable. These are based on a clever BBC design, which is now mandatory within the corporation to ensure that BBC staff don't expose themselves to excessive SPLs through their phones, and it works extremely well. You can read more about it here: www.canford.co.uk/technical/PDFs/EarphoneLimiters.pdf.
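To give a sense of why calibrated limiting matters, it's worth knowing the arithmetic behind common hearing-safety guidance: NIOSH's occupational criterion allows eight hours of exposure at 85dBA, with every 3dB increase halving the safe listening time. A minimal sketch of that rule of thumb (the function name and defaults are mine, for illustration; they're not taken from any particular limiter's documentation):

```python
def safe_exposure_hours(level_dba, rel=85.0, exchange_db=3.0, base_hours=8.0):
    """Approximate safe daily exposure time for a given average SPL,
    using the NIOSH 85 dBA / 3 dB exchange-rate criterion."""
    return base_hours / 2 ** ((level_dba - rel) / exchange_db)

# Every extra 3 dB halves the safe listening time:
print(safe_exposure_hours(85))   # 8.0 hours
print(safe_exposure_hours(94))   # 1.0 hour
print(safe_exposure_hours(100))  # 0.25 hours, i.e. 15 minutes
```

The practical lesson is how quickly the safe time collapses as the level creeps up, which is exactly why a fixed, modest monitoring level (or a limiter that enforces one) is so valuable.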

I realise that there are advantages to monitoring on a single 'middly'-sounding small speaker (such as an Avantone MixCube) from time to time while mixing, to get an idea of what the music might sound like on typical cheap consumer playback systems. However, I mix mainly deep house and lounge, which is quite rich in high and low frequencies, and these are easily conveyed by the full-range playback systems in trendy restaurants, cafes and clubs, but not by these small monitors. Does using one while mixing, therefore, actually make any sense for me? Also, if I did decide to get a single MixCube, I guess the best place to put it would simply be in the middle of the desk, but the problem is that that's where my computer screen is! Should I buy a higher speaker stand to go above my screen and angle the Avantone down towards me?

Nicolas Issid via email

SOS contributor Mike Senior replies: If your music were only ever played on larger full-range systems like those you mention, the usefulness of limited-bandwidth referencing would indeed be reduced. However, I'd personally think twice about targeting the sound too narrowly for one type of playback system, and would be inclined to prepare my music for lower-resolution playback in case it, for any reason, gets transmitted for wider consumption — on the Internet, say, or as part of a TV programme, radio advert or computer game.

Even if you mix music primarily aimed at full-range venue playback systems, there's still something to be gained at mixdown from checking your mix on a single small speaker.

That apart, though, I think you're slightly underestimating the value of something like the Avantones, because they're not just about the 'middly' frequency response. Their small-scale, single-driver, portless design makes them much more revealing as far as simple mix balance issues are concerned (ie. for deciding what level each instrument should be at) than almost any even marginally affordable full-range nearfield/midfield monitoring system. This is even more the case if you use only one such speaker, rather than a stereo pair, as you also avoid inter-speaker phasing issues. Overall, I think you'd still benefit a great deal from this kind of speaker even if you mix primarily for larger speaker systems.
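The inter-speaker phasing point is easy to demonstrate numerically: any component that is out of phase between the two channels cancels when the channels combine, which is effectively what happens at a single speaker. A toy numpy sketch (the signals are contrived purely for illustration):

```python
import numpy as np

sr = 44100
t = np.arange(sr) / sr                  # one second of time values
in_phase = np.sin(2 * np.pi * 440 * t)  # identical in left and right
out_of_phase = -in_phase                # right channel polarity-inverted

# Fold each stereo pair down to mono, as a single speaker effectively does:
mono_good = 0.5 * (in_phase + in_phase)
mono_bad = 0.5 * (in_phase + out_of_phase)

print(np.max(np.abs(mono_good)))  # ~1.0: survives the mono fold-down
print(np.max(np.abs(mono_bad)))   # 0.0: cancels completely
```

Real mixes sit between these extremes, of course, but anything leaning towards the second case will sound weak or hollow on a single speaker, which is part of what makes the mono check so revealing.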

As far as speaker placement is concerned, in my opinion it doesn't really matter much where you put it, as long as it's pointing roughly in your direction and you're not getting acoustic reflection problems from a nearby room boundary or other hard surface. The only disadvantage of mounting a single speaker off centre is that it may temporarily skew your stereo perception to one side after you've been listening for a while. Not that this is actually a significant mixing problem in practice, though, because it's very easy to work around.