Professional & Audio Tips

Mastering your Music Before you Release it

Before releasing your music to the world through digital & physical distribution, it is highly recommended that artists get their music professionally mastered.

There are two parts to the mastering process. The first step, the one artists tend to be most familiar with, is taking the mixes to the next level: making them sound fuller and making sure the project sounds great in its entirety.

The second part of the mastering process is preparing the project for both physical and digital distribution. This step requires creating specific versions of the project for different mediums and release formats such as high-resolution release, iTunes, CD duplication/replication and any other form of digital release.

Click here for more information on the mastering services and rates we offer at So Amazin' Studios.

Digital & Physical Music Distribution

Physical Distribution

For physical distribution, a CD first has to be encoded as a whole with a product UPC code. Each song is then individually given an ISRC code, and CD text is added.

UPC codes are the barcodes found on any product; they are used to track sales.

ISRC codes are used to track royalty collection.

CD text is the information (such as song name and artist name) that pops up on your CD player.

These UPC/ISRC codes are also necessary to upload music for online distribution on platforms such as iTunes, Spotify etc.

Click here to read a more in-depth breakdown of UPC/barcodes, ISRC codes, and CD text.

Digital Distribution

When looking for a digital distributor, many musicians feel lost when thinking about how to get their music on iTunes, Spotify, or any other digital retail store.

The only problem is that none of these digital retail stores will let you upload your music directly. You will either need to be signed to a major (or big indie) record label or use a Digital Music Distributor to get there.

Some of the most popular digital music distributors, also known as "aggregators," are TuneCore, CDBaby, and ReverbNation.

Choosing a digital music distributor can sometimes be tricky because of how they price everything. Below is a chart of a few digital music distributors and how their pricing is laid out. Hopefully, this will help you choose the one that fits you best.

Copyrighting & Registering your Music

It's imperative that every artist and producer starting out in the music industry be educated on protecting their music from people who think they can take advantage of the new kid on the block. Over the years, many people have asked me what exactly registering their music copyright will do for them and how to go about the actual process of registering their music.

When speaking about music, many people think that "copyrighting" means registering their music with the government (U.S. Copyright Office). This is one big misconception. Creations of the mind, such as inventions and artistic works (including music), are called "intellectual property." Once that intellectual property or idea is expressed in a tangible form (something that can be physically touched) or a fixed form, such as a recorded song, it is automatically copyrighted. Essentially, as soon as the work is fixed, you automatically become the copyright holder.

So what exactly is a copyright?

In a nutshell, copyright is a law that gives you ownership over anything you create and also exclusively grants you several rights as the owner, including the right to:

Reproduce the work

Make derivative works (a work based on or derived from an existing work. Sampling music would be an example of this)

Distribute copies

Perform the work

Display the work publicly

What will registering your copyright with the U.S. Copyright office do for you?

Registering your copyright with the U.S. Copyright Office creates a public record that you are the owner of a work and establishes the date of its creation. This matters because if someone steals your music, also known as "copyright infringement," that record protects you if a lawsuit does take place.

A copyright lasts for 70 years after the death of the author. For works of corporate authorship (works made for hire), it lasts 95 years from publication or 120 years from creation, whichever expires first.

There are actually 2 copyright forms that should be registered:

Form PA (Performing Arts copyright, aka the songwriting copyright) – protects the lyrics and the underlying musical composition or melody.

Form SR (Sound Recording copyright) – protects the actual recording of the song.

If the copyright claimant is the same for both the sound recording and the underlying musical composition, then a single registration can cover both by filling out Form SR (on Form SR, there is a space where one can specify that the claim covers both works).

To register your copyright with the U.S. Copyright Office, go to Copyright.gov.

Filing costs $35 per application if done online and $60 for paper filings.

One can file a single song or a collection of songs under one application for the same rate; the collection is registered under one name, the "collection title."

I hope this helps and if anyone has any further questions please comment below or follow me on Instagram @KeyLoww

Mastering for Different Release Formats

In the 21st century, more and more consumers are buying and streaming music online every day, moving away from physical CDs. In the physical world there is a standard for how CDs should be made, called the "Red Book" standard. Among its specifications, CD audio must have a sample rate of 44.1 kHz and a bit depth of 16 bits. The standard is the same for all CD-DAs (Compact Disc-Digital Audio) distributed worldwide.

Sample rate – A sample is a digital snapshot of an analog waveform; the number of samples captured per second is called the sample rate. The higher the sample rate, the more of the waveform is captured (higher-quality audio).

Bit depth – The resolution of the digital representation of the waveform's amplitude (loudness). The higher the bit depth, the more accurately the wave's amplitude is represented and the more dynamic range is captured. A 16-bit recording has a dynamic range of 96 dB; a 24-bit recording has a dynamic range of 144 dB.
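The arithmetic behind these two numbers is straightforward: each bit of resolution adds roughly 6.02 dB of dynamic range, and uncompressed size is sample rate × bit depth × channels × duration. A quick sketch in Python:

```python
# Rough audio math: dynamic range and uncompressed file size.
# Each bit of resolution adds ~6.02 dB of dynamic range.

def dynamic_range_db(bit_depth: int) -> float:
    """Theoretical dynamic range of a PCM recording, in dB."""
    return 6.02 * bit_depth

def uncompressed_size_mb(sample_rate: int, bit_depth: int,
                         channels: int, seconds: float) -> float:
    """Size of raw PCM audio in megabytes."""
    bits = sample_rate * bit_depth * channels * seconds
    return bits / 8 / 1_000_000

print(round(dynamic_range_db(16)))   # 96 dB, as on a Red Book CD
print(round(dynamic_range_db(24)))   # 144 dB
# One minute of CD audio (44.1 kHz, 16-bit, stereo):
print(round(uncompressed_size_mb(44_100, 16, 2, 60), 1))  # ~10.6 MB
```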

We now live in a world where music is no longer just distributed physically; most music is now distributed digitally online. Mastering engineers are no longer mastering just for physical release but also for digital release. The problem with digital release is that there is no single standard that will sound its best on every medium, such as music stores and streaming sites. If you want your music to sound the best it possibly can, whether it's going to be released on iTunes, radio, SoundCloud, or YouTube, or is simply going to end up as an MP3, then you must take a slightly different approach before exporting your final masters.

Below are a few tips on how to approach your final masters, whether they are destined for iTunes, radio, SoundCloud, or YouTube, or you simply want the best-sounding MP3s:

Mastered for iTunes

The final iTunes upload is a high-quality iTunes Plus AAC file, which uses a 44.1 kHz sample rate and is encoded with a 256 kbps target bit rate.

The old AAC format was only 128 kbps.

iTunes wants files with more dynamic range; they are trying to bring back that '70s vinyl sound.

If you don't properly correct clipping manually, iTunes will automatically reduce the clipping of whatever file you upload, causing a reduction in sound quality.

Although iTunes doesn’t reject files for a specific number of clips, tracks which have audible clipping will not be badged or marketed as Mastered for iTunes.

Apple recommends leaving -1 dB of headroom to prevent any clipping from occurring due to the noise added by the AAC encoder.
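One way to verify that a master actually leaves that headroom is to measure its peak level in dBFS before uploading. A minimal sketch in Python (assuming samples are normalized floats in the -1.0 to 1.0 range; the function names are illustrative):

```python
import math

def peak_dbfs(samples):
    """Peak level of normalized samples (-1.0..1.0) in dBFS."""
    peak = max(abs(s) for s in samples)
    if peak == 0:
        return float("-inf")
    return 20 * math.log10(peak)

def leaves_headroom(samples, ceiling_db=-1.0):
    """True if the peak sits at or below the ceiling (e.g. Apple's -1 dB)."""
    return peak_dbfs(samples) <= ceiling_db

quiet = [0.5, -0.6, 0.7]       # peaks around -3.1 dBFS
hot = [0.99, -1.0, 0.95]       # peaks at 0 dBFS, likely to clip after encoding
print(leaves_headroom(quiet))  # True
print(leaves_headroom(hot))    # False
```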

Mastered for Radio

Sophisticated and powerful audio processing in broadcast transmission systems does not coexist well with a signal that has already been severely compressed or clipped.

Instead of being punchy, the on-air sound produced from hypercompressed sources is small and flat, without the dynamic range that gives music its dramatic impact.

Broadcast processing will compress your already-compressed source, and it will not sound better or louder on the air! It sounds more distorted, in some cases making the radio or speakers sound broken.

Compression on top of compression will suck the drama and life out from the music.

Mastering Tips

Use minimal to no compression and leave the audio unsquashed; let the broadcast processor do its work. The result will be just as loud on-air as hypercompressed material but will have far more punch, clarity, and life.

Mastered for SoundCloud

24-bit audio files with sample rates up to 192 kHz can be uploaded to SoundCloud, but they will be transcoded to 128 kbps MP3 for streaming from the site.

The higher the quality of the uploaded file, the higher quality the mp3 will be.

You can allow users to download your original higher-resolution masters or the compressed MP3s.

If you upload an MP3, SoundCloud will transcode it anyway, resulting in even more loss of quality by introducing more audible artifacts to audio that's already compressed.

SoundCloud streams such low-quality audio files because they are much smaller in size; below you can see the size comparison of a 128 kbps MP3 file to higher-quality WAV files.
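The size gap is easy to work out yourself, since an encoded stream's size is just bitrate × duration. For a hypothetical three-minute track:

```python
def audio_size_mb(bitrate_kbps: float, seconds: float) -> float:
    """Size of an encoded audio stream in megabytes."""
    return bitrate_kbps * 1000 * seconds / 8 / 1_000_000

three_minutes = 180
# 128 kbps MP3 stream (what SoundCloud serves):
print(round(audio_size_mb(128, three_minutes), 1))      # ~2.9 MB
# 16-bit/44.1 kHz stereo WAV: 44,100 * 16 * 2 = 1,411.2 kbps
print(round(audio_size_mb(1411.2, three_minutes), 1))   # ~31.8 MB
```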

Mastering Tips

Set limiter margin/ceiling to around -0.3 to -1.0 dBFS to stop most of the clipping that occurs during the encoding process.

SoundCloud does not have a feature like Apple's Sound Check, so an audio master destined for SoundCloud has more freedom to raise the overall RMS level for competitive loudness.

Using a stereo imaging tool, narrow the high end by 5-20%. 128 kbps MP3 is the lowest commonly acceptable audio quality; as such, a lot of information is lost during encoding, and an extremely wide mix is more susceptible to noticeable artifacts. Ironically, some narrowing can help avoid a perceived loss of energy and width.
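The narrowing itself boils down to mid/side math: convert left/right into a mid (sum) and side (difference) signal, scale the side down, and convert back. A simplified full-band sketch (a real implementation of the tip above would first isolate the high end with a crossover filter):

```python
def narrow_stereo(left, right, width=0.8):
    """Narrow a stereo pair via mid/side processing.
    width=1.0 leaves the image unchanged; width=0.0 collapses to mono.
    width=0.8 corresponds to 20% narrowing."""
    out_l, out_r = [], []
    for l, r in zip(left, right):
        mid = (l + r) / 2
        side = (l - r) / 2 * width   # scale only the side (stereo) signal
        out_l.append(mid + side)
        out_r.append(mid - side)
    return out_l, out_r

l, r = [1.0, 0.2], [0.0, 0.8]
nl, nr = narrow_stereo(l, r, width=0.8)
print(nl, nr)  # the channels move slightly toward each other
```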

Mastered for YouTube

Audio bitrate is no longer tied to video quality as it was in the past. The audio you hear during a YouTube video will usually be either 126 kbps AAC in an MP4 container or 155-165 kbps Opus in a WebM container (a royalty-free media file format), regardless of whether you're playing 360p, 1080p, or any other resolution.

Prior to 2013, YouTube played:

240p video with audio playback at 64 kbps MP3.

360p and 480p video with audio playback at 128 kbps AAC.

720p and higher video with audio playback at 192 kbps AAC.

Mastering Tips

Upload 24-bit/96 kHz audio files for the best AAC encode.

Mono audio files will be played at 128 kbps.

Stereo audio files will be played at 384 kbps.

5.1 audio files will be played at 512 kbps.

Set limiter margin/ceiling to around -1 dBFS.

Not all encoders are created equal. Render from the video editor at full, uncompressed quality for both video and audio.

Mastered for MP3

The MP3 compression format creates files that don't sound exactly like the original recording because it sacrifices audio information; it is a lossy format, unlike WAV files, which are a lossless format that doesn't sacrifice any audio information.

To achieve a significantly smaller file size, MP3 encoders have to throw away audio information.

Perceptual coding is a coding method that takes advantage of how the human ear works, screening out sound it doesn't think you can hear (elements that are masked by more important elements you can hear).

By changing the bit rate, you can choose how much information an MP3 file will retain or lose during the encoding and compression process (96 to 320 kbps).

The MP3 format flattens out the dynamics in a song.

Mastering Tips

At 128 kbps, the encoder will remove anything at about 16 kHz and above (as shown in the diagram below), so it is recommended to use a low-pass filter to cut everything around and above 16 kHz with a 6 or 12 dB-per-octave roll-off.

Doing this lets the encoder devote more of its bits to the important frequency information.

You'll lose a little high end (which can barely be heard), but you will gain more midrange information.

To retain full bandwidth (20 Hz – 20 kHz), MP3 or AAC files need to be encoded at or above 256 kbps.
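The 6 dB-per-octave roll-off mentioned above is what a simple one-pole (RC-style) low-pass filter produces; cascading two passes gives roughly 12 dB per octave. A minimal sketch, with an illustrative cutoff and sample rate:

```python
import math

def one_pole_lowpass(samples, cutoff_hz, sample_rate):
    """First-order low-pass filter: ~6 dB/octave roll-off above the cutoff.
    Cascade two passes for ~12 dB/octave."""
    # Standard RC-filter smoothing coefficient.
    dt = 1.0 / sample_rate
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    alpha = dt / (rc + dt)
    out, prev = [], 0.0
    for x in samples:
        prev = prev + alpha * (x - prev)
        out.append(prev)
    return out

# A 20 kHz tone at 44.1 kHz, filtered at 16 kHz, comes out attenuated:
sr, freq = 44_100, 20_000
tone = [math.sin(2 * math.pi * freq * n / sr) for n in range(sr // 10)]
filtered = one_pole_lowpass(tone, cutoff_hz=16_000, sample_rate=sr)
print(max(filtered) < max(tone))  # True: the high tone is reduced
```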

Avoid heavy use of saturation and distortion.

Saturation affects nearly all frequencies, and the encoder cannot know which parts of the distortion are intended to be "musical" and which parts can be removed.

Avoid heavy limiting

Leave mixes with dynamic range

Overcompressed mixes can fill in the sonic spaces the encoder is looking for, forcing it to make even more compromises and lowering sound quality.

Keep peaks below -1 dBFS, or even -2 dBFS if you are working from a 24-bit file.

MP3 encoders do not handle peaks near 0 dBFS very well, and the mix could end up distorting after the MP3 encode.

Linear Phase vs. Minimum Phase EQ's

Today someone asked me what kind of EQ I use on my mixes and whether there is a difference between linear phase and minimum phase EQ's. To answer the question: yes, there is a difference between the two. First, let's discuss what phase is. Phase refers to the timing relationship between two waveforms. If two waveforms do not line up exactly in time with each other, they will begin to sound lower in volume and take on what most people describe as a "hollow sound." I go into more detail about phase here.

Now, let's talk about how phase comes into play with EQ's. With analog EQ's, the frequency bands being boosted or cut are subject to phase shifts. This is because it takes time for an analog EQ to process a band when you cut or boost it, so that particular frequency band ends up slightly delayed, or shifted, in relation to the unaffected bands. This is phase shift! Manufacturers do their best to minimize the amount of phase shift, which is why these types of EQ's are called minimum phase EQ's. With most plug-ins we also encounter phase shifts, because plug-ins are meant to replicate the behavior of analog gear (even its latency), which is why most plug-in EQ's are also minimum phase EQ's.

Finally, we get to linear phase EQ's. So what's the big deal about them? Basically, linear phase EQ's get rid of the phase shifting. How do they achieve this? In essence, the digital EQ delays every frequency by the same amount, so the boosted or cut bands stay perfectly aligned with the rest of the signal. Such accuracy can only be achieved digitally, and it comes at the cost of some added latency.

In my opinion, linear phase EQ's are better, unless of course you like the way other types of EQ's sound. Just remember: use your ears, not your eyes, when determining what sounds best.

What is Phase?

Phase basically refers to the difference in time between two waveforms and how that affects their combined amplitude (loudness).

Have you ever recorded a drum set or another instrument with multiple microphones, or duplicated an audio waveform in Pro Tools and tried putting the copies in sync with each other, only to find they were slightly off? When you played it back, it might have sounded very weird, what many people describe as a "hollow sound." That is a phase issue!

Below, in diagram "a," you can see two waveforms that are perfectly in sync with one another (fully in phase, 0/360 degrees); this results in an even louder combined waveform.

In diagram "b" you can see two waveforms that are not in sync with one another. The peaks of each waveform point in opposite directions: they are 180 degrees out of phase, and the two waveforms will completely cancel each other out, meaning no sound will be heard!
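Both diagrams can be reproduced numerically: sum a sine wave with an identical copy of itself, then with an inverted (180-degree-shifted) copy:

```python
import math

sr = 44_100
wave = [math.sin(2 * math.pi * 440 * n / sr) for n in range(sr)]

# In phase (diagram "a"): the waves reinforce and the sum is twice as loud.
in_phase = [a + b for a, b in zip(wave, wave)]
# 180 degrees out of phase (diagram "b"): the waves cancel completely.
out_of_phase = [a + b for a, b in zip(wave, [-s for s in wave])]

print(round(max(in_phase), 2))            # 2.0, double the amplitude
print(max(abs(s) for s in out_of_phase))  # 0.0, total cancellation
```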

So why do we encounter phase issues? In the recording studio, when you place multiple microphones to record an instrument, the sound from the source may reach each microphone at a different time, depending on the placement of the microphones, since sound travels at a finite speed (roughly 1,100 feet per second). The more microphones used in a recording, the higher the probability of phase issues. Phase issues can be caught by simply listening to all the microphones together in mono before actually recording; one can detect phasing issues more easily when listening in mono. If you do encounter phasing issues, try fixing your mic placement or flipping the phase on the channel input of the mic. If it's too late to fix phasing issues during the recording process, one can also nudge the waveform by a couple of milliseconds during mixing; this can make a big difference!
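For that last tip, it helps to know how milliseconds translate into samples, and how mic spacing translates into delay (using the rough 1,100 ft/s speed-of-sound figure above):

```python
def ms_to_samples(ms: float, sample_rate: int = 44_100) -> int:
    """How many samples a nudge of `ms` milliseconds moves at a given rate."""
    return round(ms / 1000 * sample_rate)

def mic_delay_ms(distance_feet: float, speed_fps: float = 1100.0) -> float:
    """Delay in milliseconds between two mics spaced `distance_feet` apart."""
    return distance_feet / speed_fps * 1000

print(ms_to_samples(1))           # a 1 ms nudge is ~44 samples at 44.1 kHz
print(round(mic_delay_ms(2), 2))  # 2 ft of mic spacing is ~1.82 ms of delay
```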