I have FL Studio 6. I think I probably should upgrade, but I haven't done anything with it.

The problem, I guess, is that I lack skill in music composition. I used to read all these articles on how to make whatever sound in FL Studio and how to use VSTs and FX, but I can't make anything original that actually sounds good.

I just don't know where to start learning music theory when it comes to electronic music production.

Very interesting stuff, dude! My first thought was that it sounds like excellent music for a reality-show soundtrack, especially the first two tracks: something in the Survivor realm with that faux-world/ambient music, you know? The third track was also interesting, but the kick and snare were kinda flat and the mix was very dry in parts. But I totally dig the experimentation. (Didn't get a chance to listen to Starry Nights yet.)

thanks man, gotta admit Survivor was the last thing on my mind when i wrote those :lol

had a listen to yours, think the one "burst" is my favourite but it's more my style of music :)

i'm only listening on crappy laptop speakers so i'll refrain from mixing comments, suffice it to say that it all sounds nice and clear. will have a proper listen later on.

i noticed you're a brissie man, i've gigged up there a few times in the past, always a great place to play.

1. Is it acceptable to go into the red or not? I know it's ideal to have plenty of headroom in the initial mixdown, but so many final mastered tracks I look at seem to sit comfortably in the red for most of the song. Is this just an anomaly of saving a CD to the computer, or what? And why doesn't it "clip" or sound distorted?

1) No, you should never go into the red. Instead, turn your master volume down and the volume of your speakers or monitors up. The reason some final tracks seem to sit in the red is simply that most music is overcompressed these days, unfortunately, to make it loud. Also, I think it's best to have your master output sitting around -3 dB or so, to leave headroom for mastering purposes!

You'll get clipping, or very harsh-sounding digital distortion, when you go above 0 dBFS. I'm not sure if there's a standard definition of "in the red" in dBFS terms, or if it varies depending on the equipment or software that's displaying the level meter. Usually meters start to show red before the sound reaches the point of clipping.

Just remember that any device that people listen to your music on will have a volume control that will likely not be turned all the way up. They can always make it louder. But there's no way after the fact to restore dynamic range to a recording that's too compressed and overly loud.
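To make the clipping point concrete, here's a toy Python sketch (all the function names are made up for illustration) of what happens to a digital signal past 0 dBFS, and why pulling the master down to -3 dBFS avoids it:

```python
import math

def db_to_amp(db):
    """Convert a dBFS level to linear amplitude (0 dBFS = full scale = 1.0)."""
    return 10 ** (db / 20.0)

def render_sine(peak_amp, n=1000):
    """One cycle of a sine wave at the given peak amplitude."""
    return [peak_amp * math.sin(2 * math.pi * i / n) for i in range(n)]

def hard_clip(samples, limit=1.0):
    """What a fixed-point converter or file format does to samples past full scale."""
    return [max(-limit, min(limit, s)) for s in samples]

# A mix peaking at +3 dBFS ("in the red") gets its wave tops sliced flat:
hot = render_sine(db_to_amp(3.0))
clipped = hard_clip(hot)
print(sum(1 for a, b in zip(hot, clipped) if a != b), "samples flattened")

# The same mix pulled down to -3 dBFS passes through untouched:
safe = render_sine(db_to_amp(-3.0))
assert hard_clip(safe) == safe
```

The flattened wave tops are what you hear as harsh digital distortion; turning the playback volume up instead, as suggested above, changes nothing about the stored samples.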

Reason is not as hard as it looks, it's true. But it's very difficult to get a good, clean sound out of it. I've been working with Reason for 10 years now, but some things about it ANNOY the hell out of me: the worthless EQs, the lack of VST support, the aged handling of audio files, etc.

Within 3 months I will have switched to an iMac running Logic Pro 9! Can't wait!

What an awesome thread. I'm fairly new to this whole music-producer business, as I've been a musician on real instruments for almost a decade. This is one of the first things I created in GarageBand using only my MacBook Pro's keyboard. :lol

Reason is good for just getting into the basics of production, but eventually you will want to move to another DAW as you cannot record audio or use third party plugins in Reason. Reason also seems to have a certain sound, which some people are ok with but others may find annoying.

Why would you need VSTs in Reason ?
And what do you mean with aged handling of AUDIO files ?

Originally Posted by Chasteleth

Reason is good for just getting into the basics of production, but eventually you will want to move to another DAW as you cannot record audio or use third party plugins in Reason. Reason also seems to have a certain sound, which some people are ok with but others may find annoying.

I disagree: if you can work fine with Reason, then there's no reason (lol) at all to switch to another DAW.

That certain-sound thing is also not true; at least, I don't hear any difference between tracks created in Reason and, for example, tracks created in Ableton.

Don't forget that the DAW of choice doesn't really matter; it's what YOU think works best.
There are world-famous producers who use Fruity Loops, Reason, you name it.

I guess it will depend on the style of music you make. I use a lot of different sample libraries and VSTs as I write soundtrack music; I need orchestral, percussion, ethnic instruments, etc., and Reason doesn't have these types of sounds. I know this is down to personal taste as well, and even though the synths in Reason are quite good, I think there are much better VST synths out there, such as Omnisphere and Zebra.

If you're the type of person interested in just using the synths and sampler within Reason, then I agree, upgrading to another DAW might not be worth it. If you need to record live instruments, though, then you will need to move to one of the more fully featured DAWs.

3) That certain-sound thing is also not true; at least, I don't hear any difference between tracks created in Reason and, for example, tracks created in Ableton.

1) Okay, so as of Reason 4, Reason has three synths: the Subtractor, the Malström and Thor. Of course, great synths to begin with. But that also means that when you're working with Reason, you're limited to those synths and nothing else. That annoys me. The world of VSTs has so many wonderful things to offer, for instance LennarDigital's Sylenth. It sounds way better than Reason's synths: more analogue, fatter. I think the lack of VST support will eventually break Propellerhead's Reason someday. The fact that there's no support for other synths in Reason just limits me as a producer. And it sucks! Yes, you can ReWire Reason with other DAWs, but Reason has so many flaws, I'm getting sick and tired of it.

2) Reason has no timestretching functions whatsoever. Logic, however, has tons of functional and cool audio editing tools!

3) I'm sorry, but I think if you asked 10 producers, 8 out of 10 would say that Reason does indeed have its own sound, or sound engine. You can actually hear what is made with Reason and what is not. I swear.

I would say there are two main types of DAWs. The genre/style/sound targeted ones like Reason, and the open ones. The latter being Cubase, Pro Tools, Logic, etc.

While I agree it's probably easier to use a targeted DAW for certain styles, I would still recommend an open one. This has nothing to do with sound quality or plugins, but because it will never put any limits on what you can do. Maybe some of the functionality is more complicated, but that's probably because an open DAW is made to be very flexible, so that you can make any style or genre. So many dope dance tracks have been made with Cubase or Pro Tools.

1) If you want to use other synths, why not record their sounds and load them up as samples in Reason? ;)

2) Hmm, I never felt the need for timestretching functions so far.

3) I think this will always be a point of discussion; if you asked Claude VonStroke, he would probably say it's nonsense. (He produces in Reason.)

I want to first say that when dealing with tech-house or deep house, you MIGHT get away without much EQing/compressing, but that's because it's... well, minimal. And you're likely dealing with samples that have already been EQ'ed and compressed.

Let's talk a bit about sample-based production: production where you don't have a drum set with 8 mics and a live band, but instead use samples, MIDI and software like EZdrummer or Battery.

The problem with neglecting compression, EQ and reverb in sample-based production is that you'll end up with an incongruent sound. You might use one sample from one pack and a snare from another set. In house music this isn't the most problematic thing, but if you're making sample-based music that you want to sound like it's played on actual drums, say you're making beats like The Album Leaf does, you'll want a congruent sound across your samples.

If you stick to a specific sample pack, you're likely to get samples processed with this in mind already, but while they may have a bit of reverb (like a drum sample pack having the same room mic mixed into the various samples, giving it a good, consistent sound), there should be headroom left for you to control how much reverb you actually want.

This is why I have an individual reverb on each track: to first ensure that consistent sound.

The same goes for EQ and compression. While the samples have likely been treated with some compression, it's done with headroom so that you keep ultimate control over the sound. Which means that in most cases they need further compression.

I have two music projects going. My solo stuff is Pink Floyd / The Album Leaf inspired. Here I use my guitar and my fairly expensive guitar rig, no plugins to treat my sound, and for drums I mix between electronic beats and analogue-sounding drums.

Why do I need EQ and compression on my guitar track when the first things in my signal chain are a compressor and an EQ? Because I shape my guitar tones to sound as good as they can coming out of my speakers. It'd be extremely tough to shape the sound specifically to sit with the rest of the mix; of course it could be done, but that's why I post-EQ and compress it a bit further.

Let's not get into the loudness debate, because compression isn't necessarily used to make things louder. It's there to help individual tracks sit with each other better.

Example time!
Here's a song off of my first album from two years back. Back then I didn't know much about mixing or mastering, so all I really did with this song was adjust the volumes of each track and pan them to a nice aesthetic result.

This is done with compression, EQ and reverb. Reverb can be considered an art in itself. Compression doesn't have to make things sound squashed; really it just helps un-muddy the mix and makes the parts of the song work better as a whole.

My other music project is progressive house and thereabouts. Here I use more extreme EQ, because house isn't such organic-sounding music, so I low-pass or high-pass (12 or 24 dB/octave) somewhere between 200-5000 Hz to give space to other aspects of the mix and room to give drive to the song. In the above example you can hear that I EQ'ed the piano between the old and the new mix, but this is to give it a better sound, taking away some of the frequencies that don't help the melodic content but rather clutter up the sound. The musical content is all still there.
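For anyone curious what that low-end carving looks like in code, here's a minimal first-order high-pass in Python (a rough sketch only; the HP12/HP24 filters mentioned above are steeper, higher-order versions of the same idea, and the signal here is a made-up toy pad):

```python
import math

def one_pole_highpass(samples, cutoff_hz, sample_rate=44100):
    """First-order RC high-pass: rolls off content below cutoff_hz
    at about 6 dB/octave."""
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate
    alpha = rc / (rc + dt)
    out, prev_in, prev_out = [], 0.0, 0.0
    for s in samples:
        # y[n] = alpha * (y[n-1] + x[n] - x[n-1])
        prev_out = alpha * (prev_out + s - prev_in)
        prev_in = s
        out.append(prev_out)
    return out

# A toy pad with a 60 Hz rumble plus a 1 kHz body; high-passing at 200 Hz
# clears the low end out of the way of the kick and bass.
sr = 44100
t = [i / sr for i in range(sr // 10)]  # 100 ms
pad = [math.sin(2 * math.pi * 60 * x) + 0.5 * math.sin(2 * math.pi * 1000 * x) for x in t]
cleared = one_pole_highpass(pad, 200, sr)
```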

1) With the way you use reverb (each track treated with an individual reverb), doesn't that make it inconsistent? No offense, but what you're saying sounds a bit like a paradox. Unless you mean treating every individual track with the same reverb, via a send FX or something.

2) Secondly, I think the main gripe for producers starting out like us is that FX like EQ, compression, delay, reverb, flanging, chorus, etc. all look simple, but like you said, each is an art in itself. Not every reverb suits a certain sound, if you know what I mean. Also, the chain of FX you're using is extremely important. Most people would argue, for instance, that you should compress and then EQ, but some people, like John '00' Fleming, do it the other way around: first EQ, then compress. It can also make a huge difference where an effect sits in the FX chain. For example, putting a compressor after the reverb or delay so that the compressor pushes the effect up and you get a longer tail. What I'm trying to say is that no producer should underestimate the power of FX!

Augur. A vector synth that lets you sequence and mix different waveforms. Perfect for evolving, pad-type sounds, and it just sounds great overall.

Synth1. An emulation of the Nord Lead, an early virtual analog synth. Not huge-sounding or impressive at first, but it sits well in a mix, is fun to tweak because it doesn't overwhelm you with extraneous effects, and is very light on CPU usage.

xhip. Synth with a cutting, precise digital sound to it. Perhaps not everyone's cup of tea, but personally I love it.

Digitalfishphones Blockfish/Floorfish. The only compressor/expander I use. I know that recording engineers often have tons of different compressors because they all have subtle differences in character. But for me personally this does everything I need a compressor to do, and sounds great, for free.

Man, just... thank you so much, everyone, for the tips so far! I can't believe how helpful this thread has been. I've just been producing non-stop for the past 3 days :D

It's so awesome when you get to that stage of "comfort", I guess, where from then on you can start "harvesting" new tips, tricks and methods, incorporate them into your repertoire of sound manipulation, and use the basics of your knowledge to make them sound good and appealing to your ears (and hopefully, by extension, others' ears).

I know I'm jinxing this, but I just got offered a job (a gig more than a full-time job) with Nintendo. I can't really say anything about it, but I can assure you that if it works out, it will be in no small part due to Producer-GAF. :)

Q: Anyone know how to automate gliding pitch, i.e. more than the basic two or three semitones you get with the pitch-bend wheel (in Pro Tools, but I guess Logic etc. too, as they're all pretty similar really), as opposed to simple, static pitch-shifts? I guess there are plug-ins for that too, or am I just a buffoon who cannot see the simple way to achieve this within the DAW...

In fact I'd love to know about a whole bunch of plug-ins that create different types of sonic/musical change on an audio track. (The only default examples of this I can seem to find in Pro Tools are a handful of building-delay-type plug-ins.) Or is everything else meant to be recorded onto a new track as you automate it manually? (I guess I answered that question already.)

hey man, for gliding pitch use the portamento/legato setting on the synthesizer of your choice. you'll probably have to adjust the porta rate to get the sound you desire.

you shouldn't need to record effect automation to a new track with any of the current sequencers; they all have some form of recording (and later, editing) the automation data. at worst, it will require creating a separate midi track.

basic effects that sound nice tweaked over time are:

the ubiquitous filter - lowpass, hipass, bandpass, w/eva pass, tweak the cutoff and the res. tweak by hand or use a level-sensing envelope. the bread'n'butter of dance music production.

delay - tweak depth, feedback, dly time

verb - fast sweeps on a verb mix lvl make a big swoosh, slow sweeps give the impression of something moving into the distance (esp. if you automate the unaffected level down at the same time)

i'd call those the basic three. most other effects that come to mind at the moment (chorus, phaser, bitcrush, distort, compress) i tend to use as set-and-forget or just automate bypassing but there's nothing to stop you if it sounds good.

if you're working with softsynths you can automate those as well, which gives you a great deal of flexibility, everything from cutoff to oscillator pitching. if you're using hardware then you'll have to create a midi track and send CC data, particular CC channels depending on what hardware it's going to (should be listed in the hardware's manual)
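The portamento trick above boils down to sweeping the oscillator frequency smoothly between notes instead of jumping. A toy Python sketch of that idea (the function and its parameters are made up for illustration, not any synth's actual API):

```python
import math

def portamento_sine(start_hz, end_hz, glide_time, total_time, sample_rate=44100):
    """Render a sine whose pitch glides from start_hz to end_hz.
    The glide is exponential (linear in semitones), which is roughly how
    the porta-rate knob on most synths behaves. Phase is accumulated
    sample by sample so the waveform never clicks."""
    n = int(total_time * sample_rate)
    glide_n = max(1, int(glide_time * sample_rate))
    out, phase = [], 0.0
    for i in range(n):
        frac = min(1.0, i / glide_n)                   # 0 -> 1 across the glide
        freq = start_hz * (end_hz / start_hz) ** frac  # exponential sweep
        phase += 2 * math.pi * freq / sample_rate
        out.append(math.sin(phase))
    return out

# An octave slide from A3 (220 Hz) to A4 (440 Hz) over half a second:
sweep = portamento_sine(220.0, 440.0, glide_time=0.5, total_time=1.0)
```

A faster glide_time gives the snappy "laser" glide; a slow one gives the long 303-style slur.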

Thanks greatly for that! I'm just trying out the Augur pad thing now; pretty sweet, one example of what I was after, I guess!

And yeah, the thing is, I was finding I was doing most of those effects in post-production in Adobe Audition... I just thought it weird how I could find plenty of 'dynamic' EQs and filters in that program but couldn't see the equivalents in Pro Tools. I guess my biggest problem atm is that I'm new to sends... But to that end, I'm amazed how much better a bit of ducking sounds!
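Since ducking came up: it's just sidechain compression, where one signal's level pushes another's gain down. A toy Python sketch of the idea (everything here is illustrative, not any DAW's actual implementation):

```python
def duck(signal, sidechain, depth=0.7, release=0.999):
    """Sidechain ducking: drop `signal`'s gain while `sidechain` is loud,
    letting it swell back as the trigger decays (e.g. a pad ducking to a kick).
    depth = how far the gain dips; release = per-sample recovery factor
    (closer to 1.0 = slower recovery)."""
    out, env = [], 0.0
    for s, trig in zip(signal, sidechain):
        level = abs(trig)
        env = level if level > env else env * release  # peak envelope follower
        out.append(s * (1.0 - depth * min(1.0, env)))
    return out

# A steady pad ducked by a four-on-the-floor kick pattern (toy signals):
pad = [0.5] * 8000
kick = ([1.0] * 200 + [0.0] * 1800) * 4  # short burst every 2000 samples
ducked = duck(pad, kick)
```

The pad dips on every hit and pumps back up in between, which is the "breathing" effect that makes the kick feel louder without actually raising it.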

Freaking awesome thread, Tr4nce. It doesn't surprise me to hear you've been making music that long; I still remember your track from the first G.A.M.E. comp as one of the tightest we've ever had.

We have a lot of this type of discussion in the G.A.M.E. threads, and more concrete info is always welcome. I can say that for me, the best lesson has been to always reduce volume rather than boost it. I started out like a lot of people, wanting that loud sound that jumps out at you, but I really underestimated just how much instruments will fuck with each other and lose cohesion if the dB level is too high. I'm still trying to find that sweet spot between loudness and perceived loudness.

no music to share yet, but I currently have 3 projects underway with my group. it's a combination of hip hop, progressive and rock opera, with artists from Michigan, Orlando, Cincinnati, and my native Louisville.

Hey, thanks man! :) Whoa, that project feels like ages ago, but it was awesome.

I hope this thread doesn't die anytime soon, and that it will be a very informative but most of all enjoyable one!

I knew there was an audiophile community waiting to come out on GAF :lol

Mods should change the title of this thread to Official Audiophile Thread, or some clever variant of that if OP permits.

I've contacted Teknopathetic, who also visits this thread, about a thread title change, something like 'The Official NeoGAF Music Production Thread: producers unite!', but he told me that this thread title was catchy enough and that people who visit this thread might get confused, and he has a good argument there! But yeah, maybe we should change it, shouldn't we?

Audiophiles are more about music playback equipment than music recording. Studio monitor speakers/amps and audiophile ones are designed with completely different goals in mind: the former to sound neutral and faithfully render all flaws in a recording so the engineer can correct them, and the latter to enhance the recording and hide its flaws.

Well the more you know. :lol I never knew the difference.

Originally Posted by Tr4nce

I've contacted Teknopathetic, who also visits this thread, about a thread title change, something like 'The Official NeoGAF Music Production Thread: producers unite!', but he told me that this thread title was catchy enough and that people who visit this thread might get confused, and he has a good argument there! But yeah, maybe we should change it, shouldn't we?

Well, it certainly has the content in the OP to justify the title change to "Official".

VST plugins can be instruments or creative effects, not just basic recording stuff like EQ or reverb or whatever. No different than guitars, drums, or anything else: different musicians have different opinions on what works and sounds good to them. Just like different producers have different thoughts on the ideal DAW workflow, and can choose software that best matches it. The point of plugins is that you can choose a DAW based on its workflow and interface and be free to use any instruments or effects you might be attached to.

If Reason's built-in instruments do everything you want a DAW to do, there's nothing wrong with using it. It just might be irritating in my mind to eventually outgrow its instruments, then have to learn workflow on another DAW from scratch in order to use plugins.

1) With the way you use reverb (each track treated with an individual reverb), doesn't that make it inconsistent? No offense, but what you're saying sounds a bit like a paradox. Unless you mean treating every individual track with the same reverb, via a send FX or something.

That's exactly it. Since samples come from various packs, producers and environments, they each have their own reverb and feel to them. If you just sent them all through the same reverb, they'd still have their differences, just with added reverb.

To oversimplify it in a way I don't really want to: say most samples come with somewhere between 10 and 40 "points" of reverb baked in (whatever the hell a "point" would be; this is the oversimplification). Say 100 "points" is the amount of reverb you want on your track. Then you can't simply add 60 "points" across the board, because some samples would end up at 70 and others at 100.
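That arithmetic can be spelled out in a few lines of Python (the sample names and numbers are the hypothetical "points" from the oversimplification above, nothing more):

```python
# Each sample arrives with a different amount of baked-in reverb,
# so one shared send can't equalize them.
baked_in = {"kick": 10, "snare": 40, "hat": 25}  # hypothetical baked-in "points"
target = 100

# Shared send: add the same 60 "points" to everything. Still uneven.
shared_send = {name: amt + 60 for name, amt in baked_in.items()}
print(shared_send)  # prints {'kick': 70, 'snare': 100, 'hat': 85}

# Individual reverbs: each track gets exactly what it is missing.
per_track = {name: amt + (target - amt) for name, amt in baked_in.items()}
assert all(total == target for total in per_track.values())
```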

Also, not all reverbs work for all things. Plate reverbs are great for drums, but different parts of a kit may need different types of reverb.

Compression is the thing separating the boys from the pros. A pro knows all his compressors to a degree you'd think wasn't humanly possible. Say he has compressor X. He will know that "this works for a snare, and sometimes I use it on clean guitar tracks (but then in another way), but I'd never use this on a Rhodes", while what the compressor does is so subtle that most people can't even tell the difference with it on or off, and most musicians certainly wouldn't be able to tell the difference in texture and tonality between that compressor and any other.

And there are mastering engineers who use compressors in chains. First they'll have a compressor with slow attack and release, then one with fast settings, and then one with some other flavour, all working on various parts of the song in tandem to make a unique result (not unique as in over-compressed, but as in beautifully mastered).
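A rough Python sketch of that serial-compression idea (a deliberately simplified compressor with instant attack; real units smooth the attack too, and the thresholds and ratios here are invented for illustration):

```python
def compress(samples, threshold=0.5, ratio=4.0, release=0.999):
    """Bare-bones peak compressor: when the envelope exceeds `threshold`,
    the overshoot is scaled down by `ratio`. `release` is the per-sample
    factor controlling how slowly the gain reduction lets go."""
    out, env = [], 0.0
    for s in samples:
        level = abs(s)
        env = level if level > env else env * release  # instant attack, slow release
        if env > threshold:
            gain = (threshold + (env - threshold) / ratio) / env
        else:
            gain = 1.0
        out.append(s * gain)
    return out

# A chain in that spirit: a gentle, slow compressor into a faster, tighter one,
# each stage shaving a little instead of one unit doing all the work.
gentle = lambda x: compress(x, threshold=0.7, ratio=2.0, release=0.9999)
tight = lambda x: compress(x, threshold=0.5, ratio=4.0, release=0.999)
transient = [0.0] * 100 + [1.0] * 50 + [0.2] * 400  # a spike then a tail
chained = tight(gentle(transient))
```

The 1.0 spike comes out of the first stage at 0.85 and out of the second at about 0.59, while the quiet tail is barely touched: two small moves instead of one heavy-handed one.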

Ah, yes, I understand. If you applied equal amounts to different samples, the differences would still be there. But I mostly use Vengeance samples, all from the same CD, so most of them have the same sound or vibe to them.

While we're on the subject of reverb, is anyone else here writing orchestral music using EW, VSL or any of the other sample libraries? Obviously reverb is very important in getting realistic results; I'd be interested to hear some techniques.

Right now my orchestral template consists of EW (release tails disabled) and VSL samples, and I use about 9 instances of Altiverb: separate instances for the early reflections and the tail of each orchestra section, plus a tail for the whole orchestra. Might be going a bit overboard, but it seems to work well.

I hope I'm not the only one here using orchestral libraries; there have got to be some game/film composers lurking here somewhere.

I'm not a movie or game composer, but which DAW do you use? I think Logic 9's Space Designer has the best reverb sound ever; it's a convolution reverb. Nothing comes even close to the sound of a convolution reverb if you ask me. It's a great room simulator. However, with orchestral music it's of course difficult to find good reverb settings. But don't you think the best way would be to use just one main reverb as a send effect/bus/aux, whatever? And maybe pan the instruments like the orchestra players would sit in a real room? That may sound dumb, but think about it!

Hey guys. Never knew there was a thread like this. Anyway, I've been using Fruity Loops for the past 2-3 years now, and, well, I just feel it's got its limitations. Recently I've been running out of ideas. I'm mainly into hip hop/RnB/pop. Some of my work:

I mainly use VSTs like Hypersonic and Purity. Anybody got any recommendations on what VSTs to use, or any better music software? Oh, and is anyone here good at bass? I need tips on how to use bass, 'cos my music sounds pretty empty without it.

I'm not a movie or game composer, but which DAW do you use? I think Logic 9's Space Designer has the best reverb sound ever; it's a convolution reverb. Nothing comes even close to the sound of a convolution reverb if you ask me. It's a great room simulator. However, with orchestral music it's of course difficult to find good reverb settings. But don't you think the best way would be to use just one main reverb as a send effect/bus/aux, whatever? And maybe pan the instruments like the orchestra players would sit in a real room? That may sound dumb, but think about it!

I use Cubase 5 but have experience with Logic. You're definitely right that convolution reverbs are the best type of reverb. Space Designer is quite good, but Altiverb is in another league; the impulses are great, and there are impulses of specific scoring stages which are excellent for orchestral music. For electronic music, though, Space Designer is good. The convolution reverb that comes with Cubase 5 is also nice.

I do pan the different reverb instances to fit the positions of the players, but the reason I have different instances for each orchestra section is that you can set different pre-delays per section. That way, instead of just a left-to-right emulation of the space, you also get front-to-back.

VST plugins can be instruments or creative effects, not just basic recording stuff like EQ or reverb or whatever. No different than guitars, drums, or anything else: different musicians have different opinions on what works and sounds good to them. Just like different producers have different thoughts on the ideal DAW workflow, and can choose software that best matches it. The point of plugins is that you can choose a DAW based on its workflow and interface and be free to use any instruments or effects you might be attached to.

If Reason's built-in instruments do everything you want a DAW to do, there's nothing wrong with using it. It just might be irritating in my mind to eventually outgrow its instruments, then have to learn workflow on another DAW from scratch in order to use plugins.

You're absolutely right.

But I'll take the risk of eventually outgrowing Reason and having to learn another DAW's workflow. For the level I'm at now (I've been producing for almost two years), I think Reason suits me best.

VSTs are great too, I never said they weren't. I occasionally mess around with things like Synth1 in Ableton, but I haven't felt the need to use them in my tracks yet.

The most important thing is indeed what you said, every producer has different opinions on what works best, it's all personal. :-)

This is all getting very technical :D

Hehe, I LOVE getting technical. But ehm, I don't understand: you're saying that by setting different pre-delays on the reverbs, you can simulate front-to-back positioning in a room? I thought pre-delay was only meant to shape your reverb, sort of like determining when the reverb kicks in, time-wise.

Tek, thanks for the thread title change man.

Pre-delay sets the time gap between the initial sound and when the reverb is triggered. In a real room, a player close to you produces a long gap between the direct sound and the first reflections, while a distant player's direct sound and reflections arrive almost together. So a longer pre-delay time pushes a sound more to the forefront, and a shorter pre-delay sits it further back in the space, which is what I mean by simulating front-to-back positions. Of course, as with all things in music production, it doesn't work well with everything; it's completely dependent on the instruments and style of music.
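
To illustrate the mechanics, here's a minimal sketch of what a pre-delay actually does to the signal: it shifts the wet (reverb) signal later in time before it's mixed with the dry signal. Plain Python lists stand in for audio buffers; a real DAW or plugin does this internally, this just shows the idea.

```python
# Minimal pre-delay sketch: delay the wet (reverb) signal by N milliseconds,
# then sum it with the dry signal. Buffers are plain Python lists of samples;
# no real audio I/O here, purely illustrative.

SAMPLE_RATE = 44100  # samples per second

def apply_predelay(wet, predelay_ms, sample_rate=SAMPLE_RATE):
    """Prepend silence so the reverb starts predelay_ms after the dry sound."""
    offset = int(sample_rate * predelay_ms / 1000.0)
    return [0.0] * offset + list(wet)

def mix(dry, wet):
    """Sum two buffers sample by sample, zero-padding the shorter one."""
    n = max(len(dry), len(wet))
    dry = list(dry) + [0.0] * (n - len(dry))
    wet = list(wet) + [0.0] * (n - len(wet))
    return [d + w for d, w in zip(dry, wet)]

dry = [1.0, 0.5, 0.25, 0.0]   # a tiny dry transient
wet = [0.3, 0.2, 0.1]         # its (fake) reverb tail
out = mix(dry, apply_predelay(wet, predelay_ms=20))
# With 20 ms of pre-delay at 44.1 kHz, the reverb starts 882 samples late.
```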

GAF, I really need your opinion on this track. It's still unfinished, but I just started experimenting with GarageBand for the second time and I'd like to know what you guys think of it.

Prod-GAF: started this this morning, using some ducking and other techniques I've learned in the past few days. Any thoughts?

Originally Posted by AgentWhiskersX

GAF, I really need your opinion on this track. It's still unfinished, but I just started experimenting with GarageBand for the second time and I'd like to know what you guys think of it.

Awesome work! I like the simple yet effective style. Keep building on it! :)

Originally Posted by Tr4nce

So Gaf, I'm about to switch from PC to Mac. What would be better, buying a MacBook or the 21.5 inch iMac? I've had it with laptops!

Why you switching? For music production or what? *Happy PC user here =P

EDIT: Anyone have any advice on the best way to maintain a really solid, crunchy phat snare sound but not bust the levels? Is it just a matter of sufficient reverb, a bit of subtle ducking(?) and moving all the other levels down in the mix then compressing afterwards?

EDIT: Anyone have any advice on the best way to maintain a really solid, crunchy phat snare sound but not bust the levels? Is it just a matter of sufficient reverb, a bit of subtle ducking(?) and moving all the other levels down in the mix then compressing afterwards?

Snares are among the peakiest material you get, but I don't bust levels on them. Why? Because I have headroom. My faders aren't cranked. If you really need more volume, turn up your monitors. I'll bet a million dollars those aren't cranked.
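
The headroom point is just decibel arithmetic: pulling a fader down by X dB scales the signal by 10^(X/20), so a snare peaking right at full scale only needs a few dB of trim to leave mastering room. A quick sketch of the math (my own illustration, not from any plugin):

```python
# Decibel arithmetic behind "leave headroom": a fader trim of X dB
# multiplies the amplitude by 10 ** (X / 20). Illustrative only.
import math

def db_to_gain(db):
    """Convert a dB change to a linear amplitude multiplier."""
    return 10 ** (db / 20.0)

def gain_to_db(gain):
    """Convert a linear amplitude back to dB relative to full scale."""
    return 20.0 * math.log10(gain)

snare_peak = 1.0                       # linear amplitude = 0 dBFS, the clipping point
trimmed = snare_peak * db_to_gain(-6)  # pull the fader down 6 dB
print(round(trimmed, 3))               # ~0.501: roughly half the amplitude
print(round(gain_to_db(trimmed), 1))   # -6.0 dBFS, i.e. 6 dB of headroom
```

The snare keeps its transient shape entirely; you've just moved the whole mix away from the ceiling and can make up the loudness at the monitors.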

^ Thanks for the advice guys... I think I'm really starting to get the hang of appropriate ways to maintain decent headroom :) And good luck with the switch, Tr4nce, I can't stand Macs so I'll let someone else field queries with that one ;-)