It's been 5 and 1/2 months since my previous post in this series (sorry), so I'll need some time to get back on track. Okay...

What is Time?

In 9th grade, my physics teacher told me that time is something that can never be clearly defined. Wikipedia defines it as "a measure in which events can be ordered from the past through the present into the future, and also the measure of durations of events and the intervals between them"... perhaps a bit confusing.

In music, time refers to the time signature, or meter, of a song. Despite the large number of time signatures in existence, there are only 2 meters: duple and triple.

Duple: DOWN, up, DOWN, up
Triple: DOWN, up, up, DOWN, up, up

All time signatures are derived from, or variations of, these 2 meters.

Time signatures define how many beats are in a measure (or taala), and the notational duration of each beat. In western music, it's represented as a fraction:

3/4

This time signature signifies that there are 3 beats per measure, where each beat has the duration of a quarter note. This is an example of triple meter.
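If it helps to see the arithmetic as code, here's a tiny Python sketch of the idea. The function name and representation are my own, purely for illustration:

```python
from fractions import Fraction

# A time signature as (beats per measure, note value of each beat).
def measure_length(beats: int, note_value: int) -> Fraction:
    """Total length of one measure, expressed in whole notes."""
    return Fraction(beats, note_value)

# 3/4: three quarter-note beats -> 3/4 of a whole note per measure.
assert measure_length(3, 4) == Fraction(3, 4)

# 6/8 and 3/4 contain the same total duration per measure,
# even though they are felt differently (compound vs. simple meter).
assert measure_length(6, 8) == measure_length(3, 4)
```

Of course, the fraction only captures the bookkeeping; the *feel* (duple vs. triple) is the part you have to hear.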

Why do we need it?

Time signatures do more than just indicate the structure of the song. Each time signature has certain rhythms associated with it. For example, 3/4 indicates waltz time. 6/8 is commonly used for faster dance music. My point is, time and rhythm go hand in hand.

Listening to music and trying to identify the meter, time signature, and the rhythms are excellent ways of improving your sense of music. Later, think back and try to group certain rhythms with certain meters. You'll soon find patterns emerge, correlating these 3 elements. Unfortunately, this is something you need to experience for yourself. Listening to uncommon time signatures helps a lot. My personal favorite signature is 7/8.

How does this help compose?

Any rhythm that you play has to finally fit in your meter. Understanding how rhythm intertwines with time signatures can help you make "intelligent choices" in your music. For example, if you want your song to be interesting, the first thing you need is a catchy rhythm. Okay... how do you compose a catchy rhythm? Force yourself to use a catchy time signature, of course!

For example, 5/4, 7/8, 11/8, 13/8, 15/16

I think you get the picture. Any signature with an odd number of beats will easily churn out an interesting rhythm. Unfortunately, 3/4 doesn't make this category, because it's pretty much overused, and a bit slow to form an intriguing rhythm.

Final Thoughts

I realize that this has been a rather ambiguous blog. There's really nothing here about how to come up with a complex rhythm. I'll delve into that at a later time... hopefully. Until then, keep analyzing.

As a quick recap, in my previous blog, I mentioned the power of 2 notes forming an interval. That's all wonderful, but the whole article was more about theory; useless knowledge if you can't apply it. So how do you use those 2 notes? Well, let's look back at an example used in the previous blog:

I used to think that a melody required at least 2 notes. Well, Peter Kadar most certainly plays a one note melody (ignoring the chords) in that video. So, how does he do it? If it's possible to play a one note melody and still maintain interest, then there has to be something more fundamental to music; a foundation that melody is built upon. So, what is it? (Hint: read the title of this article)

Human Nature

Rhythm is a part of human instinct. It's what gets our foot tapping. It's something that we all naturally possess. When we walk, we walk with rhythm. When we talk, we talk in rhythm. When we're angry, WETALKSOFASTTHATNOONEKNOWSWHATWERESAYING! When we try to make a point, we E-NUN-CI-ATE EV-ER-Y SYL-LA-BLE. All of this is rhythm! We each have our own rhythmic style, and yet, it's something that unites us all.

More Theory

There's only one difference between this and musical rhythm: in music, rhythm has a pattern; usually, a repetitive one. In music, a rhythm consists of upbeats and downbeats. A downbeat is a STRONG beat - one where we feel the impact. An upbeat is the exact opposite: a lighter beat. Here's a simple way to look at it: when you tap your foot to a song, every time your foot hits the floor, that's a downbeat, and vice versa. Take a look at the following rhythm:

BOOM boom BOOM boom BOOM boom BOOM BOOM

All the BOOMs are downbeats, and the booms are upbeats. Take another look at Peter Kadar's one note melody and try to find the upbeats and downbeats.
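As a toy illustration, the upper/lower-case convention above can even be parsed mechanically. Here's a quick Python sketch (the function is my own invention, not from any music library):

```python
def classify_beats(pattern: str):
    """Mark each hit as a downbeat (strong) or upbeat (light).
    Convention from the text: 'BOOM' (upper case) = downbeat,
    'boom' (lower case) = upbeat."""
    return ["down" if hit.isupper() else "up" for hit in pattern.split()]

beats = classify_beats("BOOM boom BOOM boom BOOM boom BOOM BOOM")
assert beats == ["down", "up", "down", "up", "down", "up", "down", "down"]
```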

Moving On

That covers the expressiveness/intensity of a rhythm, but of course, you also need a pattern (how many hits per beat?). You can't discuss rhythmic patterns without mentioning time signatures (aka taala... sorry for leaving out carnatic terms and concepts in the past couple of articles). We'll take a look at that in the next article, but for now, don't worry too much about it. Just drum on the objects around you to find a rhythm you like. Then, you can add notes to each hit of your rhythm, and that's it! You're composing! This trick has helped me out in a few tight situations, so keep at it, and remember: have fun.

I know a lot of musicians want to get into composing, but don't know where to begin. Don't worry, this is something we all go through. I'm not a professional, but I have been composing for about 6 years now, and I have freelanced; I've handled my share of projects. So, I'd like to quickly shed some light on how to get started with composing.

One Note Wonder

Okay, let's get down to it: how do you start composing? Simple: you start with the first note! It can be any note; just pick your favorite, or even close your eyes and play a random note. What do you do next? Play another note! Just keep it in the vicinity. Now, don't roll your eyes at me! What you now have is a melodic interval. These 2 notes can define your whole song! They have character, they have a certain mood! Don't underestimate the importance of a melodic interval!

Don't believe me? Just take a look at Hans Zimmer's score for Batman Begins. You'll find the iconic tune played around a minute into the track. As you can clearly hear, it's just 2 notes! A melodic interval of a minor 3rd. This interval carries the whole film! Still think the first 2 notes aren't important?
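For reference, intervals can be counted in semitones; a minor 3rd is 3 semitones. Here's a small Python sketch of that bookkeeping (the note numbers are MIDI pitches, and the helper function is my own, just for illustration):

```python
# Semitone distances for some common intervals (equal temperament).
INTERVALS = {
    "minor 2nd": 1, "major 2nd": 2,
    "minor 3rd": 3, "major 3rd": 4,
    "perfect 4th": 5, "perfect 5th": 7,
    "octave": 12,
}

def transpose(midi_note: int, interval: str) -> int:
    """Return the MIDI note an interval above the given note."""
    return midi_note + INTERVALS[interval]

# A minor 3rd is 3 semitones: e.g., D (MIDI 62) up a minor 3rd is F (MIDI 65).
assert transpose(62, "minor 3rd") == 65
```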

General Music

Now, before you start saying, "Well that's great, but I'm not a score writer, so this is useless", I'd just like to say: it doesn't matter what kind of musician you are, the principle is just the same! For example, take jazz music. Why jazz? It's one of the most complex forms of music in existence! You need insane performing skills accompanied by an equally thorough knowledge of music theory. That being said, check out what this guy, Peter Kadar, can do using just ONE NOTE on the blues scale. Sure enough, he proceeds from one note to TWO NOTES. Again, there it is - a melodic interval. It's powerful enough to hold its own in a song.

Bottomline

There are still so many times when I have to compose for a project, and my brain just won't get into gear. At times like that, what I do is get back to this tip - playing those first 2 notes really is enough to get your creativity going. So, I hope you realize what you can do with just one or two simple notes, and you have now "officially" started composing. What do you do after those 2 notes? Well, that's for another article. How do you use those 2 notes? THAT is where your creativity comes into play. Don't worry, you don't have to figure it out on your own; there are just certain things you have to pay attention to. THIS will be the topic for my next blog. Until then, keep practicing.

Welcome to the 2nd article in my Musical Maturity series. Vocal Harmony. Oh. My. God! You could write a 500 page book on harmony (many have, by the way), and still not cover everything there is to say about the subject! I've restricted myself to vocal harmony, and even then, I'm still wondering how I can condense all this information! Well, in this article, I plan to simply introduce you to the topic in general.

The reason I chose this topic is because harmony is the most important technique that draws the line between carnatic and western music. Carnatic is a purely melodic form of music, and lacks any form of harmony whatsoever. Vocal harmony is used extensively in so many forms of western music, and for good reason! My friend Isaac and I (visit him here) always add vocal harmony to our songs. Whether we are composing or performing, there's at least one phrase where we harmonise. Why? Because it just sounds so good! Anyway, let's talk about where this originated.

Musical History: Organum

Let's be honest; most people hate this topic, so I'll try to keep it short. The origin of harmony dates back to devotional chants in the Roman Catholic Church around 800 AD. At first, they were chanted by a choir of men, all singing the same notes at the same time (unison). Eventually, someone decided to add a group of boys to the choir. They were still singing the same thing, but the boys' voices were an octave higher. Then, someone had the idea of using a section of the choir as a drone: holding just a single note throughout the song while still pronouncing the words. In carnatic music, the tamboura provides the drone.

Sometime later, a genius came up with a breakthrough idea: "What if they don't all sing the same thing?" One section of the choir was made to sing the same melody a fifth higher (on the dominant, or starting on Pa instead of Sa). Long story short, this technique was developed much further, and it came to be known as Organum (because it kind of sounds like an organ). One major pioneer of organum (who is credited with huge contributions to the technique) is Perotin. The underlying chants were assembled and archived under Pope Gregory I, and are hence known as Gregorian Chants.

If you'd like more information, the internet is at your disposal; there are plenty of good YouTube videos on the subject.

Modern Usage

Here are just a couple of unique examples of modern applications of vocal harmony.

Simon & Garfunkel: The Sound of Silence

This is one of my absolute favorite songs. Simon and Garfunkel almost always use vocal harmony (in fact, I can't recall a single song without it!) The reason I chose this particular song is because the harmony is so easily perceptible.

Before you look at the date and say, "1964? That's not modern!", remember that we're dealing with a 1000 year old technique.

Death Note: Kyrie

If you've seen Death Note, then this iconic music is not new to you. However, you may not know that this has a very interesting story behind it. Kyrie is actually a type of Gregorian Chant! These chants always have the same words set to different music.

For kyrie, it's:

Kyrie eleison, Christe eleison
(Lord have mercy, Christ have mercy)

What makes the Death Note version modern is, of course, its eerie instrumentation, which accompanies the vocals perfectly. Also, traditional Gregorian Chants were composed in the old church modes, such as Aeolian (aka Natural Minor, or Jalmika Raga). This version of Kyrie is actually composed in Dorian (natural minor with a raised sixth, or Kafi Raga?)

I hope you enjoyed this article as much as I did. Until next time, take care, and keep learning.

Hey everyone. I recently got an idea for a series of blog articles: sharing my thoughts on various aspects of music, and how they play out in various genres. Being a classical musician, my primary genres of focus are going to be western classical and carnatic music. I'll occasionally throw in other genres for comparison, or to note how one form of music has led to the development of another.

Before we start, I'd just like to mention a little bit about my musical background. I learned carnatic (South Indian classical) vocal at a young age, along with western classical piano. Eventually, I stopped singing (for various reasons I won't get into now), but continued steadily in western classical. In high school, I wanted to be a professional pianist, performing Beethoven and Bach. Around that stage, I'd listen to carnatic music and think "What? What are they doing?" (Although, I made it a point to attend at least one kacheri every Margazhi Mahotsavam.) A lot of things about carnatic music really puzzled me. Since then, I've spent a lot of time comparing and contrasting the 2 art forms and understanding more about them. Which brings us to the blog at hand: I'd like to use this as an outlet to share what I've learned through all of this analysis.

Instrument Potential?

If I'm not mistaken, carnatic music does not allow instruments to do what a human voice cannot. A human voice can only produce one note at a time, same as a flute. A violin can practically produce up to 2 notes (even 3, in the hands of an expert). A piano, 88 notes. What a waste it would be to restrict yourself to one note at a time! You're throwing away the one thing that makes the piano so special!

No More Keyboard!

There are a lot of conventions I see in Chennai that I find quite disturbing, the most prominent one being teachers offering carnatic music lessons on the keyboard. I don't see anyone offering western music lessons on a veena (LOL. Please don't!). I'm all for mixing things up, but this, to me, is unacceptable. Why? To understand this, we must understand where this instrument is coming from.

The piano is a western classical instrument, developed over the years to suit that particular style of music. I believe every instrument has something special to offer, and in the piano, it's polyphony: the ability to play multiple notes at once. The piano was designed like this to suit the requirements of western classical music, by allowing you to play 3 to 4 lines of music simultaneously. However, carnatic music does not allow this. Why would you want to waste such potential?

To make matters worse, you're not going to learn proper fingering or hand technique if you take carnatic lessons. Why? Unless your carnatic teacher knows how to play western classical, they're not going to have proper technique themselves. So not only are you wasting the instrument potential, you're also making your own life harder. There are over a dozen ways to play any given musical phrase, and the technique you use is what makes it easy or hard. In my opinion, anyone teaching carnatic on a keyboard is just out for a quick buck.

Let me just say, I'm not against playing carnatic on the keyboard every once in a while. If you want to, then by all means go ahead! Just be sure to learn western first, and then implement what you learn. Or, if you're composing fusion, then play western music on western instruments, and indian music on indian instruments! Why do you think Rajesh Vaidya's songs are so amazing?

What about Indian Instruments?

When it comes to instruments made in India, carnatic music could not be more perfect. Again, this is because the majority of these instruments were developed to suit the rules and styles of carnatic. For example, a veena is designed in a way that makes it easy to play melodies, but difficult to strum more than one string at a time (this is what makes it so different from a guitar). Unlike western music, you only need to play one string at a time. Despite its beauty, I don't think the uniqueness of carnatic instruments lies in the veena, or the flute, or violin. It's all about percussion.

Indian percussive instruments are FAR more sophisticated than, say, drums, or even a timpani. In western percussion, you either hit a barrel drum on its center, or its rim. Deviating isn't going to produce a very different sound. Compare this with a mridangam or a tabla. The smallest deviation from the center produces a clearly audible variation in pitch, which enables virtuosos to demonstrate exactly how skilled they are!

This, again, is why I would oppose implementing western music on Indian percussive instruments. Please be clear, I'm not saying "don't use a tabla in western music", I'm saying "don't play the tabla as if it's a timpani!" When you blend different styles of music, please let your instruments inherit their own musical customs. Remember, those customs are what define the instrument.

Update: December 27, 2014

Earlier today, I saw a Rajesh Vaidya concert with K. Sathyanarayanan playing carnatic on the keyboard, and yes, it sounded wonderful. In my blog I clearly said I was against this, but in this concert, he wasn't using just any keyboard. He was using a monophonic synth. Why does this make any difference? Because when you play two notes, the synth morphs one into the other, forming a gamaka, and eliminating polyphony. The only other reason I was against it was because of technique: you can't learn proper finger techniques unless you learn western keyboard. Well, it turns out this guy cleared Trinity Grade 8 when he was 10 years old. Clearly, he's overturned any and all reasons I gave for being against this idea, so I'm all for it (in his case!). However, it's also worth noting that he took western classical lessons from western classical pianists, and carnatic lessons from carnatic musicians on OTHER instruments (like mandolin maestro U. Srinivas). THIS is exactly what I was trying to enforce in my blog!

INTRODUCTION

I'll be honest, I've been eagerly waiting to write this blog for quite a while. For those of you who don't know, I recently published "Fly Swatter", my first Android game, on Google Play. Being a lover of vintage games, I always wanted to try composing vintage game music, and what better chance?

RESEARCH

Now, let's be clear about one thing: I wanted this music to be authentic. If I could somehow get the track onto an NES cartridge, the console should be capable of playing it! Obviously, that involves research. What better way to start than by looking up the audio specs of the NES console? Here's what I found:

The NES console supports up to 5 channels of mono audio:

One triangle wave

Two square waves

One noise generator

One DPCM channel

These channels should be familiar to anyone who's used a vintage synth, and if you haven't, well, they're nothing more than basic sound generator features. Sine waves, triangle waves, square waves, and saw waves are different types of basic waveforms that many sound generators can create. Noise generators simply create white noise (basically, radio static) at an adjustable pitch. It may sound useless at first, but it comes down to your creativity. It's all about how you apply it. According to my research, the noise generator is mostly used for percussion! DPCM (differential pulse-code modulation) is a technique for encoding low quality audio; similar schemes were used for transmitting voices over phone lines. In games, the DPCM channel is used for sampled, voice-based sound effects (Zelda, anyone?)
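To make the channel types concrete, here's a rough Python sketch of the three waveform shapes I ended up composing with. This is a naive illustration of the shapes themselves, not how the NES hardware actually generates them (the real noise channel uses a shift register, for instance):

```python
import random

SAMPLE_RATE = 22050  # arbitrary rate, just for this sketch

def square(freq, t):
    """Square wave: +1 for the first half of each cycle, -1 for the second."""
    return 1.0 if (t * freq) % 1.0 < 0.5 else -1.0

def triangle(freq, t):
    """Triangle wave: ramps linearly between -1 and +1 and back."""
    phase = (t * freq) % 1.0
    return 4 * phase - 1 if phase < 0.5 else 3 - 4 * phase

def noise(_freq, _t):
    """White noise: just a random sample each call."""
    return random.uniform(-1.0, 1.0)

# Render a few samples of a 440 Hz square wave.
samples = [square(440, n / SAMPLE_RATE) for n in range(100)]
assert all(s in (1.0, -1.0) for s in samples)
```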

One key point to note is that these 5 channels are responsible for ALL of the audio, not just the music. So when composing this song, I couldn't just let loose and do whatever I wanted. I decided to reserve one square wave and the DPCM channel for the (imaginary) sound effects, and compose using the remaining square wave, triangle wave, and noise generator. Now, enough of all this theory, let's get down to applying what we've learned.

PRODUCTION

ES P Configuration

Being a Logic user, I simply loaded the ES Poly synth across 3 tracks, setting one to square, one to triangle and one to noise. Playing around with the remaining settings can go a long way in sound design, especially with the noise generator (that one took a very long time to get to where it is now).

I had initially planned to use the square wave as the lead, the triangle wave for accompaniment, and the noise generator for percussion, but since when do things go according to plan? It turns out that the triangle wave sounds much clearer at higher frequencies, so naturally, I used it for the lead instead of the square wave.

The rest of the process was mainly just... COMPOSING THE SONG (ironically, the one thing I hadn't really given much thought... till now). This was actually easier than I expected! I simply looped a basic tune for the accompaniment (4 bars) and improvised the percussion to form an interesting beat (15 bars). For the lead, I maintained a fairly constant rhythm and just jumped up and down across C major throughout the entire song!
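Incidentally, layering a 4-bar loop against a 15-bar loop is a cheap source of variety: the combined texture only repeats once both loops line up again, which happens every lcm(4, 15) bars. A quick check in Python (requires 3.9+ for `math.lcm`):

```python
from math import lcm

# Bars until two layered loops of different lengths realign.
accompaniment_bars = 4
percussion_bars = 15
assert lcm(accompaniment_bars, percussion_bars) == 60
```

So even though each part is short, the full texture doesn't literally repeat for 60 bars.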

Generally, with MIDI programming, you try to humanize all the tracks, purposely missing the beat by small fractions to emulate slips, and on the whole, just getting it to sound like an authentic recording. For this project, I did the exact opposite: I quantized the entire song, used the same velocity for every note, and did whatever I could to make sure it sounded like it was coming from a machine. After that, I just adjusted static volume levels for each track, and that's it!
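The "de-humanizing" step can be sketched in a few lines of Python. This is a simplification of what a DAW's quantize and velocity tools do, and the note format here is made up for illustration:

```python
def machine_quantize(notes, grid=0.25, velocity=100):
    """Snap every note-on time to the nearest grid position and flatten
    all velocities to a single value -- the opposite of 'humanizing'.
    `notes` is a list of (time_in_beats, pitch, velocity) tuples."""
    return [(round(t / grid) * grid, pitch, velocity) for t, pitch, _v in notes]

# A sloppily played phrase: slightly off the beat, uneven velocities.
played = [(0.02, 60, 93), (0.48, 64, 78), (1.03, 67, 110)]
assert machine_quantize(played) == [(0.0, 60, 100), (0.5, 64, 100), (1.0, 67, 100)]
```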

A JOB WELL DONE

So there you have it! That's how I composed this song. As much as I wanted to use it as the primary music in my game, it was in a totally unrelated genre! So I decided to keep Flight of the Bumblebee by Nikolai Rimsky-Korsakov as the gameplay music (this is what I was using already), and used my 8-bit song for the menu. So that's it! After this little adventure, I went back to coding and finishing up my game, which, by the way, you can download from Google Play.

Last week, I was listening to one of my songs, and thought to myself, "How would this sound if I removed all of the effects I used?" So I did, and, although I was expecting something along the lines of what I heard, I was still quite shocked! To cut a long story short, here are a couple of audio files to share my experience.

EFFECTS BYPASSED:

EFFECTS APPLIED:

Well, the audio says it all! I still have some work to do, (I'm not whole-heartedly satisfied with the result) but this was quite an eye opener! Just felt like sharing.

Until next time,Adieu

Outdoor Melody is a royalty free track available on this website. Please check the music page for a high quality version of this track and more.

I just posted my new song "Medieval Melody" and wanted to blog about it, because this song is rather special to me, mainly for 3 reasons:

The tune struck me amidst a busy schedule, so I didn't have time to sit and fuel the spark. Due to the circumstances though, I managed to compose more than 75% of the song in my head! (Mostly in the shower). After opening my DAW (Digital Audio Workstation), it was simply a matter of getting my thoughts onto the screen.

As a composer, there are so many different ideas that occur to me, but I rarely get a chance to use any of them. Sometimes I try to squeeze one into a song, but it never quite fits in a perfectly fluid manner, so I end up removing it. However, I was able to implement many such ideas on this song; ones that I've had for over half a year now!

Finally, I had a clear idea of the effect I wanted to use in this song, and I knew it would require a level of sound engineering that I hadn't touched yet. Nevertheless, I did manage to achieve it, on a level that surpassed my expectations!

That is what I'm about to discuss in this blog-how I engineered the effect I wanted for the song.

THE PROCESS OF SOUND ENGINEERING

On the same day the initial tune of the song sparked, I was listening to some songs on shuffle when Moonlight Sonata came on, and I was mesmerized not by the song itself, but by the recording quality. It was a very old recording, accompanied by a prominent hiss and all the other traits of such recordings. That's when I got the idea to start off the song with this kind of "low quality, old recording" effect, and eventually have it transition to modern, digital quality. So, how did I accomplish it?

I HAD TO TURN THIS:

INTO THIS:

STAGE 1 - BASIC ANALOG EFFECT

I decided to do the obvious thing and record the file onto an actual tape. To reduce the quality, though, I tried feeding my tape deck the wrong parameters. The first thing I did was turn off Dolby Noise Reduction. Next, I noticed I was using a Type I tape, so I set the deck to Type IV. I don't know exactly what that does, but it certainly doesn't match, so it can't produce optimum quality, right? Then, I decided to lower the volume on my laptop when recording, and set the record level on my tape deck to 10 (max). To my knowledge, this "record level" is nothing but a preamp built into the deck. It's fine when used in the mid-range, around 4-6, but setting any preamp to max pushes it well past its clean operating range and degrades the sound.

Anyway, I lowered the output volume on my laptop so far that even with the Record Level set to 10, the volume meters on the tape deck showed no deflection. This way, I would have to increase the volume during playback, which would result in a more prominent tape hiss.
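A toy model of why this works: the tape's hiss floor is roughly constant, so whatever gain restores a quiet recording to full volume boosts the hiss by the same factor. The numbers below are invented purely for illustration:

```python
import math

# Constant tape hiss level, relative to full scale (made-up figure).
HISS_FLOOR = 0.01

def playback_snr(record_level):
    """Signal-to-noise ratio after boosting a recording made at
    `record_level` (fraction of full scale) back up to full scale."""
    boost = 1.0 / record_level            # gain needed to restore the signal
    signal = record_level * boost         # back to full scale
    hiss = HISS_FLOOR * boost             # ...but the hiss got boosted too
    return signal / hiss

# Recording at full level vs. 10x quieter: the quiet take ends up 10x noisier.
assert math.isclose(playback_snr(1.0), 100.0)
assert math.isclose(playback_snr(0.1), 10.0)
```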

INPUT:

OUTPUT:

STAGE 2 - REPETITION

As you can see (or hear), the volume is low... really low! So what next? Amp it back up, of course! If I just used my DAW for that, I'd only be increasing the volume. Instead, by using my tape deck, I'd be amplifying not only the volume, but also the analog characteristics of the sound! So that's exactly what I did: I left my tape deck's Record Level at 10, but increased the output volume on my laptop, keeping an eye on the level meters on the deck to make sure I wouldn't over-amp it.

Note: While recording the 2nd output track from my tape deck, I set the input channel to mono (instead of the default stereo). That's something I forgot to do with the 1st stage recording.

INPUT:

OUTPUT:

STAGE 3 - CHANNEL EQ

I really liked what I was hearing, but I felt that the bass wasn't as loud as I wanted it to be. So I inserted a channel equalizer to boost the bass. This turned out to amplify the tape hiss as well. That's fine by me! Haha.
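The side effect makes sense: a bass boost raises everything below its cutoff, including the low end of broadband hiss. Here's a crude one-pole sketch of the idea in Python. A real Channel EQ uses proper shelving/parametric bands; this is only an illustration of the principle:

```python
def bass_boost(samples, alpha=0.1, gain=1.0):
    """Crude bass boost: low-pass the signal with a one-pole filter
    and mix the low-passed copy back in on top of the original."""
    boosted, low = [], 0.0
    for x in samples:
        low += alpha * (x - low)        # one-pole low-pass (running average)
        boosted.append(x + gain * low)  # add the low end back in
    return boosted

# A constant (0 Hz, pure "bass") signal is roughly doubled...
dc = bass_boost([1.0] * 500)
assert dc[-1] > 1.9

# ...while a rapidly alternating (high-frequency) signal is barely changed.
hf = bass_boost([1.0, -1.0] * 250)
assert abs(hf[-1]) < 1.2
```

Since tape hiss has energy at low frequencies too, that component rides along with the boost.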

PRE EQ

Channel EQ: Before Equalizing

Sound Wave and Spectrum: Before Equalizing

POST EQ

Channel EQ: After Equalizing

Sound Wave and Spectrum: After Equalizing

INPUT:

OUTPUT:

STAGE 4 - TAPE DELAY

Okay, so now the bass is fine, plus I've got a nice hiss, and overall, it's really good! It sounds like it really was recorded a long time ago. But now, I feel like that's not enough. I don't just want the recording to be old, I want the tape to be old as well. That is, I want it to sound like the physical cassette is in a state of degradation. So how do I approach this requirement? I don't think there's anything more I can get from repeatedly recording onto tapes, so I've got to approach this digitally.

I took a look through the available plugins and found a "Tape Delay". Only, I didn't want a delay; I needed it to play in time with my digital arrangement. So the first thing I did was set the delay time to 0, then start playing around with the other properties. The results were astounding!

INPUT:

OUTPUT:

STAGE 5 - LAYERING TAPE HISS

Sounds about done, right? Well... I don't know... it still feels a little... empty... Why not layer it with some extra tape hiss? So I downloaded a track of plain tape hiss, lowered the volume, and layered it in.

INPUT:

OUTPUT:

STAGE 6 - FINISHING TOUCHES

That's it! That's the sound I need! This process worked for the most part, but for the phrase in the middle (where the song transforms from digital to analog again), I wanted only the recorder and clarinet to be analog, while the bass, cello, and harp remained digital. So I had to re-record those parts separately, repeating the process for each one. Unfortunately, the clarinet wasn't quite fluttering enough, so I had to amplify the effect further for that track: I just played around with the tape delay settings and added another channel equalizer till I got what I wanted.

PRE EQ

Channel EQ: Before Equalizing

Sound Wave and Spectrum: Before Equalizing

POST EQ

Channel EQ: After Equalizing

Sound Wave and Spectrum: After Equalizing

PHRASE IN DISCUSSION:

INPUT (AFTER PASSING THROUGH TAPE STAGE 2):

PROCESSING:

OUTPUT:

THAT'S ALL FOLKS

Yup, that's it! Now, I did add some extra reverb for most instruments, and went over the top with the final phrase of the song, but I didn't mention any of that in this blog. There's not really much work involved in all that.

This took a couple of days to achieve, but I'm VERY proud of the output. It has been a pleasure working on this song. I really enjoyed the whole process.