What does stereo recording achieve? I noticed that the left and right channels of some stereo samples don't sound the same, so I loaded them into Audacity. The program confirms my suspicion: the two channels vary greatly in character. For a vibrato-filled note, most often only one of the channels displays the wavy pattern induced by the vibrato. Is this what stereo recording is supposed to do, so that when the sounds are mixed into your music, certain instruments end up placed to the left or right of the listener?

QUOTE

What does stereo recording achieve? I noticed that the left and right channels of some stereo samples don't sound the same, and loaded them into audacity. The program confirms my suspicion that the sounds vary greatly in characteristics. For a vibrato filled note, most often only one of the channels displays the wavy pattern induced by the vibrato. Is this what stereo recording is supposed to do? So the sounds mixed into your music will have certain instruments placed to the L/R side of the listener?

Much of what you get in popular recordings is more accurately described as multi-channel mono: mono sources from a multi-track, panned and treated with artificial reverb. These can be very pleasing. In a true stereo recording you're also getting the room reflections and 'ambience'. I have some London/Decca recordings that are true stereo, and they are excellent.

BTW when the left and right channels sound the same, it's mono.

In my opinion, much of what was 'wrong' with recordings came from the limitations of the LP: a low-frequency mono mix, a 30 Hz high-pass filter, and restricted channel separation, all to keep the stylus from skipping. Digital recording has no such limitations. In theory it extends down to DC, with no interchannel phase restrictions and no spectral limits imposed by EQ, and it takes out all the pesky wow, flutter, ticks, and pops. I'm told those give vinyl 'character'. Rubbish.

QUOTE

Much of what you get in popular recordings is more accurately described as multi channel mono from multi-track with artificial reverb. These can be very pleasing.

Yeah, that's what I meant. But most sample libraries (including the expensive Vienna Symphonic) are recorded in True Stereo. How are you going to produce the commercial-style audio people are used to listening to with these unbalanced samples?

QUOTE

What does stereo recording achieve? I noticed that the left and right channels of some stereo samples don't sound the same, and loaded them into audacity. The program confirms my suspicion that the sounds vary greatly in characteristics. For a vibrato filled note, most often only one of the channels displays the wavy pattern induced by the vibrato. Is this what stereo recording is supposed to do? So the sounds mixed into your music will have certain instruments placed to the L/R side of the listener?

Yes; didn't you know this? When have you ever heard people in a group all sounding like they are in the same place when no sound system is being used?

Paul

--------------------

"Reality is merely an illusion, albeit a very persistent one." Albert Einstein

QUOTE

Much of what you get in popular recordings is more accurately described as multi channel mono from multi-track with artificial reverb. These can be very pleasing.

QUOTE

Yeah that's what I meant. But most sample libraries (including the expensive Vienna Symphonic) are recorded in True Stereo. How are you going to produce commercial style audio that people are used to listening to, with these unbalanced samples?

Just making a wild guess, but could it be that the channels in the samples are not actually left and right, but rather forward-facing and rearward-facing (i.e., the mic is rotated 90 degrees) or some other configuration, and that they're meant to give whoever is mixing the final music more options for how to make the instruments sound?

QUOTE

Much of what you get in popular recordings is more accurately described as multi channel mono from multi-track with artificial reverb. These can be very pleasing.

QUOTE

Yeah that's what I meant. But most sample libraries (including the expensive Vienna Symphonic) are recorded in True Stereo. How are you going to produce commercial style audio that people are used to listening to, with these unbalanced samples?

QUOTE

Just making a wild guess, but could it be that the channels on the samples are not actually left and right, but rather forward facing and rearward facing (I.E. the mic is rotated 90 degrees) or some other configuration and that they're meant to give whoever is mixing the final music more options on how to make the instruments sound?

No idea about that; it's just that the L/R emphasis isn't even consistent across the same instrument and playing style. The samples for a violin may have one note sound left-leaning, then the next semitone above becomes right-leaning, then left again, then center (just like the commercial recordings).

QUOTE

Much of what you get in popular recordings is more accurately described as multi channel mono from multi-track with artificial reverb. These can be very pleasing.

QUOTE

Yeah that's what I meant. But most sample libraries (including the expensive Vienna Symphonic) are recorded in True Stereo. How are you going to produce commercial style audio that people are used to listening to, with these unbalanced samples?

So you do a channel blend toward mono without actually going all the way to mono. I sometimes use a 90%/25% mix. What that means is: left gets 90% L + 25% R, and right gets 90% R + 25% L. Even with large channel differences this ends up close to unity gain. A 50%/50% mix would be pure mono, and you can fiddle with the ratios any way you want. If channel 'balance' is important, then adjust the gain until you're happy. These things are very easy in Adobe Audition; I don't use the other editors, so I can't say.
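The 90%/25% blend above can be sketched in a few lines. This is just an illustration with plain Python lists standing in for sample buffers; no particular editor's API is implied, and the function name is my own.

```python
def blend(left, right, own=0.90, other=0.25):
    """Blend two channels toward mono: each output channel keeps a fraction
    (own) of its own signal and mixes in a fraction (other) of the opposite
    channel.  own = other = 0.5 collapses the result to pure mono."""
    out_left = [own * l + other * r for l, r in zip(left, right)]
    out_right = [own * r + other * l for l, r in zip(left, right)]
    return out_left, out_right
```

With the default 90%/25% settings, a hard-left sample (L = 1.0, R = 0.0) comes out at 0.9 on the left and 0.25 on the right: pulled toward the center, but still clearly left of it.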

Recordings aren't supposed to be like that. Foobar has the crossfeed feature precisely because of this characteristic of early digital recordings.

Recordings are indeed supposed to be like that; otherwise everything would effectively be mono (unless I'm misunderstanding you).

Crossfeed is an effect to simulate speaker setup with only headphones, so that part of one channel is fed into the other ear, just as would happen in a two-speaker setup. A portion of listeners experience fatigue with the "hard" stereo effect of headphones, and crossfeed helps them listen for longer periods (I personally experience the exact opposite, btw, but that's why it's a preference).
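A minimal crossfeed sketch looks like the following: each ear also receives an attenuated, slightly delayed copy of the opposite channel, roughly what happens acoustically with a two-speaker setup. The gain and delay values here are illustrative guesses, not taken from foobar2000's actual implementation.

```python
def crossfeed(left, right, gain=0.3, delay=13):
    """Feed an attenuated, delayed copy of each channel into the other.
    delay is in samples; 13 samples is roughly 0.3 ms at 44.1 kHz, on the
    order of the interaural time difference between the two ears."""
    out_left, out_right = [], []
    for i in range(len(left)):
        op_r = right[i - delay] if i >= delay else 0.0
        op_l = left[i - delay] if i >= delay else 0.0
        out_left.append(left[i] + gain * op_r)
        out_right.append(right[i] + gain * op_l)
    return out_left, out_right
```

Real implementations typically also low-pass the crossfed signal, since the head shadows high frequencies on their way to the far ear.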

QUOTE (Paul)

Yes, you didn't know this? How you ever hear people in a group all sounding like they are in the same place when not using a sound system.

QUOTE

Recordings aren't supposed to be like that. Foobar has the crossfeed feature precisely because of this characteristic of early digital recordings.

I have no idea what you are talking about. I've heard the same thing since the beginning of stereo. In fact, the very early recordings had only left and right sound, with nothing coming from the center, so they came out with the so-called "360 Sound" that put the same signal (for the vocals) in both channels to make it sound as if the singer were in the center. Stereo channels do not have to be the same; that is how one gets a spread in the sound field.

Paul


(1) A stereo file is simply a means of packaging two different streams of audio data.

(2) An attempt to convey a sense of spaciousness.
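Point (1) is visible directly in the file format: a stereo WAV simply interleaves the two sample streams as L, R, L, R, ..., and splitting them is just de-interleaving. A minimal sketch using only the Python standard library (16-bit PCM assumed; the helper name is my own):

```python
import io
import struct
import wave

def split_channels(wav_bytes):
    """Split a 16-bit stereo WAV (given as bytes) into its two channels."""
    with wave.open(io.BytesIO(wav_bytes), "rb") as w:
        assert w.getnchannels() == 2 and w.getsampwidth() == 2
        frames = w.readframes(w.getnframes())
    # Samples alternate left, right, left, right, ...
    samples = struct.unpack("<%dh" % (len(frames) // 2), frames)
    return samples[0::2], samples[1::2]  # (left, right)
```

Nothing in the container ties the two streams to any particular microphone placement, which is why they can hold anything from a true stereo pair to two unrelated signals.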

QUOTE

I noticed that the left and right channels of some stereo samples don't sound the same, and loaded them into audacity. The program confirms my suspicion that the sounds vary greatly in characteristics. For a vibrato filled note, most often only one of the channels displays the wavy pattern induced by the vibrato.

Sounds like maybe an example of (1).

QUOTE

Is this what stereo recording is supposed to do? So the sounds mixed into your music will have certain instruments placed to the L/R side of the listener?

If you are creating a music file by mixing files that each relate to a specific instrument (whether created from samples or by multitracking), one generally pans each file when adding it to the audio scene you are trying to create. The source files can be mono or stereo.
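The panning step described above is commonly done with a constant-power law, so a source keeps roughly the same loudness as it moves across the stereo field. A sketch for a mono source (the function name and the -1..+1 position convention are my own choices, not any particular mixer's API):

```python
import math

def pan(mono, position):
    """Place a mono signal in a stereo scene.
    position runs from -1.0 (hard left) to +1.0 (hard right);
    the cos/sin gains satisfy gL**2 + gR**2 == 1 (constant power)."""
    angle = (position + 1.0) * math.pi / 4.0  # maps -1..+1 onto 0..pi/2
    g_left, g_right = math.cos(angle), math.sin(angle)
    return [g_left * s for s in mono], [g_right * s for s in mono]
```

At position 0 both channels get about 0.707 of the signal rather than 0.5, which avoids the level dip in the middle that a naive linear pan produces.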

I don't know how it works for classical, but in popular music it's just part of the music. That's why we invented stereo: the band/sound engineer/whoever wanted to use two channels for listening pleasure, and most people like it.

However, one can't just fill the two tracks with completely different content, because that's just confusing. I personally don't like The Beatles' albums in stereo because I feel weird when listening.

edit: ^ that's what I was thinking about; ^^ that's also what I was thinking about

HTS doesn't mean "Why is music released in stereo?"; he was asking why individual recordings of individual instruments would be recorded in stereo, since they often get mixed together to place the instruments wherever in the stereo sound field the producer/editor/whoever wants during editing.

QUOTE

HTS doesn't mean "Why is music released in stereo?", he was asking why individual recordings of individual instruments would be recorded in stereo since they often get mixed together to place the instruments whereever in the stereo sound field the producer/editor/whoever wants during editing.

How would one know the difference if one didn't hear each recording of the individual instruments to begin with? All we hear is the mix. Also, recording an instrument in stereo captures more of the instrument's sound field to begin with.

Paul


QUOTE

I don't know how for classical, but in popular music, it's just part of the music. That's why we invented stereo. The band/sound engineer/whoever wanted to use two channels for listening pleasure, and most people like it.

Experiments done in the early 1930s by Bell Labs involved researching a method of reproducing the orchestra concert experience. Initial experiments involved a multitude of mics, channels, and speakers, but the engineers concluded that the minimum channel count necessary to reproduce that experience was three: L, C, and R. However, the playback speakers were in a large concert venue, so replicating the room acoustics with surround speakers wasn't considered. The channel count was later reduced to two for practical reasons, mainly the release of music on records and, later, transmission by radio.

A historic 3-channel live demo was done in 1933, with the live concert at the Academy of Music in Philadelphia and the 3-channel playback system at Constitution Hall in Washington, DC. The speakers were behind an acoustically transparent scrim, and the performance involved an orchestra and singers who moved around on the stage. Obviously, all channels were handled simultaneously.

Meanwhile in Great Britain, Alan Blumlein worked on two channel stereo using crossed figure 8 microphones. I believe his program material was actors on a stage.

QUOTE (skexu @ Jun 6 2011, 08:57)

However, one can't just make two separate tracks with different content, because it's just confusing. I personally don't like The Beatles' albums in stereo because I feel weird when listening.

Much of the early Beatles music was never intended for stereo; it was recorded on a two-channel tape machine so the vocal/band balance could be mixed later. The stereo release is simply the two-track master, band on one side, vocals on the other, never really meant as a stereo mix. It was driven by the new market for stereo records, and the view probably was that anything done in two channels, real stereo or not, was marketable. We have to remember that real home stereo systems weren't all that common in the early 1960s; most listening, certainly all pop music radio, was mono. I believe the mono/stereo mix issue is documented in Geoff Emerick's book, "Here, There and Everywhere: My Life Recording the Music of the Beatles". I don't have it in hand to confirm that, but it's a great read regardless.

We had a lot of "ping-pong" stereo in the early years of stereo because it made for a dramatic demo compared to mono, though it was hardly true stereophony.

QUOTE (Paul)

How would one know the different if one didn't hear each recording of the individual instruments to begin with? All we hear is a mix. Also recording an instrument in stereo would capture more of the sound field of the instrument to begin with.

+1 to that. Most, but not all, instruments produce a sound field that is not a single point source. Brass might be the obvious exception, especially at a distance, but even brass players move. Woodwinds are one type that often gets mistaken for a point source, but none really are in the near field. I've often mic'ed solo saxophones with a stereo pair, and been glad of it later. Guitars too are excellent in stereo, though most electrics are pretty much point sources at the amp.

At the most basic level, one could never obtain identical signals from two transmissions (whether simultaneous or sequential) of any analogue source, as both the medium and the process introduce random variation at all times.