Just put this in the FX section of your patch and tweak the TONE knob to your liking.

And here's an explanation of what the heck it is all about. It is taken from a discussion on the NM mailing list, which I thought I'd share here with you.
(Sorry for any remaining typos.)

--- Reply from Rob to an email by Sven Roehrig:

Psychoacoustics seems almost like an oral tradition. Like picking up bits
and pieces of info from everyone and everywhere. There are indeed some good
books though, like the Handbook for Sound Engineers, which is sort of an
encyclopedia, and accordingly quite expensive, though imho worth every
penny. The more serious books on microphone recording, especially for e.g.
classical music or film sound recording, often contain a lot of good info.
But there is little of this on the internet.

I was lucky that in the past I've done work for a Dutch institute named TNO,
which is sort of a 'research factory' for both the government and the
industry. They have (or had, I was there in the eighties) a psychoacoustics
department, with a dead room in a concrete bunker and all. They were mainly
into 'intelligibility of speech' and I learned a lot from their specialists in
this field. Making a good mix actually seems not so different from making
the announcements in a train station intelligible. This might seem a strange
statement, but the human mind likes to focus easily on what's important. So,
whatever draws the attention towards what should have the focus should be
dealt with properly. All else should preferably be absent. (Which is what
both the vocalist and the solo guitar player have been telling us all the
time... )

Well, I don't really know where to get comprehensive info on
psychoacoustics. But the importance of psychoacoustics is clear: in the
audio chain it is always the human mind that is the last and the most
curious link in the chain. I say curious and not weakest, as the mind is
incredibly sensitive to aural stimuli. And the 'reptilian part' of the
brain is still very active when it comes to sound. Kassen shared a nice
observation a couple of weeks ago, on the electro-music forum, about how
unnerving an unexpected, close by, soft-rustling sound can be.

And this is so important for digital synthesis systems. Without special
treatment, digital synthesis algorithms in their theoretical implementations
all seem to have a spectrum that is like a white noise spectrum: flat. But
white noise sounds like the fire brigade aiming their hose full power at
your face. Pink noise instead sounds much more agreeable and has a natural
sense of depth. Natural phenomena often have a spectrum that is much more
like pink noise than like white noise. And the human mind seems at its best
when enjoying music when the overall spectrum sounds 'natural', so like
natural phenomena sound. Which led to my private little theory about the need
to be able to 'tilt the spectrum' in order to tweak the 'naturalness' of
sounds until they sound 'good'. Now, vocals already comply with this little
theory. And analog synths often seem to come quite close. But digital
synths, and worse, computer soundcards, do not. Which led me to my little
private technology that, for lack of any name, I baptized ACE (analog
circuitry emulation) technology. Which is just the development of all that
can be used to give digital algorithms the psychoacoustic characteristics of
the better sounding analog circuitry. Although I should really say the
atmospheric characteristics of natural sounds. ACE is not unlike physical
modelling.

With ACE there are a couple of issues. First issue is definitely the slope of
the overall sound spectrum. This actually has to do with the fact that the
mind uses two mechanisms to get a sense of direction from a sound. One works
below roughly 3 kHz and uses the amplitude and phase delays between the
ears. The other works above 3 kHz and uses the combfiltering effect of the
pinnae of the ears. The interesting phenomenon is that there must be a
certain loudness balance between these two regions, based on what the ear is
expected to hear in nature. If this balance is just slightly off, the two
mechanisms do not seem to be able to work together anymore and the sound
loses its sense of direction. It's like the mind cannot meld the info from
the two systems anymore, or make a choice about which mechanism tells the truth,
and simply ignores both of them. The flatness of the digital spectrum of
most current synthesis algorithms is just enough off balance to destroy the
sense of direction in sounds. Which means in practice that no matter how
many digital sawtooth oscillators are used to get that big hoover, it just
doesn't sound spacious, only more and more buzzy. So, the best hoover is
still considered to be made with an old analog polysynth using BBD devices
for the big chorus. And the spectrum of these BBD choruses just happens to
coincide rather nicely with a slightly tilted 'natural' spectrum.
And then there is the damping of the air, defined mostly by its density,
which largely depends on the humidity of the air. This effect
will change a 'white noise spectrum' into a 'pinkish noise spectrum' if the
sound travels over some distance. Which is one of the main ingredients of
Kassen's unnerving experience with the very bright soft rustling behind his
back. Even if the volume is low, if it is so bright it must come from close
by, and a clear and present danger might be at hand. Well, that's what the
mind at the end of the audio chain makes of it.
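As a rough illustration of that white-versus-pink difference, here is a minimal sketch (Python is my choice here, nothing from the thread) of a standard pinking filter. The coefficients are Paul Kellet's well-known 'economy' approximation of a -3 dB/octave slope, not anything from Rob's patches:

```python
import numpy as np

def pinken(white):
    """Turn white noise pinkish (~ -3 dB/octave) using Paul Kellet's
    'economy' three-pole approximation. The coefficients assume a
    sample rate around 44.1-48 kHz."""
    b0 = b1 = b2 = 0.0
    out = np.empty_like(white)
    for i, w in enumerate(white):
        b0 = 0.99765 * b0 + w * 0.0990460
        b1 = 0.96300 * b1 + w * 0.2965164
        b2 = 0.57000 * b2 + w * 1.0526913
        out[i] = b0 + b1 + b2 + w * 0.1848
    return out

rng = np.random.default_rng(0)
pink = pinken(rng.standard_normal(48000))  # one second at 48 kHz
```

Listening to `pink` next to the raw white noise makes the 'hose in the face' versus 'natural depth' contrast immediately obvious.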

Second issue is harmonic distortion. It is said that a pure sinewave does
not exist in nature, no natural object or natural event produces only a pure
sine wave. The reason is simply that digital sine wave oscillators don't
grow on trees. But seriously, natural sounds are often caused by the
impact that one object has on another, be it a short hit or an excitation that
lasts for some longer time. The object that is hit, e.g. a drum skin, is
often elastic. A drum skin must be, otherwise it wouldn't vibrate. It is
like a mass spring system, where the force of resistance increases when the
mass is further away from its center or balance point. Meaning that the
energy that is received from a hit does not only die away in the kinetic
energy of the vibrating skin, but also as warmth by friction caused by the
elasticity of the skin. What happens is that, even if the skin was shaped in
a way that it would 'theoretically' produce a sine wave, it will actually
have some odd harmonic distortion caused by the friction. It is like the
peaks of the waveform are slightly compressed. And if the skin is on a
kettle, it is probably slightly more compressed in one direction, as the air
in the kettle might be more resistive than the air above the skin. Meaning the
addition of some even harmonics as well. Basically what applies to one sine
wave applies to all partials generated by a firm hit on a drum skin. So, not
all energy of the hit will go into the 'natural resonance' of the drum: some
energy will produce harmonics of the natural resonance frequencies and some
energy will be lost as warmth. This should of course be implemented in a good
synthesis model for a drum.
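The peak-compression idea can be sketched numerically. This is a hypothetical illustration (not a drum model from the thread): symmetric soft clipping of a sine stands in for the skin's elastic friction and adds odd harmonics, while a bias term stands in for the one-sided kettle pressure and adds even harmonics too:

```python
import numpy as np

# One second of a 100 Hz sine at 48 kHz; FFT bin spacing is exactly 1 Hz.
fs = 48000
n = fs
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 100 * t)

# Symmetric peak compression (the elastic-friction idea): an odd
# waveshaper, so only odd harmonics (300 Hz, 500 Hz, ...) appear.
sym = np.tanh(2.0 * x)

# Bias the input before the same waveshaper (the one-sided kettle
# pressure idea): the asymmetry adds even harmonics (200 Hz, ...) too.
asym = np.tanh(2.0 * x + 0.3)

def level(sig, freq):
    """Magnitude of the Hann-windowed spectrum at a given frequency."""
    spec = np.abs(np.fft.rfft(sig * np.hanning(len(sig))))
    return spec[int(round(freq * len(sig) / fs))]
```

Comparing `level(sym, 200)` against `level(asym, 200)` shows the even harmonic appearing only in the biased case.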

Third issue is echoing. Which means that the way reflective surfaces
damp a reflection is to be taken into account. Now, the inside of a drum
kettle and the underside of the drum skin are reflective surfaces when the
drumsound resonates in the kettle. There is a natural development in the
timbre of decaying echoes like the resonance in the kettle, which cannot be
simulated by lowpass filters, but can nicely be simulated by (single pole)
allpass filters. The reason is that reflective surfaces are
slightly dispersive, like how white light disperses into a rainbow when
crossing the surface of a crystal or a prism. Basically each partial in a
sound is slightly shifted in phase. If a sound is captured in a resonant
cavity, where it keeps on reflecting until it dies away, this dispersion is
audibly at work. The point is that the human mind is by default not conscious
of these effects, but uses the information subconsciously to produce a
'consumable image' containing information that is of more interest for human
survival, basically whether we are in danger and must get tense, or if we
are safe, so we can relax. And not really like "wow, that one partial at
1241 Hz is shifting in phase by 6 degrees with respect to that partial at 829 Hz
when it hits that side wall".
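The single-pole allpass mentioned here can be sketched as follows. This is the standard first-order digital allpass (my assumption of the form meant, not taken from any attached patch): its magnitude response is perfectly flat, but its phase shift varies with frequency, which is exactly the mild per-reflection dispersion described above:

```python
import numpy as np

def allpass1(x, a):
    """First-order (single pole) allpass:
        y[n] = a*x[n] + x[n-1] - a*y[n-1]
    Flat magnitude response at every frequency, but a phase shift that
    varies with frequency, so each pass slightly disperses the partials
    in time, like a mildly dispersive reflecting surface."""
    y = np.zeros_like(x)
    x1 = y1 = 0.0
    for i, xn in enumerate(x):
        y[i] = a * xn + x1 - a * y1
        x1, y1 = xn, y[i]
    return y
```

Running an impulse through it and looking at the FFT confirms the flat magnitude with a frequency-dependent phase; chaining many of these is a common way to get that 'decaying echo' timbre drift a lowpass cannot produce.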

And maybe more issues for later...

So, when a sound is even slightly out of balance compared to what the human
mind expects to hear, the mind subconsciously thinks it means 'something out
of the ordinary is going on', which triggers the mind into tenseness. Although
in our conscious thoughts we know that we are quite safe at home and should
enjoy the music. But there is this subconscious tiring thing going on, and we
cannot relax but subconsciously keep being tense. Until we put on the old
vinyl record and surprise: now we do relax and lose our tenseness, despite
the crackling, the hum and the noise.

About this sense of direction and pinnae thing there is a simple experiment:
just close your eyes, close one ear with a finger, wiggle your head and
point your other finger at the direction where you think the sounds in the
room come from. Then open your eyes and check. It is amazing how correct
this is, even with just one ear. By wiggling one's head one can hear timbral
differences that sound a bit like phasing, and suddenly you will realize
that you have been subconsciously using that all your life to get a
sense of direction, as it is quite easy to hear how the timbral change is
coupled with the sense of direction.

Then another experiment that you have to do with others. Take a blank piece
of paper, like from a photocopier or printer, and put just one dot somewhere
on the paper. Then ask others what they see, virtually anyone will say they
see a dot. And their eyes will probably keep staring at the dot. Only the
ones that know the experiment will say they see a sheet of not completely
blank paper. Still, their eyes will be drawn to the dot. Although the white
space of the paper is much bigger in size than the dot, the attention will be
drawn to the dot and not to the white area. This may seem a silly
experiment, but it is quite crucial in explaining how the human mind tries
to focus. There is a sonic equivalent of this principle, programmed
into the mind by aeons of evolution, that is part of our built-in warning
system for when there is a predator around that might want us for lunch.

Now the point is how both mechanisms of sensing direction work together. The
mind expects sounds to have a 'natural' volume balance in the areas above
and below 3 kHz. If this balance is natural the mechanisms blend their
information just fine and all feels natural. But if the volume above 3 kHz
is louder than expected it seems like this 'focus to a point' thing is
taking over, trying to focus the mind on this overly bright high and
figuring out what is happening there. Perhaps you know that the human mind
is very good at masking frequencies away that seem of less importance, a
principle used in some sound compression systems. Or creating a sense of a
bassline from a little transistor radio that is physically unable to play
frequencies below 200 Hz by 'imagining' the bass tones from the little bit
of higher harmonics from the original bass that do get reproduced. In my
experience the mind is similarly good at masking away sense of direction if
the spectral balance is 'unnatural', meaning not like in nature. It is like
subconsciously the mind thinks that the unnatural balance means that the
directional sense must be incorrect and just blocks it out completely.
So, it is about the mechanism that senses direction below 3 kHz, the
mechanism that senses direction above 3 kHz, plus the mechanism by which the
mind tries to focus on a point. These three mechanisms together (plus probably
some more) seem to trigger the masking capabilities of the subconscious mind.

In these days of 192 kHz sample rates and people claiming they hear aliasing
artifacts at 30 kHz, claiming that the exaggerated high end of today's
digital systems is not so good is like 'swearing in church'. Which, like Merv
in The Matrix, I would suggest doing in the French language, as it indeed
sounds so much better than the f**k word.

Using lowpass filters to correct things doesn't work, as they make the
balance even worse. What is needed is a filter like the filters used to make
correct pink noise from white noise. Such filters can be quite complex, e.g.
a five thousand tap convolution filter running at 96kHz should be able to do
a very good job. But a 33-band equalizer can do a good job as well. What I
use myself is a simple filter made from two parallel single pole allpass
filters tuned to around 200 Hz and 2kHz (derived from splitting the audio
range into three decades) and mix their outputs together with the input
signal in a certain ratio. This tilts quite well with only a single knob,
and although the curve is not exactly straight it seems straight enough to
do the job. In fact, I found that the curve shouldn't be totally flat but be
a little bit more horizontal below 2.5 kHz and have 2.5 kHz as the 0 dB
point. I guess you've seen this filter in some of my patches, but I also
have analog filters that do the same job. In fact, it is a type of filter I
learned from an old film-sound engineer some twenty-five years ago, to use on
the eight to twelve bit digital stuff from those days. The first instance
was actually passive with coils and capacitors, noise free, but it would
pick up hum.
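For readers without a G2, the topology Rob describes (two parallel single-pole allpasses around 200 Hz and 2 kHz, mixed with the dry signal) can be sketched like this. The mixing scheme and the `amount` scaling are my own guesses for illustration; the exact ratios in Rob's patches are not fully specified here:

```python
import numpy as np

def ap1(x, fc, fs):
    """First-order allpass with its -90 degree phase point at fc."""
    t = np.tan(np.pi * fc / fs)
    a = (t - 1.0) / (t + 1.0)
    y = np.zeros_like(x)
    x1 = y1 = 0.0
    for i, xn in enumerate(x):
        y[i] = a * xn + x1 - a * y1
        x1, y1 = xn, y[i]
    return y

def tilt(x, amount, fs=48000):
    """Dry signal plus two parallel allpasses (~200 Hz and ~2 kHz),
    a guess at the topology described in the thread. Below both break
    frequencies the allpasses add nearly in phase (boost); well above
    them they are near 180 degrees out of phase (cut). So amount > 0
    tilts the spectrum down (louder lows, softer highs), amount < 0
    tilts it up, and amount = 0 passes the signal unchanged."""
    return x + amount * (ap1(x, 200.0, fs) + ap1(x, 2000.0, fs))
```

Because only allpasses and a mix are involved, the 'tilted' signal keeps all its components at full level somewhere in the phase sum, which may be why it preserves the sense of brightness better than a lowpass would.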

Note that this all applies to digitally synthesized sounds and not to
digital recordings, as the digital recording will probably have the right
balance already recorded from the mics.

--- Max Clarke replies and has a patch attached:
>
> Ok, this patch is pretty crap, except for the tilt filters in the fx
> section. I followed Rob's specifications and found a ratio that worked
> for me, and it works very well. I hope this is the way it's supposed
> to be set up, but even if it's not, I like the effect
>
> "In 1" on the blue switch module is the tilt filter, "In 2" is the
> plain signal. Amazing what a little bit of filtering can do. I've
> named this tilt filter "the g2s missing balls.pch2" in my tools
> directory
>
> But it is more than that. The tilting not only helps focus the bottom
> end, but also removes a lot of the harshness of the top end. I included
> the filter sweep to demonstrate this. I couldn't believe my ears!
> Overall it just sounds a hell of a lot nicer with the filters than
> without them.
>

> Max
--- Rob replies:
OK!

Added to your patch is an example of how I do it these days. It is an easy
filter with one control that simply tilts the spectrum by some amount. When
the knob is in the middle nothing happens; when turned left the spectrum
starts to slope downward, which is normally the way to go. If the knob is
turned right the spectrum slopes upwards, giving an excessively bright
sound. To my taste I like the allpass filters to be just a little higher than
2 kHz, as it seems to add just a little extra presence in the mid. When using
6 dB filters the low/high volume balance for the allpass outputs is about 2
to 1. With the state variable allpass filter it is about 4.5 to 1.

The big difference with simply using a lowpass filter to suppress the very
high end is that this allpass type of filtering doesn't lose the sense of
brightness of a sound.

--- matt replies:
> Rob,
>
> I'm having a hard time recreating a shelving and tilt eq (from
> some of your g2 patches) on my classic. my ears and a spectrum
> analyzer tell me: i'm not quite sure how your tilt eq works (deep
> notch at 5000 Hz, for example) and the conversion to the classic
> environment is tricky, sometimes!
>
> i might as well share what i have... one patch is your shelving &
> tilt eq for the g2 and one's my attempt at creating the same
> thing for the classic but not getting it 100%. the 'mix' is in
> the correct ratio, i think, but some (many?) thing's not right.
>
> matt
--- Rob replies:
Look at the attachment in my reply to the post with the subject "Tilt test"
for a good G2 example.

On the classic NM it is very difficult to patch this little thingy, as this
is a very good example of why the calculation order is so important. If
calculation order is not correct there can be a one sample difference on the
mixer inputs. Which will completely destroy the effect, as any type of DIY
filter must be sample accurate to work properly. Just one sample delay in
respect to another mixer input will have a big impact on the high part of
the audio range, and not sound at all how it is supposed to sound. E.g., on
the NM one would need extra inverter modules on the inputs of the mixers,
meaning that one mixer input signal has to go through an extra module. Which
will cause the extra sample delay. It is pretty tricky to define the right
order of placement of modules to get it right on the NM.
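The one-sample-delay problem is easy to demonstrate numerically. This toy sketch (mine, not a G2 patch) subtracts a signal from itself, once sample-aligned and once with one sample of extra delay on one mixer input:

```python
import numpy as np

fs = 48000
rng = np.random.default_rng(1)
x = rng.standard_normal(fs)  # one second of white noise

# Correct calculation order: both mixer inputs see the same sample,
# so the subtraction cancels exactly.
aligned = x - x

# One extra module in one signal path adds one sample of delay; the
# same subtraction then acts as a first-difference comb filter:
# a deep cut at DC and roughly a 6 dB boost near Nyquist.
delayed = np.concatenate(([0.0], x[:-1]))
misaligned = x - delayed
```

Instead of silence, the misaligned version passes a heavily high-tilted signal, which is why a DIY filter patch with a hidden one-sample offset sounds nothing like intended in the top of the audio range.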

But on the G2 this always works properly and how it should.

--- Sven replies:
>
> Aha, here we have the practical application for the auto-optimisation
> algorithm...
> One that really matters and is not just a logical or timing problem.
> Thank god Clavia has managed to make this so quick. I remember the
> software version where this process took at least 5 times as long as
> it does now.
>
> And here is also the proof that the G2 has its own tricks to design
> its sound...
> Now I see why you stated that the G2 can do things that can't be done
> on the NM1...
> I was a bit curious about that, because it didn't sound that way from
> how you were referring to fx modules like delay and reverb.
>
> Knowing about these details can actually create arguments to buy a G2.
> Your research in filters really brought the instrument forward.
> The FIR filter you posted in the forum is gorgeous btw. Also a very
> nice tool to shape highs... Very useful... and the little patch with it
> is very straightforward and good to use as well.
>
> Lots of very helpful information during the last days :)
> Thanks for sharing it.
>
> Sven

Edited 12 July 2006:
Added the patch 'SpectralTiltFilter.pch2', which has the best DSP efficiency, but is probably not as straightforward to understand as the other (equivalent) patches. The tilt pivot point is at approximately 750 Hz. The 50% setting for the control is sort of a magic position that came out as the average preference in blind listening tests.

/Rob

tilt test + robs filter.pch2

Description:

The patch attached by Max, as discussed in the email list thread, with Rob's addition.

This is very interesting. Do you have a version of this one for the NM-1 too?

Nope!

Whenever I tried it on the NM1 I got this terrible difference in sound between the left and the right channel. Or it wouldn't work the same as in a previous patch.
It was like this: I first patched one channel and then copied the modules, and then the channels would not sound the same.
The trouble with the NM1 was that whenever a module was deleted while patching it created a 'hole', and that hole got filled in when another module was inserted. Meaning that the other module was calculated too early. And then there was no way to get it right apart from starting the patch all over again in the proper order. And pasting a group of modules could change their calculation order as well, in sometimes the most curious ways.

So, I would just not send in a patch that I would know wouldn't work when just copied into another patch.

Just put this in the FX section of your patch and tweak the TONE knob to your liking.

Great addition to the building blocks subforum, and also a good example of list integration with the edited list mails... So there was the quick and direct list discussion that is now archived and easily found by future... hmm, what shall I call them... G2 addicts, students, patchers, freaks?

I like the mailing list, but such an amount of insightful information surely needed to be placed in the forum as well... The list to quickly spread it and the forum to preserve it... that is good... The building blocks subforum is anyway a great place because it's so easy to locate and by its nature doesn't tend to get too crowded.

Rob, thanks for writing such great explanations of your excellent research. Also, thanks for moving it over to the forum, which has the advantages Sven described. I was going to do it myself this morning, but you saved me a lot of work.

I wonder how "hard wired" this balance between the sub-3 kHz phase-dependent detection and the above-3 kHz comb filter detection methods really is in our hearing. I mean, I wonder if our perception will evolve as we continue to live in the "artificial" world of sounds with lots of high frequencies.

Also, I like this explanation better than the idea that analog circuits have this natural tilt due to the cumulative effect of compensation capacitors in their circuits. It implies that we would prefer audio that is so tilted even if we had never heard of analog circuits. So, calling this ACE may not be spot on.

--Howard

rob, i'm just starting to play with this, and i expect i'll be spending a lot of time doing so...thanks so much, it's both fascinating and useful ...your commentary (not just the patch) is a valuable contribution here.

Quote:

I wonder if our perception will evolve as we continue to live the "artificial" world of sounds with lots of high frequencies.

i think our "hardware" probably takes a good deal of time to evolve, but on the "software" front, i think most people have implemented some of these filters in software (a notch at 60hz...at least here in the states, and something that filters out the hf sound of crt monitors in office environments).

an interesting (at least to me) personal observation on perception of sound. i have tinnitus to at least a noticeable degree (probably a combination of many, many ear infections as a child when every one of the hundreds was treated with antibiotics, and going to loud raves for a few years on a regular basis). i notice it more at some times than others, and i often wonder if it's why i tend towards soundscapes, long delays and reverbs... it matches what is in my head. it's been a few years since i experimented with chemicals, but when under the influence of lsd or mushrooms, i've noticed that my hearing seems much clearer and crisper. i've also noticed this when being at gatherings where much of the audience is tripping and i'm not (contact high?). obviously lsd alters perceptions (otherwise there would be no point), but can those alterations help bypass some of the "hardware" problems that i (and others) have? are they simply distortions? how does just being around the "mood" of tripping affect things? obviously, experiments in this area are more involved than writing a g2 patch, and i'm not expecting any definitive answers... but i do find it curious (i suppose curious like alice was about the rabbit with the pocketwatch).

Also, I like this explanation better than the idea that analog circuits have this natural tilt due to the cumulative effect of compensation capacitors in their circuits.

I think that all little thingies play their little parts and eventually come to one sum. In analog circuitry there is a lot of little thingies, in digital code virtually none unless specifically implemented. Which I suspect accounts for the common idea of analog equipment having more character.

mosc wrote:

It implies that we would prefer audio that is so tilted even if we never heard of analog circuits. So, calling this ACE may not be spot on.

Well, both analog and digital circuits are part of today's reality. It just happens that sonic preferences seem to go a certain way. The answer to the question why is imho not in the analog circuitry, but in the human mind. Analog circuitry just seems closer to what the mind likes, compared to digital circuitry. So, my hope is that figuring out the why will open the door to solutions.

Truth is that I don't have substantial scientific psychoacoustic research results to back up my theory. I just have my ears. So, the why is all theory. But hey, e.g. both darwinism and creationism are theories that do not answer all questions. Still, both are widely accepted, though not really by the same people.
Why bother? If it works it works, and if it works better it works even better.

The only difficult thing about this ACE technology is to make it stable under all working conditions, which can be tricky at times. Stability is actually the only hard requirement.

an interesting (at least to me) personal observation on perception of sound. <snip>
deknow (kids, don't try this at home!)

This reminds me of what is said in one of the more esoteric forms of Buddhism, that the mind's awareness is always perfect in the sense that the perception of both health and sickness symptoms is equally perfect.

And what these chemicals you mention do is indeed very intriguing. It is my guess that the main reason these chemicals are banned is because they can bring up questions that might have answers that would raise tenseness in all those people that have put their bets on money.

I've been working for some time on a "3-D Pan" that uses the acoustics of the ears for placement of sounds in a mix. Two giant stumbling blocks involve lack of freely available information, and the varying acoustics of speaker/room settings. Headphones seem far more "alike" from manufacturer to manufacturer than speakers, or maybe ears (from various manufacturers) are more alike than rooms.

But your description and the filter system bring to light something I've been grappling with for a while: giving digital synths an accurate placement in spatial perception. Digital synths seem to do a wonderful job of destroying the human capability of spatial perception, simply by adding some phase delay, combfiltering, or a slight reverb... and now I see many more of the reasons why this is so. **Controlling** that spatial perception, however, has eluded me, and maybe now I can take a step closer.

Maybe we can give George Lucas a run for his money with an alternative to THX theater. It seems every 3 years two more speakers get added to "cinema quality" sound... just to more accurately simulate spatial location (surround sound (3), 5.1, 7.1, looks like 9.1 is starting to happen, etc...)

it seems to me that analog circuitry does not necessarily "sound better", but during the process of breadboarding, experimenting, and building these circuits, a "musical ear" plays a large part in what is considered right.

for instance, with the g2, i can in about a minute put together an oscillator, filter and amp (a basic "normalized" subtractive synth). if i were to build the same thing in hardware from scratch, it would take days (probably weeks for me), and i would have to make tonal decisions every step of the way (especially in capacitor values...the caps have to be there, but one can choose from a variety of values and types to affect the sound to a greater or lesser degree). this process demands time and attention. people who would undertake such a project would have to balance "working properly" and "sounding good", and use a good deal of time and attention in doing so.

in the digital domain (even if working from scratch), a "perfect" sine wave can be created with a simple equation or table of values. a filter can be a brick wall with no compromises, and an amp can be made with a very exact response curve. it's simply too easy to get from point a to point b without puzzling over compromises and without making decisions that might not be "ideal" on paper.

imho, we are simply not smart enough to know what we want to accomplish in the end. the process of doing analog engineering makes us confront real-world, non-ideal issues in a way that lets our aesthetic influence the end result... in the digital simulation, we can easily produce the theoretical ideal and bypass our judgement as to what sounds good.

this is apparent in digital synthesis... a good deal of work goes into making something sound like an ms20 rather than a perfect osc>filter>amp.

the same is true in other areas... a photo can be quickly made to look "good" in photoshop, but you lose the hours of involvement and attention that a darkroom would demand. anyone can do desktop publishing, but without a great deal of experience, the results will probably look amateurish compared to what someone who has worked in the field for years would do. synthesis is different from these other forms of expression, as it is (by definition) a collection of relatively simple and definable building blocks, so intuition would lead us to think that the "theoretical ideal" is better. clearly, if this were the case, rob's filter would be seen as an effect, and not as an improvement.

the fairlight 2x doesn't sound musical because it has analog output filters, it sounds musical because the filters were designed to sound musical, not ideal.

And what these chemicals you mention do is indeed very intriguing. It is my guess that the main reason these chemicals are banned is because they can bring up questions that might have answers that would raise tenseness in all those people that have put their bets on money.

yes, i agree with this. at one point in time (years ago), i had access to some of these chemicals in what seemed a pharmaceutical grade... the experience was more like being very very sober rather than fucked up and confused (as the "street" versions tended to do). of course most people only had experience with the street stuff (at least after it became illegal), and so the "word on the street" about these chemicals was (imho) more negative than would have been the case if people had access to the good stuff. lsd is almost nonexistent in the states these days (following the bust of the major manufacturer), and the popularity of mushrooms has risen in response... people do look for these experiences, and it's (imho) too bad that it is almost impossible to have them. all this said, i do know people that never "came back completely", but i also know fucked up people that never touched the stuff.

I've been working for sometime on a "3-D Pan" that uses the acoustics of the ears for placement of sounds in a mix. Two giant stumbling blocks involve lack of freely available information, and various acoutics of speakers/room settings. Headphones seem far more "alike" from manufacturer to manufacturer than speakers, or maybe ears (from various manufacturers) are more alike than rooms
<snip>

It is very difficult to do. Attached is a little test patch that explains a lot. It is the simple MetalNoise osc, but with a pushbutton named TEST that, when ON, will randomly modulate the pitch input. Note that the pitch knob is opened only one tick and the modulation is tiny. And that the output is mono.
When there is no random pitch modulation the sound seems to come from one place. But then press this TEST button and the sound suddenly seems to come from all over the place. Still, the sound is mono!
Then try headphones and the effect is not there anymore.

So, this rather curious effect is caused in the room. The random modulation is as subtle as the combfiltering of the pinnae of the ears. But... why is it not there on headphones, yet definitely there in a room? I tested this in several rooms, as I was intrigued by this phenomenon. Well, the more irregularly placed furniture, curtains, etc., the better it works to warp every hit all over the place.

What this shows is that synthetic directional information above roughly 3 kHz is effectively destroyed by the fact that:
1) each person has different pinnae
2) even a person's own two ears differ from each other
3) not only the ears but the room as well has a combfiltering effect
So, the interaction of the combfiltering of the room with each ear strongly defines our sense of direction, and when generating sounds artificially one cannot control exact placement, nor emulate it, as all rooms and pinnae are different.
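
The combfiltering mentioned in point 3 is easy to visualise: a single delayed copy of a signal (one early reflection off a wall, or off the pinna) mixed with the direct sound carves regularly spaced notches into the spectrum. A minimal sketch, with illustrative delay and mix values:

```python
import numpy as np

def comb_response(delay_ms, mix=0.5, sr=44100, nfft=8192):
    """Magnitude response of y[n] = x[n] + mix * x[n - d]: a comb filter.

    One early reflection behaves like this; the notches sit near odd
    multiples of sr / (2 * d) and repeat every sr / d Hz.
    """
    d = int(round(delay_ms * 1e-3 * sr))
    h = np.zeros(nfft)
    h[0], h[d] = 1.0, mix                  # impulse response: direct + echo
    H = np.abs(np.fft.rfft(h))
    f = np.fft.rfftfreq(nfft, 1 / sr)
    return f, H
```

With a 1 ms reflection the first notch sits near 500 Hz and the notches repeat every kilohertz; a real room or ear stacks many such combs, each different, which is why the fine spectral structure above roughly 3 kHz is unique per listener and per seat.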

The only system I ever heard myself that has exact placement of bright sounds is the wavefield system of the Delft University (which btw is in a relatively dead room). If you have a Linux computer and an RME multichannel soundcard, or other multichannel soundcard that works under Linux, you should check out the wavefield software named WONDER and written by Marije Baalman. It is free.
http://gigant.kgw.tu-berlin.de/~baalman/program/
The Fraunhofer Institute is also into this wavefield thingy.

It is my opinion that with a system like 5.1, 7.1 or 9.1 one can make a 'sense of spaciousness' in a disk, or with Robin Miller's 3-D system a 'sense of spaciousness' in a sphere. But these systems are not capable of exact placement of very bright or high pitched sounds, as the average living room would destroy the subtle directional info in the very high. Robin Miller's microphone btw will capture lots of the sonic print of the room where the recording is made, so in a relatively dead listening room it can do a very good reproduction of spherical information.

Like the MetalNoise experiment, where the sound seems to come from all directions, an average living room could displace a carefully placed sound to a totally different direction.

So, I'm pretty pessimistic about the ability to exactly place very bright sounds. Maybe it is better to think in terms of rough placement relative to another sound. So, adopt the idea that in a spatial mix the exact placement of very bright sounds has no importance; what matters is only that they are placed differently from one another.

Of course, the more energy in the very high end of the audio range, the more difficult it becomes. For a sound that also has directional info below that roughly 3 kHz it is much easier, as long as the bright part of the sound doesn't mess things up by being too loud. The low end should then produce a proper sense of direction, while the bright part would simply give the impression of a different room than the one we're in. Well, I guess.

I think one should be aware of both of these mechanisms for getting a sense of direction, and both their peculiarities, when doing any ambitious panning for e.g. 5.1. Modern techniques can increase the sense of spaciousness, but this does not necessarily mean exactness in sound placement. And it might take some years before it is common to have a 300 channel speaker system in the room and some coding technique for the recordings that on the fly will mix the sound for our specific configuration. As this would mean that tracks would have to be separately available, plus the info on how they are placed in a particular wavefield setup.

SpatialTest.pch2

Description:

Illusion of variations in spatial placement from a definitely mono 'hihat-like' sound.

Thanks Rob!
Maybe we can give George Lucas a run for his money with an alternative to THX theater. It seems every 3 years two more speakers get added to "cinema quality" sound... just to more accurately simulate spatial location (surround sound (3), 5.1, 7.1, looks like 9.1 is starting to happen, etc...)

OK, we use the forum from now on as the list... from sound enhancement to drug-influenced listening via Buddhism to 9.1 surround...

Just one thing about that...
Because most of the old speakers I got are single and not paired, I have an open baffle above my head... this VOG speaker (Voice of God... ahem)
is a really cool way of listening somehow...

it seems to me that analog circuitry does not necessarily "sound better", but that during the process of breadboarding, experimenting, and building these circuits, a "musical ear" plays a large part in what is considered right.

Exactly... This explains as well why good sounding and bad sounding analog gear often share the same circuit diagram with minimal changes. The good sounding one is engineered while the bad sounding one is just calculated.

A retired Neumann developer once stated that the only reason the Neumann transistor equipment from the '60s sounds better than the '80s stuff is that they tried to rebuild the sound of the Telefunken valve desks in the new transistor technology... They still had the sound in the 'ear' and wanted to recreate it. Or, the other way around, their sense of what sounds good and what sounds bad was shaped by listening all day to valve equipment... A good school, as we know today again...

it's been a few years since i experimented with chemicals, but when under the influence of lsd or mushrooms, i've noticed that my hearing seems much clearer and crisper. i've also noticed this when being at gatherings where much of the audience is tripping and i'm not (contact high?). obviously lsd alters perceptions (otherwise there would be no point), but can those alterations help bypass some of the "hardware" problems that i (and others) have? are they simply distortions? how does just being around the "mood" of tripping affect things? obviously, experiments in this area are more involved than writing a g2 patch, and i'm not expecting any definitive answers...but i do find it curious (i suppose curious like alice was about the rabbit with the pocket watch).

deknow (kids, don't try this at home!)

Yes, I have experienced this as well. It is said that all your senses are heightened on psychedelic substances: vision, taste, smell, touch, even psychic abilities. I'm of the opinion that because your mind is so open at the time, much more information from the senses gets through to your conscious mind, and things that are normally filtered out to help you make sense of the sensory information aren't. Conversely, things that don't normally get filtered do get affected. So in a sense (no pun intended) your hearing improves, but it is also quite distorted. You might not be able to understand what anyone in the room is actually saying, but the music coming out of the stereo sounds simply amazing and you can hear every little nuance of each bass note, for example.

I think I might be undertaking a couple of experiments in this area with some friends over the Christmas holidays this year.

Funny you should mention it, but down here we have to celebrate Christmas twice. Once in December and again in June just for all the ex-northern hemi’s who can’t get used to Christmas dinner outside in the sun.
It’s sometimes difficult to enjoy that stodgy, traditional food on a hot day.

And back OT
I may be a monophonic voice, but I like the way the G2 sounds. I liked it when I got it over a year ago and I still like it now. My ears are so used to its character that real analogue just sounds bad to me now.
Rob’s stuff is very interesting and I’m glad it has helped some users who are not as pleased with the G2’s ‘tone’.
IMHO the most significant factor in the G2 character is the person doing the patching. Everyone posting here seems to get their own ‘sound’ from the G2.

Rob’s stuff is very interesting and I’m glad it has helped some users who are not as pleased with the G2’s ‘tone’.

Essentially it is not about 'the sound of the G2' at all, but instead about mixing.

There is basically nothing wrong with the sound of the G2 or any other digital synth or softsynth. But when these instruments are placed in a mix with other instruments, vocals, sounds, or whatever, it must all be tamed into one sonic space.
All sorts of instruments have their peculiarities that create different puzzles to solve in a mix. The issue presented here applies to the class of digital instruments and softsynths, and amongst them is the G2. The good thing is that on the G2 it is so easy to solve issues at the source, if there is the need to. Note that the need is defined by the mix that has to be made and not by the instrument.

Basically a mix is very much like the composition in a renaissance painting. E.g. there are these famous Madonna paintings where the focus of a spectator is guided along a triangle. There is the Madonna with the baby child on her lap and some figure to the left or right. This third figure is looking at the face of the Madonna, the Madonna is looking at the face of the child and the child is looking at the face of this third figure. This forms the mentioned triangle and the eyes of the spectator will follow this triangle. This is often what composition in a painting is about, guiding the focus of the spectator along a simple trajectory marked by 'important' points.

In a mix there are also things that focus the attention. Digitally synthesized sounds are said to easily 'cut through a mix', which can only mean that they tend to attract the focus. So, the tilt filter is in the end not meant to alter the 'tone' of the G2, instead it strives to ultimately have more control on how the focus of the listener can be guided in a mix. Or in other words how to control the 'presence' of an instrument in a mix.

The way to use this tilt filter while mixing is to first use it together with the channel volume fader to get a natural tonal balance at a proper sounding volume, where the focus tends to go to the melody and not to an aspect of the timbre. On e.g. digitally generated pad sounds the natural balance will in general sound more spacious as well, for reasons I explained at length. Then, parametric EQ can be applied to give a certain aspect of the timbre more focus. And this is always in respect to the other sounds in a mix.
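
A tilt filter itself is a simple idea: split the signal at a pivot frequency and trade energy between the two halves, brightening one end of the spectrum while darkening the other. The sketch below is not Rob's G2 patch, just a minimal one-pole illustration; the pivot frequency and the linear gain law are my own assumptions.

```python
import numpy as np

def tilt(x, tilt_amount, pivot_hz=700.0, sr=44100):
    """Minimal one-pole tilt EQ sketch.

    Splits the signal at a pivot with a one-pole lowpass, then boosts
    one band and cuts the other symmetrically. tilt_amount in -1..+1:
    negative darkens (boost lows / cut highs), positive brightens.
    """
    a = np.exp(-2 * np.pi * pivot_hz / sr)   # one-pole coefficient
    low = np.zeros_like(x)
    z = 0.0
    for i, s in enumerate(x):                # y[n] = (1-a)x[n] + a*y[n-1]
        z = (1 - a) * s + a * z
        low[i] = z
    high = x - low                           # complementary high band
    return (1 - tilt_amount) * low + (1 + tilt_amount) * high
```

Something like tilt(x, -0.3) would darken a digital pad slightly, roughly what a TONE knob on such a filter does; the point, as argued above, is to set this together with the channel fader first, before reaching for parametric EQ.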

Mixing used to be a job for specialists, but these days everyone wants to do their own mixing. And they often wonder why they don't manage. I like to compare mixing with painting or sculpting, as there the composition comes first and the technique is just there to refine the composition. (Btw, I'm originally a sculptor.) Composition and technique are both important in their own ways. In these arts a beginning artist often has a natural talent for composition, but lacks the technique. In sound these days it seems to be the opposite way: many musicians have a lot of technique available, but struggle with the composition when it comes to mixing. Here I mean the composition of a mix and not the composition of a melody, so maybe I should name it the 'composition of an arrangement'. And at this 'level' of composing or arranging it is all about guiding the focus of the listener.

Please, correct me if I'm wrong. As I very much like to learn from your experiences.

In a mix there are also things that focus the attention. Digitally synthesized sounds are said to easily 'cut through a mix', which can only mean that they tend to attract the focus. So, the tilt filter is in the end not meant to alter the 'tone' of the G2, instead it strives to ultimately have more control on how the focus of the listener can be guided in a mix. Or in other words how to control the 'presence' of an instrument in a mix.

That's an interesting perspective. I'd like to add that unlike paintings, which are static, compositions (barring a few rare "degenerate" cases¹) tend to change. This would imply the need for some kind of structure that would make such treatments dynamically react to what's happening elsewhere, either manually controlled or preferably based on some algorithm. This, in turn, would lead to a demand for figuring out what instrument should "lead" what other(s) at a specific moment. Remind me to lend you this very fascinating text a friend of mine wrote where he analysed how this same principle works in small jazz and classical ensembles.

I could see a lot of use for a dynamically controlled matrix of sidechaining signals.

at one point in time (years ago), i had access to some of these chemicals in what seemed a pharmaceutical grade....

My personal experiences with these chemicals are very limited, but one experience I like to share, as it is summer holiday anyway.

When 18 years of age I had to go into the army, but first one had to have a medical examination together with a couple of hundred other boys, to check if one was fit enough. Now, at that time the purpose of a Dutch soldier would be to delay the commies for 20 minutes so the infamous red button could be pressed in time, and life expectancy at the front was said to be about 23 minutes. But personally more important, I do like to learn all sorts of things, but not really how to slaughter other people. So, I did not really want to go into the army. The only way to avoid being enlisted was to be rejected at this medical checkup, but I realized that these military doctors would know all the tricks. So, I realized I should do something out of the ordinary, but what?
Around that time on the flower market in Rotterdam there were these two gray-haired and gray-eyed elderly ladies that specialized in cactuses, amongst them the Lophophora williamsii, which I knew from the Aldous Huxley and Castaneda books were peyote cactuses containing mescaline. This gave me an idea and I bought all the peyotes they had, and I remember that, when I paid, the ladies had this little twinkle in their eyes. In the early morning of the day of the checkup I had to take a bus, and I chewed the cactuses on the bus. Awful taste, btw. I do remember that some time later I was sitting in my underpants in a cold waiting room with a lot of other guys in their underpants sitting across from me. Then my thoughts sort of went blank and everything became like a thunderstorm of colours, and I have no recollection of what really happened. Until hours later I came to my senses, finding myself in the bus back home. I do remember that when reality came back it suddenly looked pale and life felt very depressing. Anyway, two weeks later I found out from my physician that apparently I didn't pass the checkup ...because of my feet...
I still have no idea why my feet, as they are absolutely normal. But somehow it makes me a little proud that, although it's a mystery why, I am officially declared unfit to wear the soldier's boots. Heiho to symbolism!

Suddenly I realize the disadvantage of a forum and the advantage of an email list. On an email list it is so much easier to include a little OT and split threads. And the OT things that one writes, but maybe shouldn't, go into oblivion sooner. But hey, what the heck, this little story happened in my life and there is nothing wrong with telling it. After all, it is not as bad as killing by remote control.

I could see a lot of use for a dynamically controlled matrix of sidechaining signals.

Not only that, but using sidechaining to control focus could probably be more demanding than just pushing down the volume. E.g. focus also works with foreground versus background. In a mix foreground is the 'right in yer face' and background is probably sort of an 'into the abyss'.
So, maybe the sidechain matrix should also control chorus and reverb parameters, and equalisation levels according to e.g. Fletcher-Munson curves.
Still, sidechaining is a pretty immediate effect and there is the risk of getting a very nervous result.
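
As a concrete starting point for such a matrix, here is a minimal sidechain ducking sketch: an attack/release envelope follower on one signal scales down the gain of another. All times and amounts are illustrative; the same control envelope could just as well drive a reverb send or an EQ tilt to push a sound into the background instead of merely turning it down, and the slow release is what keeps the result from getting 'nervous'.

```python
import numpy as np

def sidechain_duck(target, trigger, amount=0.7, attack=0.005, release=0.1, sr=44100):
    """Duck `target` by up to `amount` wherever `trigger` is loud.

    A fast-attack, slow-release envelope follower on the trigger is
    normalised to 0..1 and used as a gain-reduction control signal.
    """
    aa = np.exp(-1 / (attack * sr))
    ar = np.exp(-1 / (release * sr))
    env = np.zeros_like(trigger)
    e = 0.0
    for i, s in enumerate(np.abs(trigger)):
        c = aa if s > e else ar             # fast attack, slow release
        e = c * e + (1 - c) * s
        env[i] = e
    env /= max(env.max(), 1e-12)            # normalise for a 0..1 control
    return target * (1 - amount * env)
```

In a full matrix, each cell would be one such follower (or a smarter analysis stage) routing one instrument's envelope to another instrument's 'presence' parameters.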

Not only that, but using sidechaining to control focus could probably be more demanding than just pushing down the volume.

Well, yes, but in that case the demand is less on me. I firmly believe in directing what should be done, then leaving others to actually go do it. Moving, say, 10 parameters, all related to "presence", around continually for the duration of each track is quite a bit of work for me, but much less so for a computer.

Quote:

E.g. focus also works with foreground versus background. In a mix foreground is the 'right in yer face' and background is probably sort of an 'into the abyss'.
So, maybe the sidechain matrix should also control chorus and reverb parameters, and equalisation levels according to e.g. Fletcher-Munson curves.

Yes, and volume and pan and some sort of "collision detection" between instruments. While we are at it, we should delay the flow of information between two instruments by an amount indicated by their distance, and make instruments more inclined to sync to the tempo of others closer to them and, and, and....

Quote:

Still, sidechaining is a pretty immediate effect and there is the risk of getting a very nervous result.

Well, yes, for conventional systems where you map an envelope follower straight to a parameter, but it doesn't *need* to be. Sidechaining only means some manipulation of one sound is affected by another sound; nobody said there couldn't be some analysis structure in between....

Kassen

Suddenly I realize the disadvantage of a forum and the advantage of an email list. On an email list it is so much easier to include a little OT and split threads. And the OT things that one writes, but maybe shouldn't, go into oblivion sooner. But hey, what the heck, this little story happened in my life and there is nothing wrong with telling it. After all, it is not as bad as killing by remote control.

Interesting story. Congrats on getting out.

OT stuff is fascinating. If you could start a new thread with this, where would you start? Deknow brought up the chemicals because they were related to psychoacoustics. You took the opportunity to tell an interesting story. You are right, it is OT, but so what? I'm tempted to share the story of how I avoided being drafted during the Vietnam era.

Still, all this in the Building Block section?

As admin, I'm creating a new topic in the General Discussion and putting a link to it in the Building Block thread...

--Howard

