Notes: Here we have the three middle ear bones again. The arm of the malleus is attached to the eardrum, and the footplate of the stapes is attached to the oval window of the cochlea (inner ear). Their job is to transmit vibrations of the eardrum into the cochlea. But here we have a problem, because the impedance of air and the impedance of liquid are very different. Which one is bigger? The impedance of liquid is much bigger than that of air.

We can think of an example. Say we are underwater in a swimming pool. We cannot hear speech from outside the water well, even when the voice is loud (try it later at the gym). That is because the impedance of liquid is so high that most of the sound is reflected when it hits the water: about 99.9% of the sound power is lost. In other words, only about 0.1% of the power is transmitted.

That loss amounts to about -30 dB in sound level, purely because of the impedance mismatch between air and liquid. Fortunately, our middle ear bones overcome that loss. The process is called impedance matching, because they match the two impedances and make up the loss.
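As a quick check, the -30 dB figure follows directly from the decibel formula for power ratios (a minimal sketch; the 0.1% transmission figure is taken from the text above):

```python
import math

# Only ~0.1% of acoustic power crosses the air-water boundary;
# in decibels that transmission is 10 * log10(power ratio).
transmitted_fraction = 0.001
loss_db = 10 * math.log10(transmitted_fraction)
print(loss_db)  # -30.0, matching the -30 dB figure in the notes
```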

Notes: What the ossicles do is amplify the sound level to overcome the mismatched impedance. This works because of their physical structure.

1) First of all, the eardrum is really big relative to the bones. In particular, the eardrum is large and the stapes footplate is small. As we can see, the area of the eardrum is about twenty times the area of the stapes footplate (eardrum ~60 mm², stapes footplate ~3 mm²).

Using the decibel equation, we can calculate how much gain this boosts.

The area ratio of eardrum to stapes footplate is 60/3 = 20, so 20 log(20) ≈ 26 dB of gain comes from this area ratio alone.
(The same concept applies when hitting a nail with a hammer: we apply force to the head of the nail, but the pressure is greater at the tip. Why? Because when the same force is applied over a smaller area, the pressure increases: p = F/A.)
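The hammer-and-nail analogy can be put in numbers (the force and areas below are illustrative, not from the notes):

```python
# Pressure p = F / A: the same force over a smaller area gives higher pressure.
force_n = 10.0        # applied force in newtons (illustrative)
area_head_m2 = 1e-4   # nail-head area in m^2 (illustrative)
area_tip_m2 = 1e-6    # nail-tip area in m^2 (illustrative)
p_head = force_n / area_head_m2
p_tip = force_n / area_tip_m2
print(p_tip / p_head)  # 100.0: pressure rises by the area ratio
```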

2) The ossicles work like a lever. As we can see in this picture, the arm of the malleus is longer than that of the incus, and this difference in length creates a lever ratio. The lever action gives another increase of about 1.3 times, which corresponds to roughly a 2 dB pressure increase. (What this means is that the stapes is displaced much less than the eardrum: the eardrum is displaced by up to 2 mm, but the stapes by only about 0.1 mm.)

3) Buckling of the eardrum. As we saw before, the eardrum changes shape in a complicated way when sound hits it; each part of the eardrum responds to different frequencies differently. So the eardrum itself can increase the force as it moves. This buckling effect increases pressure by about 6 dB (a factor of 2). Altogether, these three factors provide 26 dB + 2 dB + 6 dB = about 34 dB of gain (or linearly, 20 × 1.3 × 2 = a factor of about 52).
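The three contributions can be checked with a short calculation (a sketch using the figures quoted in the notes):

```python
import math

# Figures from the notes: eardrum ~60 mm^2, stapes footplate ~3 mm^2,
# lever ratio ~1.3, buckling factor ~2.
area_ratio = 60 / 3
lever_ratio = 1.3
buckling = 2.0

gain_area_db = 20 * math.log10(area_ratio)       # ~26 dB
gain_lever_db = 20 * math.log10(lever_ratio)     # ~2.3 dB
gain_buckling_db = 20 * math.log10(buckling)     # ~6 dB
total_db = gain_area_db + gain_lever_db + gain_buckling_db
total_linear = area_ratio * lever_ratio * buckling
print(round(total_db, 1), total_linear)  # ~34.3 dB, factor of 52
```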

Most of the efferent neurons synapse directly with the outer hair cells

Efferent neurons carry information from higher auditory system to cochlea

Afferent neurons carry information from the cochlea to the higher auditory
system

Inner hair cells

Cause the release of neurotransmitter and the initiation of action potentials
in the neurons of the auditory nerve.

Action potential: the 'firing' of a neuron. Propagation is in one direction only, down the length of the axon

Most of the afferent neurons make contact with the inner hair cells

Possibly all information about the input sound is conveyed via the inner
hair cells

How The Cochlea Functions

When sound enters the cochlea, a travelling wave moves along the basilar
membrane

The amplitude gradually rises before reaching a maximum at its
point of resonance (the characteristic frequency), beyond which it collapses
abruptly

Basilar Membrane (BM)

The BM has two structural properties that determine the way it responds
to sound.

BM is narrow and stiff at its base and becomes broader and
more flexible towards the apex

The BM performs a mechanical frequency analysis, separating the incoming sound signal into its frequency components, which are processed at different locations along the length of the cochlea

High frequency sounds are processed at the base; low frequencies at the apex

Bekesy's Theory describes Passive Mechanics

Based on work in 'dead' cochleae

Highly damped - not sharply tuned

Active Undamping occurs in live and healthy cochleae

Like pumping on a swing - adds amplitude

Transduction by Hair Cells

When the basilar membrane moves in response to motion at the stapes,
the entire foundation supporting the hair cells moves, because the basilar
membrane, rods of Corti, reticular lamina and hair cells are all rigidly
connected

These structures move as a unit pivoting up or away from the
modiolus

This sets up a shearing motion of the hair cell stereocilia

'Transduction process': mechanical energy into electrical energy

Depolarisation decreases the length of the OHC

Hyperpolarisation increases the length of the OHC

Implications

Damage: is it OHC or IHC?

DSP hearing aids can provide the gain normally supplied by the OHC

What if significant IHC damage?

Amplification

OHC constitute a cochlear amplifier

When the outer hair cells amplify the response of the basilar membrane,
the stereocilia on the inner hair cells bend more, and the increased
transduction process in the inner hair cells produce a greater response
in the auditory nerve.

Gain from OHC may be 50 dB at low and medium
sound levels

Frequency Tuning Curves Show these Effects

They have a characteristic shape

sharp tip (shows best sensitivity at one freq)

steep high frequency tail

shallow low frequency tail

OHC and Frequency Selectivity

The shape of the tuning curve changes drastically when the sensory hair cells
are damaged. Instead of a sharp tip region, the frequency selectivity
is broadened

Reduced frequency discrimination

Increased susceptibility to noise

Dynamic Range Compression

Normal hearing dynamic range is about 120 dB
The loudest sound has an amplitude 1 million times as great as the quietest
sound we can hear!

Nerves have about a 30 dB range

OHC feedback compresses the auditory signal

Amplification is known to be highly nonlinear
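The 120 dB / 1-million-fold figures above are consistent, as a quick conversion shows (a sketch; the ~30 dB neural range is the figure quoted in the notes):

```python
import math

# A 10^6 amplitude ratio expressed in decibels: 20 * log10(ratio).
range_db = 20 * math.log10(1_000_000)
print(range_db)  # 120.0

# A ~30 dB neural range covers an amplitude ratio of only about 31.6,
# hence the need for compression before the signal reaches the nerve.
nerve_ratio = 10 ** (30 / 20)
print(round(nerve_ratio, 1))  # 31.6
```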

OHC and OAE

The cochlear amplifier generates its own sounds

These sounds can be detected and form the basis for OAE assessment

Implications

CI different to hearing aid user

OHC/IHC bypassed

Noise a different issue

Sound Localisation
Two Ears are better than one

Outer Ear - Auditory Localisation

Auditory space - surrounds an observer and exists wherever there is sound

Sounds are localized in space by using

Azimuth coordinates - position left to right (binaural cues: ITD and
IID)

Elevation coordinates - position up and down (pinna spectral cues
and head movements)

Azimuth, elevation, and distance coordinates for localization.
Two elevation coordinates are shown, one (M) in which the vertical coordinate
is positioned on the person's midline, and the other (S), which is off
to the side.

Auditory Localisation

Unlike vision, where location is laid out across the receptor cells of the retina, auditory location is not encoded at the receptors; the location of a sound must be calculated from cues

The
direct sound carries information about the location of the source relative
to the listener.

Indirect sound informs the listener about the space,
and the relation of the source to that space.

Interaural Time Difference

The principle behind interaural time difference (ITD).

A tone directly in front of the listener reaches the left and the
right ears at the same time (A).

However, when the tone is off to the
side (B), it reaches the listener's right ear before it reaches the left ear.

Schematic illustration of interaural differences

Interaural Time Difference (ITD)

Interaural time delay applies to low frequency localisation, less than
approximately 1500 Hz.

The average distance between the ears is about 20 cm, resulting
in up to a 600 microsecond delay between hearing the incident sound in one
ear and hearing it in the other

Interaural time difference - the difference
between the times at which sounds reach the two ears
When the distance to each ear
is the same, there is no time difference
When the source is to the
side of the observer, the arrival times differ
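A simple way to see where the ~600 microsecond figure comes from is the standard ITD approximation ITD ≈ (d/c)·sin(θ); the speed of sound value below is an assumption not stated in the notes:

```python
import math

EAR_SEPARATION_M = 0.20   # ~20 cm, as in the notes
SPEED_OF_SOUND = 343.0    # m/s in air (assumed value)

def itd_microseconds(azimuth_deg: float) -> float:
    """ITD for a source at the given azimuth
    (0 deg = straight ahead, 90 deg = directly to one side)."""
    return (EAR_SEPARATION_M / SPEED_OF_SOUND
            * math.sin(math.radians(azimuth_deg)) * 1e6)

print(round(itd_microseconds(0)))   # 0: same distance to both ears
print(round(itd_microseconds(90)))  # 583: close to the 600 us figure above
```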

Interaural Intensity Difference

IID is dependent on frequency

If the wavelength is equal to or greater than the width of the head,
the sound will bend or diffract around the head and be heard with almost
equal intensity in the opposite ear
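The ~1.5 kHz boundary between the two cues follows from comparing wavelength (λ = c/f) with head width; the speed of sound is an assumed value:

```python
# Wavelength = c / f; diffraction around the head dominates when the
# wavelength is comparable to or larger than the head width (~0.2 m).
SPEED_OF_SOUND = 343.0  # m/s in air (assumed)
for freq_hz in (500, 1500, 4000):
    wavelength_m = SPEED_OF_SOUND / freq_hz
    print(freq_hz, round(wavelength_m, 3))
# At ~1500 Hz the wavelength (~0.23 m) is about the width of the head,
# which is why IID takes over above roughly that frequency.
```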

Interaural Intensity Difference

However, at higher frequencies the head, as a solid medium, blocks the sound energy, resulting in a sound shadow: a region of effectively zero energy

Sound Localization

Interaural Level Difference

Schematic illustration of interaural differences

Interaural Intensity difference

Head Shadow Effect (Interaural Intensity difference) - Localisation of
high frequencies (above approximately 1.5 kHz) is dependent upon the
head shadow effect.

The head casts a sound shadow, which attenuates
sound by at least 6 dB between the two ears; this can reach 20 dB at
higher frequencies

If a high frequency sound is perceived in one ear
at a significantly higher intensity than in the other, the brain concludes
that the sound originated from the higher-intensity side

Outer Ear
Vertical Localization

Vertical localization - based on reflections from the pinna

Vertical Localisation

Vertical localisation is achieved using pinna echoes.

Sound from below produces a slightly more delayed echo (about 300
microseconds) than sound from above (echo after about 100 microseconds).

Such echoes are involved in the frequency range 3.3 to 10 kHz.

The bumps and ridges of the outer ear apparently produce reflections
of the entering sound. The delays between the direct path and the reflected
path make vertical localization possible (Bear et al 1996)
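Taking the echo delays above at face value, the corresponding extra path lengths travelled by the reflected sound are just delay × speed of sound (the speed of sound value is an assumption):

```python
# Convert pinna-echo delays into extra path length for the reflection.
SPEED_OF_SOUND = 343.0  # m/s in air (assumed)
for delay_us in (100, 300):  # echo delays for sound from above / below
    extra_path_cm = delay_us * 1e-6 * SPEED_OF_SOUND * 100
    print(delay_us, round(extra_path_cm, 1))
```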

Vertical localisation deteriorates markedly with hearing loss

Vertical localisation depends especially on the high frequencies, so high-frequency loss degrades it

Significance of Sound Localisation

Localisation is important in:

Speech perception in background noise

Communication in background noise

Safety

Binaural Squelch

Ability to suppress background noise and attend to a specific auditory
signal.

Fortunately, the auditory nervous system is wired to help in
noisy situations as long as there is functional input from both ears; that
is, the auditory system and brain can combine information from both ears
to form a better central representation than would be available from one
ear alone

The squelch effect takes advantage of the spatial separation of
the signal source and the noise source(s) and the differences in time and
intensity that these create at each ear.

Speech recognition in such noisy environments
is even harder for a person with sensorineural hearing loss, both because
of the inherent distortion introduced by cochlear damage and because of the
loss of the normal cochlear nonlinearities