This report explains that, beginning in week 16 of pregnancy, the fetus responds to music delivered intravaginally with specific movements of the mouth and tongue.

Our initial hypothesis is that music elicits a response manifested as vocalisation movements, because it activates the brain circuits involved in language and communication. In other words, learning begins in utero.

This study also shows that the only way for the fetus to hear the music in the same way we do is by relaying it intravaginally. If we play music for the fetus from outside the body, through the mother’s abdomen, the fetus does not perceive it in the same way.

What important innovations does this study offer?

It shows, for the first time, that a fetus can hear starting at week 16 of the pregnancy.

Babypod can open new lines of research on fetal hearing and deafness.

It gives the pregnant mother a way to check on fetal well-being.

It stimulates the primary brain circuits involved in communication. On hearing music, the fetus responds with movements similar to vocalisation, which is the step prior to singing and speaking.

With the device we have developed for the scientific study, we can transmit sound effectively to our babies in the womb, and begin to stimulate them even before they are born.

We presented and explained the conclusions of the study at the Massachusetts Institute of Technology (MIT). Dr. Marisa López-Teijón has been awarded the Ig Nobel Prize for Medicine, the first one in Obstetrics in the 27-year history of the award.

What is heard in the womb?

The fetus receives sounds from inside the mother’s body, such as her heartbeat, breathing and intestinal activity. It also perceives sounds the mother makes, such as when she is speaking or walking in high heels, in addition to other external sounds.

The fetus is well protected from noise. Because it lives in a soundproofed environment, the sounds it perceives are distorted, as has been shown in research done on sheep using intrauterine microphones. According to this research, the majority of sounds are perceived as whispers (around 30 decibels), while the mother’s voice during normal conversation (60 decibels) reaches the fetus at barely 24 decibels.
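Because decibels are logarithmic, a drop from 60 dB to 24 dB is far larger than the numbers suggest. A minimal back-of-the-envelope sketch (the conversion formula is standard acoustics; the decibel figures are the ones cited above):

```python
def intensity_ratio(db_source: float, db_perceived: float) -> float:
    """How many times weaker a sound is after a given decibel drop.

    Decibels are logarithmic: ratio = 10 ** (difference / 10).
    """
    return 10 ** ((db_source - db_perceived) / 10)

# Figures from the intrauterine-microphone research cited above:
# normal conversation ~60 dB outside, ~24 dB perceived in utero.
ratio = intensity_ratio(60, 24)
print(f"A 36 dB drop means the sound intensity falls by a factor of {ratio:,.0f}")
```

That 36-decibel loss corresponds to the voice arriving at roughly one four-thousandth of its original intensity, which is why it is perceived as little more than a whisper.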

And, as most sounds are repetitive, the fetus becomes accustomed to them and does not react. They do not prevent the fetus from sleeping.

We could say that the sounds heard in utero are like the background rustles heard in a forest.

From what we have discovered, we know that the mother’s voice and other outside sounds are not perceived in the same way as we hear them. The soft tissue of the mother’s abdominal wall and the inside of her body absorbs the sound waves, decreasing their intensity and distorting the sound. It is as if the fetus were surrounded by pillows, or as when we hear someone talking in the next room but cannot make out what they are saying.

According to a study on gestating sheep published in 1996, words spoken externally and recorded from inside the womb were only about 50% intelligible (Griffiths et al., Journal of the Acoustical Society of America).

The fetus is thought to perceive primarily low tones, because higher frequencies are attenuated even further.

For the fetus to hear the same as we do, sound must be delivered intravaginally

The vagina is a closed space, thus preventing sound dispersion. The layers of soft tissue separating the fetus from the origin of the sound are fewer: there are only the vaginal and uterine walls.

By placing a speaker inside the vagina, the fetus can perceive the sound at nearly the same intensity with which it is being emitted.

Music from the abdomen: part of the sound is reflected back out, and another part is absorbed and distorted by the soft tissue that comprises the abdominal and uterine walls. The fetus only perceives a portion of the transmitted sound, and it is of a lower intensity and clarity than at the point of emission.

Intravaginal music: the sound is emitted in a closed space, so there is no sound dispersion and the layers of soft tissue separating the fetus from the origin of the sound are fewer. We have only the vaginal and uterine walls, and we therefore avoid absorption from the abdominal wall.

We know that the inner ear is fully developed at week 16 of gestation, but until now medical literature could only confirm a functioning auditory system from week 26. This research study shows for the first time that the fetus begins hearing at week 16.

The scientific study

The study focused on transmitting a greater intensity of sound to the fetus. We therefore designed a new, specific device that would emit music intravaginally.

The study was carried out on pregnant patients from our centre between the 14th and 39th week of pregnancy. Throughout the study, the research team used ultrasound to observe the reaction of the fetus upon hearing music emitted both abdominally and intravaginally. The results were also compared with vibrations emitted intravaginally with no accompanying music.

The music used in the published study was Johann Sebastian Bach’s Partita in A Minor for Flute Alone – BWV 1013.

Ultrasound scans performed prior to initiating stimuli showed that approximately 45% of fetuses made spontaneous head and limb movements, while 30% moved their mouth or tongue, and 10% stuck out their tongue. These are typical movements made by a fetus when awake.

Before the scan, the pregnant patient inserted the intravaginal device designed for the study, which emits an average sound intensity of 54 decibels (equivalent to a conversation in low voices or background music).

With intravaginally emitted music, 87% of fetuses reacted with non-specific head and limb movements, accompanied by specific mouth and tongue movements that stopped when the music stopped. Moreover, nearly 50% of fetuses reacted with a striking movement, opening their jaw very wide and sticking out their tongue as far as it would go.
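Purely as arithmetic on the percentages reported above, the contrast between spontaneous rates and rates during intravaginal music can be tabulated. A minimal sketch (the category labels are informal paraphrases, not terms from the published paper):

```python
# Percentages reported in the text: spontaneous rates before any stimulus
# vs rates during intravaginal music. Labels are informal paraphrases.
baseline = {"mouth/tongue movements": 30, "tongue protrusion": 10}
with_music = {"mouth/tongue movements": 87, "tongue protrusion": 50}

for behaviour in baseline:
    delta = with_music[behaviour] - baseline[behaviour]
    print(f"{behaviour}: {baseline[behaviour]}% -> "
          f"{with_music[behaviour]}% (+{delta} points)")
```

Both behaviours roughly triple to quintuple in frequency under the stimulus, which is the contrast the study treats as a response rather than chance movement.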

To study whether the fetal reaction was due to the vibration caused by sound waves (i.e. mechanical, non-musical vibration), the pregnant patient inserted a vaginal vibrator such as those used for sexual stimulation. The research team performed the ultrasound, emitting sound vibrations at an average intensity of 68 decibels (equivalent to a loud conversation). No changes were observed in fetal facial expression during this part of the study.

Vibrator used during the study to measure fetal response to non-musical vibration

The response varied in each scan, as did the time the fetus took to react. The type, number and intensity of movements likewise differed from one subject to another, as well as the amount of time it took for the fetus to discontinue the movement once the stimulus had finished, confirming that the movement was not simply a reflex.

Fetal response begins at week 16, with statistically significant variations throughout the pregnancy. The further along the pregnancy, the more striking the facial movements.

Response is different for each fetus, with different response levels each time the music is played.

Our hypothesis is that music elicits a response which manifests as vocalisation movements, as music stimulates brain circuits responsible for language and communication.

Once the inner ear is fully formed, an auditory stimulus with rhythm and melody, received through the cochlea, would activate the primary centres of the brain stem in the area that controls social behaviour and elicits vocalisation.

A group of cells called the inferior colliculus detects sound. If these cells perceive the sound as harmonic and associate it with music, they become stimulated and activate the nerves responsible for moving the mouth, the jaw and the tongue for vocalising (the phase prior to language).

Currently, and in collaboration with the Head of Neuroanatomy at the Hospital Clínic in Barcelona and the Head of Radiology at Hospital San Raffaele in Milan, our team of researchers is using magnetic resonance imaging with pregnant patients to study what areas of the fetal brain are activated through music transmitted intravaginally.

We know that babies begin to spontaneously vocalise in response to sounds they hear as they begin to explore their vocal register: this is the phase prior to speaking. Dissonant noises and sounds do not activate these neurons; speaking or singing to a child stimulates vocal development, while noises do not.

Response circuits for fetal stimulation through intravaginally transmitted music

As we are looking at a response and not simply a reflex, the reaction of the fetus depends on multiple factors, which is why it is different each time. It varies depending on the neuronal activity of the brain stem at that particular moment, which means that the response could depend on the sleep phase the fetus is in or blood sugar levels. For example, when we sing to a baby, the response differs depending on whether the child is hungry, thirsty or sleepy.

If a fetus responds to music emitted intravaginally at 54 decibels, yet does not respond to music emitted abdominally at a much greater intensity, we must conclude that the latter simply cannot be heard. The intensity of outside sounds is reduced so much that they are scarcely audible to the fetus.

It is significant that no response to the intravaginal vibrator is observed, even though it emits sound at an intensity of 68 decibels. Musical sound is a series of vibrations with a regular frequency, whereas noise consists of vibrations with an irregular frequency. Noise is a discordant organisation of sound, while music is a harmonic organisation of sound.
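The regular-versus-irregular distinction can be illustrated numerically: a pure tone repeats itself almost perfectly one period later, while random noise does not. A minimal sketch, illustrative only and not part of the study (the sample rate, frequency and signal lengths are arbitrary choices):

```python
import math
import random

def self_similarity(signal, lag):
    """Normalised correlation of a signal with itself shifted by `lag` samples."""
    n = len(signal) - lag
    num = sum(signal[i] * signal[i + lag] for i in range(n))
    den = math.sqrt(sum(x * x for x in signal[:n]) *
                    sum(x * x for x in signal[lag:lag + n]))
    return num / den

rate, freq = 8000, 200  # samples per second, tone frequency in Hz
tone = [math.sin(2 * math.pi * freq * i / rate) for i in range(2000)]
random.seed(0)
noise = [random.uniform(-1, 1) for _ in range(2000)]

period = rate // freq  # one period of the tone = 40 samples
print(f"tone  self-similarity at one period: {self_similarity(tone, period):.2f}")
print(f"noise self-similarity at one period: {self_similarity(noise, period):.2f}")
```

The tone’s self-similarity is essentially 1 (perfectly regular vibration), while the noise’s is near 0 (no repeating structure), which is the numerical counterpart of the harmonic-versus-discordant distinction drawn above.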

In later studies, our research team has observed that the fetus does not respond to beeps emitted intravaginally at 54 decibels; the fetus appears to respond only to stimuli that invite a communication-related reply. This response can be elicited by music or language, but not by noise.

When you speak or sing to your baby, she will likewise try to communicate with you by trying to vocalise. This does not happen with a simple noise, as different brain circuits are being stimulated in each case.

A study by Dr. Perani (Perani et al., PNAS) used magnetic resonance imaging to analyse the brain areas activated by music, observing both cortical and subcortical processing in the primary and higher-order auditory cortices, particularly in the right hemisphere. Dissonant sounds did not activate these areas; no such brain processing was observed.

Fetal response is thus elicited not by sound vibrations or noise, but rather by music.

Applications of the study

A fetal response to music can show that the baby is not deaf: deaf people can perceive vibrations, but not music. If the fetus does not respond during the ultrasound, this does not mean the baby cannot hear; the reaction depends on neuronal activity at that precise moment, and a reaction may be observed in a subsequent ultrasound. Until now, there has been no method for diagnosing deafness before birth: the diagnosis could not be made until the child was one or two years old. The earlier the diagnosis, the sooner treatment options can be addressed.

Greater effectiveness and speed of ultrasounds. For the sonographer, this device represents a significant advance because it elicits fetal movement, making it possible for the technician to see all structures more readily and thus shortening examination time.

Stimulus for the neurological development of the baby. It is generally accepted that any type of sensory stimulation is useful, and the sooner, the better. Music stimulates language learning.

The fact that the mother can share relaxing moments with her baby can help reduce maternal stress. This is particularly advisable for mothers suffering from a high degree of anxiety. It is also recommended for specific moments in which the mother notices decreased fetal activity, as music emitted through the device elicits a reaction from the fetus. Finally, listening to music and sharing it is a source of pleasure for the mother, the father and anyone else who may be listening.

It opens significant pre- and postnatal areas of research on many levels.

For this scientific study, we designed a prototype of an intravaginal music transmitter, which we have continued to perfect. So that every fetus can benefit from this discovery and clearly hear voices or music, we have licensed our idea and technology to Babypod.

Babypod is a small, easy-to-use intravaginal speaker which poses no risk for the pregnancy. It is inserted like a tampon, and music is transmitted with the use of a mobile phone.

Babypod photo

We recommend beginning use at week 16 of the pregnancy, and continuing until the baby is born, provided the mother does not present with any contraindications for use: cervical dilation, high-risk pregnancies due to uterine malformation, possibility of premature birth, premature rupture of membranes, placenta praevia or active vaginal or urinary tract infections.

The use of Babypod® is recommended in 10- to 20-minute sessions, once or twice a day. No upper limit has been specified, but this frequency is suggested to avoid disrupting fetal sleep cycles.

We have performed studies to analyse fetal response to the spoken voice of the mother and others. No differences were found in fetal response to the voice of the mother as compared to other female or male voices. Nor was any reaction detected when the fetus heard the voice emitted from outside the womb or when the mother was speaking, regardless of the intensity.

However, any voice transmitted intravaginally elicited fetal response: approximately 75% of fetuses responded with movements of the mouth or tongue, but protrusion of the tongue did not occur.

We were particularly interested by fetal reaction to the voice of Mickey Mouse: 17% of fetuses stuck out their tongue upon hearing it. The explanation for this is that the Disney character speaks in falsetto (a higher, more musical tone), which is the tone we often use when speaking to babies.

As has already been discussed, the fetus can barely hear the mother’s voice when unamplified. It is perceived as a whisper and does not wake the fetus. The fetus can perceive and remember variations in the rhythm and intonation of the mother’s voice, but all sounds arrive muffled, with altered tone and timbre.

We imagined that it would be easier for the baby to recognise voices after birth if the fetus were able to perceive clearly audible voices during gestation, intravaginally.

With the intravaginal device we are using in our research, anyone can speak to the fetus.

It is easier to understand fetal response if we think of how a baby reacts. The differences between a fetus and a baby are the soundproofed environment of the uterus and the degree of brain development; the auditory mechanisms and the primary brain stem circuits are the same.

When we want to communicate with a baby, we use a higher, more musical tone. This stimulates communication by eliciting vocalisation movements, which are the step prior to language use. Music modulates attention and memory mechanisms. We know that it is easier to learn multiplication tables with music, or remember them through the words of a song.

Low, monotonous tones are not stimulating, nor are intermittent or occasional noises. We imagine that the fetus does not respond to the vibration of non-musical sound waves transmitted intravaginally, just as babies do not respond to the noise of a dishwasher.

Music is the most ancestral form of communication between humans: the first language was music, even before spoken language. It is the greatest stimulus we have for communication.

Our brains have specific circuits for music: some elicit pleasure, others stimulate sociability, and others, memory. Through our studies we have discovered specific circuits for vocalisation in the primary brain.

Myelination of the nucleus accumbens pleasure centre has not yet occurred at week 16. We do not know at what point during pregnancy the nucleus accumbens begins to function, but it is likely after week 26. We will obtain more data through the research study we are carrying out with pregnant women, in which we perform magnetic resonance imaging of the brain while music is transmitted intravaginally.

We are aware of and recognise the importance of talking to babies from the moment they are born to promote neurological stimulation. Now we have the amazing opportunity to do this much sooner, which is a huge advance.