NeuroNotes: New progress after talking to the experts

A quick recap on our last post

Our big question: How can we tackle Specific Language Impairment (SLI) among young individuals (infants, toddlers) using a musical approach?

Previous solution:

1. (Control group and SLI patients) Language tasks to understand which linguistic behaviors they have trouble with, and which parts of the brain are not fully functioning.

2. (Control group and SLI patients) Expose them to music. Find out how each type of music activates the brains of each group, and identify which type of music can activate the part of the brain that is not activated or not fully functioning.

3. We want to focus on the difference in how each group responds to different music types, so that we can exclude music that has essentially the same impact regardless of whether the subject is in the control group or is an SLI patient.

Some feedback from Dr. Emberson and Dr. Aslin:

Feedback 1 (regarding our experiment 1):

“One possibility here is to go right to fNIRS and have them all listen to language and see if you can identify differences in how the brains respond across the groups. Looking at a passive listening task might be more analogous to the passive music listening task that you are suggesting. So this approach is not to ask what part of the brain is malfunctioning per see but to look at the neural differences in language processing across the groups.”

Response: Previously, we decided to have the subjects do language tasks, which is more like an “active language task”. Now, we instead want to make our initial experiment a “passive language task” by having the subjects listen to some language pieces and examining how their brains are activated. In this way, the task in our first experiment will be more analogous to our second experiment, where the subjects listen to music pieces. This will allow us to see how, at the neural level, control and SLI individuals process language differently, as opposed to trying to target one or a few specific areas of malfunction in the SLI individuals. This is important because, as we will discuss below, it is unlikely that there will be one area of the brain that has deficits in all SLI individuals.

Feedback 2 (regarding our experiment 2):

“Remember here that you need a contrast. Music listening compared to what? Silence? Other sounds that are not musical? If you just use music compared to silence you will find a large swath of the brain will be activated which will increase overlap but maybe not meaningfully. You should also use the same control condition for each type of music so that you always have the same comparison. ”

Response: This piece of feedback inspired us to include more “contrast” when designing our experiments:

Having a control group versus SLI patients for each experiment.

For both experiment 1 and experiment 2, we will add a silence condition. For experiment 1 specifically, we can add conditions where subjects listen to meaningless sounds, or to words without tones. For experiment 2 specifically, we can add conditions where subjects listen to non-musical sounds, basic musical sounds, music played by a single instrument, and more complex music varying along different dimensions.
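To make the contrast structure concrete, here is a minimal sketch of the condition design as a small Python config. The condition names are our own illustrative placeholders, not a finalized protocol; the point is that every condition is compared against the same shared baseline, as the feedback recommends.

```python
# Illustrative sketch of the contrast design. Condition names are
# placeholders, not a finalized protocol. Every condition is compared
# against the SAME baseline, so each comparison is interpretable.

BASELINE = "silence"

CONDITIONS = {
    "experiment_1": [
        "spoken language passage",   # main passive language condition
        "meaningless sounds",        # control: sound without meaning
        "words without tones",       # control: words stripped of prosody
    ],
    "experiment_2": [
        "non-musical sounds",
        "basic musical sounds",
        "single-instrument music",
        "complex music",
    ],
}

def contrasts():
    """List every (experiment, condition, baseline) triple,
    always using the one shared baseline."""
    return [
        (experiment, condition, BASELINE)
        for experiment, conditions in CONDITIONS.items()
        for condition in conditions
    ]

for experiment, condition, baseline in contrasts():
    print(f"{experiment}: {condition} vs. {baseline}")
```

Keeping the baseline in one place guarantees we never accidentally compare one music type against silence and another against a different control.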

Feedback 3 (regarding our experiment 3):

"MANY parts of the brain are activated by different aspects of speech stimuli. So, you are unlikely to find just ONE area of the brain that differs between SLI kids and controls. Perhaps the best approach is to focus on ONE level of language processing and ONE key area of the brain that supports that processing to see if SLI kids have a deficit in behavior and brain activation."

Response: Based on this piece of feedback, we decided to focus on one level of language processing and one key area of the brain, guided by the results from both experiments. Along with this feedback, both Dr. Aslin and Dr. Emberson mentioned that we should reconsider how we are going to use machine learning. Dr. Emberson mentioned that the first two experiments might be sufficient for us to determine which parts of the brain we should focus on. However, Dr. Aslin also mentioned that many parts of the brain might be activated in our experiments. So now, we have decided to reserve our machine learning model solely for designing the personalized solution, but to wait until we finish the first two experiments before deciding whether we can do so.

Feedback 4:

“why do you think music is related to language? There is lots of evidence that music (or language prosody) activates different parts of the brain compared to language (especially grammar processing). Are you thinking that listening to music will change the brain of SLI kids to make them more "normal"?”

Response: This points to the biggest risk of our project: after the second experiment, we may not see any music activating the specific areas identified in the first experiment. But even this would be a contribution to the field, since we would then know that a musical approach to SLI, though previously proposed as promising by some researchers, may in fact not be feasible.

Feedback 5:

“Be sure to say how you will look at how the solution is effective. Are you going to go back to the original neural comparison in the language task? That would be a good one but then you will need to have a control group because you need to show improvement beyond mere exposure. For example, you might find that language processing looks better over time but you are also exposing them to the same language task over and over. So you need to compare to a group that gets all the same tests but some other type of intervention (e.g., picking the type of music that overlaps the least).”

Response: This is an important piece of feedback. Our goal is a long-term solution for SLI individuals, such that their language deficits remain improved or disappear entirely. Therefore, we need to test both the short- and long-term efficacy of our solution. Dr. Aslin noted that it may be problematic to continually expose the individuals to the same language task/audio clip, because this might allow them to learn the task/clip and skew the results toward a representation of memory function rather than language processing. Therefore, we could create different passive listening tasks that test the same level of language processing, to avoid this risk. In addition, we will need to create a variety of experimental and control groups. These will include groups that get all the same language tests and all other components, but are then given the type of music that least overlaps with the deficit area. Other groups will test the amount of music exposure: for example, one group could be exposed daily for 1 month, another for 6 months, and so on.

Summary of the Feedback

Overall, the feedback of Dr. Aslin and Dr. Emberson was extremely helpful and informative. With their feedback and some additional research, we have made a few important changes to our solution.

Passive listening first, instead of an active language task

Adjust controls to increase contrast, confirming that differences between SLI individuals and controls are in fact related to our experiment and goals

Reserve the machine learning model for the solution (if we in fact get results that allow us to do so)

Our Biggest Risk

As suggested by Dr. Aslin, there is a possibility that, after conducting all the experiments, we might not see music playing a clear role in activating any of the brain areas that we identified as relevant to SLI.

Although this would be a serious risk, indicating that the solution we are seeking might not exist at all, we also see value in the potential failure. Many previous papers on SLI suggested that, in theory, a musical approach might be a very good solution for SLI, but no one has really seen how it would work in practice. Thus, if our experiments show that the musical approach does not work in general, that would also be a contribution to the field.

Revised solution:

After hearing all of the feedback from our two experts, we decided to make a few changes. One of the biggest was to change our original idea of an active, participation-based language task to something passive and listening-based. This will allow us to more directly compare this task to our second task, where kids are exposed to music. With the two activities set up similarly, other factors that could contribute to brain activation are removed. The two experts we reached out to were extremely helpful in making this change.

We also want to have a control group for each of these tests. In our listening stage, we will test different types of music with each of these groups to find a type of music that affects SLI patients in dramatically different ways than the control group.

While we have made these changes, we will still use fNIRS to record our data, and keep many other elements the same. With our fNIRS data, we will use machine learning, but we have adapted its role based on the expert feedback. At each of several checkpoints (1 week, 1 month, 3 months, and 6 months), we will feed our new data into the machine learning model to improve it. This way, we will build connections over time between brain patterns and both linguistic and musical behavior, and we can also derive personalized solutions for each child.

To conclude, we kept a similar solution, but made a few key changes with the help of our experts that we think will make a big difference in our results. We especially like how our machine learning model will now improve over time, and how we can be more confident that the results of our experiment are due to the music, rather than a false result caused by a difference in how the two experiments are conducted. We are confident now that we have a strong experiment thanks to our experts.