New computer system to spot speech disorders in kids

Boston: MIT researchers have developed a new computer system that can screen young children for speech and language disorders as well as provide specific diagnoses.

For children with speech and language disorders, early-childhood intervention can make a great difference in their later academic and social success.

However, many such children (one study estimates 60 per cent) go undiagnosed until kindergarten or even later.

Researchers at the Massachusetts Institute of Technology (MIT) and Massachusetts General Hospital's (MGH) Institute of Health Professions in the US hope to change that with the new computer system.

The system analyses audio recordings of children's performances on a standardised storytelling test, in which they are presented with a series of images and an accompanying narrative, and then asked to retell the story in their own words.

“The really exciting idea here is to be able to do screening in a fully automated way using very simplistic tools,” said John Guttag, professor at MIT.

“You could imagine the storytelling task being totally done with a tablet or a phone. I think this opens up the possibility of low-cost screening for large numbers of children,” he said.

The researchers conducted a set of experiments with their system, which yielded promising results.

To build the system, the researchers used machine learning, in which a computer searches large sets of training data for patterns that correspond to particular classifications, in this case diagnoses of speech and language disorders.
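The pattern-search idea can be sketched in miniature: labelled feature vectors from training recordings are summarised, and a new child's features are assigned the closest-matching label. The feature values, labels, and nearest-centroid method below are illustrative assumptions for clarity, not the researchers' actual model.

```python
# Minimal sketch of classification from acoustic features.
# Features, labels, and the nearest-centroid method are
# illustrative assumptions, not the study's actual model.

def fit_centroids(samples):
    """samples: dict mapping label -> list of feature vectors.
    Returns the mean (centroid) vector per label."""
    centroids = {}
    for label, vecs in samples.items():
        n = len(vecs)
        centroids[label] = [sum(v[i] for v in vecs) / n
                            for i in range(len(vecs[0]))]
    return centroids

def classify(centroids, features):
    """Return the label whose centroid is closest in squared distance."""
    def dist(label):
        return sum((a - b) ** 2
                   for a, b in zip(centroids[label], features))
    return min(centroids, key=dist)

# Hypothetical training data: [number of long pauses, mean pause length]
training = {
    "typical":           [[2.0, 0.2], [3.0, 0.3]],
    "language_disorder": [[8.0, 1.1], [9.0, 1.3]],
}
model = fit_centroids(training)
label = classify(model, [7.5, 1.0])   # nearest centroid wins
```

In a real system the classifier would be trained on many recordings and validated against clinicians' diagnoses; the centroid approach here simply makes the "patterns that correspond to classifications" idea concrete.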

“Better diagnostic tools are needed to help clinicians with their assessments,” said Jordan Green, from MGH.

“Assessing children’s speech is particularly challenging because of high levels of variation even among typically developing children. You get five clinicians in the room and you might get five different answers,” Green said.

Unlike speech impediments that result from anatomical characteristics such as cleft palates, speech disorders and language disorders both have neurological bases. However, they affect different neural pathways, Green said.

Green, along with Tiffany Hogan, a researcher at MGH, had hypothesised that pauses in children's speech, as they struggled to either find a word or string together the motor controls required to produce it, were a source of useful diagnostic data.

They identified a set of 13 acoustic features of children’s speech that their machine-learning system could search, seeking patterns that correlated with particular diagnoses.

These included the number of short and long pauses, the average length of the pauses, the variability of their length, and similar statistics on uninterrupted utterances.
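Pause statistics of this kind can be computed directly from word-level timings. The sketch below derives a few of the named features (short and long pause counts, average pause length, variability) from hypothetical word start/end times; the one-second threshold separating short from long pauses is an illustrative assumption, not a value from the study.

```python
import statistics

def pause_features(word_times, long_pause=1.0):
    """word_times: list of (start, end) timings in seconds for each word.
    Gaps between consecutive words are treated as pauses; the
    long_pause threshold (1.0 s here) is an illustrative assumption."""
    pauses = [nxt_start - end
              for (_, end), (nxt_start, _) in zip(word_times, word_times[1:])
              if nxt_start > end]
    n_short = sum(1 for p in pauses if p < long_pause)
    return {
        "n_short_pauses": n_short,
        "n_long_pauses": len(pauses) - n_short,
        "mean_pause": statistics.mean(pauses) if pauses else 0.0,
        "pause_sd": statistics.stdev(pauses) if len(pauses) > 1 else 0.0,
    }

# Hypothetical timings for four words in a retold story
timings = [(0.0, 0.4), (0.6, 1.0), (2.5, 2.9), (3.0, 3.5)]
features = pause_features(timings)
```

Analogous statistics over uninterrupted utterances (runs of words with no pause between them) would complete the feature set described above.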
