As you may have realized (with frustration!) by now, we have limited options for evaluating the expressive communication skills of children who are minimally verbal. Enter: the Communication Complexity Scale (CCS), designed to measure just that. Prior papers have described the development of the CCS and determined its validity and reliability, but in this study, we get to see it in action with a peer-mediated intervention.

First, a little bit about the tool. It’s a coding scale—not a standardized assessment—that can be used during observations. Because prelinguistic communication skills often take time to develop with this population, this tool helps us think about all the incremental steps along the way and accounts for the variety of communicative modes the children might use. It’s a 12-point scale following this pattern:

The researchers found that the CCS could measure improvement in overall communication complexity and behavior regulation for preschoolers with autism after a peer-mediated intervention (the same one we reviewed here!).

So far in the research, the CCS has only been used during structured tasks meant to elicit communicative responses (see the supplemental material), such as holding a clear bag with toys where the child can see it, but can’t access it independently. We know it's crucial to observe our students in natural communication opportunities, though, so we'd have to be a little flexible in using the CCS during unstructured observations. The scale could definitely be useful when describing communication behaviors during evaluations or when monitoring progress. Wouldn’t it be much more helpful to say “The child consistently stopped moving (i.e., changed her behavior) in response to the wind-up toy stopping” instead of “The child was not observed to demonstrate joint attention”? Using the CCS, we have new ways of describing those “small” behaviors that really aren’t small at all!

NOTE: This study crosses over our Early Intervention vs. Preschool cut-offs, with kids from 2 to 5 years old. So for those of you who also read the Early Intervention section, we’ll publish this there next month! Just giving you the heads-up so you don’t feel like it’s Groundhog Day :)

Do you serve pre-K or kindergarten-aged kids? Are some/lots/all of them from Hispanic backgrounds and learning Spanish AND English? Mandatory reading right here, friends!

So—a major issue for young, dual-language learners? Appropriate language assessments. We talk about it a lot (plus here, here, here, and here, to name a few). In this new study, the authors compared a handful of assessments to see which could most accurately classify 4- and 5-year-olds (all Mexican–American and dual-language learners) as having typical vs. disordered language.

The single measure with the best diagnostic accuracy was a combination of two subtests of the Bilingual English-Spanish Assessment (BESA)—Morphosyntax and Semantics (the third subtest, Phonology, wasn’t used here). But to get even more accurate? Like, sensitivity of 100% and specificity of about 93%? Add in a story retell task (they used Frog, Where Are You?). Sample both Spanish and English, and take the better MLUw of the two. This BESA + MLU assessment battery outperformed the other options in the mix (the English and Spanish CELF-P2, a composite of the two, a parent interview, and a dynamic vocabulary assessment).
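If you want a sanity check on what those diagnostic accuracy numbers mean, sensitivity and specificity are simple ratios over classification results. Here’s a minimal sketch in Python—the counts below are hypothetical, not the study’s actual data, and the function name is just ours for illustration:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity: proportion of children WITH a language disorder the
    battery correctly flags (true positives / all actually disordered).
    Specificity: proportion of typically developing children it correctly
    clears (true negatives / all actually typical)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Hypothetical example: 15 children with disordered language, all flagged
# (15 true positives, 0 false negatives); 15 typical children, 14 cleared
# (14 true negatives, 1 false positive).
sens, spec = sensitivity_specificity(tp=15, fn=0, tn=14, fp=1)
print(sens)             # 1.0
print(round(spec, 2))   # 0.93
```

In other words: perfect sensitivity means no child with a disorder slips through; ~93% specificity means roughly 1 in 15 typically developing kids would be incorrectly flagged.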

Not familiar with the BESA? It’s a newer test, designed—as the name implies—specifically for children who are bilingual, with different versions (not translated) of subtests in each language. If you give a subtest in both languages, you use the one with the highest score. And before you ask—yes, the test authors believe that monolingual SLPs can administer the BESA, given preparation and a trained assistant.

Now, the researchers here don’t include specific cut scores to work with on these assessments, but you can look at Table 2 in the paper and see the score ranges for the typical vs. disordered language groups. They also note that an MLUw of 4 or less can be a red flag for this group.
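That MLUw red flag is only useful if we compute MLUw consistently: it’s just total words divided by total utterances. A quick illustrative sketch in Python—the mini-sample below is made up, and a real language sample should be far longer:

```python
def mlu_words(utterances):
    """Mean length of utterance in words: total words / total utterances."""
    total_words = sum(len(u.split()) for u in utterances)
    return total_words / len(utterances)

# Made-up three-utterance sample, for illustration only
sample = [
    "the frog jumped out",
    "he looked in the jar",
    "where is he",
]
print(mlu_words(sample))  # 4.0 -- right at the red-flag threshold
```

A child hovering at an MLUw of 4 or below (in their better language) would warrant a closer look, per the authors’ note above.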

The major issue with this study, affecting our ability to generalize what it tells us, is that the sample size was really small—just 30 kids total. So, take these new results on board, but don’t override all that other smart stuff you know about assessing dual-language learners (see our links above for some refreshers if needed). And keep an eye out for more diagnostic studies down the road—you know we’ll point them out when they come!

Aha! This is a fun one. The simple answer is that the iPad has been around for less than a decade (shocking, huh?), and there is very little research on apps in our field (the little we do have is on AAC and aphasia). Nooooo! So you see where this is going: it’s not easy. The best you can do is perhaps: 1) know the research on the effective ingredients of speech–language treatment in the first place, and see if you can identify those within the apps, and 2) know the research on multimedia learning (not from our field; see the article for an overview) and use that to guide your thinking. Then, of course, EBP also requires considering clinical practice and client data…

Challenging as this is, Heyman (2018) has started to pick at the question with a survey and interview study of hundreds of SLPs, asking how they select apps for therapy. The results:

How do SLPs know which apps to consider?

They’re mostly relying on word of mouth and social networks.

Then how do SLPs make purchase decisions?

“The main finding reported was that participants used apps because they were engaging and motivating to children…”

Finally, “Participants emphasized that apps were a tool and used them in the same way as any other tool or toy...”

What do we think of this?

Well, it seems that SLPs’ biggest concern is just getting kids excited about the therapy process. And that makes sense. But, ideally, we need a way to identify which apps will actually give us the features and flexibility to make good progress on speech–language goals. Heyman provides a checklist of features to consider, including things like: Can targets be repeated? Can items be skipped? How much control do you have over the screen (e.g., ability to remove elements)? … But we need a lot more research in this area to know which of these features matter, and when.

In the meantime, a little more digging by SLPs could certainly help! Heyman states, “Interestingly, only 22% of respondents looked at the developer sites in order to obtain information about apps; yet, information regarding the background and research evidence are often provided on the developer site.”

It can be hard to figure out your role in reading instruction, especially if you work in a school. On the one hand, reading is a huge part of the curriculum and is so important for helping students succeed; on the other, there are already so many professionals targeting reading that it can be hard not to step on anyone’s toes.

Lervåg et al. studied the development of reading comprehension (AKA the ultimate goal of all of this reading instruction) over time, and their results show why oral language is an important part of children’s reading outcomes.

The authors followed the same group of students from age 7 to 13, and gave them a boatload of reading and language tests at 6 points over the 5-year study. (These were Norwegian-speaking children, but results are similar to those from other studies of English-speaking children.) The goal was to test the simple view of reading, which says that reading comprehension depends on:

Decoding (reading the words on the page)

Listening comprehension (understanding spoken language)

Their results supported the simple view of reading: decoding and listening comprehension (i.e., grammar, vocabulary, inference, and verbal working memory skills) together explained a whopping 96% of children’s reading comprehension ability. Listening comprehension predicted reading comprehension ability in both older and younger children, while decoding predicted reading comprehension ability only when children struggled with it. Once children’s decoding skills were good enough to read a text, only improvements in listening comprehension mattered for reading comprehension.
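For those who like formulas, the simple view is usually written as a product (following Gough and Tunmer’s classic formulation):

```latex
RC = D \times LC
```

where \(RC\) is reading comprehension, \(D\) is decoding, and \(LC\) is listening (linguistic) comprehension. The multiplication matters: if either skill sits at zero, reading comprehension is zero, no matter how strong the other skill is—which fits the finding above that decoding stops mattering only once it’s good enough.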

Now, does this study show that treating oral language skills improves children’s listening comprehension? No, but other studies do (see the “Summary and Conclusions” section for a review). And remember, you are uniquely qualified to help children improve their listening comprehension skills, which are crucial for reading success—you go, language expert!

So many words in English are spelled irregularly and don’t follow the rules for how they should be sounded out. These are usually taught as “sight words,” but that’s A LOT of memorizing for our clients to do. To give us a hand in teaching irregular words, Dyson et al. tested a treatment based on a theory of reading that says children trying to recall a word’s pronunciation (phonology) can get help from knowing how it is spelled (orthography) or what it means (semantics).

The researchers recruited 5- to 7-year-olds whose teachers reported that they struggled with reading. During 20-minute, twice-weekly, small-group sessions, children listened to a puppet say an irregular word (e.g., mystery, referee, piano) incorrectly and tried to figure out what it should have said. Then, they listened to definitions of the words and completed a writing worksheet so they could get more practice with spelling them (examples in Appendix B).

After just 8 weeks of treatment, children improved significantly over the control group on: (1) accuracy reading the taught words, (2) accuracy reading a list of similar, untaught words, (3) vocabulary knowledge for taught words, AND (4) vocabulary knowledge for untaught words. If your students struggle with reading irregular words, this treatment might be a great way to target multiple skills at once.

Last month, we highlighted a couple articles from ASHA’s Special Interest Group 1 about progress monitoring for language. Even more on that topic to share now! (Heads up: ASHA Journals’ website got a facelift! So don’t get confused when you click out!)

Needing direction for your language therapy with older students? Read this one through for a hypothetical case study of assessment, goal-setting, intervention, and progress monitoring for an adolescent with DLD. Yes, language sampling is involved!

Baylis and Shriberg found that 14 of 17 children (82.4%) with 22q11.2 deletion syndrome (aka DiGeorge syndrome and velocardiofacial syndrome) had comorbid motor speech disorders. Speech motor delay and childhood dysarthria were more common than CAS. These initial prevalence estimates add to a growing body of evidence that helps us better understand the profile of 22q syndrome.

Glover et al. found that young children (preschool through 3rd grade) had more negative attitudes toward stuttering than their parents. By 5th grade, those attitudes had improved and were similar to those of parents.

Hammarström et al. found that an intense treatment (4 sessions per week for 6 weeks) was effective for a 4-year-old, Swedish-speaking child with a severe speech sound disorder. Treatment incorporated multiple approaches—integral stimulation, nonlinear phonology, and a core vocabulary approach. After therapy, the child produced more target words, word shapes, and consonants correctly.

Kraft et al. replicated an earlier study to find that effortful control (an aspect of temperament) was the most important factor predicting stuttering severity in children. They recommend addressing self-regulation as part of the holistic treatment of stuttering.

Lancaster and Camarata set out to explain the heterogeneity of language skills in kids with DLD. At this time, it’s looking like a spectrum model (think autism!) fits best, versus labeling kids by subtypes or chalking up the differences to unique, individual profiles; but lots more data is needed. For now, the evidence suggests we should assess and treat kids with DLD based on level of severity *and* individual needs—which is probably what you’re doing already.

Lane et al. profiled the communication skills of children with Sotos syndrome using a parent-report measure. They found that most of the children had a language impairment (with issues in both structure and pragmatics), with a relative strength in verbal vs. nonverbal communication and a weakness in using context. These children are likely to need support in peer relationships, too.

Sutherland et al. found that a standardized language test (the CELF-4) can be reliably administered via telehealth to children with autism. The specific children they tested were between 9 and 12 years old and mostly mainstreamed.

Ah, the age-old question—what words should we use in therapy for kids with speech sound disorders? There are a number of choices, each with some good arguments in its favor:

High-frequency words

Academic vocabulary words

Nonwords

This study checked out differences in treatment success for each of these three word types for 24 children ages 3 to 7. Therapy for each kid focused on one word-initial complex phoneme (/r/, /l/, “th,” or “ch”), with five target words (high-frequency words, academic vocab, or nonwords, depending on which group the child was assigned to). The article describes the activities within each 50-minute intervention session, and supplemental treatment materials are available on a lab website (woo!). Each child’s progress was compared with their own performance in a baseline condition.

So, the winner? It’s the best possible news for clinicians, really. All the kids improved their phonological skills, with no significant differences among the word types. The authors point out that in reality, you’ll probably want to incorporate multiple types of words into your sessions. Like starting with nonwords for that “clean slate” effect, then moving to academic vocabulary words after a while to help boost those skills. But either way, the initial familiarity of the words likely won’t make or break your therapy.

A second research question looked at treatment intensity. By splitting their subject pool in half (which, keep in mind, meant the number of kids per condition was pretty small), they found a large effect of treatment after 19 sessions (accuracy of target sounds in new words), and a medium effect after 11. I doubt anyone here is shocked—shocked!—that more therapy leads to better outcomes, but the size of the difference was actually pretty surprising. As we know, many kids don’t just make slow and steady progress, but need to get to that point where things start to “click.” It’s good to realize that the “clicking” place might be a little further away than we think.

A: It’s fetal alcohol spectrum disorder (FASD). And while all of us probably learned the hallmark physical features and cognitive/behavioral consequences (Do you know how often “philtrum” comes up on Jeopardy? It’s a lot!), the particulars of the speech impairment haven’t been well studied. Traditionally it’s talked about as a speech delay, but clinical SLPs have found the situation to be more… complicated than that.

Speech delay, disorder, or both? It matters, since you’ll approach treatment differently. But the research connecting those dots and guiding our intervention is missing. This study begins that work by analyzing the speech of a group of boys* with FASD and comparing it with that of a group of typically developing children. The boys with FASD had:

Slightly lower overall intelligibility

More consonant errors and some differences in order of mastery (in Dutch, FYI)

The takeaways for SLPs? Speech in this population seems to be both delayed and disordered. It may be that motor planning and processing deficits are causing many of the speech issues we see. Beyond that, specific characteristics, such as hearing loss, tongue control issues, high arched palates, and phonological impairments (all of which some, but not all, children with FASD will have) have additional effects on speech. Clinicians need to evaluate these underlying differences and difficulties and use that information to guide treatment. And remember: These kids need a lot of repetition and practice to learn and generalize skills.

Unfortunately, there are no easy solutions for this population. They need “long-term dedicated treatment that is tailored to the individual profile under the guidance of SLPs who are trained in working with these children.”

*Why just boys? Evidently no families of girls with FASD were willing to participate. Interesting. So... sex-related differences in the FASD population are still an open question.