Speech comprehension depends on the successful operation of a network of brain regions. Processing of degraded speech is associated with different patterns of brain activity in comparison with that of high-quality speech. In this exploratory study, we examined whether processing degraded auditory input in daily life because of hearing impairment is associated with differences in brain volume. We compared T1-weighted structural magnetic resonance images of 17 hearing-impaired (HI) adults with those of 17 normal-hearing (NH) controls using a voxel-based morphometry analysis. HI adults were individually matched with NH adults based on age and educational level. Gray and white matter brain volumes were compared between the groups by region-of-interest analyses in structures associated with speech processing, and by whole-brain analyses. The results suggest increased gray matter volume in the right angular gyrus and decreased white matter volume in the left fusiform gyrus in HI listeners as compared with NH ones. In the HI group, there was a significant correlation between hearing acuity and cluster volume of the gray matter cluster in the right angular gyrus. This correlation supports the link between partial hearing loss and altered brain volume. The alterations in volume may reflect the operation of compensatory mechanisms that are related to decoding meaning from degraded auditory input.

The ability to recognize masked speech, commonly measured with a speech reception threshold (SRT) test, is associated with cognitive processing abilities. Two cognitive factors frequently assessed in speech recognition research are the capacity of working memory (WM), measured by means of a reading span (Rspan) or listening span (Lspan) test, and the ability to read masked text (linguistic closure), measured by the text reception threshold (TRT). The current article provides a review of recent hearing research that examined the relationship of TRT and WM span to SRTs in various maskers. Furthermore, modality differences in WM capacity assessed with the Rspan compared to the Lspan test were examined and related to speech recognition abilities in an experimental study with young adults with normal hearing (NH). Span scores were strongly associated with each other, but were higher in the auditory modality. The results of the reviewed studies suggest that TRT and WM span are related to each other, but differ in their relationships with SRT performance. In NH adults of middle age or older, both TRT and Rspan were associated with SRTs in speech maskers, whereas TRT better predicted speech recognition in fluctuating nonspeech maskers. The associations with SRTs in steady-state noise were inconclusive for both measures. WM span was positively related to benefit from contextual information in speech recognition, but better TRTs related to less interference from unrelated cues. Data for individuals with impaired hearing are limited, but larger WM span seems to give a general advantage in various listening situations.

Purpose: This research aimed to increase the analogy between text reception threshold (TRT) and speech reception threshold (SRT) and to examine the TRT's value in estimating cognitive abilities important for speech comprehension in noise.

Method: We administered five TRT versions, SRT tests in stationary (SRTSTAT) and modulated (SRTMOD) noise, and two cognitive tests: a reading span (RSpan) test for working memory capacity, and a letter-digit-substitution test for information processing speed. Fifty-five normal-hearing adults (18–78 years, mean = 44) participated. We examined mutual associations of the tests and their predictive value for the SRTs with correlation and linear regression analyses.

Results: SRTs and TRTs were well associated, even when controlling for age. Correlations for the SRTSTAT were generally lower than for the SRTMOD. The cognitive tests were correlated with the SRTs only when age was not controlled for. Age and the TRTs were the only significant predictors of SRTMOD. SRTSTAT was predicted by level of education and some of the TRT versions.

Conclusions: TRTs and SRTs are robustly associated, nearly independent of age. The association between SRTs and RSpan is largely age-dependent. The TRT test and the RSpan test measure different non-auditory components of linguistic processing relevant for speech perception in noise.

Purpose The purpose of this study was to investigate the effects of 2nd language proficiency and linguistic uncertainty on performance and listening effort in mixed language contexts.

Method Thirteen native speakers of Dutch with varying degrees of fluency in English listened to and repeated sentences produced in both Dutch and English and presented in the presence of single-talker competing speech in both Dutch and English. Target and masker language combinations were presented in both blocked and mixed (unpredictable) conditions. In the blocked condition, in each block of trials the target–masker language combination remained constant, and the listeners were informed of both prior to beginning the block. In the mixed condition, target and masker language varied randomly from trial to trial. All listeners participated in all conditions. Performance was assessed in terms of speech reception thresholds, whereas listening effort was quantified in terms of pupil dilation.

Results Performance (speech reception thresholds) and listening effort (pupil dilation) were both affected by 2nd language proficiency (English test score) and target and masker language: Performance was better in blocked as compared to mixed conditions, with Dutch as compared to English targets, and with English as compared to Dutch maskers. English proficiency was correlated with listening performance. Listeners also exhibited greater peak pupil dilation in mixed as compared to blocked conditions for trials with Dutch maskers, whereas pupil dilation during preparation for speaking was higher for English targets as compared to Dutch ones in almost all conditions.

Conclusions Both a listener's proficiency in a 2nd language and uncertainty about the target language on a given trial play a significant role in how bilingual listeners attend to speech in the presence of competing speech in different languages, but the precise effects also depend on which language is serving as target and which as masker.

The purpose of the current study was to investigate how well normal-hearing adults recalled Swedish (native) and English (non-native) fictional stories masked by speech in Swedish and English. Each story was 15 min long and divided into three parts of 5 min each. One part was masked by Swedish speech, one by English speech and one was presented unmasked as a baseline. Audibility was rated immediately after listening to each fragment. Episodic long-term memory was assessed using 24 multiple-choice questions (4AFC). Each set of 8 questions corresponded to 5 min of recorded story and included 4 simple and 4 complex questions. Participants also performed a complex span test of working memory capacity and proficiency tests in Swedish and English. The main result was that the stories in quiet were significantly better recalled than the stories masked by Swedish. Although the stimuli were correctly identified at the perceptual level, challenging listening

The aim of the present study was to address how 43 normal-hearing (NH) and hearing-impaired (HI) listeners subjectively experienced the disturbance generated by four masker conditions (i.e., stationary noise, fluctuating noise, Swedish two-talker babble and English two-talker babble) while listening to speech in two target languages, i.e., Swedish (native) or English (non-native). The participants were asked to evaluate their noise-disturbance experience on a continuous scale from 0 to 10 immediately after having performed each listening condition. The data demonstrated a three-way interaction effect between target language, masker condition, and group (HI versus NH). The HI listeners experienced the Swedish-babble masker as significantly more disturbing for the native target language (Swedish) than for the non-native language (English). Additionally, this masker was significantly more disturbing than each of the other masker types during the perception of Swedish target speech. The NH listeners, on the other hand, indicated that the Swedish speech-masker was more disturbing than the stationary and the fluctuating noise-maskers for the perception of English target speech. The NH listeners perceived more disturbance from the speech maskers than the noise maskers. The HI listeners did not perceive the speech maskers as generally more disturbing than the noise maskers. However, they had particular difficulty with the perception of native speech masked by native babble, a common condition in daily-life listening conditions. These results suggest that the characteristics of the different maskers applied in the current study seem to affect the perceived disturbance differently in HI and NH listeners. There was no general difference in the perceived disturbance across conditions between the HI listeners and the NH listeners.

This study evaluated how hearing-impaired listeners perceive native (Swedish) and nonnative (English) speech in the presence of noise and speech maskers. Speech reception thresholds were measured for four different masker types for each target language. The maskers consisted of stationary and fluctuating noise and two-talker babble in Swedish and English. Twenty-three hearing-impaired native Swedish listeners participated, aged between 28 and 65 years. The participants also performed cognitive tests of working memory capacity in Swedish and English, nonverbal reasoning, and an English proficiency test. Results indicated that the speech maskers were more interfering than the noise maskers in both target languages. The larger need for phonetic and semantic cues in a nonnative language makes a stationary masker relatively more challenging than a fluctuating-noise masker. Better hearing acuity (pure tone average) was associated with better perception of the target speech in Swedish, and better English proficiency was associated with better speech perception in English. Larger working memory and better pure tone averages were related to better perception of speech masked with fluctuating noise in the nonnative language. This suggests that both are relevant in highly taxing conditions. A large variance in performance between the listeners was observed, especially for speech perception in the nonnative language.

Identifying speech in adverse listening conditions requires both native and non-native listeners to cope with decreased intelligibility. The current study examined in four speech reception threshold (SRT) conditions how speech maskers (two-talker babble Swedish, two-talker babble English) and noise maskers (stationary and fluctuating noise) interfered with target speech in Swedish (native language) and English (non-native language). Listening disturbance for each condition was rated on a continuous scale. The participants also performed standardized tests in English proficiency, nonverbal reasoning and working memory capacity; the latter in both Swedish and English. Normal-hearing (n = 23) and hearing-impaired (n = 23) native Swedish listeners participated, aged between 28 and 65 years.

The SRTs were better for native as compared to non-native speech. In both groups, speech perception performance was lower for the speech than the noise maskers, especially for non-native target speech. The level of English proficiency is important for non-native speech intelligibility in noise. A three-way interaction effect on the subjective rating scores indicated that the hearing loss affects the subjective disturbance of Swedish babble in native and non-native language perception.

Conclusion: Speech perception and subjective disturbance are influenced by a complex interaction between masker types and individual abilities.

Identifying speech in noisy conditions requires both native and non-native listeners to cope with decreased intelligibility and thereby an increased cognitive load. The current study examined in four speech reception threshold (SRT) conditions how energetic (stationary, fluctuating) and informational (two-talker babble Swedish, two-talker babble English) maskers interfered with target speech in Swedish (native language) and English (non-native language). The participants also performed standardized tests in English proficiency, nonverbal reasoning and working memory capacity; the latter in both Swedish and English. Twenty-three normal-hearing native Swedish listeners participated, 13 females and 10 males, aged between 28 and 64 years. The main result was that the target language, masker type and English proficiency all affected speech perception. The SRTs were better when the target language was Swedish. The informational maskers interfered more with perception than the energetic maskers, specifically in the non-native language. High English proficiency was beneficial in three out of four conditions when the target language was English. The findings suggest that English proficiency is essential for automaticity in perceiving this non-native language.

The present study examined to what extent proficiency in a non-native language influences speech perception in noise. We explored how English proficiency affected native (Swedish) and non-native (English) speech perception in four speech reception threshold (SRT) conditions, including two energetic (stationary, fluctuating noise) and two informational (two-talker babble Swedish, two-talker babble English) maskers. Twenty-three normal-hearing native Swedish listeners participated, aged between 28 and 64 years. The participants also performed standardized tests in English proficiency, non-verbal reasoning and working memory capacity. Our approach, with its focus on proficiency and the assessment of external as well as internal, listener-related factors, allowed us to examine which variables explained intra- and interindividual differences in native and non-native speech perception performance. The main result was that for the non-native target, the level of English proficiency was a decisive factor for speech intelligibility in noise. High English proficiency improved performance in all four conditions when the target language was English. The informational maskers interfered more with perception than the energetic maskers, specifically for the non-native target. The study also confirmed that the SRTs were better when the target language was native compared to non-native.

Recent studies have shown that prior knowledge about where, when, and who is going to talk improves speech intelligibility. How related attentional processes affect cognitive processing load has not been investigated yet. In the current study, three experiments investigated how the pupil dilation response is affected by prior knowledge of target speech location, target speech onset, and who is going to talk. A total of 56 young adults with normal hearing participated. They had to reproduce a target sentence presented to one ear while ignoring a distracting sentence simultaneously presented to the other ear. The two sentences were independently masked by fluctuating noise. Target location (left or right ear), speech onset, and talker variability were manipulated in separate experiments by keeping these features either fixed during an entire block or randomized over trials. Pupil responses were recorded during listening and performance was scored after recall. The results showed an improvement in performance when the location of the target speech was fixed instead of randomized. Additionally, location uncertainty increased the pupil dilation response, which suggests that prior knowledge of location reduces cognitive load. Interestingly, the observed pupil responses for each condition were consistent with subjective reports of listening effort. We conclude that communicating in a dynamic environment like a cocktail party (where participants in competing conversations move unpredictably) requires substantial listening effort because of the demands placed on attentional processes. (C) 2015 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).

Dividing attention over two streams of speech strongly decreases performance compared to focusing on only one. How divided attention affects cognitive processing load as indexed with pupillometry during speech recognition has so far not been investigated. In 12 young adults the pupil response was recorded while they focused on either one or both of two sentences that were presented dichotically and masked by fluctuating noise across a range of signal-to-noise ratios. In line with previous studies, performance decreased when processing two target sentences instead of one. Additionally, dividing attention to process two sentences caused larger pupil dilation and later peak pupil latency than processing only one. This suggests an effect of attention on cognitive processing load (pupil dilation) during speech processing in noise.

Objectives: Recent research has demonstrated that pupil dilation, a measure of mental effort (cognitive processing load), is sensitive to differences in speech intelligibility. The present study extends this outcome by examining the effects of masker type and age on the speech reception threshold (SRT) and mental effort.

Design: In young and middle-aged adults, pupil dilation was measured while they performed an SRT task, in which spoken sentences were presented in stationary noise, fluctuating noise, or together with a single-talker masker. The masker levels were adjusted to achieve 50% or 84% sentence intelligibility.

Results: The results show better SRTs for fluctuating noise and a single-talker masker compared with stationary noise, which replicates results of previous studies. The peak pupil dilation, reflecting mental effort, was larger in the single-interfering-speaker condition compared with the other masker conditions. Remarkably, in contrast to the thresholds, no differences in peak dilation were observed between fluctuating noise and stationary noise. This effect was independent of the intelligibility level and age.

Conclusions: To maintain similar intelligibility levels, participants needed more mental effort for speech perception in the presence of a single-talker masker than with the two other types of maskers. This suggests an additive interfering effect of speech information from the single-talker masker. The dissociation between these performance and mental effort measures underlines the importance of including measurements of pupil dilation as an independent index of mental effort during speech processing in different types of noisy environments and at different intelligibility levels.

A recent pupillometry study on adults with normal hearing indicates that the pupil response during speech perception (cognitive processing load) is strongly affected by the type of speech masker. The current study extends these results by recording the pupil response in 32 participants with hearing impairment (mean age 59 yr) while they were listening to sentences masked by fluctuating noise or a single talker. Efforts were made to improve audibility of all sounds by means of spectral shaping. Additionally, participants performed tests measuring verbal working memory capacity, inhibition of interfering information in working memory, and linguistic closure. The results showed worse speech reception thresholds for speech masked by single-talker speech compared to fluctuating noise. In line with previous results for participants with normal hearing, the pupil response was larger when listening to speech masked by a single talker compared to fluctuating noise. Regression analysis revealed that larger working memory capacity and better inhibition of interfering information related to better speech reception thresholds, but these variables did not account for inter-individual differences in the pupil response. In conclusion, people with hearing impairment show more cognitive load during speech processing when there is interfering speech compared to fluctuating noise. (C) 2014 Acoustical Society of America.

It is often assumed that the benefit of hearing aids is not primarily reflected in better speech performance, but in less effortful listening in the aided than in the unaided condition. Before assessing such a hearing-aid benefit, the present study examined how processing load while listening to masked speech relates to inter-individual differences in cognitive abilities relevant for language processing. Pupil dilation was measured in thirty-two normal-hearing participants while they listened to sentences masked by fluctuating noise or interfering speech at either 50% or 84% intelligibility. Additionally, working memory capacity, inhibition of irrelevant information, and written text reception were tested. Pupil responses were larger during interfering speech as compared to fluctuating noise. This effect was independent of intelligibility level. Regression analysis revealed that high working memory capacity, better inhibition, and better text reception were related to better speech reception thresholds. Apart from a positive relation to speech recognition, better inhibition and better text reception were also positively related to larger pupil dilation in the single-talker masker conditions. We conclude that better cognitive abilities not only relate to better speech perception, but also partly explain higher processing load in complex listening conditions.

Pupillometry is one method that has been used to measure processing load expended during speech understanding. Notably, speech perception (in noise) tasks can evoke a pupil response. It is not known if there is concurrent activation of the sympathetic nervous system as indexed by salivary cortisol and chromogranin A (CgA) and whether such activation differs between normally hearing (NH) and hard-of-hearing (HH) adults. Ten NH adults and 10 adults with mild-to-moderate hearing loss (mean age 52 years) participated. Two speech perception tests were administered in random order: one in quiet targeting 100% correct performance and one in noise targeting 50% correct performance. Pupil responses and salivary samples for cortisol and CgA analyses were collected four times: before testing, after the two speech perception tests, and at the end of the session. Participants rated their perceived accuracy, effort, and motivation. Effects were examined using repeated-measures analyses of variance. Correlations between outcomes were calculated. HH listeners had smaller peak pupil dilations (PPDs) than NH listeners in the speech-in-noise condition only. No group or condition effects were observed for the cortisol data, but HH listeners tended to have higher cortisol levels across conditions. CgA levels were larger at the pretesting time than at the three other test times. Hearing impairment did not affect CgA. Self-rated motivation correlated most often with cortisol or PPD values. The three physiological indicators of cognitive load and stress (PPD, cortisol, and CgA) are not equally affected by speech testing or hearing impairment. Each of them seems to capture a different dimension of sympathetic nervous system activity.

Recent studies have shown that activating the noise reduction scheme in hearing aids results in a smaller peak pupil dilation (PPD), indicating reduced listening effort, at 50% and 95% correct sentence recognition with a 4-talker masker. The objective of this study was to measure the effect of the noise reduction scheme (on or off) on PPD and sentence recognition across a wide range of signal-to-noise ratios (SNRs) from +16 dB to -12 dB and two masker types (4-talker and stationary noise). Relatively low PPDs were observed at very low (-12 dB) and very high (+16 dB to +8 dB) SNRs, presumably due to giving up and easy listening, respectively. The maximum PPD was observed with SNRs at approximately 50% correct sentence recognition. Sentence recognition with both masker types was significantly improved by the noise reduction scheme, which corresponds to a shift of approximately 5 dB in the performance-versus-SNR function toward a lower SNR. This intelligibility effect was accompanied by a corresponding effect on the PPD, shifting the peak by approximately 4 dB toward a lower SNR. In addition, with the 4-talker masker, when the noise reduction scheme was active, the PPD was smaller overall than that when the scheme was inactive. We conclude that with the 4-talker masker, noise reduction scheme processing provides a listening effort benefit in addition to any effect associated with improved intelligibility. Thus, the effect of the noise reduction scheme on listening effort incorporates more than can be explained by intelligibility alone, emphasizing the potential importance of measuring listening effort in addition to traditional speech reception measures. (C) 2018 Elsevier B.V. All rights reserved.

Objectives: To undertake a systematic review of available evidence on the effect of hearing impairment and hearing aid amplification on listening effort. Two research questions were addressed: Q1) does hearing impairment affect listening effort? and Q2) can hearing aid amplification affect listening effort during speech comprehension? Design: English language articles were identified through systematic searches in PubMed, EMBASE, Cinahl, the Cochrane Library, and PsycINFO from inception to August 2014. References of eligible studies were checked. The Population, Intervention, Control, Outcomes, and Study design strategy was used to create inclusion criteria for relevance. It was not feasible to apply a meta-analysis of the results from comparable studies. For the articles identified as relevant, a quality rating, based on the 2011 Grading of Recommendations Assessment, Development, and Evaluation Working Group guidelines, was carried out to judge the reliability and confidence of the estimated effects. Results: The primary search produced 7017 unique hits using the keywords: hearing aids OR hearing impairment AND listening effort OR perceptual effort OR ease of listening. Of these, 41 articles fulfilled the Population, Intervention, Control, Outcomes, and Study design selection criteria of: experimental work on hearing impairment OR hearing aid technologies AND listening effort OR fatigue during speech perception. The methods applied in those articles were categorized into subjective, behavioral, and physiological assessment of listening effort. For each study, the statistical analysis addressing research question Q1 and/or Q2 was extracted. In seven articles more than one measure of listening effort was provided. Evidence relating to Q1 was provided by 21 articles that reported 41 relevant findings. Evidence relating to Q2 was provided by 27 articles that reported 56 relevant findings.
The quality of evidence on both research questions (Q1 and Q2) was very low, according to the Grading of Recommendations Assessment, Development, and Evaluation Working Group guidelines. We tested the statistical evidence across studies with nonparametric tests. The testing revealed only one consistent effect across studies, namely that listening effort was higher for hearing-impaired listeners compared with normal-hearing listeners (Q1) as measured by electroencephalographic measures. For all other studies, the evidence across studies failed to reveal consistent effects on listening effort. Conclusion: In summary, we could only identify scientific evidence from physiological measurement methods, suggesting that hearing impairment increases listening effort during speech perception (Q1). There was no scientific finding across studies indicating that hearing aid amplification decreases listening effort (Q2). In general, there were large differences in the study population, the control groups and conditions, and the outcome measures applied between the studies included in this review. The results of this review indicate that published listening effort studies lack consistency, lack standardization across studies, and have insufficient statistical power. The findings underline the need for a common conceptual framework for listening effort to address the current shortcomings.

Hearing impairment negatively affects speech perception and may increase listening effort, especially under adverse conditions such as in the presence of background noise. Previous research showed that hearing-aid rehabilitation can improve speech perception performance. However, it is not clear whether it influences listening effort during speech perception. The aim of this systematic review is to provide an overview of available evidence of the effect of hearing-aid rehabilitation on listening effort. English language articles were identified through systematic searches in PubMed, EMBASE, Cinahl, the Cochrane Library, and PsycINFO, and through reference checking, from inception to August 2014. The primary search produced 12210 unique hits using the keywords: hearing aids OR hearing impairment AND listening effort OR perceptual effort OR ease of listening. Three researchers independently determined eligibility of the articles. In total, about 45 articles fulfilled the search and selection criteria of: experimental work on hearing aid technologies AND listening effort OR fatigue during speech perception.

Most of the about 45 eligible studies (about 70%) measured perceived effort using subjective scales or questionnaires. Behavioral measures of listening effort mainly included dual-task paradigms. Finally, physiological measures, such as those provided by pupillometry, electroencephalography and functional magnetic resonance imaging, objectively estimated listening effort. Some studies found that hearing-aid rehabilitation was associated with significant reductions of listening effort, while others failed to do so or even reported an increase of listening effort associated with hearing-aid rehabilitation. This review summarizes the available evidence on the effects of hearing-aid rehabilitation on listening effort.

Previous research has reported effects of masker type and signal-to-noise ratio (SNR) on listening effort, as indicated by the peak pupil dilation (PPD) relative to baseline during speech recognition. At about 50% correct sentence recognition performance, increasing SNRs generally result in declining PPDs, indicating reduced effort. However, the decline in PPD over SNRs has been observed to be less pronounced for hearing-impaired (HI) than for normal-hearing (NH) listeners. The presence of a competing talker during speech recognition generally resulted in larger PPDs than the presence of a fluctuating or stationary background noise. The aim of the present study was to examine the interplay between hearing status, a broad range of SNRs corresponding to sentence recognition performance varying from 0 to 100% correct, and different masker types (stationary noise and a single-talker masker) on the PPD during speech perception. Twenty-five HI and 32 age-matched NH participants listened to sentences across a broad range of SNRs, masked with speech from a single talker (-25 dB to +15 dB SNR) or with stationary noise (-12 dB to +16 dB SNR). Correct sentence recognition scores and pupil responses were recorded during stimulus presentation. With the stationary masker, NH listeners showed maximum PPD across a relatively narrow range of low SNRs, while HI listeners showed relatively large PPDs across a wide range of ecological SNRs. With the single-talker masker, maximum PPD was observed in the mid-range of SNRs around 50% correct sentence recognition performance, while smaller PPDs were observed at lower and higher SNRs. Mixed-model ANOVAs revealed significant interactions between hearing status and SNR on the PPD for both masker types. Our data show a different pattern of PPDs across SNRs between groups, which indicates that listening, and the allocation of effort during listening in daily-life environments, may differ between NH and HI listeners.
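Several of the abstracts in this collection quantify listening effort as the peak pupil dilation (PPD) relative to a pre-stimulus baseline. A minimal sketch of that computation, with illustrative window indices and a synthetic trace (none of it taken from the studies themselves):

```python
import numpy as np

def peak_pupil_dilation(trace, baseline_window, response_window):
    """Peak pupil dilation (PPD): maximum pupil diameter in the response
    window minus the mean diameter in the pre-stimulus baseline window.
    Windows are (start, stop) sample indices into a 1-D trace."""
    baseline = np.mean(trace[baseline_window[0]:baseline_window[1]])
    response = trace[response_window[0]:response_window[1]]
    return float(np.max(response) - baseline)

# Synthetic trace: a flat 4.0 mm baseline followed by a dilation
# that peaks roughly 0.35 mm above baseline.
t = np.linspace(0, np.pi, 50)
trace = np.concatenate([np.full(20, 4.0), 4.0 + 0.35 * np.sin(t)])
ppd = peak_pupil_dilation(trace, baseline_window=(0, 20), response_window=(20, 70))
# ppd is approximately 0.35
```

In real pupillometry pipelines the trace would first be cleaned (blink interpolation, smoothing) and the baseline taken shortly before stimulus onset; those steps are omitted here for brevity.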

Objective: The aims of the current n200 study were to assess the structural relations between three classes of test variables (i.e. HEARING, COGNITION and aided speech-in-noise OUTCOMES) and to describe the theoretical implications of these relations for the Ease of Language Understanding (ELU) model. Study sample: Participants were 200 hard-of-hearing hearing-aid users, with a mean age of 60.8 years. Forty-three percent were females and the mean hearing threshold in the better ear was 37.4 dB HL. Design: LEVEL 1 factor analyses extracted one factor per test and/or cognitive function based on a priori conceptualizations. The more abstract LEVEL 2 factor analyses were performed separately for the three classes of test variables. Results: The HEARING test variables resulted in two LEVEL 2 factors, which we labelled SENSITIVITY and TEMPORAL FINE STRUCTURE; the COGNITIVE variables in one COGNITION factor only; and the OUTCOMES in two factors, NO CONTEXT and CONTEXT. COGNITION predicted the NO CONTEXT factor more strongly than the CONTEXT outcome factor. TEMPORAL FINE STRUCTURE and SENSITIVITY were associated with COGNITION, and all three contributed significantly and independently, especially to the NO CONTEXT outcome scores (R² = 0.40). Conclusions: All LEVEL 2 factors are important theoretically as well as for clinical assessment.

Working memory is important for online language processing in a dialogue. We use it to store relevant information, to inhibit or ignore irrelevant information, and to attend to conversation selectively. Working memory helps us keep track of a dialogue while taking turns and following the gist. This paper examines the Ease of Language Understanding model (i.e., the ELU model, Rönnberg, 2003; Rönnberg et al., 2008) in light of new behavioral and neural findings concerning the role of working memory capacity (WMC) in sound and speech processing. The new ELU model is a meaning prediction system that depends on phonological and semantic interactions in rapid implicit and slower explicit processing mechanisms that both depend on working memory, albeit in different ways. New predictions and clinical implications are outlined.

Working memory is important for online language processing during conversation. We use it to maintain relevant information, to inhibit or ignore irrelevant information, and to attend to conversation selectively. Working memory helps us to keep track of and actively participate in conversation, including taking turns and following the gist. This paper examines the Ease of Language Understanding model (i.e., the ELU model, Rönnberg, 2003; Rönnberg et al., 2008) in light of new behavioral and neural findings concerning the role of working memory capacity (WMC) in uni-modal and bimodal language processing. The new ELU model is a meaning prediction system that depends on phonological and semantic interactions in rapid implicit and slower explicit processing mechanisms that both depend on WMC, albeit in different ways. It is based on findings that address the relationship between WMC and (a) early attention processes in listening to speech, (b) signal processing in hearing aids and its effects on short-term memory, (c) inhibition of speech maskers and its effect on episodic long-term memory, (d) the effects of hearing impairment on episodic and semantic long-term memory, and finally, (e) listening effort. New predictions and clinical implications are outlined. Comparisons with other WMC and speech perception models are made.

A working memory based model for Ease of Language Understanding (ELU) has been developed (Rönnberg, 2003; Rönnberg et al., 2008). It predicts that speech understanding in adverse conditions, for example in the presence of high levels of background noise, phonological distortion or misleading semantic context, is dependent on explicit processing resources such as working memory capacity. This presentation will outline the details of this prediction by addressing (1) how matching vs. mismatching semantic cues affect speech understanding at different SNRs, behaviorally and neurally, (2) how working memory capacity interacts with signal processing in hearing aids under “high degradation” (i.e. modulated noise, fast compression) and “low degradation” (i.e. steady state noise, slow compression) conditions, and (3) how different memory systems are selectively affected by hearing impairment.

Perceptual load and cognitive load can be separately manipulated and dissociated in their effects on speech understanding in noise. The Ease of Language Understanding model assumes a theoretical position where perceptual task characteristics interact with the individual's implicit capacities to extract the phonological elements of speech. Phonological precision and speed of lexical access are important determinants for listening in adverse conditions. If there are mismatches between the phonological elements perceived and phonological representations in long-term memory, explicit working memory (WM)-related capacities will be continually invoked to reconstruct and infer the contents of the ongoing discourse. Whether this induces a high cognitive load or not will in turn depend on the individual's storage and processing capacities in WM. Data suggest that modulated noise maskers may act, like speech maskers, as triggers for an explicit, WM-based mode of processing. Individuals with high WM capacity benefit more than low-WM-capacity individuals from fast amplitude compression at low or negative input speech-to-noise ratios. The general conclusion is that there is an overarching interaction between the focal purpose of processing in the primary listening task and the extent to which a secondary, distracting task taps into these processes.

Many older adults with hearing impairment continue to have substantial communication difficulties after being fitted with hearing aids, and many do not choose to wear hearing aids. Two group communication education programs aimed at such older people are described. The 'Keep on Talking' program has a health promotion focus, and is aimed at maintaining communication for older adults living in the community. An experimental group (n = 120) attended the program, and a control group (n = 130) received a communication assessment but no intervention. Significant improvements were found in the experimental participants in terms of knowledge about communication changes with age and about strategies to maintain communication skills. At the follow-up evaluation at 1 year, 45% of the experimental group, compared to 10% of the control group, had acted to improve their communication skills. The 'Active Communication Education' program focuses on the development of problem-solving strategies to improve communication in everyday life situations. Preliminary outcomes have been assessed on a small scale (n = 14) to date. It is concluded that communication programs represent an important adjunct to, or supplement for, the traditional approach that focuses on hearing aid fitting.

Objective: The aim of the current study was to assess aided speech-in-noise outcomes and relate those measures to auditory sensitivity and processing, different types of cognitive processing abilities, and signal processing in hearing aids.

Material and method: Participants were 200 hearing-aid wearers, with a mean age of 60.8 years, 43% female, and average hearing thresholds in the better ear of 37.4 dB HL. Tests of auditory function were hearing thresholds, DPOAEs, tests of fine structure processing, IHC dead regions, spectro-temporal modulation, and speech recognition in quiet (PB words). Tests of cognitive processing were tests of phonological skills, working memory, executive functions and inference-making abilities, and general cognitive tests (e.g., tests of cognitive decline and IQ). The outcome test variables were the Hagerman sentences at 50 and 80% speech recognition levels, using two different noises (stationary speech-weighted noise and 4-talker babble) and three types of signal processing (linear gain, fast-acting compression, and linear gain plus a non-ideal binary mask). Another sentence test included typical and atypical sentences with contextual cues, tested both audio-visually and in an auditory-only mode. Moreover, the HINT and SSQ were administered.

Analysis: Factor analyses were performed separately for the auditory, cognitive, and outcome tests.

Results: The auditory tests resulted in two factors labeled SENSITIVITY and TEMPORAL FINE STRUCTURE, the cognitive tests in one factor (COGNITION), and the outcome tests in two factors, termed NO CONTEXT and CONTEXT, reflecting the level of context in the different outcome tests. When age was partialled out, COGNITION was moderately correlated with the TEMPORAL FINE STRUCTURE and NO CONTEXT factors but only weakly correlated with the CONTEXT factor. SENSITIVITY correlated weakly with TEMPORAL FINE STRUCTURE and CONTEXT, and moderately with NO CONTEXT, while TEMPORAL FINE STRUCTURE showed a weak correlation with CONTEXT and a moderate correlation with NO CONTEXT. CONTEXT and NO CONTEXT had a moderate correlation. Moreover, the overall results for the Hagerman sentences showed a 0.9 dB worse SNR with fast-acting compression compared with linear gain, and a 5.5 dB better SNR with linear gain plus noise reduction compared with linear gain alone.

Conclusions: For hearing aid wearers, the ability to recognize speech in noise is associated with both sensory and cognitive processing abilities when the speech materials have low internal context. These associations are less prominent when the speech material has contextual cues.

Recently, the measurement of the pupil dilation response has been applied in many studies to assess listening effort. Meanwhile, the mechanisms underlying this response are still largely unknown. We present the results of a method that separates the influence of the parasympathetic and sympathetic branches of the autonomic nervous system on the pupil response during speech perception. This is achieved by changing the background illumination level. In darkness, the influence of the parasympathetic nervous system on the pupil response is minimal, whereas in light there is an additional parasympathetic component. Nineteen hearing-impaired and 27 age-matched normal-hearing listeners performed speech reception threshold tests targeting a 50% correct performance level while pupil responses were recorded. The target speech was masked with a competing talker. The test was conducted twice, once in a dark and once in a light condition. The Need for Recovery and Checklist Individual Strength questionnaires were acquired as indices of daily-life fatigue. In the dark condition, the peak pupil dilation (PPD) did not differ between the two groups, but in the light condition, the normal-hearing group showed a larger PPD than the hearing-impaired group. Listeners with better hearing acuity showed larger differences in dilation between dark and light. These results indicate a larger effect of parasympathetic inhibition on the pupil dilation response of listeners with better hearing acuity, and relatively high parasympathetic activity in those with worse hearing. Previously observed differences in PPD between normal-hearing and hearing-impaired listeners are probably not solely because of differences in listening effort.

Objective: People with hearing impairment are likely to experience higher levels of fatigue because of effortful listening in daily communication. This hearing-related fatigue might not only constrain their work performance but also result in withdrawal from major social roles. Therefore, it is important to understand the relationships between fatigue, listening effort, and hearing impairment by examining the evidence from both subjective and objective measurements. The aim of the present study was to investigate these relationships by assessing subjectively measured daily-life fatigue (self-report questionnaires) and objectively measured listening effort (pupillometry) in both normally hearing and hearing-impaired participants. Design: Twenty-seven normally hearing participants and 19 age-matched participants with hearing impairment were included in this study. Two self-report fatigue questionnaires (Need for Recovery and Checklist Individual Strength) were given to the participants before the test session to evaluate subjectively measured daily fatigue. Participants were asked to perform a speech reception threshold test with a single-talker masker targeting a 50% correct response criterion. The pupil diameter was recorded during speech processing, and we used peak pupil dilation (PPD) as the main outcome measure of the pupillometry. Results: No correlation was found between subjectively measured fatigue and hearing acuity, nor was a group difference found between the normally hearing and the hearing-impaired participants on the fatigue scores. A significant negative correlation was found between self-reported fatigue and PPD. A similar correlation was also found between the Speech Intelligibility Index required for 50% correct and PPD. Multiple regression analysis showed that factors representing "hearing acuity" and "self-reported fatigue" had equal and independent associations with the PPD during the speech-in-noise test.
Less fatigue and better hearing acuity were associated with a larger pupil dilation. Conclusions: To the best of our knowledge, this is the first study to investigate the relationship between a subjective measure of daily-life fatigue and an objective measure of pupil dilation, as an indicator of listening effort. These findings help to provide an empirical link between pupil responses, as observed in the laboratory, and daily-life fatigue.

Context: Although the pupil light reflex has been widely used as a clinical diagnostic tool for autonomic nervous system dysfunction, no systematic review is available that summarizes the evidence that the pupil light reflex is a sensitive method to detect parasympathetic dysfunction. Meanwhile, the relationship between parasympathetic functioning and hearing impairment is relatively unknown. Objectives: To 1) review the evidence for the pupil light reflex being a sensitive method to evaluate parasympathetic dysfunction, 2) review the evidence relating hearing impairment and parasympathetic activity, and 3) seek evidence of possible connections between hearing impairment and the pupil light reflex. Methods: Literature searches were performed in five electronic databases. All selected articles were categorized into three sections: pupil light reflex and parasympathetic dysfunction, hearing impairment and parasympathetic activity, and pupil light reflex and hearing impairment. Results: Thirty-eight articles were included in this review. Among them, 36 articles addressed the pupil light reflex and parasympathetic dysfunction. We summarized these data according to different types of parasympathetic-related diseases. Most of the studies showed a difference on at least one pupil light reflex parameter between patients and healthy controls. Two articles discussed the relationship between hearing impairment and parasympathetic activity. Both studies reported reduced parasympathetic activity in the hearing-impaired groups. The searches identified no results for pupil light reflex and hearing impairment. Discussion and Conclusions: As the first systematic review of the evidence, our findings suggest that the pupil light reflex is a sensitive tool to assess the presence of parasympathetic dysfunction. Maximum constriction velocity and relative constriction amplitude appear to be the most sensitive parameters.
There are only two studies investigating the relationship between parasympathetic activity and hearing impairment, hence further research is needed. The pupil light reflex could be a candidate measurement tool to achieve this goal.

Objectives: The pupil light reflex (PLR) has been widely used as a method for evaluating parasympathetic activity. The first aim of the present study was to develop a PLR measurement using a computer-screen set-up and compare its results with the PLR generated by a more conventional set-up using a light-emitting diode (LED). The parasympathetic nervous system, which is known to control the rest-and-digest response of the human body, is considered to be associated with daily-life fatigue. However, only a few studies have attempted to test the relationship between self-reported daily fatigue and physiological measurement of the parasympathetic nervous system. Therefore, the second aim of this study was to investigate the relationship between daily-life fatigue, assessed using the Need for Recovery scale, and parasympathetic activity, as indicated by the PLR parameters. Design: A pilot study was conducted first to develop a PLR measurement set-up using a computer screen. PLRs evoked by light stimuli with different characteristics were recorded to confirm the influence of light intensity, flash duration, and color on the PLRs evoked by the system. In the subsequent experimental study, we recorded the PLR of 25 adult participants to light flashes generated by the screen set-up as well as by a conventional LED set-up. PLR parameters relating to parasympathetic and sympathetic activity were calculated from the pupil responses. We tested the split-half reliability across two consecutive blocks of trials, and the relationships between the parameters of PLRs evoked by the two set-ups. Participants rated their need for recovery prior to the PLR recordings. Results: PLR parameters acquired in the screen and LED set-ups showed good reliability for amplitude-related parameters. The PLRs evoked by both set-ups were consistent, but showed systematic differences in absolute values of all parameters.
Additionally, a higher need for recovery was associated with faster and larger constriction of the PLR. Conclusions: This study assessed the PLR generated by a computer screen and the PLR generated by an LED. The good reliability within set-ups and the consistency between the PLRs evoked by the set-ups indicate that both systems provide a valid way to evoke the PLR. A higher need for recovery was associated with faster and larger constricting PLRs, suggesting increased levels of parasympathetic nervous system activity in people experiencing higher levels of need for recovery on a daily basis.
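The split-half reliability mentioned above is typically computed by correlating a parameter across two halves of the data (here, two blocks of trials), with the Spearman-Brown prophecy formula stepping the half-test correlation up to a full-test reliability estimate. A hedged sketch with made-up per-participant values (the abstract does not report the exact procedure or data):

```python
import numpy as np

def split_half_reliability(half1, half2):
    """Pearson correlation between two halves of a measurement,
    stepped up with the Spearman-Brown prophecy formula:
    r_full = 2*r / (1 + r)."""
    r = np.corrcoef(half1, half2)[0, 1]
    return 2 * r / (1 + r)

# Hypothetical per-participant constriction amplitudes from two blocks.
block1 = np.array([0.42, 0.55, 0.31, 0.60, 0.48, 0.37])
block2 = np.array([0.45, 0.52, 0.35, 0.58, 0.50, 0.33])
reliability = split_half_reliability(block1, block2)
```

Because each half contains only half the trials, the raw half-to-half correlation underestimates the reliability of the full measurement, which is what the Spearman-Brown step-up corrects for.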

Purpose: In this study, the authors assessed the influence of masking level (29% or 71% sentence perception) and test modality on the processing load during language perception as reflected by the pupil response. In addition, the authors administered a delayed cued stimulus recall test to examine whether processing load affected the encoding of the stimuli in memory. Method: Participants performed speech and text reception threshold tests, during which the pupil response was measured. In the cued recall test, the first half of correctly perceived sentences was presented, and participants were asked to complete the sentences. Reading and listening span tests of working memory capacity were presented as well. Results: Regardless of test modality, the pupil response indicated higher processing load in the 29% condition than in the 71% correct condition. Cued recall was better for the 29% condition. Conclusions: The consistent effect of masking level on the pupil response during listening and reading supports the validity of the pupil response as a measure of processing load during language perception. The absent relation between pupil response and cued recall may suggest that cued recall is not directly related to processing load, as reflected by the pupil response.

Purpose: In this explorative study, the authors investigated the relationship between auditory and cognitive abilities and self-reported hearing disability. Method: Thirty-two adults with mild to moderate hearing loss completed the Amsterdam Inventory for Auditory Disability and Handicap (AIADH; Kramer, Kapteyn, Festen, & Tobi, 1996) and performed the Text Reception Threshold (TRT; Zekveld, George, Kramer, Goverts, & Houtgast, 2007) test as well as tests of spatial working memory (SWM) and visual sustained attention. Regression analyses examined the predictive value of age, hearing thresholds (pure-tone averages [PTAs]), speech perception in noise (speech reception thresholds in noise [SRTNs]), and the cognitive tests for the 5 AIADH factors. Results: Besides the variance explained by age, PTA, and SRTN, cognitive abilities were related to each hearing factor. The reported difficulties with sound detection and speech perception in quiet were less severe for participants with higher age, lower PTAs, and better TRTs. Fewer sound localization and speech perception in noise problems were reported by participants with better SRTNs and smaller SWM. Fewer sound discrimination difficulties were reported by subjects with better SRTNs and TRTs and smaller SWM. Conclusions: The results suggest a general role of the ability to read partly masked text in subjective hearing. Large working memory was associated with more reported hearing difficulties. This study shows that besides auditory variables and age, cognitive abilities are related to self-reported hearing disability.

Purpose: The visual Text Reception Threshold (TRT) test (Zekveld et al., 2007) has been designed to assess modality-general factors relevant for speech perception in noise. In the last decade, the test has been adopted in audiology labs worldwide. The first aim of this study was to examine which factors best predict interindividual differences in the TRT. Second, we aimed to assess the relationships between the TRT and the speech reception thresholds (SRTs) estimated in various conditions. Method: First, we reviewed studies reporting relationships between the TRT and auditory and/or cognitive factors and formulated specific hypotheses regarding the TRT predictors. These hypotheses were tested using a prediction model applied to a rich data set of 180 hearing aid users. In separate association models, we tested the relationships between the TRT and the various SRTs and subjective hearing difficulties, while taking into account potential confounding variables. Results: The results of the prediction model indicate that the TRT is predicted by the ability to fill in missing words in incomplete sentences, by lexical access speed, and by working memory capacity. Furthermore, in line with previous studies, a moderate association between higher age, poorer pure-tone hearing acuity, and poorer TRTs was observed. Better TRTs were associated with better SRTs for the correct perception of 50% of Hagerman matrix sentences in a 4-talker babble, as well as with better subjective ratings of speech perception. Age and pure-tone hearing thresholds significantly confounded these associations. The associations of the TRT with SRTs estimated in other conditions and with subjective qualities of hearing were not statistically significant when adjusting for age and pure-tone average. Conclusions: We conclude that the abilities tapped into by the TRT test include processes relevant for speeded lexical decision making when completing partly masked sentences, and that these processes require working memory capacity. Furthermore, the TRT is associated with the SRT of hearing aid users as estimated in a challenging condition that includes informational masking, and with experienced difficulties with speech perception in daily-life conditions. The current results underline the value of using the TRT test in studies involving speech perception and aid in the interpretation of findings acquired using the test.

This study assessed the influence of masker type, working memory capacity (reading span and size comparison span) and linguistic closure ability (text reception threshold) on the benefit obtained from semantically related text cues during perception of speech in noise. Sentences were masked by stationary noise, fluctuating noise, or an interfering talker. Each sentence was preceded by three text cues that were either words that were semantically related to the sentence or unpronounceable nonwords. Speech perception thresholds were adaptively measured and delayed sentence recognition was subsequently assessed. Word cues facilitated speech perception in noise. The amount of benefit did not depend on masker type, but benefit correlated with reading span when speech was masked by interfering speech. Cue benefit was not related to reading span when other maskers were used and did not correlate with the text reception threshold or size comparison span. Larger working-memory capacity was furthermore associated with enhanced delayed recall of sentences preceded by word cues relative to nonword cues. This suggests that working memory capacity may be associated with release from informational masking by semantically related information, with keeping the cues in mind while disambiguating the sentence and for encoding of speech content into long-term memory.
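The adaptively measured speech perception thresholds described in these abstracts typically come from a staircase procedure: the SNR is lowered after a correct response and raised after an incorrect one, so the track oscillates around the SNR yielding 50% correct. A toy sketch (fixed 2 dB step, deterministic listener; not the exact procedure used in any of these studies):

```python
def adaptive_srt(respond, start_snr=0.0, step=2.0, n_trials=20):
    """One-up/one-down adaptive track converging on the SNR yielding
    about 50% correct. `respond(snr) -> bool` models the listener; the
    SRT estimate is the mean SNR over the second half of the track."""
    snr, track = start_snr, []
    for _ in range(n_trials):
        track.append(snr)
        snr += -step if respond(snr) else step  # down if correct, up if not
    half = track[n_trials // 2:]
    return sum(half) / len(half)

# Deterministic toy listener: always correct at or above -5 dB SNR.
srt = adaptive_srt(lambda snr: snr >= -5.0)
# srt converges to -5.0 for this toy listener
```

With a real listener, responses near threshold are probabilistic rather than deterministic, which is what makes the one-up/one-down rule converge on the 50%-correct point of the psychometric function.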

The measurement of cognitive resource allocation during listening, or listening effort, provides valuable insight into the factors influencing auditory processing. In recent years, many studies inside and outside the field of hearing science have measured the pupil response evoked by auditory stimuli. The aim of the current review was to provide an exhaustive overview of these studies. The 146 studies included in this review originated from multiple domains, including hearing science and linguistics, but the review also covers research into motivation, memory, and emotion. The present review provides a unique overview of these studies and is organized according to the components of the Framework for Understanding Effortful Listening. A summary table presents the sample characteristics, an outline of the study design, stimuli, the pupil parameters analyzed, and the main findings of each study. The results indicate that the pupil response is sensitive to various task manipulations as well as interindividual differences. Many of the findings have been replicated. Frequent interactions between the independent factors affecting the pupil response have been reported, which indicates complex processes underlying cognitive resource allocation. This complexity should be taken into account in future studies, which should focus more on interindividual differences, also including older participants. This review facilitates the careful design of new studies by indicating the factors that should be controlled for. In conclusion, measuring the pupil dilation response to auditory stimuli has been demonstrated to be a sensitive method applicable to numerous research questions. The sensitivity of the measure calls for carefully designed stimuli.