Departments

Hidden versus Not-so-Hidden Hearing Loss

By Colleen G. Le Prell, PhD

The term “hidden hearing loss” was explicitly defined as a selective reduction in the number of synapses connecting the inner hair cells (IHCs) and their auditory nerve (AN) targets, resulting in a decrease in the amplitude of wave I of the sound-evoked auditory brainstem response (ABR). The term was coined by Schaette and McAlpine with the primary purpose of labeling the novel and paradigm-changing observations of Kujawa and Liberman,1–3 who identified permanent noise-induced synaptic pathology (“synaptopathy”) and corresponding decreases in the amplitude of ABR wave I even after the recovery of noise-induced temporary threshold shifts (TTS). The term hidden hearing loss has also been used by some to refer more generically to functional deficits such as difficulty understanding speech-in-noise, tinnitus, and hyperacusis, based on the hypothesis that these functional deficits, which are “hidden behind a normal audiogram,” are the result of synaptopathy. To avoid confusion, it is helpful to use precise language when referring to synapse loss, wave I amplitude decreases, or specific clinical issues, rather than using the term “hidden hearing loss,” which has been used at different times and by different authors to mean any of the above.

In animal models, tissues can be harvested so that cells and synapses can be carefully counted to confirm specific anatomical pathology. This tissue collection is completed after the amplitude of the ABR waveforms has been quantified in anesthetized subjects, in which highly reproducible waveforms can be collected. If there is a statistically significant reduction in synapses in the absence of measurable loss of the outer hair cell (OHC) population, and ABR wave I amplitude is decreased, then the decreases in wave I amplitude have been attributed to the loss of the IHC/AN synapses. In human populations, data collection and interpretation are obviously more challenging. Human data are much more likely to be collected in awake patients/participants, resulting in noisier evoked-potential data, and thus there have been efforts to normalize wave I amplitude relative to other evoked potentials [such as summating potential (SP) or wave V amplitude]. Measures of OHC function rely on distortion product otoacoustic emission (DPOAE) tests, which are often scored pass/fail using a 6-dB signal-to-noise criterion that fails to distinguish present but abnormal DPOAE amplitudes from normal DPOAE amplitudes. Finally, collection of longitudinal evoked potential records in human participants has been much more limited; instead of powerful within-subjects pre- and post-noise measures, human studies have instead relied on cross-sectional or correlational analyses based on self-reported noise history.

This is a hot topic area, with labs around the world busily working to understand a host of issues related to differences in vulnerability across species, where risk begins, how risk grows, and, perhaps most important from an audiologist’s perspective, what patient complaints are likely, what functional tests should be completed, and what rehabilitation options exist. The purpose of this article is to guide audiologists and audiology students through the state of the current evidence regarding synaptopathy, evoked potential outcomes, and functional deficits, within this broad topic area of “hidden hearing loss.”

Measuring “Hidden” Noise Injury in Animal Models

In early studies, there was a significant decrease in the number of synaptic connections between IHCs and afferent neurons subsequent to exposures in which the TTS measured the day after noise exposure was 40–50 dB at the most affected frequencies.3–7 Synapse loss was followed by a slow neural degeneration.2 In a more recent investigation, histological data from two Macaca mulatta (rhesus macaque) monkeys showed synaptic loss ranging from approximately 10% at 2 kHz to approximately 30% at 32 kHz after exposure to a 50-Hz noise band centered at 2 kHz (108 dB SPL for 4 hours), with OHCs missing in only a small cochlear region and no measurable PTS.8 These primate data confirm that selective synaptopathic damage can be induced by noise exposure in a non-human primate model, and support the possibility that such noise-induced pathology might also occur in humans. Confirmatory human data are currently lacking, in part because the temporal bones from human cadavers often do not have known noise exposure histories, but also because humans accumulate a lifetime of other risk factors, making it difficult to attribute synaptic loss or other pathology to a single specific cause.

In considering translation to humans, the boundary at which risk begins is a major unknown, not only for humans but also across species. In later acute noise injury studies with rodents, when TTS measured 24 hours after the noise exposure was smaller (i.e., a maximum of 20–30 dB TTS 24 hours post-noise), there generally has not been any reported synaptopathy or decrease in ABR wave I amplitude.7,9–11 Risk of injury will almost certainly be more complicated than observed TTS, however, based on the important observation by Fernandez et al. that the overall configuration of the TTS (notched versus sloping) may be important.7 Specifically, whereas a relatively notched hearing loss with a maximum TTS of approximately 25 dB at 22.6 kHz was not accompanied by synapse loss at that cochlear location, an identical TTS of approximately 25 dB at 22.6 kHz was accompanied by synapse loss at that cochlear location when the overall TTS was sloping (with increasing TTS at frequencies above 22.6 kHz). Moreover, all of these studies have been based on a single acute noise exposure, whereas human lifetime noise histories, at least for civilians, will typically include multiple exposures, each having a smaller effect on the auditory system. Additional investigation is urgently needed to fully understand whether the size of the TTS, the configuration of the audiometric changes, or some other as-yet-unknown variable predicts the specific conditions in which synaptopathy occurs, both in rodents and in other mammals including humans, and both for acute and chronic (repeated) exposure conditions.

Measuring Noise Injury in Humans

It has long been known that noise exposure damages the inner ear; longer and louder sounds are more hazardous than shorter and quieter sounds, and there is a trade-off between these two variables: the louder the sound, the shorter the safe listening duration will be. Beyond these simple relationships, we also know that changes in hearing can be either temporary or permanent.12–14 There is a wealth of information regarding occupational safety regulations in place in different countries. While the specific limits vary with respect to the levels and durations at which sound is believed to become hazardous, all of the regulations in place at this time are intended to prevent the development of permanent threshold shift (PTS).15,16 There is some discussion of TTS prevention based on the hypothesis that repeated TTS may ultimately resolve to a PTS, and there are some populations for which even temporary compromise of communication ability is hazardous (i.e., service members and safety officers operating in hazardous areas), but most discussions of noise regulations emphasize prevention of permanent noise injury.
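The level-duration trade-off described above is typically formalized as a daily noise dose. As an illustrative sketch only (not a regulatory or dosimetry tool), the NIOSH recommended exposure limit of 85 dBA over an 8-hour day with a 3-dB exchange rate can be expressed as follows; the exposure values in the example are hypothetical:

```python
# Illustrative sketch of the level-duration trade-off under the NIOSH (1998)
# criteria: 85 dBA reference level for an 8-hour day, 3-dB exchange rate
# (each 3-dB increase in level halves the allowable duration).
# These numbers illustrate the arithmetic; they are not a dosimeter.

def allowable_hours(level_dba, criterion=85.0, exchange_rate=3.0, base_hours=8.0):
    """Hours of exposure at level_dba that produce a 100% daily noise dose."""
    return base_hours / (2 ** ((level_dba - criterion) / exchange_rate))

def daily_dose_percent(exposures):
    """Daily dose as a percentage; exposures is a list of (level_dba, hours)."""
    return 100.0 * sum(hours / allowable_hours(level) for level, hours in exposures)

# 8 hours at 85 dBA is exactly a 100% dose; 94 dBA is allowed for only 1 hour.
print(allowable_hours(85.0))   # 8.0
print(allowable_hours(94.0))   # 1.0
# Two hypothetical exposures in one day: 2 h at 88 dBA plus 4 h at 85 dBA.
print(daily_dose_percent([(88.0, 2.0), (85.0, 4.0)]))  # 100.0
```

Under the 3-dB exchange rate, every 3-dB increase in level halves the allowable duration, which is why a 94 dBA exposure reaches a full daily dose in about one hour; regulations such as OSHA's use a different criterion level and exchange rate, so the same exposure yields different doses under different rules.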

Given that the federal noise regulations are based on the measurement of PTS, it should be clear that TTS by definition is not a reportable noise-induced workplace injury. However, because it is now clearly established in rodents that a noise exposure can result in permanent neural injury in the absence of PTS, the potential for neural injuries to be missed using the current OSHA-regulated threshold-based monitoring protocols has been noted. These data have thus been used to call into question whether the strategies used to monitor worker hearing, and the criteria used to define occupational noise injury, are appropriately protective for workers and military personnel.3,17–20

At this time, the most direct evidence of synaptopathy in human cochlear tissues comes from the assessment of age-related changes in synaptic integrity by Viana et al.,21 who counted synapses in five temporal bones. Fewer synaptic connections were observed as a function of the increasing age of the temporal bone donor at death. Other data from Makary et al. had previously documented an age-related decrease in human cochlear spiral ganglion cell survival in 100 temporal bones with intact sensory cell populations,22 a finding that was later confirmed in mice by Sergeyenko et al.23 Neural degeneration in the absence of hair cell loss has also been reported in the temporal bones of two young adults treated with different aminoglycoside antibiotic regimens,24 a surprising finding given the widespread identification of OHC pathology as the primary adverse ototoxic effect for these agents.25 If selective neural pathology underlies some forms of age-related hearing loss and some forms of noise injury (and perhaps even some forms of aminoglycoside ototoxicity), then it is possible there will be overlap in the patterns of functional deficits observed during aging and after noise exposure. The extent to which psychophysical deficits overlap was reviewed previously by Shrivastav,26 who noted overlapping deficits in several domains. It is important to consider whether psychophysical tests or speech-in-noise tests will be sensitive and useful metrics for revealing early effects of noise.

Speech-in-Noise Tests: Sensitivity to Noise Injury

There have been several suggestions that difficulty understanding speech in noisy environments, despite the presence of clinically normal hearing thresholds, might be a consequence of noise-induced neuropathic damage.3,4,22 Speech-in-noise and other signal-in-noise tests have been the topic of several recent reviews.26,27 In brief, there is no current “gold standard” regarding which test (or tests) will be the most sensitive to noise injury. Among participants with varied histories of recreational sound exposure, there are multiple recent studies that have failed to find evidence of either decreased wave I amplitude,28–32 or decreased speech-in-noise/signal-in-noise test performance28,29,32,33 as a function of self-reported history. A subset of these studies has also included careful psychophysical measurements assessing sensitivity to frequency, amplitude, and phase cues, including both difference limens and modulation depth.32,33 A single report34 with updated analysis35 provides the only positive findings for a relationship between recreational sound exposure and decreasing wave I amplitude.

In contrast to the above negative findings from populations in which specific noise history was not part of a targeted recruitment effort, data collected from participants with “extreme” concert attendance (at least 25 loud music events in the past year, and at least 40 loud music events in the past 2 years) revealed some evidence of group differences. Grose and colleagues reported small decreases in the ratios of wave I/wave V amplitude (i.e., when wave I was normalized against wave V),36 but they did not detect any functional deficits on a speech-in-noise test. Their speech-in-noise test was a custom test using filtered BKB sentences embedded in a similarly filtered speech-shaped noise. In contrast, a study comparing at-risk participants (primarily music students) to others deemed not at risk (primarily CSD students), revealed deficits on a custom word-in-noise task in addition to small decreases in the ratios of summating potential (SP) amplitude relative to action potential (AP) amplitude (i.e., SP/AP amplitude ratios). In their custom listening task, Liberman et al., added time compression and reverberation to NU6 words to increase the difficulty of the listening test.37

Future studies should document participants’ most recent noise exposure to ensure that temporary deficits do not confound the interpretation of group differences, as the effects of loud recreational sound exposure on speech-in-noise performance can be temporary. Using a pre/post within-subjects design, Grinn et al. measured a temporary noise-induced decrease in performance on the Words-in-Noise (WIN) test 24 hours after loud recreational activities (concerts, clubs with amplified music, etc.),29 with a statistically significant relationship between noise dose and performance deficit. The data collected in this study suggested the WIN is highly sensitive to the effects of noise on hearing, as changes in WIN test performance the day after the noise exposure were detected in the absence of measurable TTS. The changes observed by Grinn et al. were only at the most difficult signal-to-noise ratios,29 a finding that parallels observations from the rat model in which permanent post-noise detection-in-noise deficits were observed only in the poorest (most difficult) signal-to-noise condition after TTS had recovered.11

Cross-sectional data collected from workers exposed to occupational noise have frequently revealed functional deficits.38 However, small but statistically significant threshold differences, or small but statistically significant differences in DPOAE amplitude, are typically also observed, with noise-exposed groups having slightly poorer outcomes relative to others. In other words, even though thresholds and DPOAE amplitudes are clinically normal, those exposed to noise have small but reliable deficits that may contribute to the slightly poorer performance on signal-in-noise tests. Because ABR wave I amplitude will decrease as a function of damage to the OHC active process, synaptopathy cannot be inferred to be the cause of observed signal-in-noise test deficits in these cases. The potential for OHC loss to be a particularly important confound to the interpretation of any relationship between neural loss and speech-in-noise deficits was highlighted when Hoben et al. demonstrated that OHC damage may directly underlie speech recognition deficits in noisy backgrounds.39 After reviewing a variety of historical data, Hickox et al. concluded that inner ear pathology will almost certainly include a mix of OHC pathology and neural pathology in humans, clearly complicating diagnostic interpretation.40

Taken together, these data suggest that compromised speech-in-noise outcomes may be one of the earliest functional deficits induced by occupational noise and other loud sound exposures. However, we cannot at this time attribute these deficits to selective synaptopathy, as the deficits may be driven by subtle OHC pathology, or the combination of both OHC and neural pathology. If early detection of noise-induced deficits in either young adults or noise-exposed workers is the sole goal, precise attribution of deficits to OHC pathology or loss of neural synapses is probably not important. However, from a basic science perspective, continued efforts to understand human pathology are critical, as the damaged cells become potential therapeutic targets once functional deficits compromise detection, discrimination, understanding, communication, localization, or other important auditory tasks. Some scientists are working towards the identification of drugs that would restore synaptic integrity,41–43 whereas others are working towards the identification of drugs that would induce the generation of new hair cells44,45 or the regrowth of the spiral ganglion.46 To optimize assessment of such agents in clinical trials and select appropriate drug interventions for patients (once such drugs are approved), it will be critically important to have precise identification of the specific pathology driving functional deficits, an issue noted by Staecker et al. in their discussion of clinical trials.47

Guidance for Audiologists: Tests for “Not-So-Hidden” Hearing Loss

It has long been known that there is a subset of patients who have audiometric thresholds within normal limits, but who nonetheless self-report problems discriminating speech in noisy environments.48,49 This impairment was termed Idiopathic Discriminatory Dysfunction (IDD) by Rappaport et al.;50 other names used to label these symptoms have included King-Kopetzky Syndrome, Obscure Auditory Dysfunction, and Auditory Disability with Normal Hearing.36 When documenting difficulties, the two tools that are primarily used at this time are speech-in-noise tests, which provide quantitative functional data, and surveys, which provide more qualitative judgements about difficulty.

Speech-in-noise tests that clinicians are most likely to be familiar with include the Quick Speech-in-Noise (QuickSIN) test, the Hearing-in-Noise Test (HINT), and perhaps the SPRINT, as this test is used to assess military auditory fitness for duty. Our team has advocated for the use of the Words-in-Noise (WIN) test based on (1) extensive validation data;51–54 (2) the availability of this test as part of the NIH Toolbox;55 and (3) the sensitivity of the test to acute, noise-induced changes in study participants.29 Regardless of the specific test selected for use as part of clinic-specific protocols, speech-in-noise performance should be routinely documented at least during initial testing to establish a baseline against which changes in function can be determined if patient complaints emerge in the future.

Survey tools available to audiologists range from simple questions about hearing difficulties or difficulties perceiving speech to well-established tools. The Speech, Spatial, and Qualities of Hearing Scale (SSQ), developed by Gatehouse and Noble,56 and the Hearing Handicap Inventory (HHI) [available in versions for adults (HHIA)57 and for the elderly (HHIE)58] have been assessed in a small number of studies attempting to validate the surveys against quantitative speech-in-noise test outcomes. These efforts have met with mixed success. Although overall SSQ scores were not reliably correlated with threshold sensitivity or WIN test scores in either young or old listeners, higher (better) scores on the SSQ speech subscale were associated with lower (better) WIN thresholds in the younger participant group.59 In a more recent investigation, scores on the SSQ were reliably associated with sensitivity to temporal fine structure cues, with those with better SSQ scores also having better temporal fine structure cue sensitivity during psychophysical testing.60 As reviewed by Eckert et al.,61 the HHIE/HHIA exhibits modest but statistically significant associations with word recognition in quiet, with somewhat stronger associations observed with word recognition in noise tests.57,62–66 The Client Oriented Scale of Improvement (COSI) is shorter and more open-ended than the SSQ and HHIE/HHIA surveys in that it asks patients to identify their top five rank-ordered needs, with conversation in noise, with one or two people or with a group of people, being 2 of the 16 possible categories in which patients can identify specific communication needs. Because all of these surveys were designed to document improved function with hearing aid use, it is not known if they will be sensitive to small changes in function over long periods of time, as observed in many of the studies using workers exposed to occupational noise as participants.
No deficits on the SSQ were reported by Prendergast et al.33 as a function of lifetime noise exposure, but there were also no deficits on multiple evoked potential measures in that cohort,30 so SSQ deficits may not have been expected.

Studies assessing the effects of noise and other loud sounds on the ascending auditory signal have largely relied on the amplitude of ABR wave I or AP amplitude.67–69 A variety of electrophysiological tests are under investigation for potential use in detecting effects of noise on the auditory system. Evoked potential assessments under investigation include the envelope following response (EFR),70,71 the middle ear muscle reflex,72 ABR wave V latency changes during forward masking,73 normalizing the amplitude of ABR wave I relative to the amplitude of ABR wave V (a measure of central response that does not appear to be affected by synaptopathy),74 and normalizing the amplitude of the action potential relative to the amplitude of the summating potential (i.e., SP/AP ratio).37 Some of these measures would be easier to incorporate into a routine clinical test battery than others (for discussion, see Hickox et al.40). At this time, none of these measures has been shown to be reliably associated with speech-in-noise performance or other functional metrics.
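The normalization strategies described above share a simple logic: express the peripheral response (wave I or AP) relative to a reference component that is less affected by the suspected pathology (wave V, or the hair-cell-generated SP). A minimal sketch of that arithmetic, using hypothetical microvolt values chosen only to illustrate the calculation (not normative data), might look like:

```python
# Sketch of the ratio-based normalization logic: the peripheral response is
# expressed relative to a reference component, reducing inter-subject
# variability in absolute amplitudes. All amplitude values are hypothetical.

def wave_i_v_ratio(wave_i_uv, wave_v_uv):
    """ABR wave I amplitude normalized against wave V amplitude (both in microvolts)."""
    return wave_i_uv / wave_v_uv

def sp_ap_ratio(sp_uv, ap_uv):
    """Summating potential amplitude relative to action potential amplitude."""
    return sp_uv / ap_uv

# A selective drop in wave I with an unchanged wave V lowers the I/V ratio,
# the pattern interpreted as consistent with a peripheral (neural) change.
baseline = wave_i_v_ratio(0.50, 0.60)   # hypothetical baseline measurement
follow_up = wave_i_v_ratio(0.35, 0.60)  # hypothetical smaller wave I, same wave V
print(round(baseline, 3), round(follow_up, 3))  # follow_up is the smaller ratio
```

The design choice here mirrors the clinical rationale: because absolute evoked-potential amplitudes vary widely across individuals (electrode placement, head size, recording noise), a within-record ratio is expected to be a more stable metric than raw wave I amplitude alone.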

Threshold assessment at high frequencies (from 10–20 kHz) should be considered for inclusion in test batteries when patients report difficulties understanding speech-in-noise despite audiometrically normal thresholds up to 8 kHz. Although there are no clear causal relationships between high frequency hearing and speech-in-noise test performance, high-frequency hearing loss in relatively more noise-exposed participants has been reported by multiple teams,33,36,37 and may be one of the first changes among long-time users of personal audio systems.75 High frequency hearing loss reflects damage to the more basal regions of the cochlea, and it is reasonable to speculate that changes in the stiffness of the basal sensory epithelium with the loss of the OHCs might influence the mechanics of the traveling wave as it passes the basal cochlea to reach its best frequency location. Empirical data are needed to assess this specific suggestion, and determine the strength of any potential relationships between high-frequency hearing and speech-in-noise performance. Regardless, documentation of deficits to the high frequency regions of the cochlea will provide a tool for counseling patients on the importance of protecting their ears from loud sound to reduce the risk of further damage.

Guidance on Rehabilitation

It is difficult when patients do not have hearing loss that meets the criteria for amplification, but report experiencing difficulties communicating.76 Certainly, there are a variety of modern digital hearing aids that perform real-time digital signal processing to extract background noise and exclude it from the delivered, amplified signal.77 One could imagine considering such devices for use by patients with difficulties extracting signals from noisy backgrounds despite thresholds that are within the clinically normal range. However, hearing aid programming, whether based on National Acoustic Laboratories (NAL) or Desired Sensation Level (DSL) prescriptions, or manufacturers’ proprietary fitting algorithms, is based on the selection of appropriate gain for a given measured hearing loss.78–82 Real-ear verification is then used to modify the programming as needed such that the signal is appropriately amplified in the patient’s ear canal.83 Thus, there is a challenge in programming a hearing aid for a patient who reports difficulty understanding speech-in-noise but has no measurable loss of threshold sensitivity. Dispensing a low-gain device to the patient may be appropriate despite the absence of compelling scientific evidence if the appropriate clinical tests are completed to document improved speech-in-noise performance for that patient; the lack of scientific data cannot be interpreted as lack of benefit at this time.

For many audiologists, dispensing a low-gain device (subject to the testing and documentation of benefits as described above) would be preferred over the potential scenario in which a patient perceives their communication deficits to be unresolved and chooses to pursue an over-the-counter (OTC) direct-to-consumer hearing aid (which does not require a professional consultation or fitting) or a personal sound amplification product (PSAP) marketed for use by normal hearing listeners. A recent review of the literature described production of potentially dangerous sound pressure levels by some OTC hearing aid and PSAP devices when electroacoustic characteristics were measured, as well as an emphasis on low-frequency amplification that may provide less benefit to the typical individual with normal age-related declines in hearing at higher frequencies.84 One recent study documented speech recognition improvements in patients with hearing loss who were fit with PSAPs and hearing aids when the PSAP devices were programmed to fit participant hearing loss by an audiologist in a clinical setting.85 It is not yet clear how significant the impact of OTC hearing aid and PSAP devices will be for the traditional hearing aid market, and it is not yet known to what extent such products might someday be dispensed under the oversight of an audiologist or a hearing aid dispenser for those patients with an appropriate audiometric configuration and an interest in a lower cost device with fewer programming options, but these are also areas of significant interest to the broad professional community.

With respect to auditory training programs, there are a variety of training programs that could be considered for use by those patients that self-report or have documented difficulty on speech-in-noise tests. Some of the commercially available programs that audiologists might already be aware of include clEAR (https://www.clearworks4ears.com/), Mindweavers (http://www.mindweavers.co.uk/), Neurotone LACE (https://www.neurotone.com/lace-interactive-listening-program), Sense Synergy ReadMyQuips (http://www.sensesynergy.com/readmyquips), and others. In general, auditory training is designed to build on and improve a patient’s perceptual skills as related to phoneme recognition/discrimination, word identification, and distinguishing among multiple simultaneous inputs (“stream segregation”) when listening in an environment that has both a signal of interest (such as a conversational partner) and other distracting inputs (such as people carrying on other unrelated conversations).

A variety of training programs (including some of those listed above as currently available for purchase) have been assessed with various subject populations, with mixed results across programs and populations.86–93 In general, these studies have either assessed potential benefits of auditory training within populations of patients with hearing loss learning to use hearing aids or cochlear implants, or they have assessed potential training-related improvements in children diagnosed with auditory processing disorder. A recent systematic review of randomized controlled trials concluded there is not enough evidence to determine whether older adults with hearing loss benefit from aural rehabilitation, including use of communication training programs.95 Gallun et al. recently provided an important discussion of all of the above rehabilitation issues;96 for veterans with traumatic brain injuries, they noted that low-gain hearing aids, assistive listening devices, and auditory rehabilitation may have benefit, but that evidence supporting the efficacy of these approaches is currently lacking. Better rehabilitation outcomes data are urgently needed to allow evidence-based decisions on how to support patients with clinically normal thresholds but self-reported and/or measured deficits understanding speech-in-noise.

Summary of the Findings to Date

Noise exposure that induces transient changes in thresholds clearly can induce permanent neural pathology, but not all transient threshold changes (TTS) result in this permanent neural pathology.18,19 Similar patterns of pathology can be induced with significant acute noise exposure in a primate model.8 Although this latter observation confirmed that selective synaptopathic injury can be induced in a non-human primate model, the exposure used to induce this pathology exceeded the daily occupational noise exposure limits for workers in the US15,16 and other countries,97 and the primate subjects were anesthetized, which decreases vulnerability. Taken together, these observations leave it unclear how the data collected in these acute noise exposure studies will translate to occupational noise risk.98 The extent to which synaptic pathology, OHC loss, or mixed pathologies will be induced by occupational exposure is not known, given that occupational exposure will be limited to lower daily doses but will be repeated 5 days per week over many weeks, months, and years – a condition not tested in animal studies.

Acute noise exposures are in some ways better models for the potential hazards of recreational sound exposure, as recreational sound exposures tend to be shorter, and less frequent, than occupational noise exposures, but the dose for an individual event can exceed the dose allowed during a single workday. Although initial data appeared to indicate that increasing recreational noise history was associated with decreasing ABR wave I amplitude,34 those results have not been replicated when similar studies were completed in multiple other laboratories around the world.28–32 At the extreme end of the recreational noise exposure continuum are the frequent concert attendees studied by Grose et al.36 Frequent concert goers had smaller ABR wave I/wave V amplitude ratios and poorer high frequency thresholds, but no signal-in-noise test differences.36 The lack of deficits on signal-in-noise tests contrasts with changes in function on a signal-in-noise test that were detected when rats were tested in difficult listening conditions; however, deficits in rats emerged only when TTS was robust (40–50 dB, 24 hours post-noise), and deficits were only observed at frequencies at which a permanent noise-induced decrease in ABR wave I amplitude was measured.11 It is of course possible that additional functional differences would emerge with continued frequent concert attendance, highlighting the urgent need for longitudinal data.

In contrast to these negative outcomes, analysis of data from music students revealed functional differences during high frequency audiometry tests and deficits on difficult hearing-in-noise tests, in combination with differences in SP amplitude but not AP amplitude, resulting in SP/AP ratio differences as well.37 Music students and musicians will almost certainly continue to be a population of interest given long-standing discussions about whether music is as hazardous as other sound exposures, and the availability of only very limited preliminary data addressing this issue.99,100 The data from Liberman et al. are a call for concern about this population given the functional differences they detected when music students were compared to other non-music students (primarily CSD students).37 Finally, at the opposite extreme, there are populations that appear to fit the profile for synaptopathic (or other neural) injury after firearm exposure. Civilians who use firearms recreationally and military personnel with high noise exposure (including firearm use) had significantly smaller ABR wave I amplitudes than low-noise comparison groups.101 Unfortunately, speech-in-noise performance was not assessed in those participants. Differences in outcomes across participant populations (college student convenience samples, concert goers, music students, military service members) provide preliminary insight into risk, but the use of different metrics across studies (EHF, ABR, AP, SP, wave I, wave V, WIN, QuickSIN, NU6, etc.) makes direct comparisons difficult.

Major Unknowns

One of the major unknowns in this topic area is the extent to which risk for neural injury might increase relatively linearly along some graded continuum as noise exposure increases, versus the potential that there will be some critical boundary at which risk of injury suddenly increases in an “all or nothing” fashion. This is an urgent issue with respect to both acute noise injury – the paradigm used in the majority of the animal studies that are currently available – and also chronic noise injury. There is virtually no understanding of the extent to which small but repeated TTS “injuries” might, or might not, ultimately result in either an OHC or a neural pathology.98,102 Depending on the specific pathology induced by the exposure, the consequences might include decreased DPOAE amplitude,103–106 poorer high frequency threshold sensitivity,107–110 an overt significant threshold shift (STS) that meets NIOSH’s “early warning” criteria16 or OSHA’s reportable hearing loss criteria,15 or, perhaps, speech-in-noise deficits.38 And, of course, it is possible that there will be mixed pathology underlying observed functional deficits, and mixed patterns of deficits. As an example, there are a number of studies in which workers with overt NIHL have poorer word-in-noise test outcomes.111–114

Thoughts for the Future

This is certain to remain an active area of investigation given active discussion of the potential implications for workplace noise regulations98 and noise-exposed military personnel.20 It is extremely likely that additional studies across a variety of potentially at-risk populations will continue to emerge, as there is an urgent need to understand who is at risk, and what they are at risk for, so prevention strategies can be appropriately designed and targeted. In addition to studies of specific populations, the future is likely to bring increasing consensus on protocols. There is good agreement that thresholds and DPOAEs must be measured, but there has been variable inclusion of high frequency stimuli within the threshold test protocol. There has also been variable inclusion of speech-in-noise tests, and whereas some studies have included SP or wave V amplitude such that ratios of SP/AP or wave I/wave V can be calculated, others have solely focused on wave I. The use of different protocols in these early assessments makes it difficult to directly compare across studies, but the variety of tests completed to date is providing insight into potentially important metrics moving forward. Finally, diagnosis and rehabilitation remain important challenges, with no consensus on best practices at this time. Hearing aids with digital noise management algorithms should be considered when speech-in-noise difficulties are accompanied by mild to moderate hearing loss; it is not clear if low-gain devices will benefit those with clinically normal thresholds. Until evidence accumulates, individual benefit should be verified during clinical assessment prior to dispensing the programmed device. Although it is not clear how best to support patients with normal audiometric sensitivity, it is possible that auditory training programs might prove helpful. However, such strategies have not yet been assessed for normal hearing populations with difficulty in noise.

Acknowledgments

Support provided by the Emilie and Phil Schepps Distinguished Professorship in Hearing Science at the University of Texas at Dallas is gratefully acknowledged. The text in this article is an abbreviated version of a comprehensive review that will be published in the forthcoming Advances in Audiology, Speech Pathology, and Hearing Science, edited by S. Hatzopolous, A. Ciorba, and M. Krumm.

Le Prell CG, Spankovich C, Lobarinas E, and Griffiths SK. Extended high frequency thresholds in college students: effects of music player use and other recreational noise. J Am Acad Audiol 2013;24:725–39.

Riga M, Korres G, Balatsouras D, and Korres S. Screening protocols for the prevention of occupational noise-induced hearing loss: the role of conventional and extended high frequency audiometry may vary according to the years of employment. Med Sci Monit 2010;16:CR352-356.