Objectives: To study differences regarding pain and activity limitations during the 3 years following diagnosis in women and men with contemporary treated early RA compared with their counterparts who were diagnosed 10 years earlier. Method: This study was based on patients recruited to the Early Intervention in RA (TIRA) project. In the first cohort (TIRA-1), 320 patients were included at the time of diagnosis during 1996-1998, and 463 patients were included in the second cohort (TIRA-2) during 2006-2009. Disease activity, pain intensity (Visual Analogue Scale, VAS), bodily pain (BP) in the 36-item Short Form Health Survey (SF-36), activity limitations (Health Assessment Questionnaire, HAQ), and medication were reported at inclusion and at follow-up after 1, 2, and 3 years. Results: Disease activity, pain, and activity limitations were pronounced at inclusion across both genders and in both cohorts, with some improvement observed during the first year after diagnosis. Disease activity did not differ between cohorts at inclusion but was significantly lower at the follow-ups in the TIRA-2 cohort, in which the patients were prescribed traditional disease-modifying anti-rheumatic drugs (DMARDs) and biological agents more frequently. In TIRA-2, patients reported significantly lower pain and activity limitations at all follow-ups, with men reporting lower pain than women. Women reported significantly higher activity limitations at all time points in TIRA-2. Conclusions: Pain and activity limitations were still pronounced in the contemporary treated early RA cohort compared with their counterparts diagnosed 10 years earlier, and both of these factors need to be addressed in clinical settings.

Speech comprehension depends on the successful operation of a network of brain regions. Processing of degraded speech is associated with different patterns of brain activity in comparison with that of high-quality speech. In this exploratory study, we examined whether processing degraded auditory input in daily life because of hearing impairment is associated with differences in brain volume. We compared T1-weighted structural magnetic resonance images of 17 hearing-impaired (HI) adults with those of 17 normal-hearing (NH) controls using a voxel-based morphometry analysis. HI adults were individually matched with NH adults based on age and educational level. Gray and white matter brain volumes were compared between the groups by region-of-interest analyses in structures associated with speech processing, and by whole-brain analyses. The results suggest increased gray matter volume in the right angular gyrus and decreased white matter volume in the left fusiform gyrus in HI listeners as compared with NH ones. In the HI group, there was a significant correlation between hearing acuity and cluster volume of the gray matter cluster in the right angular gyrus. This correlation supports the link between partial hearing loss and altered brain volume. The alterations in volume may reflect the operation of compensatory mechanisms that are related to decoding meaning from degraded auditory input.

We still have very little knowledge about how our brains decouple different sound sources, a task known as solving the cocktail party problem. Several approaches, including ERP, time-frequency analysis and, more recently, regression and stimulus reconstruction approaches, have been suggested for solving this problem. In this work, we study the problem of correlating EEG signals to different sets of sound sources with the goal of identifying the single source to which the listener is attending. Here, we propose a method for finding the number of parameters needed in a regression model to avoid overlearning, which is necessary for determining the attended sound source with high confidence in order to solve the cocktail party problem.

Auditory attention identification methods attempt to identify the sound source of a listener's interest by analyzing measurements of electrophysiological data. We present a tutorial on the numerous techniques that have been developed in recent decades, and we present an overview of current trends in multivariate correlation-based and model-based learning frameworks. The focus is on the use of linear relations between electrophysiological and audio data. The way in which these relations are computed differs. For example, canonical correlation analysis (CCA) finds a linear subset of electrophysiological data that best correlates to audio data and a similar subset of audio data that best correlates to electrophysiological data. Model-based (encoding and decoding) approaches focus on either of these two sets. We investigate the similarities and differences between these linear model philosophies. We focus on (1) correlation-based approaches (CCA), (2) encoding/decoding models based on dense estimation, and (3) (adaptive) encoding/decoding models based on sparse estimation. The specific focus is on sparsity-driven adaptive encoding models and comparing the methodology in state-of-the-art models found in the auditory literature. Furthermore, we outline the main signal processing pipeline for how to identify the attended sound source in a cocktail party environment from the raw electrophysiological data with all the necessary steps, complemented with the necessary MATLAB code and the relevant references for each step. Our main aim is to compare the methodology of the available methods, and provide numerical illustrations of some of them to get a feeling for their potential. A thorough performance comparison is outside the scope of this tutorial.
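
As a rough numerical illustration of the correlation-based (CCA) branch described above, the following sketch correlates simulated multichannel EEG with a time-lagged envelope of each candidate sound source and selects the source with the higher first canonical correlation as the attended one. This is a toy Python/NumPy example under our own simplifying assumptions (synthetic data, a plain whitening-based CCA), not the pipeline from the tutorial:

```python
import numpy as np

def first_canonical_corr(X, Y, reg=1e-6):
    """First canonical correlation between two data blocks.
    Whiten each block via Cholesky factors; the singular values of the
    whitened cross-covariance are the canonical correlations."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = len(X)
    Cxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n
    Lx = np.linalg.cholesky(Cxx)
    Ly = np.linalg.cholesky(Cyy)
    M = np.linalg.solve(Lx, Cxy) @ np.linalg.inv(Ly).T
    return np.linalg.svd(M, compute_uv=False)[0]

def lagged(env, max_lag=10):
    """Time-lagged envelope matrix so CCA can absorb the EEG response delay."""
    return np.stack([np.roll(env, lag) for lag in range(max_lag)], axis=1)

rng = np.random.default_rng(0)
n = 2000
env_attended = np.abs(rng.standard_normal(n))   # hypothetical speech envelopes
env_ignored = np.abs(rng.standard_normal(n))

# simulated 8-channel EEG: each channel carries a lagged copy of the
# attended envelope plus independent noise
eeg = np.stack([np.roll(env_attended, lag) + 0.5 * rng.standard_normal(n)
                for lag in range(1, 9)], axis=1)

r_att = first_canonical_corr(eeg, lagged(env_attended))
r_ign = first_canonical_corr(eeg, lagged(env_ignored))
# the attended source yields the clearly higher canonical correlation
```

In practice the envelope would be extracted from the recorded speech (e.g., a Hilbert envelope), and the regularization and lag range would need tuning; the point here is only that the attended source's canonical correlation separates from the ignored one's.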

This study investigated language function associated with behavior problems, focusing on pragmatics. Scores on the Children's Communication Checklist Second Edition (CCC-2) in a group of 40 adolescents (12–15 years) identified with externalizing behavior problems (BP) in childhood were compared to the CCC-2 scores in a typically developing comparison group (n=37). Behavioral, emotional and language problems were assessed with the Strengths and Difficulties Questionnaire (SDQ) and 4 language items when the children in the BP group were 7–9 years old (T1). They were then assessed with the SDQ and the CCC-2 when they were 12–15 years old (T2). The BP group obtained poorer scores on 9/10 subscales of the CCC-2, and 70% showed language impairments in the clinical range. Language, emotional and peer problems at T1 were strongly correlated with pragmatic language impairments in adolescence. The findings indicate that assessment of language, especially pragmatics, is vital for the follow-up and treatment of behavioral problems in children and adolescents.

Internet-delivered cognitive behavior therapy (ICBT) has been tested in many research trials, but to a lesser extent directly compared to face-to-face delivered cognitive behavior therapy (CBT). We conducted a systematic review and meta-analysis of trials in which guided ICBT was directly compared to face-to-face CBT. Studies on psychiatric and somatic conditions were included. Systematic searches resulted in 13 studies (total N=1053) that met all criteria and were included in the review. There were three studies on social anxiety disorder, three on panic disorder, two on depressive symptoms, two on body dissatisfaction, one on tinnitus, one on male sexual dysfunction, and one on spider phobia. Face-to-face CBT was either in the individual format (n=6) or in the group format (n=7). We also assessed quality and risk of bias. Results showed a pooled effect size (Hedges' g) at post-treatment of -0.01 (95% CI: -0.13 to 0.12), indicating that guided ICBT and face-to-face treatment produce equivalent overall effects. Study quality did not affect outcomes. While the overall results indicate equivalence, there are still few studies for each psychiatric and somatic condition and many conditions for which guided ICBT has not been compared to face-to-face treatment. Thus, more research is needed to establish equivalence of the two treatment formats.
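
For readers unfamiliar with the effect size reported above: Hedges' g is Cohen's d with a small-sample bias correction, and per-study values are pooled by inverse-variance weighting. A minimal Python sketch (a fixed-effect pool with illustrative numbers, not the review's random-effects machinery or its data):

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Standardized mean difference with Hedges' small-sample correction."""
    sp = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp                       # Cohen's d
    j = 1 - 3 / (4 * (n1 + n2) - 9)          # bias-correction factor
    return j * d

def variance_of_g(g, n1, n2):
    """Approximate sampling variance of g (standard meta-analytic formula)."""
    return (n1 + n2) / (n1 * n2) + g ** 2 / (2 * (n1 + n2))

def pooled_effect(effects):
    """Inverse-variance (fixed-effect) pooled estimate.
    effects: list of (g, n1, n2) tuples, one per study."""
    weights = [1 / variance_of_g(g, n1, n2) for g, n1, n2 in effects]
    return sum(w * g for w, (g, _, _) in zip(weights, effects)) / sum(weights)

# two hypothetical studies with equal-sized opposite effects cancel out,
# giving a pooled g near zero, as in the review's overall finding
studies = [(0.2, 50, 50), (-0.2, 50, 50)]
print(round(pooled_effect(studies), 6))  # → 0.0
```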

Background: Internet-delivered cognitive behavior therapy (ICBT) for major depression has been tested in several trials, but only with follow-ups up to 1.5 years.

Aim: The aim of this study was to evaluate the outcome of ICBT 3.5 years after treatment completion.

Method: A total of 88 people with major depression were randomized to either guided self-help or e-mail therapy in the original trial. One-third were initially on a waiting list. Treatment was provided for eight weeks, and in this report long-term follow-up data were collected. Also included were data from post-treatment and the six-month follow-up. A total of 58% (51/88) completed the 3.5-year follow-up. Analyses were performed using a random-effects repeated-measures piecewise growth model to estimate trajectory shape over time and account for missing data.

Results: Scores on the Beck Depression Inventory (BDI) remained lowered. No differences were found between the treatment conditions. A large proportion of participants (55%) had sought and received additional treatments during the follow-up period. A majority (56.9%) of participants had a BDI score lower than 10 at the 3.5-year follow-up.

Conclusions: People with mild to moderate major depression may benefit from ICBT 3.5 years after treatment completion.

Purpose: The purpose of this research forum article is to describe the impetus for holding the First International Meeting on Internet and Audiology (October 2014) and to introduce the special research forum that arose from the meeting. Method: The rationale for the First International Meeting on Internet and Audiology is described. This is followed by a short description of the research sections and articles appearing in the special issue. Six articles consider the process of health care delivery over the Internet; this includes health care specific to hearing, tinnitus, and balance. Four articles discuss the development of effective Internet-based treatment programs. Six articles describe and evaluate Internet-based interventions specific to adult hearing aid users. Conclusion: The fledgling field of Internet and audiology is remarkably broad. The Second International Meeting on Internet and Audiology occurred in September 2015.

Objective: The aim of this study was to investigate whether future thinking would change following two forms of Internet-delivered cognitive behavior therapy (ICBT) for major depression. A second aim was to study the association between pre-post changes in future thinking and pre-post changes in depressive symptoms.

Background: Effects of psychological treatments are most often tested with self-report inventories and seldom with tests of cognitive function.

Method: We included data from 47 persons diagnosed with major depression who received either e-mail therapy or guided self-help during 8 weeks. Participants completed the future thinking task (FTT), in which they were asked to generate positive and negative events that they thought were going to happen in the future and rated the events in terms of emotion and likelihood. The FTT was completed before and after treatment. Data on depressive symptoms were also collected.

Results: FTT index scores for negative events were reduced after treatment. There was no increase for the positive events. Change scores for the FTT negative events and depression symptoms were significantly correlated.

Conclusions: We conclude that ICBT may lead to decreased negative future thinking and that changes in depression symptoms correlate to some extent with reductions in negative future thinking.

This study examined the extent to which different measures of speechreading performance correlated with particular cognitive abilities in a population of hearing-impaired people. Although the three speechreading tasks (isolated word identification, sentence comprehension, and text tracking) were highly intercorrelated, they tapped different cognitive skills. In this population, younger participants were better speechreaders, and, when age was taken into account, speech tracking correlated primarily with (written) lexical decision speed. In contrast, speechreading for sentence comprehension correlated most strongly with performance on a phonological processing task (written pseudohomophone detection) but also on a span measure that may have utilized visual, nonverbal memory for letters. We discuss the implications of this pattern.

Deafness has been associated with poor abilities to deal with digits in the context of arithmetic and memory, and language modality-specific differences in the phonological similarity of digits have been shown to influence short-term memory (STM). Therefore, the overall aim of the present thesis was to find out whether language modality-specific differences in phonological processing between sign and speech can explain why deaf signers perform at lower levels than hearing peers when dealing with digits. To explore this aim, the role of phonological processing in digit-based arithmetic and memory tasks was investigated, using both behavioural and neuroimaging methods, in adult deaf signers and hearing non-signers, carefully matched on age, sex, education and non-verbal intelligence. To make task demands as equal as possible for both groups, and to control for material effects, arithmetic, phonological processing, STM and working memory (WM) were all assessed using the same presentation and response mode for both groups. The results suggested that in digit-based STM, phonological similarity of manual numerals causes deaf signers to perform more poorly than hearing non-signers. However, for digit-based WM there was no difference between the groups, possibly due to differences in allocation of resources during WM. This indicates that similar WM for the two groups can be generalized from lexical items to digits. Further, we found that in the present work deaf signers performed better than expected and on a par with hearing peers on all arithmetic tasks, except for multiplication, possibly because the groups studied here were very carefully matched. However, the neural networks recruited for arithmetic and phonology differed between groups. During multiplication tasks, deaf signers showed an increased reliance on cortex of the right parietal lobe complemented by the left inferior frontal gyrus. 
In contrast, hearing non-signers relied on cortex of the left frontal and parietal lobes during multiplication. This suggests that while hearing non-signers recruit phonology-dependent arithmetic fact retrieval processes for multiplication, deaf signers recruit non-verbal magnitude manipulation processes. For phonology, the hearing non-signers engaged left-lateralized frontal and parietal areas within the classical perisylvian language network. In deaf signers, however, phonological processing was limited to cortex of the left occipital lobe, suggesting that sign-based phonological processing does not necessarily activate the classical language network. In conclusion, the findings of the present thesis suggest that language modality-specific differences between sign and speech can, in different ways, explain why deaf signers perform at lower levels than hearing non-signers on tasks that involve dealing with digits.

We have previously shown that deaf signers recruit partially different brain regions during simple arithmetic compared to a group of hearing non-signers, despite similar performance. Specifically, hearing individuals show more widespread activation in brain areas that have been related to the verbal system of numerical processing, i.e., the left angular and inferior frontal gyri, whereas deaf individuals engaged brain areas that have been related to the quantity system of numerical processing, i.e., the right horizontal intraparietal sulcus. This indicates that, compared to hearing non-signers, deaf signers can successfully make use of processes located in partially different brain areas during simple arithmetic. In this study, which is a conceptual replication and extension of the study presented above, the main aim is to understand similarities and differences in the neural correlates supporting arithmetic in deaf compared to hearing individuals. The primary objective is to investigate the role of the right horizontal intraparietal sulcus, the left inferior frontal gyrus, the hippocampus, and the left angular gyrus during simple and difficult arithmetic, and how these regions are connected to each other. A second objective is to explore what other brain regions support arithmetic in deaf signers. Up to 34 adult deaf signers and an equal number of hearing non-signers will be enrolled in a functional magnetic resonance imaging study that will include simple and difficult subtraction and multiplication. Brain imaging data will be analyzed using whole-brain analysis, region-of-interest analysis and connectivity analysis. This is the first study to investigate the neural underpinnings of arithmetic of different difficulties in deaf individuals.

Profoundly deaf individuals sometimes have difficulty with arithmetic and phonological tasks. In the present study we investigate whether these differences can be attributed to differences in the recruitment of neurobiological networks. Seventeen hearing non-signers (HN) and sixteen deaf signers (DS), matched on age, gender and non-verbal intelligence, took part in an fMRI study. In the scanner, three digit/letter pairs were visually presented and the participants performed six different blocked tasks tapping processing of digit and letter order, multiplication, subtraction and phonological ability. Data were analysed using two 2x2x2 ANOVAs: process (arithmetic, language) x level (high, low) x group (DS, HN). A main effect of process revealed language networks in the left inferior frontal gyrus, supramarginal gyrus, fusiform gyrus and insula. Arithmetic networks included the left middle orbital gyrus and superior medial gyrus. A main effect of level revealed low-level processing (digit/letter order) in the right middle occipital gyrus and the right precuneus, and high-level processing (subtraction/multiplication/phonological ability) in the left inferior frontal gyrus. There was no main effect of group, but there was a significant task x group interaction in the right temporal pole, which in DS (but not HN) was activated more for arithmetic than language processing (pFWE = .022) when multiplication was included in the analysis. This region is implicated in conceptual representation. These results suggest that both arithmetic and language are processed similarly by DS and HN, with possible between-group differences in the use of conceptual representation in arithmetic and language tasks.

Evidence suggests that the lag reported in mathematics for deaf signers derives from difficulties related to the verbal system of number processing as described in the triple code model. For hearing individuals the verbal system has been shown to be recruited for both arithmetic and language tasks. In the present study we investigate for the first time neuronal representations of arithmetic in deaf signers. We examine if the neural network supporting arithmetic and language, including the horizontal portion of the intraparietal sulcus (HIPS), the superior parietal lobule (SPL) bilaterally, the left angular gyrus (AG), pars opercularis (POPE) and pars triangularis (PTRI) of the left inferior frontal gyrus (IFG), is differently recruited for deaf and hearing individuals. Imaging data were collected from 16 deaf signers and 16 well-matched hearing nonsigners, using the same stimulus material for all tasks, but with different cues. During multiplication, deaf signers recruited rHIPS more than hearing non-signers, suggesting greater involvement of magnitude manipulation processes related to the quantity system, whereas there was no evidence that the verbal system was recruited. Further, there was no support for the notion of a common representation of phonology for sign and speech as previously suggested.

17. The neural basis of arithmetic and phonology in deaf signing individuals

Deafness is generally associated with poor mental arithmetic, possibly due to neuronal differences in arithmetic processing across language modalities. Here, we investigated for the first time the neuronal networks supporting arithmetic processing in adult deaf signers. Deaf signing adults and hearing non-signing peers performed arithmetic and phonological tasks during fMRI scanning. At the whole-brain level, activation patterns were similar across groups. Region-of-interest analyses showed that although both groups activated phonological processing regions in the left inferior frontal gyrus to a similar extent during both phonological and multiplication tasks, deaf signers showed significantly more activation in the right horizontal portion of the intraparietal sulcus. This region is associated with magnitude manipulation along the mental number line. This pattern of results suggests that deaf signers rely more on magnitude manipulation than hearing non-signers during multiplication, but that phonological involvement does not differ significantly between groups. Abbreviations: AAL: Automated Anatomy Labelling; fMRI: functional magnetic resonance imaging; HIPS: horizontal portion of the intraparietal sulcus; lAG: left angular gyrus; lIFG: left inferior frontal gyrus; rHIPS: right horizontal portion of the intraparietal sulcus

Congenital deafness is often compensated for by early sign language use, leading to typical language development with corresponding neural underpinnings. However, deaf individuals are frequently reported to have poorer numerical abilities than hearing individuals, and it is not known whether the underlying neuronal networks differ between groups. In the present study, adult deaf signers and hearing non-signers performed digit and letter order tasks during functional magnetic resonance imaging. We found the neuronal networks recruited in the two tasks to be generally similar across groups, with significant activation in the dorsal visual stream for the letter order task, suggesting letter identification and position encoding. For the digit order task, no significant activation was found for either of the two groups. Region-of-interest analyses on parietal numerical processing regions revealed different patterns of activation across groups. Importantly, deaf signers showed significant activation in the right horizontal portion of the intraparietal sulcus for the digit order task, suggesting engagement of magnitude manipulation during numerical order processing in this group.

Arithmetic and language processing involve similar neural networks, but the relative engagement remains unclear. In the present study we used fMRI to compare activation for phonological, multiplication and subtraction tasks, keeping the stimulus material constant, within a predefined language-calculation network including left inferior frontal gyrus and angular gyrus (AG) as well as superior parietal lobule and the intraparietal sulcus bilaterally. Results revealed a generally left lateralized activation pattern within the language-calculation network for phonology and a bilateral activation pattern for arithmetic, and suggested regional differences between tasks. In particular, we found a more prominent role for phonology than arithmetic in pars opercularis of the left inferior frontal gyrus but domain generality in pars triangularis. Parietal activation patterns demonstrated greater engagement of the visual and quantity systems for calculation than language. This set of findings supports the notion of a common, but regionally differentiated, language-calculation network.

Deaf students generally lag several years behind hearing peers in arithmetic, but little is known about the mechanisms behind this. In the present study we investigated how phonological skills interact with arithmetic. Eighteen deaf signers and eighteen hearing non-signers took part in an experiment that manipulated arithmetic and phonological knowledge in the language modalities of sign and speech. Independent tests of alphabetical and native language phonological skills were also administered. There was no difference in performance between groups on subtraction, but hearing non-signers performed better than deaf signers on multiplication. For the deaf signers but not the hearing non-signers, multiplicative reasoning was associated with both alphabetical and phonological skills. This indicates that deaf signing adults rely on language processes to solve multiplication tasks, possibly because automatization of multiplication is less well established in deaf adults.

Cognitive impairment may cause difficulties in planning and initiating daily activities, as well as in remembering to do what is scheduled. This study investigates the effectiveness of an interactive web-based mobile reminder calendar, which sends text messages to the user's mobile phone as support in everyday life, for persons with cognitive impairment due to neurological injury or diagnoses. The study has a randomised controlled trial design with data collection at baseline and at follow-up sessions after two and four months. Data collection started in August 2016 and continues until December 2017. The interactive web-based mobile reminder calendar may give the support needed to remind the person and thus increase the ability to perform activities and to be independent in everyday life. Preliminary results will be presented regarding the effect the interactive web-based mobile reminder calendar has on the participants' performance of everyday life activities as well as their perceived quality of life.

35. Feasibility of an Intervention for Patients with Cognitive Impairment Using an Interactive Digital Calendar with Mobile Phone Reminders (RemindMe) to Improve the Performance of Activities in Everyday Life

The aim of this study is to strengthen the evidence base for interventions by investigating the feasibility of an intervention using an interactive digital calendar with mobile phone reminders (RemindMe) as support in everyday life. Qualitative and quantitative data were collected from participating patients (n = 8) and occupational therapists (n = 7) from three rehabilitation clinics in Sweden. The intervention consisted of delivering the interactive digital calendar RemindMe, receiving an individualized introduction, a written manual, and individual weekly conversations for two months, with follow-up assessments after two and four months. The feasibility areas of acceptability, demand, implementation, practicality, and integration were examined. Patients expressed their interest in and intention to use RemindMe and reported a need for reminders and individualized support. By using reminders for activities in everyday life, their autonomy was supported. The study also demonstrated the importance of confirming reminders and the possible role of habit formation. Occupational therapists perceived the intervention to be useful at the rehabilitation clinics, and the weekly support conversations enabled successful implementation. This study confirmed the importance of basing the intervention on, and tailoring it to, patients' needs and thus being person-centered.

Objectives: This study considered speech modified by additive babble combined with noise-suppression processing. The purpose was to determine the relative importance of the signal modifications, individual peripheral hearing loss, and individual cognitive capacity on speech intelligibility and speech quality.

Design: The participant group consisted of 31 individuals with moderate high-frequency hearing loss ranging in age from 51 to 89 years (mean = 69.6 years). Speech intelligibility and speech quality were measured using low-context sentences presented in babble at several signal-to-noise ratios. Speech stimuli were processed with a binary mask noise-suppression strategy with systematic manipulations of two parameters (error rate and attenuation values). The cumulative effects of signal modification produced by babble and signal processing were quantified using an envelope-distortion metric. Working memory capacity was assessed with a reading span test. Analysis of variance was used to determine the effects of signal processing parameters on perceptual scores. Hierarchical linear modeling was used to determine the role of degree of hearing loss and working memory capacity in individual listener response to the processed noisy speech. The model also considered improvements in envelope fidelity caused by the binary mask and the degradations to envelope caused by error and noise.

Results: The participants showed significant benefits in terms of intelligibility scores and quality ratings for noisy speech processed by the ideal binary mask noise-suppression strategy. This benefit was observed across a range of signal-to-noise ratios and persisted when up to a 30% error rate was introduced into the processing. Average intelligibility scores and average quality ratings were well predicted by an objective metric of envelope fidelity. Degree of hearing loss and working memory capacity were significant factors in explaining individual listeners' intelligibility scores for binary mask processing applied to speech in babble. Degree of hearing loss and working memory capacity did not predict listeners' quality ratings.

Conclusions: The results indicate that envelope fidelity is a primary factor in determining the combined effects of noise and binary mask processing for intelligibility and quality of speech presented in babble noise. Degree of hearing loss and working memory capacity are significant factors in explaining variability in listeners’ speech intelligibility scores but not in quality ratings.
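
The binary mask strategy manipulated in this study can be sketched as follows: a time-frequency cell is retained when its local SNR exceeds a criterion, and the two processing parameters correspond to flipping a fraction of mask cells (error rate) and attenuating, rather than zeroing, the rejected cells (attenuation value). A hypothetical NumPy sketch (function names and numbers are ours, not the authors'):

```python
import numpy as np

def ideal_binary_mask(speech_tf, noise_tf, criterion_db=0.0):
    """1 where the local speech-to-noise ratio exceeds the criterion, else 0."""
    snr_db = 10 * np.log10((np.abs(speech_tf) ** 2 + 1e-12) /
                           (np.abs(noise_tf) ** 2 + 1e-12))
    return (snr_db > criterion_db).astype(float)

def apply_mask(mixture_tf, mask, error_rate, atten_db, rng):
    """Flip a fraction of mask cells (the error-rate parameter), then
    attenuate rejected cells by atten_db (the attenuation parameter)."""
    flips = rng.random(mask.shape) < error_rate
    noisy_mask = np.where(flips, 1.0 - mask, mask)
    gain = np.where(noisy_mask == 1.0, 1.0, 10 ** (-atten_db / 20))
    return gain * mixture_tf

# toy 1x2 time-frequency grid: first cell speech-dominated, second noise-dominated
speech = np.array([[2.0, 0.1]])
noise = np.array([[0.1, 2.0]])
mask = ideal_binary_mask(speech, noise)
out = apply_mask(speech + noise, mask, error_rate=0.0, atten_db=10.0,
                 rng=np.random.default_rng(0))
# the speech-dominated cell passes unchanged; the noise-dominated cell is cut by 10 dB
```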


Several recent studies have shown a relationship between working memory and the ability of older adults to benefit from specific advanced signal processing algorithms in hearing aids. In this study, we quantify tradeoffs between benefit due to noise reduction and the perceptual costs associated with distortion caused by the noise reduction algorithm. We also investigate the relationship between these tradeoffs and working memory abilities. Speech intelligibility, speech quality, and perceived listening effort were measured in a cohort of elderly adults with hearing loss. Test materials were low-context sentences presented in fluctuating noise conditions at several signal-to-noise ratios. Speech stimuli were processed with a binary mask noise-reduction strategy. The amount of distortion produced by the noise reduction algorithm was parametrically varied by manipulating two binary mask parameters: error rate and attenuation rate. Working memory was assessed with a reading span test. Results will be discussed in terms of the extent to which intelligibility, quality, and effort ratings are explained by the amount of distortion and/or noise and by working memory ability. [Funded by NIH, Oticon, and GN ReSound.]

and profound hearing impairment treated by cochlear implants (CI). In this study we explore this relationship in sixteen Swedish children with CI. We found that over 60% of the children with CI performed at the level of their hearing peers in a reading comprehension test. Demographic factors were not predictive of reading comprehension, but a complex working memory task was. Reading percentile was significantly correlated with the working memory test, but no other correlations between reading and cognitive/linguistic factors remained significant after age was factored out. Individual results from a comparison of the two best and the two poorest readers corroborate group results, confirming the important role of working memory for reading as measured by comprehension of words and sentences in this group of children.

42. Quality and readability of English-language internet information for aphasia

Purpose: Little is known about the quality and readability of treatment information in specific neurogenic disorders, such as aphasia. The purpose of this study was to assess quality and readability of English-language Internet information available for aphasia treatment. Method: Forty-three aphasia treatment websites were aggregated using five different country-specific search engines. Websites were then analysed using quality and readability assessments. Statistical calculations were employed to examine website ratings, differences between website origin and quality and readability scores, and correlations between readability instruments. Result: Websites exhibited low quality, with few websites obtaining Health On the Net (HON) certification or clear, thorough information as measured by the DISCERN. Regardless of website origin, readability scores were also poor. Approximate educational levels required to comprehend information on aphasia treatment websites ranged from 13 to 16 years of education. Significant differences were found between website origin and readability measures, with higher levels of education required to understand information on websites of non-profit organisations. Conclusion: Current aphasia treatment websites were found to exhibit low levels of quality and readability, creating potential accessibility problems for people with aphasia and significant others. Websites including treatment information for aphasia must be improved in order to increase information accessibility.
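Readability grades of the kind reported above (13 to 16 years of education) are typically produced by formula-based instruments. As a minimal sketch, the Flesch-Kincaid grade level (one common instrument; the abstract does not name which instruments were used) can be computed from sentence length and syllable counts; the syllable counter here is a crude vowel-group heuristic, adequate only for illustration.

```python
import re

def count_syllables(word):
    """Crude vowel-group syllable estimate (illustrative heuristic)."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    if word.lower().endswith("e") and n > 1:  # drop common silent 'e'
        n -= 1
    return max(n, 1)

def flesch_kincaid_grade(text):
    """Flesch-Kincaid grade level:
    0.39 * (words/sentence) + 11.8 * (syllables/word) - 15.59.
    Scores of roughly 13-16 correspond to college-level reading."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words)) - 15.59)
```

Short, simple sentences score near (or below) grade 0, while the long, polysyllabic prose typical of health websites scores well into the college range, which is the accessibility problem the study identifies.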

The mandate of the International Commission on Biological Effects of Noise (ICBEN) is to promote a high level of scientific research concerning all aspects of noise-induced effects on human beings and animals. In this review, ICBEN team chairs and co-chairs summarize relevant findings, publications, developments, and policies related to the biological effects of noise, with a focus on the period 2011-2014 and for the following topics: Noise-induced hearing loss; nonauditory effects of noise; effects of noise on performance and behavior; effects of noise on sleep; community response to noise; and interactions with other agents and contextual factors. Occupational settings and transport have been identified as the most prominent sources of noise that affect health. These reviews demonstrate that noise is a prevalent and often underestimated threat for both auditory and nonauditory health and that strategies for the prevention of noise and its associated negative health consequences are needed to promote public health.

Electrophysiological feedback on activity in the auditory pathway may potentially advance the next generation of hearing aids. Conventional electroencephalographic (EEG) systems are, however, impractical during daily life and incompatible with hearing aids. Ear-EEG is a method in which the EEG is recorded from electrodes embedded in a hearing-aid-like earpiece. The method therefore provides an unobtrusive way of measuring neural activity suitable for use in everyday life. This study aimed to determine whether ear-EEG could be used to estimate hearing thresholds in subjects with sensorineural hearing loss. Specifically, ear-EEG was used to determine physiological thresholds at 0.5, 1, 2, and 4 kHz using auditory steady-state response measurements. To evaluate ear-EEG in relation to current methods, thresholds were estimated from a concurrently recorded conventional scalp EEG. The threshold detection rate for ear-EEG was 20% lower than the detection rate for scalp EEG. Thresholds estimated using in-ear referenced ear-EEG were found to be elevated by an average of 5.9, 2.3, 5.6, and 1.5 dB relative to scalp thresholds at 0.5, 1, 2, and 4 kHz, respectively. No differences were found in the variance of means between in-ear ear-EEG and scalp EEG. For in-ear ear-EEG, auditory steady-state response thresholds were found at 12.1 to 14.4 dB sensation level with an intersubject variation comparable to that of behavioral thresholds. Collectively, it is concluded that although further refinement of the method is needed to optimize the threshold detection rate, ear-EEG is a feasible method for hearing threshold level estimation in subjects with sensorineural hearing impairment.
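The auditory steady-state response (ASSR) measurements above rest on detecting a response component at the stimulus modulation frequency in the EEG spectrum. One common detection statistic, assumed here for illustration since the abstract does not specify the study's exact statistics, compares spectral power at the target frequency against the mean power of neighboring bins (an F-ratio test).

```python
import numpy as np

def assr_detected(eeg, fs, mod_freq, n_neighbors=60, f_crit=4.0):
    """Sketch of an F-ratio ASSR detector: power at the modulation
    frequency divided by mean power of nearby bins. `n_neighbors`
    and `f_crit` are illustrative defaults, not the study's values."""
    spectrum = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(len(eeg), 1.0 / fs)
    target = int(np.argmin(np.abs(freqs - mod_freq)))
    lo = max(target - n_neighbors // 2, 1)
    hi = target + n_neighbors // 2
    # neighboring bins, excluding the target bin itself
    neighbors = np.r_[spectrum[lo:target], spectrum[target + 1:hi + 1]]
    f_ratio = spectrum[target] / neighbors.mean()
    return f_ratio > f_crit, f_ratio
```

In a threshold procedure, stimulus level is stepped down and the lowest level still yielding a detected response is taken as the physiological threshold; the lower detection rate reported for ear-EEG corresponds to this statistic failing to reach criterion more often.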

Purpose: Preferences for patient-centeredness are an important indicator in healthcare service delivery. However, they remain largely unexplored in the field of communication science and disorders. This study investigated speech-language pathologists' (SLPs') preferences for patient-centeredness. Method: The study involved a cross-sectional survey design. SLPs (n = 102) fully completed the modified Patient-Practitioner Orientation Scale (PPOS; Krupat et al., 2000) and also provided demographic details. Data were analyzed using descriptive statistics, correlation, and linear regression methods. Results: Mean PPOS scores indicated that SLPs value patient-centeredness. There were strong positive correlations between the sharing and caring subscales and the full scale. Results from the linear regression modeling suggested no relationship between demographic factors and preferences for patient-centeredness. Conclusions: SLPs value patient-centeredness, although there may be regional and cultural variations. Qualitative investigations may help uncover dimensions of patient-centeredness that were not captured in the PPOS scale. In addition, further research should explore congruence in preferences for patient-centeredness among SLPs and patients.

The audiogram predicts less than a third of the variance in speech reception thresholds (SRTs) for hearing-impaired (HI) listeners properly fit with individualized frequency-dependent gain. The remaining variance is often attributed to a combination of suprathreshold distortion in the auditory pathway and non-auditory factors such as cognitive processing. Distinguishing between these factors requires a measure of suprathreshold auditory processing to account for the non-cognitive contributions. Preliminary results in 12 HI listeners identified a correlation between spectrotemporal modulation (STM) sensitivity and speech intelligibility in noise presented over headphones. The current study assessed the effectiveness of STM sensitivity as a measure of suprathreshold auditory function to predict free-field SRTs in noise for a larger group of 47 HI listeners with hearing aids.

SRTs were measured for Hagerman sentences presented at 65 dB SPL in stationary speech-weighted noise or four-talker babble. Pre-recorded speech and masker stimuli were played through a small anechoic chamber equipped with a master hearing aid programmed with individualized gain. The output from an IEC711 Ear Simulator was played binaurally through insert earphones. Three processing algorithms were examined: linear gain, linear gain plus noise reduction, or fast-acting compressive gain.

STM stimuli consist of spectrally rippled noise with spectral-peak frequencies that shift over time. STM with a 2-cycle/octave spectral-ripple density and a 4-Hz modulation rate was applied to a 2-kHz lowpass-filtered pink-noise carrier. Stimuli were presented over headphones at 80 dB SPL (±5-dB roving). The threshold modulation depth was estimated adaptively in a two-alternative forced-choice task.

STM sensitivity was strongly correlated (R2 = 0.48) with the global SRT (i.e., the SRTs averaged across masker and processing conditions). The high-frequency pure-tone average (3-8 kHz) and age together accounted for 23% of the variance in global SRT. STM sensitivity accounted for an additional 28% of the variance in global SRT (total R2 = 0.51) when combined with these two other metrics in a multiple-regression analysis. Correlations between STM sensitivity and SRTs for individual conditions were weaker for noise reduction than for the other algorithms, and marginally stronger for babble than for stationary noise.

The results are discussed in the context of previous work suggesting that STM sensitivity for low rates and low carrier frequencies is impaired by a reduced ability to use temporal fine-structure information to detect slowly shifting spectral peaks. STM detection is a fast, simple test of suprathreshold auditory function that accounts for a substantial proportion of variability in hearing-aid outcomes for speech perception in noise.
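An STM stimulus of the kind described (a spectral ripple of a given density, in cycles/octave, whose peaks drift over time at a given rate, in Hz) is commonly synthesized as a sum of random-phase tones with a sinusoidal amplitude envelope in log-frequency and time. The sketch below uses that standard construction; details of the study's exact stimuli (e.g., the pink-noise spectral weighting and level roving) are simplified.

```python
import numpy as np

def stm_stimulus(dur=1.0, fs=16000, f_lo=100, f_hi=2000,
                 density=2.0, rate=4.0, depth=1.0, n_tones=200, rng=None):
    """Generate a spectrotemporal-modulation (STM) stimulus from
    log-spaced random-phase tones up to the 2-kHz cutoff, with a
    spectral ripple (density, cycles/octave) drifting over time
    (rate, Hz). `depth` is the modulation depth (0 = unmodulated)."""
    rng = np.random.default_rng(rng)
    t = np.arange(int(dur * fs)) / fs
    freqs = np.logspace(np.log10(f_lo), np.log10(f_hi), n_tones)
    phases = rng.uniform(0, 2 * np.pi, n_tones)
    x = np.zeros_like(t)
    for f, ph in zip(freqs, phases):
        # ripple phase advances with time (rate) and log-frequency (density)
        env = 1 + depth * np.sin(
            2 * np.pi * (rate * t + density * np.log2(f / f_lo)))
        x += env * np.sin(2 * np.pi * f * t + ph)
    return x / np.max(np.abs(x))  # normalize to unit peak
```

In the adaptive task, `depth` would be varied from trial to trial until the listener can just distinguish the modulated stimulus (depth > 0) from an unmodulated reference (depth = 0).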

50. Spectrotemporal Modulation Sensitivity as a Predictor of Speech-Reception Performance in Noise With Hearing Aids

The audiogram predicts <30% of the variance in speech-reception thresholds (SRTs) for hearing-impaired (HI) listeners fitted with individualized frequency-dependent gain. The remaining variance could reflect suprathreshold distortion in the auditory pathways or nonauditory factors such as cognitive processing. The relationship between a measure of suprathreshold auditory function, spectrotemporal modulation (STM) sensitivity, and SRTs in noise was examined for 154 HI listeners fitted with individualized frequency-specific gain. SRTs were measured for 65-dB SPL sentences presented in speech-weighted noise or four-talker babble to an individually programmed master hearing aid, with the output of an ear-simulating coupler played through insert earphones. Modulation-depth detection thresholds were measured over headphones for STM (2 cycles/octave density, 4-Hz rate) applied to an 85-dB SPL, 2-kHz lowpass-filtered pink-noise carrier. SRTs were correlated with both the high-frequency (2-6 kHz) pure-tone average (HFA; R2 = .31) and STM sensitivity (R2 = .28). Combined with the HFA, STM sensitivity significantly improved the SRT prediction (ΔR2 = .13; total R2 = .44). The remaining unaccounted variance might be attributable to variability in cognitive function and other dimensions of suprathreshold distortion. STM sensitivity was most critical in predicting SRTs for listeners <65 years old or with HFA <53 dB HL. Results are discussed in the context of previous work suggesting that STM sensitivity for low rates and low-frequency carriers is impaired by a reduced ability to use temporal fine-structure information to detect dynamic spectra. STM detection is a fast test of suprathreshold auditory function for frequencies <2 kHz that complements the HFA to predict variability in hearing-aid outcomes for speech perception in noise.
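The incremental-variance logic above (R2 for the HFA alone, then the gain in R2 when STM sensitivity is added) is a standard hierarchical regression. A minimal sketch with synthetic data follows; the coefficients, spreads, and noise level are invented for illustration and are not the study's data.

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an ordinary least-squares fit with an intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid.var() / y.var()

# hypothetical listeners: SRT driven by both predictors plus noise
rng = np.random.default_rng(0)
n = 154
hfa = rng.normal(50, 10, n)   # high-frequency pure-tone average, dB HL
stm = rng.normal(-10, 3, n)   # STM modulation-depth threshold, dB
srt = 0.1 * hfa + 0.5 * stm + rng.normal(0, 1.5, n)

r2_hfa = r_squared(hfa[:, None], srt)                   # step 1: HFA only
r2_both = r_squared(np.column_stack([hfa, stm]), srt)   # step 2: add STM
delta_r2 = r2_both - r2_hfa   # STM's incremental contribution
```

Because STM sensitivity here is generated independently of the HFA, its contribution to the SRT is invisible to the first model, and `delta_r2` captures exactly the complementary predictive power the abstract attributes to the STM test.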