Summary

Previous studies of emotion recognition suggest that detection of disgust relies on processing within the basal ganglia and insula. Research involving individuals with symptomatic and pre-diagnostic Huntington's disease (HD), a disease with known basal ganglia atrophy, has generally indicated a relative impairment in recognizing disgust. However, some data have suggested that recognition of other emotions (particularly fear and anger) may also be affected in HD, and a recent study found fear recognition deficits in the absence of other emotion-recognition impairments, including disgust. To further examine emotion recognition in HD, we administered a computerized facial emotion recognition task to 475 individuals with the HD CAG expansion and 57 individuals without. Logistic regression was used to examine associations of emotion recognition performance with estimated proximity to clinical diagnosis (based on CAG repeat length and current age) and striatal volumes. Recognition of anger, disgust, fear, sadness and surprise (but not happiness) was associated with estimated years to clinical diagnosis; performance was unrelated to striatal volumes. Compared to a CAG-normal control group, the CAG-expanded group demonstrated significantly less accurate recognition of all negative emotions (anger, disgust, fear, sadness). Additionally, participants with more pronounced motor signs of HD were significantly less accurate at recognizing negative emotions than were individuals with fewer motor signs. Findings indicate that recognition of all negative emotions declines early in the disease process, and poorer performance is associated with closer proximity to clinical diagnosis. In contrast to previous results, we found no evidence of relative impairments in recognizing disgust or fear, and no evidence to support a link between the striatum and disgust recognition.

Keywords: presymptomatic Huntington's disease; emotion recognition; striatum; disgust; Predict-HD

Several studies of emotion recognition have led to the claim that dissociable neural substrates underlie the processing of basic emotions (Phillips et al., 1997; Calder et al., 2001). One prominent set of findings has indicated that recognition of disgust stimuli relies on processing within the basal ganglia and insula. Selective deficits in the recognition of disgust stimuli have been observed in a patient with circumscribed damage to these regions (Calder et al., 2000) and in clinical groups with known or hypothesized basal ganglia and/or insular cortex dysfunction, including Wilson's disease (Wang et al., 2003) and Obsessive-Compulsive disorder (Sprengelmeyer et al., 1997a). Furthermore, fMRI studies of healthy controls (Phillips et al., 1997, 1998; Sprengelmeyer et al., 1998) have demonstrated activation in the insula and basal ganglia during the presentation of disgust stimuli.

Huntington's disease (HD) is a genetic disease caused by expansion of a CAG trinucleotide repeat within a gene on chromosome 4 (hereafter referred to as CAG-exp). This CAG-exp typically leads to cognitive decline, motor abnormalities and psychiatric symptoms, usually apparent by middle adulthood, as well as severe basal ganglia neurodegeneration measurable on MRI (Aylward et al., 2000; Thieben et al., 2002). Emotion recognition studies in HD and in individuals with CAG-exp who have not yet manifested clinical signs of HD (hereafter referred to as pre-HD) have been argued to support claims that the basal ganglia are necessary neural structures for the processing of disgust stimuli. Four published studies have examined facial emotion processing in people with clinically manifest HD. Three of these studies found impaired recognition of disgust stimuli, relative to other emotions (Sprengelmeyer et al., 1996; Wang et al., 2003; Montagne et al., 2006). In contrast, Milders et al. (2003) did not find evidence for a relative impairment in disgust, but instead showed relatively worse performance for fear stimuli in their HD sample.

Consistent with the findings in manifest HD, three of the four studies of emotion recognition in pre-HD also found relative impairments of disgust (Gray et al., 1997; Hennenlotter et al., 2004; Sprengelmeyer et al., 2006). In addition, Hennenlotter et al. (2004) found reduced fMRI activation of basal ganglia and insula during the processing of disgust stimuli in their pre-HD sample, lending further support to the claim that disgust processing is supported by these regions. However, the results of the Milders et al. (2003) study were, again, in contrast to other reports; they found intact emotion recognition in their pre-HD sample for all emotions.

Thus, both the nature of emotion recognition deficits and their specificity in HD and pre-HD are still unclear. Most studies of HD have found that recognition of facial expressions of emotions other than disgust is also impaired, with disgust recognition relatively more affected (Sprengelmeyer et al., 1996; Wang et al., 2003; Montagne et al., 2006). The progressive nature of HD has also not been systematically taken into account in previous studies. The specificity of these deficits could vary by disease stage, with selectivity for a deficit in disgust early on, and a broadening of impairment to other emotions as the disease progresses. Alternatively, relative deficits in disgust recognition could occur in the context of more subtle deficits in other emotions, and this picture may progress uniformly with disease, maintaining a relative deficit in disgust. It is also possible that disgust is not affected more than other emotions; a recent study found that the early HD group was impaired at recognizing all four basic negative emotions (disgust, anger, fear and sadness), with the greatest impairment in fear, whereas the pre-HD group did not differ from controls for recognition of any emotion (Milders et al., 2003). Methodological differences do not easily account for this different set of findings. The same set of stimuli, the Ekman and Friesen (1976) emotion expression faces, served as the source for study stimuli in the Milders et al. study and in previous studies that had shown relative impairment in disgust recognition in diagnosed HD patients (Sprengelmeyer et al., 1996) and pre-HD (Gray et al., 1997). The subject characteristics of these samples were also comparable with regard to age, IQ and years since diagnosis in the HD groups.

Nonetheless, there are other methodological inconsistencies and limitations that may account for differences in study results. For example, Milders et al. (2003) highlighted that different analysis approaches can shift the results from supporting a relative deficit in disgust to suggesting a more general emotion recognition deficit. Also of interest, previous studies of emotion recognition in HD and pre-HD have neither included neutral stimuli nor provided neutral as a response choice. Because neutral facial stimuli are commonly confused with sadness stimuli (Katsikitis, 1997), this may inflate the accuracy of sadness recognition. One final methodological issue is that sample sizes in previous studies have been very limited. The size of HD and pre-HD groups has ranged from only 6 to 20 subjects, thus severely limiting the power of these studies to detect all but large group differences, and also increasing the possibility that findings reflect the influence of a few extreme cases.

The current study addresses some of these limitations by examining emotion recognition in a large, well-characterized sample of pre-HD participants from the Predict-HD project. Predict-HD is a prospective longitudinal study of neurobiological, cognitive and psychiatric changes in individuals with the HD CAG-exp, but who have not yet reached the stage of clinical diagnosis (based on the presence of unequivocal HD motor signs). At the time of this report, the sample included 478 pre-HD participants studied at their baseline assessment; as such, we had ample power to detect even very subtle abnormalities in emotion recognition. To examine possible relationships between emotion recognition and other participant characteristics, we compared pre-HD to controls without the HD CAG-exp, and also examined recognition accuracy of facial emotions in relation to estimates of proximity to clinical HD (Langbehn et al., 2004), basal ganglia volumes, neurological signs and facial (non-emotion) recognition.

Material and Methods

Participants

The study visit required about one full day of participation (over 1 or 2 days) including an approximately 3-h cognitive assessment, a neurological exam, structural MRI, blood draw, medical history and psychiatric/behavioural questionnaires. Data were collected as part of the baseline visit of the Predict-HD study (Paulsen et al., 2006a) at 24 sites in the United States, Canada and Australia. At the time this manuscript was written, 541 participants had been recruited using contacts through neurological clinics and genetic testing programs, as well as by regional informational talks, brochure distributions, advertisement through HD lay organizations, mail solicitations and recruiter attendance at conferences. Of the participants included in the baseline sample, 532 had complete sets of demographic and experimental data; 475 had the CAG-expansion for HD (CAG-Exp; CAG length >39) and 57 had CAG lengths in the normal range (CAG-Norm; CAG length <30).

Inclusion criteria for Predict-HD required participants to have undergone genetic testing for the presence of the CAG expansion in the HD gene. Exclusion criteria included: clinical evidence of unstable medical or psychiatric illness; alcohol or drug abuse within the previous year; learning disability or mental retardation requiring special education; history of other central nervous system disease or event, such as seizures or head trauma; pacemaker or metallic implants; age <26 years; prescribed antipsychotic medications within the past 6 months and use of phenothiazine-derivative anti-emetic medications for at least 3 months. Other prescribed, over-the-counter and natural remedies were not restricted. Individuals with symptoms of psychological distress were not excluded unless there was clear evidence of instability, such as a recent psychiatric hospitalization or current suicidal ideation.

Participant characterization

Neurological examination

All participants underwent a neurological exam by raters who were trained as part of the study protocol to complete a standardized exam, the Unified Huntington's Disease Rating Scale (UHDRS; Huntington Study Group, 1996). Based on results of the motor exam, the rater selected a confidence rating on a scale of 0–4 to represent the presence or absence of motor abnormalities and their opinion of the likelihood that the presence of abnormalities was indicative of HD. The ratings were defined as: (0) normal (no abnormalities, N = 159), (1) non-specific motor abnormalities (<50% confidence that the participant had sufficient motor abnormalities to warrant a diagnosis of HD, N = 211), (2) motor abnormalities that may be signs of HD (50–89% confidence, N = 70), (3) motor abnormalities that are likely signs of HD (90–98% confidence, N = 24), (4) motor abnormalities that are unequivocal signs of HD (≥99% confidence, N = 11). Given the small number of participants with the confidence level rating of 4 (n = 11), and given our focus on the period prior to diagnosis of HD, participants with a motor rating of 4 were excluded from the primary analyses.

Estimate of premorbid intellectual function

Participants also completed the American National Adult Reading Test (ANART; Schwartz and Saffran, 1987, cited in Grober and Sliwinski, 1991) as an estimate of general intellectual functioning (estimated IQ). There were no significant differences in age, education level or estimated IQ between the CAG-Exp and CAG-Norm groups (all P > 0.05; see Table 1).

Symptoms of depression

Participants completed the Beck Depression Inventory-II (BDI-II; Beck et al., 1996), a self-report of current symptoms of depression. The mean total score of the CAG-Exp group was higher than the mean of the CAG-Norm group [t (104) = 5.36, P < 0.0001; see Table 1]. Furthermore, a one-way ANOVA of BDI-II score including CAG-Norm and the CAG-diagnostic confidence level subgroups indicated an overall effect of group (F = 5.37, P < 0.001). Post hoc comparisons indicated that the CAG-Exp subgroups with confidence ratings of 1, 2 and 3 had higher mean BDI-II scores than the CAG-Norm group (all P < 0.05). It is important to note, however, that the mean BDI-II scores for all groups (Table 1) were in the clinically normal range (0–13).

Estimate of years to diagnosis of HD

We estimated the proximity to clinical diagnosis of HD for each CAG-Exp participant based on current age and CAG repeat length as per the formulas described by Langbehn et al. (2004). The probability of clinical diagnosis within the next 5 years can be estimated from their survival-analysis-based formulation of estimated years to diagnosis. We used this probability rather than estimated years to diagnosis because preliminary analyses of relationships with striatal volumes and other clinical variables suggested that more linear, and thus more easily modelled, relationships are likely on the probability scale.
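Though the study's analyses were run in SAS, the conditional-probability computation can be sketched in a few lines. The parameter values below are the widely cited point estimates from Langbehn et al. (2004), which model onset age as logistic with CAG-dependent mean and variance; the exact parameterization should be verified against the original paper before any real use.

```python
import math

def p_diagnosis_within(cag, age, horizon=5.0):
    """Conditional probability of HD diagnosis within `horizon` years,
    given CAG repeat length and current (still diagnosis-free) age.

    Sketch of the Langbehn et al. (2004) parametric survival model;
    parameter values are the published point estimates and should be
    checked against the original paper.
    """
    mu = 21.54 + math.exp(9.556 - 0.1460 * cag)    # mean onset age
    var = 35.55 + math.exp(17.72 - 0.3269 * cag)   # onset-age variance
    s = math.sqrt(var) * math.sqrt(3.0) / math.pi  # logistic scale parameter

    def survival(a):
        # P(onset age > a) under the logistic onset-age distribution
        return 1.0 - 1.0 / (1.0 + math.exp(-(a - mu) / s))

    # Probability of onset in (age, age + horizon], given no onset by `age`
    return (survival(age) - survival(age + horizon)) / survival(age)
```

Under these estimates the probability increases with both CAG length and current age, which is the monotonicity the survival formulation is meant to capture.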

Volumetric analyses of caudate nucleus and putamen

Participants underwent magnetic resonance imaging (MRI) to determine striatal volumes (see Table 1 for group means). Brain imaging data were collected using 1.5 Tesla scanners; 23 sites used a General Electric model and one site used a Siemens model. Imaging data were obtained using a standard protocol designed to optimize visualization of the basal ganglia. Total scanning time was ∼15 min. Measurements were obtained by manually drawing boundaries of the caudate and putamen, as described previously (Aylward et al., 1996, 1997). All measurements were performed by a single rater after establishing inter-rater reliability with Dr Aylward (intraclass correlation of 0.98 for caudate and 0.99 for putamen, based on 10 scans). The data were analysed in the order received. MRI data from 180 of the CAG-Exp and 12 CAG-Norm Predict-HD participants had been analysed at the time of this report. The MRI sample is smaller because of a lag induced by the extra processing steps needed to produce the volumetric data and does not reflect any other known sampling bias.

Procedure and tasks

The Benton Facial Recognition Task (Benton et al., 1994) and the emotion recognition task were completed as part of the larger cognitive assessment battery. The Disgust Scale (Haidt et al., 1994) was administered as part of a set of self-report questionnaire measures that participants filled out during the study visit.

Benton facial recognition task

This measure assessed visual spatial abilities and processing of unfamiliar faces. Using the Benton Facial Recognition stimulus book, participants were simultaneously shown a target face and a set of six faces from which they were told to select photos that depicted the person in the target photo. Six items involved selecting an identical photo and seven items involved selecting three photos of the target that differed with regard to orientation and illumination, for a maximum possible score of 27 (Table 1).

Emotion recognition task

The task required participants to view a subset of 70 Ekman and Friesen faces (1976) on a computer touchscreen display. For each trial, a photograph of a face depicting an emotional or neutral expression was displayed in the middle of the screen and seven emotion labels (anger, disgust, fear, happiness, neutral, sadness, surprise) were presented simultaneously beneath the stimulus; the array of emotion labels (arranged in alphabetical order) was consistent across trials and participants. The emotion labels were introduced during a set of seven practice trials (one for each emotion) that were completed prior to the experimental trials. For the test trials, there were 10 different facial expressions for each of the 7 labels (70 total trials), and the stimuli were presented in a random order for each participant. Participants were instructed to decide which emotion the person was feeling based on his/her facial expression and to respond by touching the selected emotion with the dominant index finger.

The face stimuli were 3.7 × 5.6 inches, and the emotion labels were each presented in a 1.6 × 0.8 inch text box. Participants sat ∼20 inches from the screen. Each trial began with the display of a circle (diameter = 1.5 inches) at the bottom centre of the screen; this was referred to as the ‘home key’ (Fig. 1). To initiate the trial, the participant was to place the dominant index finger on the home key. After 500 ms the stimulus (i.e. face) was displayed. The participant was to press and hold the home key at all times except when ready to select one of the emotion labels. After an emotion label was selected, the face stimulus and labels disappeared and the home key was again presented to initiate the next trial. The overall response time was the duration from onset of stimulus to the selection of an emotion label.

Face stimuli were displayed for a maximum of 4 s and the emotion labels were displayed for up to 8 s, allowing the participant time to respond after stimulus offset. If no response occurred by the offset of the emotion labels, the participant was to release the home key and wait for it to reappear to initiate the next trial. All trials, including those when the participant did not respond, were followed by a 500 ms intertrial interval. The procedure used in the practice trials was identical to that of the experimental trials, except that face stimuli were replaced with emotion words.

Disgust scale

Participants completed a 32-item self-report questionnaire that evaluated their responses to a variety of situations that are potentially disgust provoking. Half of the items are presented in True–False format and the other half require the participant to rate the level of disgust elicited by the statement according to the following options: not at all disgusting, slightly disgusting or very disgusting. The maximum total score for this scale is 32 (Table 1).

Equipment

The emotion recognition task was administered on an Athlon 900 MHz computer running Windows 98 Second Edition (Microsoft, Redmond, WA). The task was displayed on a KDS Pixel Touch, 17″ FST Capacitive PC Touch Monitor (Ontario, CA) using TouchWare for Windows interface software (Version 5.4; MicroTouch Systems, Methuen, MA). The monitor's 16″ viewable area was set to a resolution of 1024 × 768 pixels. Participant responses were recorded via the touchscreen.

Statistical methods

The distributions of accuracy scores for each emotion recognition task (range = 0–10 for each emotion) were such that a number of participants performed at the maximum score; these data violated mathematical assumptions required in the context of regression or ANOVA. Therefore, we treated these as ordered categorical data rather than continuous and analysed the data using proportional odds cumulative logit models (McCullagh, 1980; Agresti, 1990), the most common method for modelling ordered data. Diagnostic checks showed that the data fit this model reasonably well. (For readers interested in additional technical detail: the regression coefficients derived from these models can be interpreted as linear regression parameters for an underlying, latent, continuous variable that is assumed to be approximated by the ordinal categories. To the scientist familiar with latent variables, this interpretation may be more useful than the alternative odds ratio interpretation of these models, which is more commonly used in public health research.)
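As an illustration of this model class (not the authors' SAS implementation), a proportional-odds cumulative logit fit can be sketched as follows; the data and all variable names here are invented for the example. The key structural assumption is a single coefficient vector shared across all category splits.

```python
import numpy as np
from scipy.optimize import minimize

def fit_proportional_odds(y, X):
    """Fit a proportional-odds cumulative logit model by maximum likelihood.

    y : (n,) int array of ordered categories 0..K-1
    X : (n, p) design matrix (no intercept column; the thresholds play
        that role)

    Model: P(y <= k | x) = logistic(theta_k - x @ beta), with one beta
    shared across all category splits (the proportional-odds assumption).
    """
    n, p = X.shape
    K = int(y.max()) + 1

    def unpack(params):
        # Re-parameterize thresholds as theta_0 plus positive increments
        # so they stay strictly ordered during optimization.
        theta = np.cumsum(np.concatenate(([params[0]],
                                          np.exp(params[1:K - 1]))))
        return theta, params[K - 1:]

    def nll(params):
        theta, beta = unpack(params)
        eta = X @ beta
        # Cumulative probabilities at each threshold, padded with 0 and 1;
        # adjacent differences give per-category probabilities.
        cdf = np.column_stack([np.zeros(n)] +
                              [1.0 / (1.0 + np.exp(-(t - eta)))
                               for t in theta] +
                              [np.ones(n)])
        probs = np.diff(cdf, axis=1)
        return -np.sum(np.log(probs[np.arange(n), y] + 1e-12))

    res = minimize(nll, np.zeros(K - 1 + p), method="BFGS")
    return unpack(res.x)
```

On simulated data generated from a latent continuous variable cut into ordered categories, the recovered beta approximates the latent-scale regression coefficient, which is the interpretation described above.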

For ease of interpretation, we converted the results of the logit models to percent correct, adjusted for several demographic and characterization measures. The conversion to adjusted percent correct yields slightly non-linear results, but the validity of P values is fully preserved. We also analysed mean response times for the correct responses to each emotion condition. In these analyses, we used the natural log of the response times to reduce skewness and thus provide a satisfactory distribution for traditional linear regression analysis.

We examined accuracy and response times for each of the seven emotions in four separate models to determine relationships to: (1) group membership (CAG-Exp versus CAG-Norm); (2) UHDRS confidence level rating; (3) estimated probability of clinical diagnosis in 5 years and (4) caudate, putamen and total striatal volumes for the CAG-Exp group (CAG-Norm sample size for the striatal measures was too small to complete this analysis). For each model, we also controlled for demographic variables, including gender, age, education and estimated intellectual ability (i.e. ANART score), as well as depression scores (i.e. BDI-II raw scores). To further narrow the focus of our analyses to facial emotion recognition abilities, we also controlled for the potential confound of individual variability in general face recognition by including as a covariate the raw score of the Benton Facial Recognition Task (Benton et al., 1994). All models were fit using SAS version 9.1 (SAS Institute Inc., Cary, NC).

Exploratory analyses

To examine the consistency of the data from our large sample with a recent report of a much smaller sample indicating that 40% of a pre-HD sample showed selective disgust recognition deficits (Sprengelmeyer et al., 2006), we completed several exploratory analyses to examine the possibility that only a subpopulation of those with the HD expansion showed a preferential deficit in recognizing disgust. The analyses included visual examination of the pairwise scatter plots of performance on all emotion pairs, and a variety of cluster analyses. We used K-means (MacQueen, 1967; Kaufman and Rousseeuw, 1990), medoid-centred (Kaufman and Rousseeuw, 1990) and fuzzy logic-based clustering techniques (Kaufman and Rousseeuw, 1990). Furthermore, we examined possible multidimensionality in deficit patterns by directly examining the observed correlation matrix among emotion scores and also by performing an ordinal exploratory factor analysis based on latent variable assumptions (Bartholomew and Knott, 1999; Muthen and Muthen, 2001).

Results

Associations between emotion recognition accuracy and indicators of HD progression

The CAG-Exp group performed worse (i.e. had fewer correct responses) than the CAG-Norm group on all four negative emotions: anger, disgust, fear and sadness (all P < 0.05; see Table 2), but the groups did not differ significantly for happiness, surprise or neutral. For each of the four negative emotions, higher diagnostic confidence ratings (i.e. greater certainty of the presence of HD-related motor signs) were significantly associated with poorer recognition performance (all P < 0.05). The CAG-Exp subgroup with a confidence rating of 0 had fewer correct responses than the CAG-Norm group for anger only (P = 0.05, Table 2). The CAG-Exp subgroups with confidence ratings of 1 and 2 performed worse than the CAG-Norm group for anger, disgust, fear and sadness, whereas those with a confidence rating of 3 had fewer correct responses for anger, fear and sadness, but not disgust.

a Adjusted percent correct for CAG-Norm may vary by up to 0.3% for the two models presented in this table, owing to slight differences in the statistical likelihoods being maximized.

We also found that a higher estimated probability of HD clinical diagnosis within 5 years was associated with a decreased probability of performing well for anger, disgust, fear, sadness and surprise (all P < 0.05). In contrast, analyses examining the association between emotion recognition and striatal volumes indicated that the overall model was not significant. Therefore, we did not examine individual associations between caudate and putamen volumes and recognition performance for individual emotions.

Emotion recognition response times

For all emotions, a higher estimated probability of HD clinical diagnosis within 5 years was associated with slower response times (all P < 0.01). However, there were no differences between the response times of the CAG-Exp group and the CAG-Norm group for any emotion. An examination of response times by CAG-Exp confidence rating subgroups revealed an overall difference for anger and happiness. For both of these emotions, the CAG-Exp subgroups with a confidence rating of 2 and 3 responded more slowly than the CAG-Norm group (all P < 0.05). For anger, the subgroup with a confidence rating of 1 also had slower RTs than the CAG-Norm group (P < 0.05). Analyses also revealed significant associations between response times and striatal volumes for all emotion categories (including neutral), except happiness (all P < 0.05).

We examined the independent associations between each demographic variable and emotion recognition performance. Lower estimated IQ (as indexed by errors on the ANART) was associated with poorer emotion recognition performance. The only exceptions were non-significant findings for fear, happiness, neutral and surprise performance in the striatal model. Less education was also associated with less accurate fear recognition across all logistic regression models. Furthermore, females had higher emotion recognition accuracy than males in all four logistic regression models for all negative emotions (anger, disgust, fear and sadness). Although the effect of age varied somewhat by model, in general, for each 10-year increase in age there was a significant reduction in emotion recognition performance for one or more emotions. The associations between demographic variables and emotion-recognition performance highlight the importance of controlling for these factors when analysing emotion recognition. In contrast to the findings for demographic variables, there were no significant associations between BDI-II scores and emotion recognition performance for any of the models.

A higher Benton Facial Recognition Test score was associated with higher emotion recognition accuracy for surprise across all logistic regression models. For some models, higher Benton scores were also related to more accurate recognition of anger, disgust, fear and sadness. Benton scores were not associated with confidence level ratings, suggesting that relationships between facial recognition and emotion recognition are not indicative of HD-related changes on both measures, but that individual differences in facial perception ability may contribute to recognition of some emotions.

Associations between emotion recognition performance and self-report of disgust experience

Given that some previous studies of HD and pre-HD have reported attenuated disgust experiences (as measured with self-report questionnaires) in addition to relative deficits in disgust recognition, we examined the self-report scores of the Disgust Scale (Haidt et al., 1994) by CAG-Exp confidence ratings, and the relationship between these scores and emotion recognition performance for all emotion categories. A regression analysis, controlling for demographic variables, revealed that UHDRS confidence level rating was not significantly related to self-reported disgust score. In addition, analysis of covariance revealed no difference in self-reported disgust score between the CAG-Exp subgroups (see Table 1 for M, SD). Furthermore, Spearman correlations, controlling for demographic variables, showed no significant relationships in the CAG-Norm group, the CAG-Exp group or the CAG-Exp subgroups between self-reported disgust scores and recognition performance for any of the emotions, including disgust.

Exploratory analyses

We completed several follow-up analyses to explore possible explanations for the discrepancy between our findings and previous results that have shown a relative or specific deficit in recognizing and experiencing disgust.

Examining the possibility of subgroups

We performed multiple checks for the existence of subgroups in our sample; none of the methods we used provided any suggestion of distinct subpopulations. Furthermore, analyses that focused on possible multidimensionality in deficit patterns (principal components and factor analyses) did not reveal any noteworthy evidence of more than one dimension. For example, a principal components analysis of the estimated polychoric correlation matrix yielded a single large eigenvalue (2.27) and five additional eigenvalues between 0.56 and 0.96, suggesting only one primary dimension and thus, no evidence of distinct subgroups.
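The eigenvalue screen described above can be sketched in a few lines. This is a generic illustration, not the study's analysis script: it uses Pearson correlations for simplicity, whereas the study used a polychoric correlation matrix, which respects the ordinal 0–10 accuracy scale.

```python
import numpy as np

def eigenvalue_screen(scores):
    """Unidimensionality check for per-emotion accuracy scores.

    scores : (n_participants, n_emotions) array.
    Returns the eigenvalues of the inter-emotion correlation matrix,
    sorted in descending order. One eigenvalue well above 1 with the
    rest at or below ~1 suggests a single dominant dimension, arguing
    against distinct deficit subgroups.
    """
    corr = np.corrcoef(scores, rowvar=False)
    return np.sort(np.linalg.eigvalsh(corr))[::-1]
```

For example, data simulated from a single common factor yield one large eigenvalue and a tail of small ones, mirroring the pattern (2.27 followed by 0.56–0.96) reported for the Predict-HD sample.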

Error analysis

To further understand the current results, we examined the frequency of each type of error that was possible during the emotion recognition task (see Appendix A for error matrix). Neutral errors occurred most frequently for sadness, and the rate of this error type was higher for CAG-Exp subgroups with higher confidence level ratings. Interestingly, the proportion of disgust responses to anger stimuli (two emotions commonly mistaken for each other) was similar to the rate at which neutral was selected during the presentation of sadness stimuli. These findings demonstrate that the recognition of some emotions (in this study, sadness) can be affected by the inclusion or exclusion of neutral as a response choice. Including neutral stimuli and a neutral response choice in the Predict-HD study may have led to less accurate performance for sadness.
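An error matrix of the kind reported in Appendix A can be built from trial-level data with a few lines. The sketch below is generic, with made-up response data in the usage comment; it is not the study's analysis code.

```python
import numpy as np

EMOTIONS = ["anger", "disgust", "fear", "happiness",
            "neutral", "sadness", "surprise"]

def error_matrix(stimuli, responses):
    """Rows = presented emotion, columns = selected label.

    Cell [i, j] is the proportion of trials on which stimulus i was
    answered with label j, so each row with data sums to 1; off-diagonal
    cells are the error rates (e.g. 'neutral' chosen for sadness faces).
    """
    idx = {e: i for i, e in enumerate(EMOTIONS)}
    counts = np.zeros((len(EMOTIONS), len(EMOTIONS)))
    for s, r in zip(stimuli, responses):
        counts[idx[s], idx[r]] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1  # leave empty rows at zero, avoid 0/0
    return counts / row_sums
```

Given per-trial stimulus and response labels, the sadness row of the returned matrix directly exposes the sadness-to-neutral confusion discussed above.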

Comparison of current results with previous HD and pre-HD studies of emotion recognition

To examine patterns of performance across studies of emotion recognition in HD and pre-HD, we compared our findings to three previous studies (Sprengelmeyer et al., 1996; Gray et al., 1997; Milders et al., 2003). To our knowledge, this set of studies constitutes all previously published English language studies of emotion recognition in HD and pre-HD that used the Ekman and Friesen stimuli and an emotion identification task similar to the current study. Although Sprengelmeyer et al. (2006) also used the Ekman and Friesen stimuli and a similar method, we were unable to include the results of this study because the data were combined with results from a second task that utilized different stimuli. Note that because Gray et al. (1997) presented fewer stimuli per emotion (4 rather than the 10 used in other studies), results for all studies were converted to mean percent correct for each emotion.

Figure 2 graphically depicts our findings and the results of the previous studies in three distinct samples: controls, prediagnostic CAG-expanded (pre-HD) and diagnosed HD. The pattern of performance across studies is very similar and largely supports our finding of a general emotion recognition deficit in HD and pre-HD. For the CAG-expanded samples, the Gray et al. results were very similar to our findings for anger, disgust and fear; but accuracy scores were much higher for recognition of sadness in their sample. The Milders et al. (2003) sample performed at a similar level to the Predict-HD CAG-Exp subgroups for fear, but had higher scores for anger, disgust and sadness. Regarding sadness recognition, inconsistencies between our findings and those of previous studies may be explained by the higher rate of neutral errors in our sample when sadness stimuli were presented. When viewed in the context of other HD and pre-HD studies, the exceptionally poor disgust performance observed in the Sprengelmeyer et al. (1996) sample emerges as a divergent finding. Nonetheless, the Sprengelmeyer HD sample also demonstrated difficulty recognizing other negative emotions; their findings are thus consistent with our results and those of other studies, which suggest a more general emotion recognition deficit.

Mean percent correct for controls, pre-HD and HD groups for each emotion is shown for three previous studies in comparison to the current study. Error bars represent the standard error.

Discussion

This study of a large, well-characterized sample clearly demonstrates decline in the recognition of all negative emotions, including anger, disgust, fear and sadness, beginning early in the prodromal, or pre-diagnostic, period of HD. Our findings contrast with previous reports that describe a relative or more severe deficit in recognizing disgust in HD (Sprengelmeyer et al., 1996; Sprengelmeyer et al., 1997b; Wang et al., 2003; Montagne et al., 2006) and pre-HD (Gray et al., 1997; Hennenlotter et al., 2004; Sprengelmeyer et al., 2006). Our findings do indicate robust associations between CAG-based estimates of proximity to clinical diagnosis and accuracy of recognition of all emotions except happiness, suggesting that individuals who are closer to clinical diagnosis have a general impairment in recognition of emotional expressions.

In keeping with our accuracy results, we found that higher probability of clinical diagnosis within 5 years was associated with slower response times for all emotions. Striatal volumes were also associated with response times for all emotions, except happiness. It is unclear if these findings reflect difficulty identifying emotion expressions or if slower response times are due, in part, to changes in motor function or generally slowed processing associated with closer proximity to onset and striatal changes. We also found that, although accuracy for all negative emotions was affected in pre-HD individuals, response times did not differ between pre-HD and control groups for most emotions. Specifically, response times were slower in the pre-HD group, relative to controls, for happiness and anger but not other negative emotions. The discrepancy between our accuracy and response time data is in contrast to Sprengelmeyer et al.'s (2006) findings of analogous accuracy and response time data, both showing impaired performance only for disgust. Other previous studies have not reported response time data and thus, it is not yet clear how emotion recognition accuracy and response times are related in HD and pre-HD participants.

A graphical comparison (Fig. 2) of the unadjusted mean scores from our study with the results of previous HD and pre-HD studies that employed an emotion recognition paradigm using the same stimuli (Ekman and Friesen faces) revealed that findings across studies are quite similar. Although our analyses involved adjusted mean scores, we present unadjusted means in this comparison in order to be consistent with previous studies. It might be argued that the findings of the current study differ from previous studies because we employed different analyses and/or because we adjusted percent correct for several demographic and characterization measures. We argue that this is not the case because, as Fig. 2 shows, unadjusted means are very similar to the adjusted means and our findings are highly consistent with previous findings. The comparison with previous studies fails to support the notion that disgust is selectively or relatively impaired in HD or pre-HD. Sprengelmeyer et al.'s (1996) small sample of HD patients (n = 13) is the only group that clearly demonstrated a relative impairment in disgust; that is, although the group showed impaired recognition performance for other negative emotions, recognition of disgust was disproportionately worse. As such, the preponderance of the evidence indicates that our finding of a general emotion recognition deficit is consistent with the results of previous studies.

We considered the possibility that a proportion of pre-HD individuals have a specific deficit in recognizing disgust, but that a large sample size could mask the effects of a subgroup. In one previous study, a selective impairment in disgust was identified in only a subsample of the pre-HD participants (40%), whereas another subsample (40%) performed normally and 20% had a general emotion recognition deficit (Sprengelmeyer et al., 2006). Using several different statistical methods, we were unable to find evidence for subgroups and thus, no support for even a small group of pre-HD individuals with recognition deficits for disgust only.
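The subgroup question above can be made concrete with a simple screening heuristic, sketched below with invented data: each participant's disgust accuracy is compared to their mean accuracy on the other negative emotions, and unusually low difference scores are flagged. This is only an illustrative sketch, not one of the statistical methods actually used in the study; the cutoff and data are assumptions.

```python
# Hypothetical screen for a disgust-specific subgroup. A participant with a
# *general* deficit scores low on all negative emotions, so their difference
# score stays near zero; only a disproportionate disgust deficit is flagged.
from statistics import mean, stdev

def disgust_deficit_scores(participants):
    """Disgust accuracy minus mean accuracy on the other negative emotions."""
    return [p["disgust"] - mean([p["anger"], p["fear"], p["sadness"]])
            for p in participants]

def flag_selective_deficit(scores, z_cutoff=-1.5):
    """Flag participants whose difference score is unusually low relative to
    the sample distribution (z-score below the cutoff)."""
    mu, sd = mean(scores), stdev(scores)
    return [(s - mu) / sd < z_cutoff for s in scores]

# Invented percent-correct data; the second participant has a selective deficit
sample = [
    {"disgust": 80, "anger": 78, "fear": 75, "sadness": 82},
    {"disgust": 40, "anger": 79, "fear": 74, "sadness": 80},
    {"disgust": 76, "anger": 74, "fear": 70, "sadness": 78},
    {"disgust": 82, "anger": 80, "fear": 79, "sadness": 81},
    {"disgust": 70, "anger": 72, "fear": 68, "sadness": 74},
    {"disgust": 85, "anger": 83, "fear": 80, "sadness": 84},
    {"disgust": 78, "anger": 77, "fear": 73, "sadness": 80},
    {"disgust": 74, "anger": 75, "fear": 71, "sadness": 76},
]
flags = flag_selective_deficit(disgust_deficit_scores(sample))
```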

Neurobiological implications of a general emotion recognition deficit in pre-HD

The decline in general emotion recognition abilities in our pre-HD sample is likely attributable to early neurobiological changes. Our findings suggest that deficits in emotion recognition are not associated with changes in striatal volume. It is possible, however, that changes in emotion recognition abilities are associated with functional abnormalities in striatal circuitry such as the alteration or loss of synaptic connections or white matter reductions (Rosas et al., 2003; Beglinger et al., 2005; Reading et al. 2005; Paulsen et al., 2006b) that were not measured by volumetric MRI in the current study. Alternatively, it may be that decreased emotion recognition ability is associated with functional or structural changes in brain region(s) other than the caudate or putamen. Recent findings have identified structural (Thieben et al., 2002; Peinemann et al., 2005; Rosas et al., 2005; Paulsen et al., 2006b) and functional (Paulsen et al., 2004; Reading et al., 2004; Feigin et al., 2006) brain changes in regions other than the striatum in pre-HD including the cingulate cortex (Reading et al., 2004) and the insular cortex (Rosas et al., 2005). The insular cortex has also been implicated in the recognition and experience of disgust in fMRI studies of healthy subjects (Phillips et al., 1997, 1998; Sprengelmeyer et al., 1998; Hennenlotter et al., 2004) and behavioural studies of patients with known or suspected insular damage (Calder et al., 2000; Shapira et al., 2003). Thus, although our results failed to show a relationship between striatal volume and facial emotion recognition performance, we did not measure other brain regions and thus cannot evaluate the role of early changes in other brain structures in the recognition of emotional expressions.

Consistent with the possibility that extra-striatal brain changes may underlie the emotion recognition deficits in HD and pre-HD, evidence suggests that emotion processing involves several interconnected brain structures and regions including amygdala, striatum, orbitofrontal cortex, right somatosensory areas, insular cortex, occipito-temporal cortex and others (Johnston et al., 2001; Adolphs, 2002). Studies using fMRI (e.g. Phillips et al., 1998; Hennenlotter et al., 2004) have shown several areas of activation in response to specific types of emotional stimuli and although some facial expressions appear to elicit relatively greater activation in distinct regions (e.g. amygdala activation to fear stimuli), there is overlap in activation patterns, particularly for the negative emotions. Researchers working with a neural network model of emotion recognition (Johnston et al., 2001) demonstrated that a 10% lesion to the distributed system results in a general emotion recognition deficit that affects the four basic negative emotions (anger, fear, disgust and sadness) more strongly than happiness, surprise and neutral. We show clear evidence of a general decline in emotion recognition ability in pre-HD, particularly for negative emotions. These findings strongly argue that studies of HD should not be cited as evidence for the hypothesis that the striatum is specifically involved in the recognition of disgust stimuli. Additional studies are needed to determine the neurobiological underpinnings of the general emotion recognition impairment in HD and pre-HD.

Methodological considerations for emotion research in HD and other populations

Previous studies of facial emotion recognition in HD and pre-HD have varied in several important aspects of study design and data analysis, limiting the convergence of findings across studies. The design of the current study addressed some previous limitations (e.g. small sample sizes), and our results provide additional information about factors that merit consideration.

Intellectual functioning

In the Predict-HD sample, lower estimated pre-morbid verbal intellectual ability was related to worse emotion-recognition performance; this effect was stronger for the negative emotions, and of these, strongest for disgust. We note, however, that our estimate of pre-morbid intellectual function varied as a function of diagnostic confidence level ratings, suggesting that the ANART may also be sensitive to disease-related changes in cognitive ability and thus is not a pure measure of pre-morbid IQ in this population (Carlozzi et al., 2007). In our analyses, the relationships between IQ and emotion recognition performance were highly similar for the pre-HD and control groups, but in the controls, the effect sizes were small and non-significant, possibly due to the small sample size (n = 57). Thus, although previous studies do not report co-varying for IQ, this may be an important covariate. Further studies are needed to determine if IQ has a differential relationship with either specific emotions or types of participants.

Demographic factors

We found that females generally performed better for all negative emotions and that younger age was associated with better emotion recognition performance. Thus, future studies of emotion recognition should take these factors into account.

Stimulus selection

In the current study, we evaluated the error patterns, including the extent to which facial expressions were confused with each other, and how frequently specific expressions were perceived as showing no emotion (i.e. neutral). The neural network model by Johnston et al. (2001) predicts that neutral errors occur most frequently for anger and sadness and that these errors increase when the system is lesioned, contributing to the general recognition deficit. We found that both the CAG-Norm and CAG-Exp groups mistook sadness for neutral and that the frequency of these errors was similar to that of confusions between anger and disgust, as well as confusions between fear and surprise. Thus, whereas our findings are consistent with Johnston et al.'s (2001) predicted pattern of errors for sadness, we did not find a high rate of neutral errors for anger. Nonetheless, the error patterns in our data and the neural network model (Johnston et al., 2001) make a strong argument for the inclusion of neutral stimuli in future studies of emotion recognition in HD and other populations because these stimuli (or response choice) may affect the accuracy results for one or more emotions.
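The error-pattern analysis described above amounts to tallying a confusion matrix over trial-level responses, with neutral included as a response option. A minimal sketch, using invented trial data rather than the study's actual responses:

```python
# Hypothetical sketch: building a confusion matrix from (presented, chosen)
# pairs so that specific confusions (e.g. sadness -> neutral, anger -> disgust,
# fear -> surprise) can be counted.
from collections import Counter

EMOTIONS = ["anger", "disgust", "fear", "happiness",
            "sadness", "surprise", "neutral"]

def confusion_matrix(trials):
    """trials: iterable of (presented_emotion, chosen_emotion) pairs.
    Returns {presented: Counter of chosen responses}."""
    matrix = {e: Counter() for e in EMOTIONS}
    for presented, chosen in trials:
        matrix[presented][chosen] += 1
    return matrix

# Invented trial-level responses
trials = [
    ("sadness", "sadness"), ("sadness", "neutral"), ("sadness", "neutral"),
    ("anger", "anger"), ("anger", "disgust"),
    ("fear", "surprise"), ("fear", "fear"),
]
cm = confusion_matrix(trials)
# cm["sadness"]["neutral"] counts how often sadness was perceived as neutral
```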

Number of trials

The number of emotion recognition trials has varied in previous studies, ranging from four per emotion (Gray et al., 1997) to 30 per emotion (Sprengelmeyer et al., 2006). Previous studies demonstrate that even when a small number of stimuli were used with small sample sizes, differences between controls and HD and pre-HD groups have been detected (Sprengelmeyer et al., 1996; Gray et al., 1997; Montagne et al., 2006). The current study has the added advantage of large sample size, which appears to have adequately compensated for any limitations due to having only 10 trials per emotion.
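The trade-off between trials per emotion and sample size can be illustrated with a back-of-the-envelope standard-error calculation, assuming each trial is an independent Bernoulli outcome. This is an assumption made for illustration, not a claim about the study's actual analysis.

```python
# Illustrative sketch: the standard error of a group's mean accuracy shrinks
# with both trials per emotion and number of participants, so a large sample
# can offset few trials. Uses a binomial-variance approximation.
import math

def se_group_mean(p, n_trials, n_participants):
    """Approximate SE of the group mean proportion correct, treating each
    trial as Bernoulli(p) and averaging over participants."""
    var_per_participant = p * (1 - p) / n_trials
    return math.sqrt(var_per_participant / n_participants)

# 10 trials with 475 participants vs. 30 trials with 20 participants (p = 0.75)
few_trials_big_n = se_group_mean(0.75, 10, 475)
many_trials_small_n = se_group_mean(0.75, 30, 20)
```

Under these assumptions, the larger sample with fewer trials yields the smaller standard error, consistent with the point made above.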

Knowledge of gene status and participant distress

Some previous studies of emotion recognition in individuals who were experiencing distress (e.g. depression and anxiety) have shown enhanced ability to recognize negative emotions (Kan et al., 2004; Surguladze et al., 2005), whereas others (Langenecker et al., 2005) have found impaired emotion recognition performance. All Predict-HD study participants are aware of their HD gene status and thus it might be argued that psychological factors related to knowledge of an impending disease affected performance. Although the pre-HD groups with confidence ratings of 1, 2 and 3 had higher BDI-II scores than controls, the means for all groups were within the normal range. We included BDI-II scores in our models to control for the possibility that depressive symptoms may have affected performance, and the results indicate that BDI-II scores were unrelated to emotion recognition performance. Although it is currently unclear whether awareness (or lack of awareness) of HD gene status affects emotion recognition performance, it seems important to consider and control for this factor in future studies.

Possible clinical implications

The clinical relevance of the current findings is as yet unknown. It is important to note that the absolute change in emotion recognition performance in pre-HD is small and therefore may not significantly affect everyday function. In fact, no previous study of emotion recognition deficits in pre-HD or HD has investigated potential clinical correlates of these impairments. Thus, an important future direction is the study of the potential impact of emotion recognition decline in HD and pre-HD on social and emotional functioning.

Conclusions

Given that the current study represents a very diverse and well-characterized sample of pre-HD individuals, our findings strongly indicate that a selective or disproportionately more severe deficit in the recognition of disgust is not a reliable finding in pre-HD. In fact, our results suggest that disgust recognition may be less affected in the earliest stage of pre-HD than recognition of anger, fear and sadness. Longitudinal data are necessary to confirm these cross-sectional findings and will be essential for determining whether the recognition of specific emotions declines at differential rates in pre-HD. The lack of a relationship between striatal volumes and recognition of disgust, or any other emotion, argues that we may need to look beyond volumetric changes in the striatum to explain the general changes in emotion recognition during the prodromal phase of HD. The current findings necessitate reconsideration of the claim that disgust recognition is disproportionately affected in pre-HD.

Acknowledgements

Our thanks to the National Institute of Neurological Disorders and Stroke grant # NS40068 and the High Q Foundation for the project entitled, Neurobiological Predictors of Huntington's Disease (Predict-HD). Additional funding was provided by National Institutes of Mental Health grant # 01579, Roy J. Carver Trust Medicine Research Initiative, and Howard Hughes Medical Institute grants to Jane S. Paulsen, the Huntington's Disease Society of America, the Huntington's Society of Canada, and Hereditary Disease Foundation grants to the Huntington Study Group. Our thanks to the National Research Roster for Huntington Disease Patients and Families (HD Roster) (NIH: N01 NS 3 2357) located at Indiana University School of Medicine. The HD Roster has been funded by the NIH since 1979 and serves to recruit patients and families interested in participating in HD research.

↵Note: *indicates high error rates; as expected, anger and disgust were frequently confused and surprise was a common response when fear was presented; results also show that neutral responses were common when sadness was presented.