We examined the contribution of the amygdala to value signals within orbital prefrontal cortex (OFC) and medial prefrontal cortex (MFC). On each trial, monkeys chose between two stimuli that were associated with different quantities of reward. In intact monkeys, as expected, neurons in both OFC and MFC signaled the reward quantity associated with stimuli. Contrasted with MFC, OFC contained a larger proportion of neurons encoding reward quantity and did so with faster response latencies. Removing the amygdala eliminated these differences, mainly by decreasing value coding in OFC. Similar decreases occurred in OFC immediately before and after reward delivery. Although the amygdala projects to both OFC and MFC, we found that it has its greatest influence over reward-value coding in OFC. Notably, amygdala lesions did not abolish value coding in OFC, which shows that OFC’s representations of the value of objects, choices, and outcomes depend, in large part, on other sources.

The posterior cingulate cortex (CGp) is a major hub of the default mode network (DMN), a set of cortical areas with high resting activity that declines during task performance. This relationship suggests that DMN activity contributes to mental processes that are antagonistic to performance. Alternatively, the DMN may detect conditions under which performance is poor and marshal cognitive resources for improvement. To test this idea, we recorded activity of CGp neurons in monkeys performing a learning task while varying reward size and novelty. We found that CGp neurons responded to errors, and this activity was magnified by small rewards and novel stimuli. Inactivating CGp with muscimol impaired new learning when rewards were small but had no effect when rewards were large; inactivation did not affect performance on well-learned associations. Thus, CGp, and by extension the DMN, may support learning, and possibly other cognitive processes, by monitoring performance and motivating exploration.

In many settings, copying, learning from, or assigning value to group behavior is rational because such behavior can often act as a proxy for valuable returns. However, such herd behavior can also be pathologically misleading by coaxing individuals into behaviors that are otherwise irrational, and it may be one source of the irrational behaviors underlying market bubbles and crashes. Using a two-person tandem investment game, we sought to examine the neural and behavioral correlates of herd instincts in situations stripped of any incentive to be influenced by the choices of one's partner. We show that the investments of the two subjects correlate over time if they are made aware of their partner's choices, even though these choices have no impact on either player's earnings. We computed an “interpersonal prediction error”, the difference between the investment decisions of the two subjects after each choice. BOLD responses in the striatum, implicated in valuation and action selection, were highly correlated with this interpersonal prediction error. Notably, the revelation of the partner's investment occurred after all useful information about the market had already been revealed. This effect was confirmed in two separate experiments in which the time of revelation of the partner's choice was tested at 2 seconds and 6 seconds after a subject's choice; however, the effect was absent in a control condition with a computer partner. These findings strongly support the existence of mechanisms that drive correlated behavior even in contexts where there is no explicit advantage to doing so.
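The “interpersonal prediction error” described above is a simple trial-by-trial difference. A minimal sketch of the quantity follows; the sign convention and variable names are our assumptions for illustration, not taken from the study:

```python
def interpersonal_prediction_error(own_investment, partner_investment):
    """Trial-by-trial gap between the partner's investment and one's own.

    Positive values mean the partner invested more than the subject did
    (this sign convention is an assumption; the study defines the
    quantity only as the difference between the two players' decisions).
    """
    return partner_investment - own_investment

# Example: the subject commits 40% of the stake, the partner commits 65%.
ipe = interpersonal_prediction_error(0.40, 0.65)
```

In the study, this per-trial quantity was then regressed against striatal BOLD responses; the sketch shows only the computation itself.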

Evaluating the abilities of others is fundamental for successful economic and social behavior. We investigated the computational and neurobiological basis of ability tracking by designing an fMRI task that required participants to use and update estimates of the expertise of both people and algorithms through observation of their predictions. Behaviorally, we found that a model-based algorithm characterized subject predictions better than several alternative models. Notably, when the agent’s prediction was concordant rather than discordant with the subject’s own likely prediction, participants credited people more than algorithms for correct predictions and penalized them less for incorrect predictions. Neurally, many components of the mentalizing network—medial prefrontal cortex, anterior cingulate gyrus, temporoparietal junction, and precuneus—represented or updated expertise beliefs about both people and algorithms. Moreover, activity in lateral orbitofrontal and medial prefrontal cortex reflected behavioral differences in learning about people and algorithms. These findings provide basic insights into the neural basis of social learning.

Recent findings suggest that tracking others’ beliefs is not always effortful and slow, but may rely on a fast and implicit system. An untested prediction of the automatic belief tracking account is that own and others’ beliefs should be activated in parallel. We tested this prediction by measuring continuous movement trajectories in a task that required deciding between two possible object locations. We independently manipulated whether participants’ belief about the object location was true or false and whether an onlooker’s belief about the object location was true or false. Manipulating whether or not the onlooker’s belief was ever task relevant allowed us to compare performance in an explicit and an implicit version of the same task. Movement parameters revealed an influence of the onlooker’s irrelevant belief in the implicit version of the task. This provides evidence for parallel activation of own and others’ beliefs.

Imitation typically occurs in social contexts where people interact and have common goals. Here, we show that people are also highly susceptible to imitating each other in a competitive context. Pairs of players performed a fast-paced competitive reaching task (a variant of the arcade whac-a-mole game) in which money could be earned if players hit briefly appearing visual targets on a large touchscreen before their opponents. In three separate experiments, we demonstrate that reaction times and movements were highly correlated within pairs of players. Imitation affected players’ success and depended on the visibility of the opponent’s behavior. Imitation persisted, despite the competitive and demanding nature of the game, even if this resulted in lower scores and payoffs and even when there was no need to counteract the opponent’s actions.

Categorization is a cornerstone of perception and cognition. Computationally, categorization amounts to applying decision boundaries in the space of stimulus features. We designed a visual categorization task in which optimal performance requires observers to incorporate trial-to-trial knowledge of the level of sensory uncertainty when setting their decision boundaries. We found that humans and monkeys did adjust their decision boundaries from trial to trial as the level of sensory noise varied, with some subjects performing near optimally. We constructed a neural network that implements uncertainty-based, near-optimal adjustment of decision boundaries. Divisive normalization emerges automatically as a key neural operation in this network. Our results offer an integrated computational and mechanistic framework for categorization under uncertainty.
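Divisive normalization, the operation that the abstract says emerges in the network, rescales each unit's input drive by the pooled activity of the whole population. A generic sketch follows; the exponent and semi-saturation constant are illustrative defaults, not fitted values from the paper:

```python
def divisive_normalization(drives, sigma=1.0, n=2.0):
    """Normalize each unit's drive by the summed population drive.

    r_i = drive_i**n / (sigma**n + sum_j drive_j**n)

    sigma is the semi-saturation constant and n the exponent; both are
    illustrative defaults here.
    """
    pooled = sigma**n + sum(d**n for d in drives)
    return [d**n / pooled for d in drives]

# Stronger inputs keep their rank order, but every response is
# compressed by the pooled activity of the population.
responses = divisive_normalization([1.0, 2.0, 4.0])
```

Because the pooled denominator grows with overall input, this operation naturally rescales responses by the population's total drive, which is one reason it recurs across sensory and decision circuits.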

The biological mechanisms underlying long-term partner bonds in humans are unclear. The evolutionarily conserved neuropeptide oxytocin (OXT) is associated with the formation of partner bonds in some species via interactions with brain dopamine reward systems. However, whether it plays a similar role in humans has not yet been established. Here, we report the results of a discovery and a replication study, each involving a double-blind, placebo-controlled, within-subject, pharmaco-functional MRI experiment with 20 heterosexual pair-bonded male volunteers. In both experiments, intranasal OXT treatment (24 IU) made subjects perceive their female partner's face as more attractive compared with unfamiliar women but had no effect on the attractiveness of other familiar women. This enhanced positive partner bias was paralleled by an increased response to partner stimuli compared with unfamiliar women in brain reward regions including the ventral tegmental area and the nucleus accumbens (NAcc). In the left NAcc, OXT even augmented the neural response to the partner compared with a familiar woman, indicating that this finding is partner-bond specific rather than due to familiarity. Taken together, our results suggest that OXT could contribute to romantic bonds in men by enhancing their partner's attractiveness and reward value compared with other women.

Sunday, December 15, 2013

A decision is a commitment to a proposition or plan of action based on information and values associated with the possible outcomes. The process operates in a flexible timeframe that is free from the immediacy of evidence acquisition and the real time demands of action itself. Thus, it involves deliberation, planning, and strategizing. This Perspective focuses on perceptual decision making in nonhuman primates and the discovery of neural mechanisms that support accuracy, speed, and confidence in a decision. We suggest that these mechanisms expose principles of cognitive function in general, and we speculate about the challenges and directions before the field.

Thursday, December 12, 2013

Nearly 25 years ago, the shared interests of psychologists and biologists in understanding the neural basis of social behavior led to the inception of social neuroscience. In the past decade, this field has exploded, in large part due to the infusion of studies that use fMRI. At the same time, tensions have arisen about how to prioritize a diverse range of questions and about the authority of neurobiological data in answering them. The field is now poised to tackle some of the most interesting and important questions about human and animal behavior but at the same time faces uncertainty about how to achieve focus in its research and cohesion among the scientists who tackle it. The next 25 years offer the opportunity to alleviate some of these growing pains, as well as the challenge of answering large questions that encompass the nature and bounds of diverse social interactions (in humans, including interactions through the internet); how to characterize, and treat, social dysfunction in psychiatric illness; and how to compare social cognition in humans with that in other animals.

Humans tend to use the self as a reference point to perceive the world and gain information about other people's mental states. However, applying such a self-referential projection mechanism in situations where it is inappropriate can result in egocentrically biased judgments. To assess egocentricity bias in the emotional domain (EEB), we developed a novel visuo-tactile paradigm assessing the degree to which empathic judgments are biased by one's own emotions when they are incongruent with those of the person we empathize with. A first behavioral experiment confirmed the existence of such an EEB, and two independent fMRI experiments revealed that overcoming biased empathic judgments is associated with increased activation in the right supramarginal gyrus (rSMG), in a location distinct from activations in the right temporoparietal junction reported in previous social cognition studies. Temporarily disrupting rSMG with repetitive transcranial magnetic stimulation resulted in a substantial increase of the EEB, as did reducing visuo-tactile stimulation time, as shown in an additional behavioral experiment. Our findings provide converging evidence from multiple methods and experiments that rSMG is crucial for overcoming emotional egocentricity. Effective connectivity analyses suggest that this may be achieved by early perceptual regulation processes disambiguating proprioceptive first-person information (touch) from exteroceptive third-person information (vision) during incongruency between self- and other-related affective states. Our study extends previous models of social cognition by showing that although shared neural networks may underlie emotional understanding in some situations, an additional mechanism subserved by rSMG is needed to avoid biased social judgments in other situations.

To investigate the mechanisms through which economic decisions are formed, I examined the activity of neurons in the orbitofrontal cortex while monkeys chose between different juice types. Different classes of cells encoded the value of individual offers (offer value), the value of the chosen option (chosen value), or the identity of the chosen juice (chosen juice). Choice variability was partly explained by the tendency to repeat choices (choice hysteresis). Surprisingly, near-indifference decisions did not reflect fluctuations in the activity of offer value cells. In contrast, near-indifference decisions correlated with fluctuations in the preoffer activity of chosen juice cells. After the offer, the activity of chosen juice cells reflected the decision difficulty but did not resemble a race-to-threshold. Finally, chosen value cells presented an “activity overshooting” closely related to the decision difficulty and possibly due to fluctuations in the relative value of the juices. This overshooting was independent of choice hysteresis.

Perception is often categorical: the perceptual system selects one interpretation of a stimulus even when evidence in favor of other interpretations is appreciable. Such categorization is potentially in conflict with normative decision theory, which mandates that the utility of various courses of action should depend on the probabilities of all possible states of the world, not just that of the one perceived. If these probabilities are lost as a result of categorization, choice will be suboptimal. Here we test for such irrationality in a task that requires human observers to combine perceptual evidence with the uncertain consequences of action. Observers made rapid pointing movements to targets on a touch screen, with rewards determined by perceptual and motor uncertainty. Across both visual and auditory decision tasks, observers consistently placed too much weight on perceptual uncertainty relative to action uncertainty. We show that this suboptimality can be explained as a consequence of categorical perception. Our findings indicate that normative decision making may be fundamentally constrained by the architecture of the perceptual system.

Tuesday, November 26, 2013

To advance our understanding of how the brain makes food decisions, it is essential to combine knowledge from two fields that have not yet been well integrated: the neuro-computational basis of decision-making and the homeostatic regulators of feeding. This Review integrates these two literatures from a neuro-computational perspective, with an emphasis on describing the variables computed by different neural systems and how they affect dietary choice. We highlight what is unique about feeding decisions, the mechanisms through which metabolic and endocrine factors affect the decision-making circuitry, why making healthy food choices is difficult for many people, and key processes at work in the obesity epidemic.

The lateral habenula (LHb) is believed to convey an aversive or 'anti-reward' signal, but its contribution to reward-related action selection is unknown. We found that LHb inactivation abolished choice biases, making rats indifferent when choosing between rewards associated with different subjective costs and magnitudes, but not larger or smaller rewards of equal cost. Thus, instead of serving as an aversion center, the evolutionarily conserved LHb acts as a preference center that is integral for expressing subjective decision biases.

Humans assess the credibility of information gained from others on a daily basis; this ongoing assessment is especially crucial for avoiding exploitation by others. We used a repeated, two-person bargaining game and a cognitive hierarchy model to test how subjects judge the information sent asymmetrically from one player to the other. The weight that they give to this information is the result of two distinct factors: their baseline suspicion given the situation and the suspicion generated by the other person's behavior. We hypothesized that human brains maintain an ongoing estimate of the credibility of the other player and sought to uncover neural correlates of this process. In the game, sellers were forced to infer the value of an object based on signals sent from a prospective buyer. We found that amygdala activity correlated with baseline suspicion, whereas activations in bilateral parahippocampus correlated with trial-by-trial uncertainty induced by the buyer's sequence of suggestions. In addition, the less credible buyers appeared, the more sensitive parahippocampal activation was to trial-by-trial uncertainty. Although both of these neural structures have previously been implicated in trustworthiness judgments, these results suggest that they have distinct and separable roles that correspond to their theorized roles in learning and memory.

The ability to distinguish danger from safety is crucial for survival, and anxiety disorders can result from failures to dissociate safe cues from those that predict dangerous outcomes. The amygdala plays a major role in learning and signaling danger, and evidence has recently accumulated that it also acquires information to signal safety. Traditionally, safety is explored by paradigms that change the value of a previously dangerous cue, such as extinction or reversal, or by paradigms showing that a safe cue can inhibit responses to another danger-predicting cue, as in conditioned inhibition. In real-life scenarios, many cues are never paired or tested with danger and remain neutral all along. A detailed study of neural responses to unpaired conditioned stimuli (CS−) can therefore indicate whether information on safety-by-comparison is also acquired in the amygdala. We designed a multiple-CS study, with CS− from both visual and auditory modalities. Using discriminative aversive conditioning, we found that responses in the primate amygdala develop for CS− of the same modality and of a different modality from that of the aversive CS+. Moreover, these responses are comparable in proportion, sign (increase/decrease), onset, and magnitude. These results indicate that the primate amygdala actively acquires signals about safety, and they strengthen the hypothesis that failure in amygdala processing can result in failure to distinguish dangerous cues from safe ones and lead to maladaptive behaviors.

All known human societies have maintained social order by enforcing compliance with social norms. The biological mechanisms underlying norm compliance are, however, hardly understood. We show that the right lateral prefrontal cortex (rLPFC) is involved in both voluntary and sanction-induced norm compliance. Both types of compliance could be changed by varying the neural excitability of this brain region with transcranial direct current stimulation, but they were affected in opposite ways, suggesting that the stimulated region plays a fundamentally different role in voluntary and sanction-based compliance. Brain stimulation had a particularly strong effect on compliance in the context of socially constituted sanctions, whereas it left beliefs about what the norm prescribes and about subjectively expected sanctions unaffected. Our findings suggest that rLPFC activity is a key biological prerequisite for an evolutionarily and socially important aspect of human behavior.

Effective decision-making requires consideration of costs and benefits. Previous studies have implicated orbitofrontal cortex (OFC), dorsolateral prefrontal cortex (DLPFC), and anterior cingulate cortex (ACC) in cost-benefit decision-making. Yet controversy remains about whether different decision costs are encoded by different brain areas, and whether single neurons integrate costs and benefits to derive a subjective value estimate for each choice alternative. To address these issues, we trained four subjects to perform delay- and effort-based cost-benefit decisions and recorded neuronal activity in OFC, ACC, DLPFC, and the cingulate motor area (CMA). Although some neurons, mainly in ACC, did exhibit integrated value signals as if performing cost-benefit computations, they were relatively few in number. Instead, the majority of neurons in all areas encoded the decision type; that is, whether the subject was required to perform a delay- or effort-based decision. OFC and DLPFC neurons tended to show the largest changes in firing rate for delay-based but not effort-based decisions, whereas the reverse was true for CMA neurons. Only ACC contained neurons modulated by both effort- and delay-based decisions. These findings challenge the idea that OFC calculates an abstract value signal to guide decision-making. Instead, our results suggest that an important function of single PFC neurons is to categorize sensory stimuli based on the consequences predicted by those stimuli.

Novelty is an essential feature of creative ideas, yet the building blocks of new ideas are often embodied in existing knowledge. From this perspective, balancing atypical knowledge with conventional knowledge may be critical to the link between innovativeness and impact. Our analysis of 17.9 million papers spanning all scientific fields suggests that science follows a nearly universal pattern: The highest-impact science is primarily grounded in exceptionally conventional combinations of prior work yet simultaneously features an intrusion of unusual combinations. Papers of this type were twice as likely to be highly cited works. Novel combinations of prior work are rare, yet teams are 37.7% more likely than solo authors to insert novel combinations into familiar knowledge domains.

How did human societies evolve from small groups, integrated by face-to-face cooperation, to the huge anonymous societies of today, typically organized as states? Why is there so much variation in the ability of different human populations to construct viable states? Existing theories are usually formulated as verbal models and, as a result, do not yield sharply defined, quantitative predictions that could be unambiguously tested with data. Here we develop a cultural evolutionary model that predicts where and when the largest-scale complex societies arose in human history. The central premise of the model, which we test, is that costly institutions that enabled large human groups to function without splitting up evolved as a result of intense competition between societies—primarily warfare. Warfare intensity, in turn, depended on the spread of historically attested military technologies (e.g., chariots and cavalry) and on geographic factors (e.g., rugged landscape). The model was simulated within a realistic landscape of the Afroeurasian landmass and its predictions were tested against a large dataset documenting the spatiotemporal distribution of historical large-scale societies in Afroeurasia between 1500 BCE and 1500 CE. The model-predicted pattern of spread of large-scale societies was very similar to the observed one. Overall, the model explained 65% of variance in the data. An alternative model, omitting the effect of diffusing military technologies, explained only 16% of variance. Our results support theories that emphasize the role of institutions in state-building and suggest a possible explanation of why a long history of statehood is positively correlated with political stability, institutional quality, and income per capita.

Imagination, defined as the ability to interpret reality in ways that diverge from past experience, is fundamental to adaptive behavior. This can be seen at a simple level in our capacity to predict novel outcomes in new situations. The ability to anticipate outcomes never before received can also influence learning if those imagined outcomes are not received. The orbitofrontal cortex is a key candidate for where the process of imagining likely outcomes occurs; however, its precise roles in generating these estimates and applying them to learning remain open questions. Here we address these questions by showing that single-unit activity in the orbitofrontal cortex reflects novel outcome estimates. The strength of these neural correlates predicted both behavior and learning, and the learning was abolished by temporally specific inhibition of orbitofrontal neurons. These results are consistent with the proposal that the orbitofrontal cortex is critical for integrating information to imagine future outcomes.

Many choice situations require imagining potential outcomes, a capacity that has been shown to involve memory-related brain regions such as the hippocampus. We reasoned that the quality of hippocampus-mediated simulation might therefore condition the subjective value assigned to imagined outcomes. We developed a novel paradigm to assess the impact of hippocampal structure and function on the propensity to favor imagined outcomes in the context of intertemporal choices. The ecological condition pitted immediate options presented as pictures (hence directly observable) against delayed options presented as text (hence requiring mental simulation). To avoid confounding the simulation process with delay discounting, we compared this ecological condition to control conditions using the same temporal labels while keeping the presentation mode constant. Behavioral data showed that participants who imagined future options in greater detail rated them as more likeable. Functional MRI data confirmed that hippocampus activity could account for subjects assigning higher values to simulated options. Structural MRI data suggested that grey matter density was a significant predictor of hippocampus activation, and therefore of the propensity to favor simulated options. Conversely, patients with hippocampal atrophy due to Alzheimer's disease, but not patients with frontotemporal dementia, were less inclined to favor options that required mental simulation. We conclude that hippocampus-mediated simulation plays a critical role in providing the motivation to pursue goals that are not present to our senses.

Humans show a natural tendency to discount bad news while incorporating good news into beliefs (the “good news–bad news effect”), an effect that may help explain seemingly irrational risk taking. Understanding how this bias develops with age is important because adolescents are prone to engage in risky behavior; thus, educating them about danger is crucial. We reveal a striking valence-dependent asymmetry in how belief updating develops with age. In the ages tested (9–26 y), younger age was associated with inaccurate updating of beliefs in response to undesirable information regarding vulnerability. In contrast, the ability to update beliefs accurately in response to desirable information remained relatively stable with age. This asymmetry was mediated by adequate computational use of positive but not negative estimation errors to alter beliefs. The results are important for understanding how belief formation develops and might help explain why adolescents do not respond adequately to warnings.
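The valence-dependent asymmetry described above is commonly formalized with separate learning rates for desirable and undesirable estimation errors. A toy version of that idea follows; the learning-rate values are illustrative, not fitted parameters from the study:

```python
def update_risk_belief(belief, new_info, lr_good=0.6, lr_bad=0.2):
    """One belief update about the probability of a bad event.

    Good news (new_info below the current belief, i.e., the risk is
    lower than feared) is weighted by a higher learning rate than bad
    news, reproducing the good news-bad news effect. Rates are
    illustrative assumptions.
    """
    error = new_info - belief
    lr = lr_good if error < 0 else lr_bad
    return belief + lr * error

belief = 0.40                               # believed risk of a bad event
belief = update_risk_belief(belief, 0.20)   # desirable info: large shift
belief = update_risk_belief(belief, 0.50)   # undesirable info: small shift
```

In this formalization, the developmental finding would correspond to lr_bad growing toward lr_good with age, while lr_good stays relatively stable.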

Human choice behavior often reflects a competition between inflexible, computationally efficient control on the one hand and slower, more flexible control on the other. This distinction is well captured by model-free and model-based reinforcement learning algorithms. Here, studying human subjects, we show that it is possible to shift the balance of control between these systems by disrupting the right dorsolateral prefrontal cortex, such that participants manifest a dominance of the less optimal model-free control. In contrast, disruption of the left dorsolateral prefrontal cortex impaired model-based performance only in those participants with low working memory capacity.
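The two controllers contrasted above differ in what they store: a model-free learner caches action values and updates them incrementally from experienced rewards, while a model-based learner computes values on the fly from a learned model of the task. A deliberately minimal one-step contrast, as a sketch rather than the task or algorithms used in the study:

```python
def model_free_update(q, state, action, reward, alpha=0.1):
    """Model-free control: nudge a cached action value toward the sampled
    reward. Efficient, but adapts to change only through slow updates."""
    key = (state, action)
    q[key] = q.get(key, 0.0) + alpha * (reward - q.get(key, 0.0))

def model_based_value(transitions, rewards, state, action):
    """Model-based control: evaluate an action from a learned transition
    model (one-step lookahead for brevity)."""
    return sum(p * rewards[nxt]
               for nxt, p in transitions[(state, action)].items())

# Cached value creeps toward the observed reward.
q = {}
model_free_update(q, "start", "left", reward=1.0)

# Model-based value is recomputed from the current model each time.
model = {("start", "left"): {"good": 0.7, "bad": 0.3}}
rewards = {"good": 1.0, "bad": 0.0}
value = model_based_value(model, rewards, "start", "left")
```

If the model changes (say, rewards["good"] drops to 0), the model-based value updates immediately, while the cached model-free value lags behind for many trials — the inflexibility the abstract refers to.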

The “identifiable victim effect” refers to people's tendency to preferentially give to identified versus anonymous victims of misfortune, and has been proposed to depend partly on affect. By soliciting charitable donations from human subjects during behavioral and neural (i.e., functional magnetic resonance imaging) experiments, we sought to determine whether and how affect might promote the identifiable victim effect. Behaviorally, subjects gave more to orphans depicted by photographs versus silhouettes, and their shift in preferences was mediated by photograph-induced feelings of positive arousal, but not negative arousal. Neurally, while photographs versus silhouettes elicited activity in widespread circuits associated with facial and affective processing, only nucleus accumbens activity predicted and could statistically account for increased donations. Together, these findings suggest that presenting evaluable identifiable information can recruit positive arousal, which then promotes giving. We propose that affect elicited by identifiable stimuli can compel people to give more to strangers, even at a cost to themselves.

In Bayesian brain theories, hierarchically related prediction errors (PEs) play a central role for predicting sensory inputs and inferring their underlying causes, e.g., the probabilistic structure of the environment and its volatility. Notably, PEs at different hierarchical levels may be encoded by different neuromodulatory transmitters. Here, we tested this possibility in computational fMRI studies of audio-visual learning. Using a hierarchical Bayesian model, we found that low-level PEs about visual stimulus outcome were reflected by widespread activity in visual and supramodal areas but also in the midbrain. In contrast, high-level PEs about stimulus probabilities were encoded by the basal forebrain. These findings were replicated in two groups of healthy volunteers. While our fMRI measures do not reveal the exact neuron types activated in midbrain and basal forebrain, they suggest a dichotomy between neuromodulatory systems, linking dopamine to low-level PEs about stimulus outcome and acetylcholine to more abstract PEs about stimulus probabilities.
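The two levels of prediction error can be illustrated with a toy two-level learner. This is a hand-rolled sketch under simplifying assumptions, not the hierarchical Bayesian model used in the study: a low-level PE concerns the stimulus outcome itself, and a higher-level PE concerns how much the outcome probability had to move relative to the expected volatility.

```python
def hierarchical_update(prob, volatility, outcome, lr_low=0.2, lr_high=0.05):
    """One update of a toy two-level learner (illustrative parameters).

    Level 1: the low-level PE (outcome minus predicted probability)
             updates the estimated outcome probability.
    Level 2: the high-level PE (how far the probability estimate moved,
             relative to the expected volatility) updates the
             volatility estimate.
    """
    pe_low = outcome - prob                           # outcome-level PE
    new_prob = prob + lr_low * pe_low
    pe_high = abs(new_prob - prob) - volatility       # probability-level PE
    new_volatility = volatility + lr_high * pe_high
    return new_prob, new_volatility, pe_low, pe_high
```

The abstract's proposed dichotomy maps pe_low-like signals onto dopaminergic midbrain responses and pe_high-like signals onto the cholinergic basal forebrain.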

An enduring and richly elaborated dichotomy in cognitive neuroscience is that of reflective versus reflexive decision making and choice. Other literatures refer to the two ends of what is likely to be a spectrum with terms such as goal-directed versus habitual, model-based versus model-free or prospective versus retrospective. One of the most rigorous traditions of experimental work in the field started with studies in rodents and graduated via human versions and enrichments of those experiments to a current state in which new paradigms are probing and challenging the very heart of the distinction. We review four generations of work in this tradition and provide pointers to the forefront of the field’s fifth generation.

How do long-term training and the development of motor skills modify the activity of the primary motor cortex (M1)? To address this question, we trained monkeys for ~1–6 years to perform visually guided and internally generated sequences of reaching movements. We then used [14C]2-deoxyglucose (2DG) uptake and single-neuron recording to measure metabolic and neuronal activity in M1. After extended practice, we observed a profound reduction of metabolic activity in M1 during the performance of internally generated, compared with visually guided, tasks. In contrast, measures of neuronal firing displayed little difference between the two tasks. These findings suggest that the development of skill through extended practice reduces the synaptic activity required to produce internally generated, but not visually guided, sequences of movements. Thus, practice leading to skilled performance results in more efficient generation of neuronal activity in M1.

Memories can be unreliable. We created a false memory in mice by optogenetically manipulating memory engram–bearing cells in the hippocampus. Dentate gyrus (DG) or CA1 neurons activated by exposure to a particular context were labeled with channelrhodopsin-2. These neurons were later optically reactivated during fear conditioning in a different context. The DG experimental group showed increased freezing in the original context, in which a foot shock was never delivered. The recall of this false memory was context-specific, activated similar downstream regions engaged during natural fear memory recall, and was also capable of driving an active fear response. Our data demonstrate that it is possible to generate an internally represented and behaviorally expressed fear memory via artificial means.

In stable environments, decision makers can exploit their previously learned strategies for optimal outcomes, while exploration might lead to better options in unstable environments. Here, to investigate the cortical contributions to exploratory behavior, we analyzed single-neuron activity recorded from four different cortical areas of monkeys performing a matching-pennies task and a visual search task, which encouraged and discouraged exploration, respectively. We found that neurons in multiple regions in the frontal and parietal cortex tended to encode signals related to previously rewarded actions more reliably than unrewarded actions. In addition, signals for rewarded choices in the supplementary eye field were attenuated during the visual search task and were correlated with the tendency to switch choices during the matching-pennies task. These results suggest that the supplementary eye field might play a unique role in encouraging animals to explore alternative decision-making strategies.

Decision making under risk entails the anticipation of prospective outcomes, typically leading to greater sensitivity to losses than to gains, a phenomenon known as loss aversion. Previous studies on the neural bases of choice-outcome anticipation and loss aversion have provided inconsistent results, showing either bidirectional mesolimbic responses (activation for gains and deactivation for losses) or a specific amygdala involvement in processing losses. Here we focused on loss aversion, with the aim of addressing interindividual differences in the neural bases of choice-outcome anticipation. Fifty-six healthy human participants accepted or rejected 104 mixed gambles offering equal (50%) chances of gaining or losing different amounts of money while their brain activity was measured with functional magnetic resonance imaging (fMRI). We report both bidirectional and gain/loss-specific responses during the evaluation of risky gambles, with the amygdala and posterior insula specifically tracking the magnitude of potential losses. At the individual level, loss aversion was reflected both in limbic fMRI responses and in gray matter volume in a structural amygdala–thalamus–striatum network, in which the volume of the "output" centromedial amygdala nuclei, which mediate avoidance behavior, was negatively correlated with monetary performance. We conclude that outcome anticipation and the ensuing loss aversion involve multiple neural systems, showing functional and structural individual variability directly related to the actual financial outcomes of choices. By supporting the simultaneous involvement of both appetitive and aversive processing in economic decision making, these results help resolve existing inconsistencies regarding the neural bases of anticipating choice outcomes.

Perception is strongly influenced by expectations. Accordingly, perception has sometimes been cast as a process of inference, whereby sensory inputs are combined with prior knowledge. However, despite a wealth of behavioral literature supporting an account of perception as probabilistic inference, the neural mechanisms underlying this process remain largely unknown. One important question is whether top-down expectation biases stimulus representations in early sensory cortex, i.e., whether the integration of prior knowledge and bottom-up inputs is already observable at the earliest levels of sensory processing. Alternatively, early sensory processing may be unaffected by top-down expectations, and integration of prior knowledge and bottom-up input may take place in downstream association areas that are proposed to be involved in perceptual decision-making. Here, we implicitly manipulated human subjects' prior expectations about visual motion stimuli, and probed the effects on both perception and sensory representations in visual cortex. To this end, we measured neural activity noninvasively using functional magnetic resonance imaging, and applied a forward modeling approach to reconstruct the motion direction of the perceived stimuli from the signal in visual cortex. Our results show that top-down expectations bias representations in visual cortex, demonstrating that the integration of prior information and sensory input is reflected at the earliest stages of sensory processing.
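The integration of prior knowledge with bottom-up input described above can be illustrated with the standard Gaussian cue-combination identity, in which the inferred percept is a precision-weighted average of prior and likelihood. This is a generic textbook sketch, not the forward model used in the study (motion direction is a circular variable, so a von Mises prior would be more faithful):

```python
def posterior_mean(prior_mu, prior_sd, like_mu, like_sd):
    """Mean of the posterior for a Gaussian prior times a Gaussian
    likelihood: each source of information is weighted by its precision
    (inverse variance). Values are illustrative."""
    w_prior = 1.0 / prior_sd ** 2
    w_like = 1.0 / like_sd ** 2
    return (w_prior * prior_mu + w_like * like_mu) / (w_prior + w_like)
```

With equally reliable prior and input the percept lands halfway between them; as the sensory evidence becomes more reliable, the percept is drawn toward the stimulus and the expectation-induced bias shrinks, which is the signature the forward-modeling analysis looks for in visual cortex.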

Although several studies have investigated the neural mechanism of social comparison, it remains unclear whether and how cultural membership, particularly independent versus interdependent cultures, may differentially shape the neural processes underlying social comparison. In the present functional magnetic resonance imaging (fMRI) study, we examined the behaviors and neural response patterns of Korean (i.e., interdependent culture) and American (i.e., independent culture) participants while performing a financial gambling task simultaneously and independently with a partner. Upon seeing the partner's income, greater modulation of the activity in the ventral striatum (VS) and the ventromedial prefrontal cortex (vmPFC) by relative gain was observed in Korean than American participants, suggesting greater sensitivity of Koreans toward social comparison. The strength of functional connectivity between the VS and the vmPFC predicted individual variability in the degree to which participants' decisions were affected by relative incomes. Additional model-based fMRI analysis further confirmed the primary role of the vmPFC in biasing decisions based on relative incomes. In summary, the present study provides the first neural evidence for decision biases due to social comparison and their individual and cultural variations.

Remembering a past event involves reactivation of distributed patterns of neural activity that represent the features of that event—a process that depends on associative mechanisms supported by medial temporal lobe structures. Although efficient use of memory requires prioritizing those features of a memory that are relevant to current behavioral goals (target features) over features that may be goal-irrelevant (incidental features), there remains ambiguity concerning how this is achieved. We tested the hypothesis that although medial temporal lobe structures may support reactivation of both target and incidental event features, frontoparietal cortex preferentially reactivates those features that match current goals. Here, human participants were cued to remember either the category (face/scene) to which a picture belonged (category trials) or the location (left/right) in which a picture appeared (location trials). Multivoxel pattern analysis of fMRI data was used to measure reactivation of category information as a function of its behavioral relevance (target vs incidental reactivation). In ventral/medial temporal lobe (VMTL) structures, incidental reactivation was as robust as target reactivation. In contrast, frontoparietal cortex exhibited stronger target than incidental reactivation; that is, goal-modulated reactivation. Reactivation was also associated with later memory. Frontoparietal biases toward target reactivation predicted subsequent memory for target features, whereas incidental reactivation in VMTL predicted subsequent memory for nontested features. These findings reveal a striking dissociation between goal-modulated reactivation in frontoparietal cortex and incidental reactivation in VMTL.

Animals learn both whether and when a reward will occur. Neural models of timing posit that animals learn the mean time until reward perturbed by a fixed relative uncertainty. Nonetheless, animals can learn to perform actions for reward even in highly variable natural environments. Optimal inference in the presence of variable information requires probabilistic models, yet it is unclear whether animals can infer such models for reward timing. Here, we develop a behavioral paradigm in which optimal performance required knowledge of the distribution from which reward delays were chosen. We found that mice were able to accurately adjust their behavior to the SD of the reward delay distribution. Importantly, mice were able to flexibly adjust the amount of prior information used for inference according to the moment-by-moment demands of the task. The ability to infer probabilistic models for timing may allow mice to adapt to complex and dynamic natural environments.

Collaborative and competitive interactions have been investigated extensively so as to understand how the brain makes choices in the context of strategic games, yet such interactions are known to influence a more basic dimension of behavior: the energy invested in the task. The cognitive mechanisms that motivate effort production in social situations remain poorly understood, and their neural counterparts have not been explored so far. A dominant idea is that the motivation provided by the social context is reducible to the personal utility of effort production, which decreases in collaboration and increases in competition. Using functional magnetic resonance imaging, we scanned human participants while they produced a physical effort in a collaborative or competitive context. We found that motivation was indeed primarily driven by personal utility, which was reflected in brain regions devoted to reward processing (the ventral basal ganglia). However, subjects who departed from utility maximization, working more in collaborative situations, showed greater functional activation and anatomical volume in a brain region implicated previously in social cognition (the temporoparietal junction). Therefore, this region might mediate a purely pro-social motivation to produce greater effort in the context of collaboration. More generally, our findings suggest that the individual propensity to invest energy in collaborative work might have an identifiable counterpart in the brain functional architecture.

Interactions between people require shared high-level cognitive representations of action goals, intentions [1], and mental states [2], but do people also share their representation of space? The human ventral premotor (PMv) and parietal cortices contain neuronal populations coding for the execution and observation of actions [1,3,4,5], analogous to the mirror neurons identified in monkeys [1,5]. This neuronal system is tuned to the location of the acting person relative to the observer and the target of the action [4,5]. Therefore, it can be theorized that the observer’s brain constructs a low-level, body-centered representation of the space around others similar to one’s own peripersonal space representation [6,7,8,9,10,11]. Single-cell recordings have reported that parietal visuotactile neurons discharge for objects near specific parts of a monkey’s own body and near the corresponding body parts of another individual [9]. In humans, no neuroimaging study has investigated this issue. Here, we identified neuronal populations in the human PMv that encode the space near both one’s own hand and another person’s hand. The shared peripersonal space representation could support social interactions by coding sensory events, actions, and cognitive processes in a common spatial reference frame.

Many decisions we make require visually identifying and evaluating numerous alternatives quickly. These usually vary in reward, or value, and in low-level visual properties, such as saliency. Both saliency and value influence the final decision. In particular, saliency affects fixation locations and durations, which are predictive of choices. However, it is unknown how saliency propagates to the final decision. Moreover, the relative influence of saliency and value is unclear. Here we address these questions with an integrated model that combines a perceptual decision process about where and when to look with an economic decision process about what to choose. The perceptual decision process is modeled as a drift–diffusion model (DDM) process for each alternative. Using psychophysical data from a multiple-alternative, forced-choice task, in which subjects have to pick one food item from a crowded display via eye movements, we test four models where each DDM process is driven by (i) saliency or (ii) value alone or (iii) an additive or (iv) a multiplicative combination of both. We find that models including both saliency and value weighted in a one-third to two-thirds ratio (saliency-to-value) significantly outperform models based on either quantity alone. These eye fixation patterns modulate an economic decision process, also described as a DDM process driven by value. Our combined model quantitatively explains fixation patterns and choices with similar or better accuracy than previous models, suggesting that visual saliency has a smaller, but significant, influence than value and that saliency affects choices indirectly through perceptual decisions that modulate economic decisions.
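The winning model above, a race of DDM accumulators whose drift is an additive mix of saliency and value, can be sketched as follows. Only the one-third/two-thirds weighting comes from the text; the noise level, step size, and threshold are illustrative assumptions.

```python
import random

def first_choice(items, w_sal=1/3, w_val=2/3, noise=0.1, thresh=1.0,
                 dt=0.05, seed=0):
    """Race of one DDM accumulator per alternative.

    items -- list of (saliency, value) pairs, one per alternative
    Each accumulator drifts at an additive mix of saliency and value plus
    Gaussian noise; the first to reach threshold determines the response.
    Parameters other than the saliency/value weights are illustrative.
    """
    rng = random.Random(seed)
    x = [0.0] * len(items)
    while True:
        for i, (sal, val) in enumerate(items):
            x[i] += dt * (w_sal * sal + w_val * val) + noise * rng.gauss(0.0, 1.0)
            if x[i] >= thresh:
                return i  # first accumulator across the bound wins
```

With the noise switched off the race is deterministic and the alternative with the larger weighted drift always wins, which makes the two-thirds weighting on value directly visible: a high-value, low-saliency item beats a low-value, high-saliency one.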

Risk is a ubiquitous feature of life. It plays an important role in economic decisions by affecting subjective reward value. Informed decisions require accurate risk information for each choice option. However, risk is often not constant but changes dynamically in the environment. Therefore, risk information should be updated to the current risk level. Potential mechanisms involve error-driven updating, whereby differences between current and predicted risk levels (risk prediction errors) are used to obtain currently accurate risk predictions. As a major reward structure, the orbitofrontal cortex is involved in coding key reward parameters such as reward value and risk. In this study, monkeys viewed different visual stimuli indicating specific levels of risk that deviated from the overall risk predicted by a common earlier stimulus. A group of orbitofrontal neurons displayed a risk signal that tracked the discrepancy between current and predicted risk. Such neuronal signals may be involved in the updating of risk information.
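The error-driven updating proposed above can be written as a one-line delta rule on risk, with the risk prediction error defined as the difference between the currently indicated and the predicted risk level (the learning rate is an illustrative assumption):

```python
def update_risk(pred_risk, current_risk, lr=0.3):
    """Error-driven risk updating (illustrative sketch).

    The risk prediction error is the discrepancy between the risk level
    indicated by the current stimulus and the level predicted from the
    earlier common stimulus; the prediction moves a fraction lr toward
    the observation.
    """
    risk_pe = current_risk - pred_risk
    return pred_risk + lr * risk_pe, risk_pe
```

Repeated exposure to a new risk level drives the prediction toward it while the risk prediction error decays toward zero, mirroring the discrepancy signal the orbitofrontal neurons are reported to track.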

A central question in cognitive neuroscience regards the means by which options are compared and decisions are resolved during value-guided choice. It is clear that several component processes are needed; these include identifying options, a value-based comparison, and implementation of actions to execute the decision. What is less clear is the temporal precedence and functional organisation of these component processes in the brain. Competing models of decision making have proposed that value comparison may occur in the space of alternative actions, or in the space of abstract goods. We hypothesized that the signals observed might in fact depend upon the framing of the decision. We recorded magnetoencephalographic data from humans performing value-guided choices in which two closely related trial types were interleaved. In the first trial type, each option was revealed separately, potentially causing subjects to estimate each action's value as it was revealed and perform comparison in action-space. In the second trial type, both options were presented simultaneously, potentially leading to comparison in abstract goods-space prior to commitment to a specific action. Distinct activity patterns (in distinct brain regions) on the two trial types demonstrated that the observed frame of reference used for decision making indeed differed, despite the information presented being formally identical, between the two trial types. This provides a potential reconciliation of conflicting accounts of value-guided choice.

Prior experience is critical for decision-making. It enables explicit representation of potential outcomes and provides training to valuation mechanisms. However, we can also make choices in the absence of prior experience by merely imagining the consequences of a new experience. Using functional magnetic resonance imaging repetition suppression in humans, we examined how neuronal representations of novel rewards can be constructed and evaluated. A likely novel experience was constructed by invoking multiple independent memories in hippocampus and medial prefrontal cortex. This construction persisted for only a short time period, during which new associations were observed between the memories for component items. Together, these findings suggest that, in the absence of direct experience, coactivation of multiple relevant memories can provide a training signal to the valuation system that allows the consequences of new experiences to be imagined and acted on.

Learning by following explicit advice is fundamental for human cultural evolution, yet the neurobiology of adaptive social learning is largely unknown. Here, we used simulations to analyze the adaptive value of social learning mechanisms, computational modeling of behavioral data to describe cognitive mechanisms involved in social learning, and model-based functional magnetic resonance imaging (fMRI) to identify the neurobiological basis of following advice. One-time advice received before learning had a sustained influence on people's learning processes. This was best explained by social learning mechanisms implementing a more positive evaluation of the outcomes from recommended options. Computer simulations showed that this "outcome-bonus" accumulates more rewards than an alternative mechanism implementing higher initial reward expectations for recommended options. fMRI results revealed a neural outcome-bonus signal in the septal area and the left caudate. This neural signal coded rewards in the absence of advice and, crucially, signaled greater positive reward for both positive and negative feedback after recommended rather than non-recommended choices. Hence, our results indicate that following advice is intrinsically rewarding. A positive correlation between the model's outcome-bonus parameter and amygdala activity after positive feedback directly relates the computational model to brain activity. These results advance the understanding of social learning by providing a neurobiological account of adaptive learning from advice.

The ability to infer intentions of other agents, called theory of mind (ToM), confers strong advantages for individuals in social situations. Here, we show that ToM can also be maladaptive when people interact with complex modern institutions like financial markets. We tested participants who were investing in an experimental bubble market, a situation in which the price of an asset is much higher than its underlying fundamental value. We describe a mechanism by which social signals computed in the dorsomedial prefrontal cortex affect value computations in ventromedial prefrontal cortex, thereby increasing an individual’s propensity to ‘ride’ financial bubbles and lose money. These regions compute a financial metric that signals variations in order flow intensity, prompting inference about other traders’ intentions. Our results suggest that incorporating inferences about the intentions of others when making value judgments in a complex financial market could lead to the formation of market bubbles.

Experimental economic techniques have been widely used to evaluate human risk attitudes, but how these measured attitudes relate to overall individual wealth levels is unclear. Previous noneconomic work has addressed this uncertainty in animals by asking the following: (i) Do our close evolutionary relatives share both our risk attitudes and our degree of economic rationality? And (ii) how does the amount of food or water one holds (a nonpecuniary form of “wealth”) alter risk attitudes in these choosers? Unfortunately, existing noneconomic studies have provided conflicting insights from an economic point of view. We therefore used standard techniques from human experimental economics to measure monkey risk attitudes for water rewards as a function of blood osmolality (an objective measure of how much water the subjects possess). Early in training, monkeys behaved randomly, consistently violating first-order stochastic dominance and monotonicity. After training, they behaved like human choosers—technically consistent in their choices and weakly risk averse (i.e., risk averse or risk neutral on average)—suggesting that well-trained monkeys can serve as a model for human choice behavior. As with attitudes about money in humans, these risk attitudes were strongly wealth dependent; as the animals became “poorer,” risk aversion increased, a finding incompatible with some models of wealth and risk in human decision making.
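The violations mentioned above can be made concrete: a lottery A first-order stochastically dominates a lottery B when A's cumulative distribution never exceeds B's (and is strictly below it somewhere), so any chooser who prefers more reward to less should never pick the dominated option. A minimal check, assuming both lotteries are given as probability vectors over the same ascending outcome levels:

```python
def fosd_dominates(pa, pb, eps=1e-12):
    """True iff lottery A first-order stochastically dominates lottery B.

    pa, pb -- probabilities over the same ascending outcome levels.
    A dominates B iff A's CDF is at or below B's at every level and
    strictly below at some level.
    """
    cdf_a = cdf_b = 0.0
    strictly_below = False
    for a, b in zip(pa, pb):
        cdf_a += a
        cdf_b += b
        if cdf_a > cdf_b + eps:
            return False        # A gives more probability to low outcomes here
        if cdf_a < cdf_b - eps:
            strictly_below = True
    return strictly_below
```

In the early-training phase described above, a monkey would sometimes choose the dominated lottery even when this check identifies a clear ranking; after training, such choices largely disappear.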

Dopamine neurons are thought to promote learning by signaling prediction errors, that is, the difference between actual and expected outcomes. Whether these signals are sufficient for associative learning, however, remains untested. A recent study used optogenetics in a classic behavioral paradigm to confirm the role of dopamine prediction errors in learning.
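The prediction-error signal at issue is the textbook delta rule: the difference between actual and expected outcome drives the update of the expectation. A minimal sketch (the learning rate is an illustrative choice):

```python
def rw_update(value, reward, alpha=0.1):
    """One Rescorla-Wagner-style learning step.

    delta is the prediction error (actual minus expected outcome); the
    expectation moves a fraction alpha toward the observed reward.
    """
    delta = reward - value
    return value + alpha * delta, delta
```

As learning proceeds, the expectation converges on the delivered reward and the prediction error vanishes; the optogenetic test summarized above asks whether artificially reinstating such a signal is sufficient to drive new associative learning.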

Perceptual decision making is a computationally demanding process that requires the brain to interpret incoming sensory information in the context of goals, expectations, preferences, and other factors. These integrative processes engage much of cortex but also require contributions from subcortical structures to affect behavior. Here we summarize recent evidence supporting specific computational roles of the basal ganglia in perceptual decision making. These roles probably share common mechanisms with the basal ganglia’s other, more well-established functions in motor control, learning, and other aspects of cognition and thus can provide insights into the general roles of this important subcortical network in higher brain function.

Time interval estimation is involved in numerous behavioral processes, but its underlying neural mechanisms remain unclear. In particular, it has been controversial whether time is encoded on a linear or logarithmic scale. Based on our previous finding that inactivation of the medial prefrontal cortex (mPFC) profoundly impairs rats' ability to discriminate time intervals, we investigated how the mPFC processes temporal information by examining activity of mPFC neurons in rats performing a temporal bisection task. Many mPFC neurons conveyed temporal information based on monotonically changing activity profiles over time with negative accelerations, so that their activity profiles were better described by logarithmic than linear functions. Moreover, the precision of time-interval discrimination based on neural activity decreased in proportion to the elapse of time, but without a proportional increase in neural variability, which is well accounted for by logarithmic, but not linear, functions. As a population, mPFC neurons conveyed precise information about the elapse of time, with their activity tightly correlated with the animal's choice of target. These results suggest that the mPFC might be part of an internal clock controlling interval-timing behavior, and that linearly changing neuronal activity on a logarithmic time scale might be one way of representing the elapse of time in the brain.
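The link between a logarithmic code and the observed loss of precision can be made explicit: if elapsed time is represented as r = log(t) with constant neural noise, the smallest detectable change in time grows in proportion to t, i.e. Weber-like scaling without any growth in neural variability. A minimal sketch under that assumption (the noise level sigma is illustrative):

```python
import math

def smallest_discriminable_change(t, sigma=0.05):
    """Smallest detectable time change under a logarithmic code.

    With r = log(t) and constant read-out noise sigma, a change dt is
    detectable once |log(t + dt) - log(t)| >= sigma, which gives
    dt >= t * (exp(sigma) - 1): precision degrades in proportion to t.
    """
    return t * (math.exp(sigma) - 1.0)
```

Doubling the elapsed interval exactly doubles the discrimination threshold in this sketch, which matches the population result above: precision falls with time even though the noise term never changes.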

Multiple loop circuits interconnect the basal ganglia and the frontal cortex, and each part of the cortico-basal ganglia loops plays an essential role in neuronal computational processes underlying motor behavior. To gain deeper insight into specific functions played by each component of the loops, we compared response properties of neurons in the globus pallidus (GP) with those in the dorsal premotor cortex (PMd) and the ventrolateral and dorsolateral prefrontal cortex (vlPFC and dlPFC) while monkeys performed a behavioral task designed to include separate processes for behavioral goal determination and action selection. Initially, visual signals instructed an abstract behavioral goal, and seconds later, a choice cue to select an action was presented. When the instruction cue appeared, GP neurons started to reflect visual features as early as vlPFC neurons. Subsequently, GP neurons began to reflect goals informed by the visual signals no later than neurons in the PMd, vlPFC, and dlPFC, indicating that the GP is involved in the early determination of behavioral goals. In contrast, action specification occurred later in the GP than in the cortical areas, and the GP was not as involved in the process by which a behavioral goal was transformed into an action. Furthermore, the length of time representing behavioral goal and action was shorter in the GP than in the PMd and dlPFC, indicating that the GP may play an important role in detecting individual behavioral events. These observations elucidate the involvement of the GP in goal-directed behavior.

Delusions are unfounded yet tenacious beliefs and a symptom of psychotic disorder. Varying degrees of delusional ideation are also found in the healthy population. Here, we empirically validated a neurocognitive model that explains both the formation and the persistence of delusional beliefs in terms of altered perceptual inference. In a combined behavioral and functional neuroimaging study in healthy participants, we used ambiguous visual stimulation to probe the relationship between delusion-proneness and the effect of learned predictions on perception. Delusional ideation was associated with less perceptual stability, but a stronger belief-induced bias on perception, paralleled by enhanced functional connectivity between frontal areas that encoded beliefs and sensory areas that encoded perception. These findings suggest that weakened lower-level predictions that result in perceptual instability are implicated in the emergence of delusional beliefs. In contrast, stronger higher-level predictions that sculpt perception into conformity with beliefs might contribute to the tenacious persistence of delusional beliefs.

The lateral prefrontal cortex (PFC), a hub of higher-level cognitive processing, is strongly modulated by midbrain dopamine (DA) neurons. The cellular mechanisms have been comprehensively studied in the context of short-term memory, but little is known about how DA regulates sensory inputs to PFC that precede and give rise to such memory activity. By preparing recipient cortical circuits for incoming signals, DA could be a powerful determinant of downstream cognitive processing. Here, we tested the hypothesis that prefrontal DA regulates the representation of sensory signals that are required for perceptual decisions. In rhesus monkeys trained to report the presence or absence of visual stimuli at varying levels of contrast, we simultaneously recorded extracellular single-unit activity and applied DA to the immediate vicinity of the neurons by micro-iontophoresis. We found that DA modulation of prefrontal neurons is not uniform but tailored to specialized neuronal classes. In one population of neurons, DA suppressed activity with high temporal precision but preserved signal/noise ratio. Neurons in this group had short visual response latencies and comprised all recorded narrow-spiking, putative interneurons. In a distinct population, DA increased excitability and enhanced signal/noise ratio by reducing response variability. These neurons had longer visual response latencies and were composed exclusively of broad-spiking, putative pyramidal neurons. By gating sensory inputs to PFC and subsequently strengthening the representation of sensory signals, DA might play an important role in shaping how the PFC initiates appropriate behavior in response to changes in the sensory environment.

The nucleus accumbens shell (NAc-S) plays an important role in the way stimuli that predict reward affect the performance of, and choice between, goal-directed actions in tests of outcome-specific Pavlovian-instrumental transfer (PIT). The neural processes involved in PIT downstream of the ventral striatum are, however, unknown. The NAc-S projects prominently to the ventral pallidum (VP), and in the current experiments, we assessed the involvement of the NAc-S to VP projection in specific PIT in rats. We first compared expression of the immediate-early gene c-Fos in the medial (VP-m) and lateral (VP-l) regions of the VP and in addition, used the retrograde tracer Fluoro-gold combined with c-Fos to assess the involvement of these pathways during PIT. Although there was no evidence of differential activation in neurons in the VP-l, the VP-m showed a selective increase in activity in rats tested for PIT compared with appropriate controls, as did NAc-S neurons projecting to the VP-m. To confirm that VP-m activity is important for PIT, we inactivated this region before test and found this inactivation blocked the influence of predictive learning on choice. Finally, to confirm the functional importance of the NAc-S to VP-m pathway we used a disconnection procedure, using asymmetrical inactivation of the NAc-S and either the ipsilateral or contralateral VP-m. Specific PIT was blocked but only by inactivation of the NAc-S and VP-m in contralateral hemispheres. These results suggest that the NAc-S and VP-m form part of a circuit mediating the effects of predictive learning on choice.

Dysfunctions in frontostriatal brain circuits have been implicated in neuropsychiatric disorders, including those characterized by the presence of repetitive behaviors. We developed an optogenetic approach to block repetitive, compulsive behavior in a mouse model in which deletion of the synaptic scaffolding gene, Sapap3, results in excessive grooming. With a delay-conditioning task, we identified in the mutants a selective deficit in behavioral response inhibition and found this to be associated with defective down-regulation of striatal projection neuron activity. Focused optogenetic stimulation of the lateral orbitofrontal cortex and its terminals in the striatum restored the behavioral response inhibition, restored the defective down-regulation, and compensated for impaired fast-spiking neuron striatal microcircuits. These findings raise promising potential for the design of targeted therapy for disorders involving excessive repetitive behavior.

It is widely accepted that dorsal striatum neurons participate in either the direct pathway (expressing dopamine D1 receptors) or the indirect pathway (expressing D2 receptors), controlling voluntary movements in an antagonistically balancing manner. The D1- and D2-expressing neurons are activated and inactivated, respectively, by dopamine released from substantia nigra neurons encoding reward expectation. However, little is known about the functional representation of motor information and its reward modulation in the individual striatal neurons constituting the two pathways. In this study, we juxtacellularly recorded the spike activity of single neurons in the dorsolateral striatum of rats performing voluntary forelimb movements in a reward-predictable condition. Some of these neurons were identified morphologically by a combination of juxtacellular visualization and in situ hybridization for D1 mRNA. We found that the striatal neurons exhibited distinct functional activations before and during the forelimb movement, regardless of the expression of D1 mRNA. They were often positively, but rarely negatively, modulated by expectation of a reward for the correct motor response. The positive reward modulation was independent of behavioral differences in motor performance. In contrast, regular-spiking and fast-spiking neurons in all layers of the motor cortex displayed only minor and unbiased reward modulation of their functional activation in relation to the execution of forelimb movement. Our results suggest that the direct and indirect pathway neurons contribute cooperatively rather than antagonistically to the spatiotemporal control of voluntary movements, and that motor information is subcortically integrated with reward information through dopaminergic and other signals in the skeletomotor loop of the basal ganglia.

Finding sought visual targets requires our brains to flexibly combine working memory information about what we are looking for with visual information about what we are looking at. To investigate the neural computations involved in finding visual targets, we recorded neural responses in inferotemporal cortex (IT) and perirhinal cortex (PRH) as macaque monkeys performed a task that required them to find targets in sequences of distractors. We found similar amounts of total task-specific information in both areas; however, information about whether a target was in view was more accessible using a linear read-out or, equivalently, was more untangled in PRH. Consistent with the flow of information from IT to PRH, we also found that task-relevant information arrived earlier in IT. PRH responses were well-described by a functional model in which computations in PRH untangle input from IT by combining neurons with asymmetric tuning correlations for target matches and distractors.

Hierarchical organization is widespread in the societies of humans and other animals, both in social structure and in decision-making contexts. In the case of collective motion, the majority of case studies report that dominant individuals lead group movements, in agreement with the common conflation of the terms “dominance” and “leadership.” From a theoretical perspective, if social relationships influence interactions during collective motion, then social structure could also affect leadership in large, swarm-like groups, such as fish shoals and bird flocks. Here we use computer-vision–based methods and miniature GPS tracking to study, respectively, social dominance and in-flight leader–follower relations in pigeons. In both types of behavior we find hierarchically structured networks of directed interactions. However, instead of being conflated, dominance and leadership hierarchies are completely independent of each other. Although dominance is an important aspect of variation among pigeons, correlated with aggression and access to food, our results imply that the stable leadership hierarchies in the air must be based on a different set of individual competences. In addition to confirming the existence of independent and context-specific hierarchies in pigeons, we succeed in setting out a robust, scalable method for the automated analysis of dominance relationships, and thus of social structure, applicable to many species. Our results, as well as our methods, will help to incorporate the broader context of animal social organization into the study of collective behavior.

Maximizing rewards per unit time is ideal for success and survival in humans and animals. This goal can be approached by speeding up reward-directed behavior, which is done most efficiently by acquiring skills. Importantly, reward-directed skills consist of two components that occur sequentially: finding a good object (i.e., object skill) and acting on the object (i.e., action skill). Recent studies suggest that object skill is based on high-capacity memory for object–value associations. When a learned object is encountered, the corresponding memory is quickly expressed as a value-based gaze bias, leading to the automatic acquisition or avoidance of the object. Object skill thus plays a crucial role in increasing rewards per unit time.

Predictions about future rewarding events have a powerful influence on behaviour. The phasic spike activity of dopamine-containing neurons, and corresponding dopamine transients in the striatum, are thought to underlie these predictions, encoding positive and negative reward prediction errors [1-5]. However, many behaviours are directed towards distant goals, for which transient signals may fail to provide sustained drive. Here we report an extended mode of reward-predictive dopamine signalling in the striatum that emerged as rats moved towards distant goals. These dopamine signals, which were detected with fast-scan cyclic voltammetry (FSCV), gradually increased or—in rare instances—decreased as the animals navigated mazes to reach remote rewards, rather than having phasic or steady tonic profiles. These dopamine increases (ramps) scaled flexibly with both the distance and size of the rewards. During learning, these dopamine signals showed spatial preferences for goals in different locations and readily changed in magnitude to reflect changing values of the distant rewards. Such prolonged dopamine signalling could provide sustained motivational drive, a control mechanism that may be important for normal behaviour and that can be impaired in a range of neurologic and neuropsychiatric disorders.

Trait sensation-seeking, defined as a need for varied, complex, and intense sensations, represents a relatively underexplored hedonic drive in human behavioral neuroscience research. It is related to increased risk for a range of behaviors including substance use, gambling, and risky sexual practice. Individual differences in self-reported sensation-seeking have been linked to brain dopamine function, particularly at D2-like receptors, but so far no causal evidence exists for a role of dopamine in sensation-seeking behavior in humans. Here, we investigated the effects of the selective D2/D3 agonist cabergoline on performance of a probabilistic risky choice task in healthy humans using a sensitive within-subject, placebo-controlled design. Cabergoline significantly influenced the way participants combined different explicit signals regarding probability and loss when choosing between response options associated with uncertain outcomes. Importantly, these effects were strongly dependent on baseline sensation-seeking score. Overall, cabergoline increased the sensitivity of choice to information about the probability of winning, while decreasing discrimination according to the magnitude of potential losses associated with different options. The largest effects of the drug were observed in participants with lower sensation-seeking scores. These findings provide evidence that risk-taking behavior in humans can be directly manipulated by a dopaminergic drug, but that the effectiveness of such a manipulation depends on baseline differences in sensation-seeking trait. This emphasizes the importance of considering individual differences when investigating manipulation of risky decision-making, and may have relevance for the development of pharmacotherapies for disorders involving excessive risk-taking in humans, such as pathological gambling.

The perirhinal cortex (PRh) and basolateral amygdala (BLA) appear to mediate distinct aspects of learning and memory. Here, we used rats to investigate the involvement of the PRh and BLA in acquisition and extinction of associations between two different environmental stimuli (e.g., a tone and a light) in higher-order conditioning. When both stimuli were neutral, infusion of the GABAA receptor agonist muscimol or the NMDA receptor (NMDAR) antagonist ifenprodil into the PRh impaired formation of the association. However, when one stimulus was neutral and the other was a learned danger signal, acquisition and extinction of the association between them were unaffected by manipulations targeting the PRh. Temporary inactivation of the BLA had the opposite effect: formation and extinction of an association between two stimuli were spared when both stimuli were neutral, but impaired when one stimulus was a learned danger signal. Subsequent experiments showed that the experience of fear per se shifts processing of an association between neutral stimuli from the PRh to the BLA. When training was conducted in a dangerous environment, formation and extinction of an association between neutral stimuli were impaired by BLA inactivation or NMDAR blockade in this region, but were unaffected by PRh inactivation. These double dissociations in the roles of the PRh and BLA in learning under different stimulus and environmental conditions imply that fear-induced activation of the amygdala changes how the brain processes sensory stimuli. Harmless stimuli are treated as potentially harmful, resulting in a shift from cortical processing in the PRh to subcortical processing in the BLA.

People vary widely in how much they discount delayed rewards, yet little is known about the sources of these differences. Here we demonstrate that neural activity in ventromedial prefrontal cortex (VMPFC) and ventral striatum (VS) when human subjects are asked to merely think about the future—specifically, to judge the subjective length of future time intervals—predicts delay discounting. High discounters showed lower activity for longer time delays, while low discounters showed the opposite pattern. Our results demonstrate that the correlation between VMPFC and VS activity and discounting occurs even in the absence of choices about future rewards, and does not depend on a person explicitly evaluating future outcomes or judging their self-relevance. This suggests a link between discounting and basic processes involved in thinking about the future, such as temporal perception. Our results also suggest that reducing impatience requires not suppression of VMPFC and VS activity altogether, but rather modulation of how these regions respond to the present versus the future.

Dopamine is essential to cognitive functions. However, despite abundant studies demonstrating that dopamine neuron activity is related to reinforcement and motivation, little is known about what signals dopamine neurons convey to promote cognitive processing. We therefore examined dopamine neuron activity in monkeys performing a delayed matching-to-sample task that required working memory and visual search. We found that dopamine neurons responded to task events associated with cognitive operations. A subset of dopamine neurons was activated by visual stimuli if the monkey had to store the stimuli in working memory. These neurons were located dorsolaterally in the substantia nigra pars compacta, whereas ventromedial dopamine neurons, some in the ventral tegmental area, represented reward prediction signals. Furthermore, dopamine neurons monitored visual search performance, becoming active when the monkey made an internal judgment that the search was successfully completed. Our findings suggest an anatomical gradient of dopamine signals along the dorsolateral–ventromedial axis of the ventral midbrain.