The human brain has the extraordinary capability to transform cluttered sensory input into distinct object representations. For example, it can rapidly and seemingly effortlessly detect object categories in complex natural scenes. Surprisingly, category tuning is not sufficient to achieve conscious recognition of objects. What neural process beyond category extraction might elevate neural representations to the level where objects are consciously perceived? Here we show that visible and invisible faces produce similar category-selective responses in the ventral visual cortex. The pattern of neural activity evoked by visible faces could be used to decode the presence of invisible faces and vice versa. However, only visible faces caused extensive response enhancements and changes in neural oscillatory synchronization, as well as increased functional connectivity between higher and lower visual areas. We conclude that conscious face perception is more tightly linked to neural processes of sustained information integration and binding than to processes accommodating face category tuning.

It has become an accepted paradigm that humans have “prosocial preferences” that lead to higher levels of cooperation than those that would maximize their personal financial gain. However, the existence of prosocial preferences has been inferred post hoc from the results of economic games, rather than tested directly. Here, we test how behavior in a public-goods game is influenced by knowledge of the consequences of actions for other players. We found that (i) individuals cooperate at similar levels, even when they are not informed that their behavior benefits others; (ii) an increased awareness of how cooperation benefits others leads to a reduction, rather than an increase, in the level of cooperation; and (iii) cooperation can be either lower or higher than expected, depending on experimental design. Overall, these results contradict the prosocial preferences hypothesis and show how the complexity of human behavior can lead to misleading conclusions from controlled laboratory experiments.

Older adults are disproportionately vulnerable to fraud, and federal agencies have speculated that excessive trust explains their greater vulnerability. Two studies, one behavioral and one using neuroimaging methodology, identified age differences in trust and their neural underpinnings. Older and younger adults rated faces high in trust cues similarly, but older adults perceived faces with cues to untrustworthiness to be significantly more trustworthy and approachable than younger adults. This age-related pattern was mirrored in neural activation to cues of trustworthiness. Whereas younger adults showed greater anterior insula activation to untrustworthy versus trustworthy faces, older adults showed muted activation of the anterior insula to untrustworthy faces. The insula has been shown to support interoceptive awareness that forms the basis of “gut feelings,” which represent expected risk and predict risk-avoidant behavior. Thus, a diminished “gut” response to cues of untrustworthiness may partially underlie older adults’ vulnerability to fraud.

Sensory-motor behavior results from a complex interaction of noisy sensory data with priors based on recent experience. By varying the stimulus form and contrast for the initiation of smooth pursuit eye movements in monkeys, we show that visual motion inputs compete with two independent priors: one prior biases eye speed toward zero; the other prior attracts eye direction according to the past several days' history of target directions. The priors bias the speed and direction of the initiation of pursuit for the weak sensory data provided by the motion of a low-contrast sine wave grating. However, the priors have relatively little effect on pursuit speed and direction when the visual stimulus arises from the coherent motion of a high-contrast patch of dots. For any given stimulus form, the mean and variance of eye speed covary in the initiation of pursuit, as expected for signal-dependent noise. This relationship suggests that pursuit implements a trade-off between movement accuracy and variation, reducing both when the sensory signals are noisy. The trade-off is implemented as a competition of sensory data and priors that follows the rules of Bayesian estimation. Computer simulations show that the priors can be understood as direction-specific control of the strength of visual-motor transmission, and can be implemented in a neural-network model that makes testable predictions about the population response in the smooth eye movement region of the frontal eye fields.
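The Bayesian competition between priors and sensory data can be illustrated with the standard Gaussian cue-combination rule, in which the posterior estimate is a reliability-weighted average of the sensory measurement and the prior mean. This is a minimal sketch, not the paper's actual model; all parameter values are illustrative:

```python
def posterior_speed(sensory_speed, sensory_sigma, prior_mean=0.0, prior_sigma=2.0):
    """Reliability-weighted combination of a noisy sensory speed estimate
    with a prior that biases eye speed toward zero (Gaussian-Gaussian case).
    Parameter values are illustrative, not fit to data."""
    w_sense = 1.0 / sensory_sigma ** 2   # precision of the sensory evidence
    w_prior = 1.0 / prior_sigma ** 2     # precision of the prior
    return (w_sense * sensory_speed + w_prior * prior_mean) / (w_sense + w_prior)

# High-contrast dots: reliable motion signal, so the prior has little pull.
high_contrast = posterior_speed(10.0, sensory_sigma=0.5)
# Low-contrast grating: noisy signal, so the estimate is dragged toward zero.
low_contrast = posterior_speed(10.0, sensory_sigma=4.0)
```

Lowering the contrast corresponds to raising the sensory noise, so the same physical target speed yields a much slower posterior estimate, mimicking the bias toward zero seen for weak stimuli.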

The division of human learning systems into reward and punishment opponent modules is still a debated issue. While the implication of ventral prefrontostriatal circuits in reward-based learning is well established, the neural underpinnings of punishment-based learning remain unclear. To elucidate the causal implication of brain regions that were related to punishment learning in a previous functional neuroimaging study, we tested the effects of brain damage on behavioral performance, using the same task contrasting monetary gains and losses. Cortical and subcortical candidate regions, the anterior insula and dorsal striatum, were assessed in patients with brain tumors and patients with Huntington disease, respectively. Both groups exhibited selective impairment of punishment-based learning. Computational modeling suggested complementary roles for these structures: the anterior insula might be involved in learning the negative value of loss-predicting cues, whereas the dorsal striatum might be involved in choosing between those cues so as to avoid the worst.

The strong reciprocity model of the evolution of human cooperation has gained some acceptance, partly on the basis of support from experimental findings. The observation that unfair offers in the ultimatum game are frequently rejected constitutes an important piece of the experimental evidence for strong reciprocity. In the present study, we challenged the assumption, held by strong reciprocity theorists, that the negative reciprocity observed in the ultimatum game is inseparably related to positive reciprocity, as two sides of a single preference for fairness. This prediction of an inseparable relationship between positive and negative reciprocity was rejected on the basis of a series of experiments that we conducted using the ultimatum game, the dictator game, the trust game, and the prisoner’s dilemma game. We did not find any correlation between participants’ tendencies to reject unfair offers in the ultimatum game and their tendencies to exhibit various prosocial behaviors in the other games, including their inclination to reciprocate positively in the trust game. The participants’ responses to postexperimental questions support the view that the rejection of unfair offers in the ultimatum game is a tacit strategy for avoiding the imposition of an inferior status.

The superior capability of cognitive experts largely depends on automatic, quick information processing, which is often referred to as intuition. Intuition develops following extensive long-term training. There are many cognitive models of intuition development, but its neural basis is not known. Here we trained novices for 15 weeks to learn a simple board game and measured their brain activity in the early and late phases of training while they quickly generated the best next move for a given board pattern. We found that activation in the head of the caudate nucleus developed over the course of training, in parallel with the development of the capability to quickly generate the best next move, and that the magnitude of the caudate activity was correlated with the subjects' performance. In contrast, cortical activations, which already appeared in the early phase of training, did not change further. Thus, neural activation in the caudate head, but not in cortical areas, tracked the development of the capability to quickly generate the best next move, indicating that circuitries including the caudate head may automate cognitive computations.

Acquiring the significance of events on the basis of reward-related information is critical for animals to survive and to conduct social activities. The importance of the perirhinal cortex for reward-related information processing has been suggested. To examine whether neurons in this cortex represent reward information flexibly when a visual stimulus indicates either a rewarded or an unrewarded outcome, neuronal activity in the macaque perirhinal cortex was examined using a conditional-association cued-reward task. The task design allowed us to study how neuronal responses depended on the animal's prediction of whether it would or would not be rewarded. Two visual stimuli, a color stimulus as Cue1 followed by a pattern stimulus as Cue2, were presented sequentially. Each pattern stimulus was conditionally associated with both rewarded and unrewarded outcomes, depending on the preceding color stimulus. We found activity that depended on the two reward conditions during Cue2 (pattern stimulus) presentation. This response emerged after the response that depended on the image identity of Cue2. A response delineating a specific cue sequence also appeared, between the response dependent on the identity of Cue2 and the response dependent on the reward conditions. Thus, when Cue1 sets the context for whether or not Cue2 indicates a reward, this region represents the meaning of Cue2, i.e., the reward conditions, independent of the identity of Cue2. These results suggest that neurons in the perirhinal cortex do more than associate a single stimulus with a reward, achieving flexible representations of reward information.

Humans are able to flexibly devise and implement rules to reach their desired goals. For simple situations, we can use single rules, such as “if traffic light is green then cross the street.” In most cases, however, more complex rule sets are required, involving the integration of multiple layers of control. Although it has been shown that prefrontal cortex is important for rule representation, it has remained unclear how the brain encodes more complex rule sets. Here, we investigate how the brain represents the order in which different parts of a rule set are evaluated. Participants had to follow compound rule sets that involved the concurrent application of two single rules in a specific order, where one of the rules always had to be evaluated first. The rules and their assigned order were independently manipulated. By applying multivariate decoding to fMRI data, we found that the identity of the current rule was encoded in a frontostriatal network involving right ventrolateral prefrontal cortex, right superior frontal gyrus, and dorsal striatum. In contrast, rule order could be decoded in the dorsal striatum and in the right premotor cortex. The nonhomogeneous distribution of information across brain areas was confirmed by follow-up analyses focused on relevant regions of interest. We argue that the brain encodes complex rule sets by “decomposing” them into their constituent features, which are represented in different brain areas, according to the aspect of information to be maintained.

Verbal communication is a joint activity; however, speech production and comprehension have primarily been analyzed as independent processes within the boundaries of individual brains. Here, we applied fMRI to record brain activity from both speakers and listeners during natural verbal communication. We used the speaker's spatiotemporal brain activity to model listeners’ brain activity and found that the speaker's activity is spatially and temporally coupled with the listener's activity. This coupling vanishes when participants fail to communicate. Moreover, although on average the listener's brain activity mirrors the speaker's activity with a delay, we also found areas that exhibit predictive anticipatory responses. We connected the extent of neural coupling to a quantitative measure of story comprehension and found that the greater the anticipatory speaker–listener coupling, the greater the understanding. We argue that the observed alignment of production- and comprehension-based processes serves as a mechanism by which brains convey information.
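Lagged speaker–listener coupling can be illustrated with a simple lagged-correlation analysis on synthetic time series. This is a toy sketch, not the study's actual spatiotemporal model; the signals and the 3-sample delay are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy time series: the "listener" mirrors the "speaker" with a 3-sample
# delay plus noise. Lagged correlation recovers that delay.
speaker = rng.normal(0.0, 1.0, 500)
listener = np.roll(speaker, 3) + rng.normal(0.0, 0.5, 500)

def lagged_corr(a, b, lag):
    """Correlation of a[t] with b[t + lag] (positive lag: b follows a)."""
    if lag > 0:
        return np.corrcoef(a[:-lag], b[lag:])[0, 1]
    if lag < 0:
        return np.corrcoef(a[-lag:], b[:lag])[0, 1]
    return np.corrcoef(a, b)[0, 1]

# Scan a window of lags and pick the one with the strongest coupling.
best_lag = max(range(-10, 11), key=lambda k: lagged_corr(speaker, listener, k))
```

A positive best lag means the listener's signal follows the speaker's; a negative best lag at some site would correspond to an anticipatory response of the kind reported in the abstract.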

Computational and learning theory models propose that behavioral control reflects value that is both cached (computed and stored during previous experience) and inferred (estimated on the fly on the basis of knowledge of the causal structure of the environment). The latter is thought to depend on the orbitofrontal cortex. Yet some accounts propose that the orbitofrontal cortex contributes to behavior by signaling “economic” value, regardless of the associative basis of the information. We found that the orbitofrontal cortex is critical for both value-based behavior and learning when value must be inferred but not when a cached value is sufficient. The orbitofrontal cortex is thus fundamental for accessing model-based representations of the environment to compute value rather than for signaling value per se.
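The cached-versus-inferred distinction can be made concrete with a toy sketch: a cached value is a stored number, whereas an inferred value is recomputed on the fly from a causal model of action, outcome, and current outcome utility. The names and numbers below are hypothetical, chosen only to show why a change in outcome value affects inferred but not cached value without new direct experience:

```python
# Cached (model-free) value: a number stored during previous experience.
cached_value = {"lever": 1.0}

# Model-based components: causal structure plus current outcome utilities.
outcome_of = {"lever": "food"}   # hypothetical action -> outcome model
utility = {"food": 1.0}          # current value of each outcome

def inferred_value(action):
    """Value computed on the fly from knowledge of the causal structure."""
    return utility[outcome_of[action]]

before = (cached_value["lever"], inferred_value("lever"))

utility["food"] = 0.0            # outcome devalued (e.g., by satiation)

after = (cached_value["lever"], inferred_value("lever"))
# The cached value is unchanged; the inferred value drops immediately.
```

On this sketch, behavior guided by inferred value requires access to the model, consistent with the abstract's claim that the orbitofrontal cortex is needed only when value must be inferred.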

Motor actions are facilitated when expected reward value is high. It is hypothesized that there are neurons that encode expected reward values to modulate impending actions and potentially represent motivation signals. Here, we present evidence suggesting that the ventral pallidum (VP) may participate in this process. We recorded single neuronal activity in the monkey VP using a saccade task with a direction-dependent reward bias. Depending on the amount of the expected reward, VP neurons increased or decreased their activity tonically until the reward was delivered, for both ipsiversive and contraversive saccades. Changes in expected reward values were also associated with changes in saccade performance (latency and velocity). Furthermore, bilateral muscimol-induced inactivation of the VP abolished the reward-dependent changes in saccade latencies. These data suggest that the VP provides expected reward value signals that are used to facilitate or inhibit motor actions.

Neural activity in orbitofrontal cortex has been linked to flexible representations of stimulus-outcome associations. Such value representations are known to emerge with learning, but the neural mechanisms supporting this phenomenon are not well understood. Here, we provide evidence for a causal role for NMDA receptors (NMDARs) in mediating spike pattern discriminability, neural plasticity, and rhythmic synchronization in relation to evaluative stimulus processing and decision making. Using tetrodes, single-unit spike trains and local field potentials were recorded during local, unilateral perfusion of an NMDAR blocker in rat OFC. In the absence of behavioral effects, NMDAR blockade severely hampered outcome-selective spike pattern formation to olfactory cues, relative to control perfusions. Moreover, NMDAR blockade shifted local rhythmic synchronization to higher frequencies and degraded its linkage to stimulus-outcome selective coding. These results demonstrate the importance of NMDARs for cue-outcome associative coding in OFC during learning and illustrate how NMDAR blockade disrupts network dynamics.

Categorical choices are preceded by the accumulation of sensory evidence in favor of one action or another. Current models describe evidence accumulation as a continuous process occurring at a constant rate, but this view is inconsistent with accounts of a psychological refractory period during sequential information processing. During multisample perceptual categorization, we found that the neural encoding of momentary evidence in human electrical brain signals and its subsequent impact on choice fluctuated rhythmically according to the phase of ongoing parietal delta oscillations (1–3 Hz). By contrast, lateralized beta-band power (10–30 Hz) overlying human motor cortex encoded the integrated evidence as a response preparation signal. These findings draw a clear distinction between central and motor stages of perceptual decision making, with successive samples of sensory evidence competing to pass through a serial processing bottleneck before being mapped onto action.
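The contrast between constant-rate accumulation and rhythmically gated accumulation can be sketched as follows. This is an illustrative toy model assuming a sinusoidal delta-band gain on each evidence sample; it is not the analysis used in the study:

```python
import numpy as np

rng = np.random.default_rng(0)

fs = 100.0                        # samples per second
t = np.arange(0.0, 2.0, 1.0 / fs)
evidence = 0.5 + rng.normal(0.0, 1.0, t.size)   # noisy momentary evidence

# Constant-rate accumulation: every sample is weighted equally.
constant = np.cumsum(evidence)

# Rhythmic gating: the weight of each sample waxes and wanes with the
# phase of a 2 Hz (delta-band) oscillation. Purely illustrative.
gain = 0.5 * (1.0 + np.sin(2.0 * np.pi * 2.0 * t))
gated = np.cumsum(gain * evidence)
```

Both traces integrate the same signed evidence, but the gated trace weights samples unevenly depending on when they arrive in the delta cycle, which is the kind of rhythmic fluctuation in evidence impact the abstract describes.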

Receiving social feedback such as praise or blame for one's character traits is a key component of everyday human interactions. It has been proposed that humans are positively biased when integrating social feedback into their self-concept. However, a mechanistic description of how humans process self-relevant feedback is lacking. Here, participants received feedback from peers after a real-life interaction. Participants processed feedback in a positively biased way, i.e., they changed their self-evaluations more toward desirable than toward undesirable feedback. Using functional magnetic resonance imaging we investigated two feedback components. First, the reward-related component correlated with activity in ventral striatum and in anterior cingulate cortex/medial prefrontal cortex (ACC/MPFC). Second, the comparison-related component correlated with activity in the mentalizing network, including the MPFC, the temporoparietal junction, the superior temporal sulcus, the temporal pole, and the inferior frontal gyrus. This comparison-related activity within the mentalizing system has a parsimonious interpretation: it correlated with the difference between participants' own evaluations and the feedback they received. Importantly, activity within the MPFC that integrated the reward-related and comparison-related components predicted the self-related positive updating bias across participants, offering a mechanistic account of positively biased feedback processing. Thus, theories on both reward and mentalizing are important for a better understanding of how social information is integrated into the human self-concept.

A critical component of decision making is the ability to adjust criteria for classifying stimuli. fMRI and drift diffusion models were used to explore the neural representations of perceptual criteria in decision making. The specific focus was on the relative engagement of perceptual- and decision-related neural systems in response to adjustments in perceptual criteria. Human participants classified visual stimuli as big or small based on criteria of different sizes, which effectively biased their choices toward one response over the other. A drift diffusion model was fit to the behavioral data to extract estimates of stimulus size, criterion size, and difficulty for each participant and condition. These parameter values were used as modulated regressors to create a highly constrained model for the fMRI analysis that accounted for several components of the decision process. The results show that perceptual criteria values were reflected by activity in left inferior temporal cortex, a region known to represent objects and their physical properties, whereas stimulus size was reflected by activation in occipital cortex. A frontoparietal network of regions, including dorsolateral prefrontal cortex and superior parietal lobule, corresponded to the decision variables resulting from the downstream stimulus–criterion comparison, independent of stimulus type. The results provide novel evidence that perceptual criteria are represented in stimulus space and serve as inputs to be compared with the presented stimulus, recruiting a common network of decision regions shown to be active in other simple decisions. This work advances our understanding of the neural correlates of decision flexibility and adjustments of behavioral bias.
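A drift diffusion model of this kind of task can be sketched as follows: the drift rate is the signed difference between stimulus size and criterion, and the first boundary crossed determines the "big" or "small" response. This is a minimal illustration with made-up parameters, not the model actually fit to the behavioral data:

```python
import random

def ddm_trial(stimulus_size, criterion, noise=1.0, threshold=30.0,
              dt=0.01, seed=None):
    """One drift-diffusion trial. Drift is the downstream stimulus-criterion
    comparison; the first boundary crossed gives the choice. Returns
    ('big' or 'small', decision time). All parameters are illustrative."""
    rng = random.Random(seed)
    drift = stimulus_size - criterion
    x, t = 0.0, 0.0
    while abs(x) < threshold:
        # Euler step of the diffusion: drift plus scaled Gaussian noise.
        x += drift * dt + rng.gauss(0.0, noise) * dt ** 0.5
        t += dt
    return ("big" if x > 0 else "small"), t

choice, rt = ddm_trial(stimulus_size=8.0, criterion=5.0, seed=1)
```

Raising the criterion flips the sign of the drift for mid-sized stimuli and so biases choices toward "small", which is the sense in which criterion adjustments bias responding in the abstract.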

Individual risk preferences have a large influence on decisions, such as financial investments, career and health choices, or gambling. Decision making under risk has been studied both behaviorally and on a neural level. It remains unclear, however, how risk attitudes are encoded and integrated with choice. Here, we investigate how risk preferences are reflected in neural regions known to process risk. We collected functional magnetic resonance images of 56 human subjects during a gambling task (Preuschoff et al., 2006). Subjects were grouped into risk averters and risk seekers according to the risk preferences they revealed in a separate lottery task. We found that during the anticipation of high-risk gambles, risk averters show stronger responses in ventral striatum and anterior insula compared to risk seekers. In addition, risk prediction error signals in anterior insula, inferior frontal gyrus, and anterior cingulate indicate that risk averters do not dissociate properly between gambles that are more or less risky than expected. We suggest this may result in a general overestimation of prospective risk and lead to risk avoidance behavior. This is the first study to show that behavioral risk preferences are reflected in the passive evaluation of risky situations. The results have implications for public policy in the financial and health domains.

Functional magnetic resonance imaging (fMRI) has revealed multiple subregions in monkey inferior temporal cortex (IT) that are selective for images of faces over other objects. The earliest of these subregions, the posterior lateral face patch (PL), has not been studied previously at the neurophysiological level. Perhaps not surprisingly, we found that PL contains a high concentration of “face-selective” cells when tested with standard image sets comparable to those used previously to define the region at the level of fMRI. However, we here report that several different image sets and analytical approaches converge to show that nearly all face-selective PL cells are driven by the presence of a single eye in the context of a face outline. Most strikingly, images containing only an eye, even when incorrectly positioned in an outline, drove neurons nearly as well as full-face images, and face images lacking only this feature led to longer latency responses. Thus, bottom-up face processing is relatively local and linearly integrates features—consistent with parts-based models—grounding investigation of how the presence of a face is first inferred in the IT face processing hierarchy.

Our gaze tends to be directed to objects previously associated with rewards. Such object values change flexibly or remain stable. Here we present evidence that the monkey substantia nigra pars reticulata (SNr) in the basal ganglia represents stable, rather than flexible, object values. After across-day learning of object–reward association, SNr neurons gradually showed a response bias to surprisingly many visual objects: inhibition to high-valued objects and excitation to low-valued objects. Many of these neurons were shown to project to the ipsilateral superior colliculus. This neuronal bias remained intact even after >100 d without further learning. In parallel with the neuronal bias, the monkeys tended to look at high-valued objects. The neuronal and behavioral biases were present even if no value was associated during testing. These results suggest that SNr neurons bias the gaze toward objects that were consistently associated with high values in one's history.

Mesocorticolimbic dopamine (DA) has been implicated in cost/benefit decision making about risks and rewards. The prefrontal cortex (PFC) and nucleus accumbens (NAc) are two DA terminal regions that contribute to decision making in distinct manners. However, how fluctuations of tonic DA levels may relate to different aspects of decision making remains to be determined. The present study measured DA efflux in the PFC and NAc with microdialysis in well trained rats performing a probabilistic discounting task. Selection of a small/certain option always delivered one pellet, whereas another, large/risky option yielded four pellets, with probabilities that decreased (100–12.5%) or increased (12.5–100%) across four blocks of trials. Yoked-reward groups were also included to control for reward delivery. PFC DA efflux during decision making decreased or increased over a session, corresponding to changes in large/risky reward probabilities. Similar profiles were observed from yoked-rewarded rats, suggesting that fluctuations in PFC DA reflect changes in the relative rate of reward received. NAc DA efflux also showed decreasing/increasing trends over the session during both tasks. However, DA efflux was higher during decision making on free- versus forced-choice trials and during periods of greater reward uncertainty. Moreover, changes in NAc DA closely tracked shifts in choice biases. These data reveal dynamic and dissociable fluctuations in PFC and NAc DA transmission associated with different aspects of risk-based decision making. PFC DA may signal changes in reward availability that facilitates modification of choice biases, whereas NAc DA encodes integrated signals about reward rates, uncertainty, and choice, reflecting implementation of decision policies.

Optimal choices benefit from previous learning. However, it is not clear how previously learned stimuli influence behavior to novel but similar stimuli. One possibility is to generalize based on the similarity between learned and current stimuli. Here, we use neuroscientific methods and a novel computational model to inform the question of how stimulus generalization is implemented in the human brain. Behavioral responses during an intradimensional discrimination task showed similarity-dependent generalization. Moreover, a peak shift occurred, i.e., the peak of the behavioral generalization gradient was displaced from the rewarded conditioned stimulus in the direction away from the unrewarded conditioned stimulus. To account for the behavioral responses, we designed a similarity-based reinforcement learning model wherein prediction errors generalize across similar stimuli and update their value. We show that this model predicts a similarity-dependent neural generalization gradient in the striatum as well as changes in responding during extinction. Moreover, across subjects, the width of generalization was negatively correlated with functional connectivity between the striatum and the hippocampus. This result suggests that hippocampus–striatal connections contribute to stimulus-specific value updating by controlling the width of generalization. In summary, our results shed light on the neurobiology of a fundamental, similarity-dependent learning principle that allows learning the value of stimuli that have never been encountered.
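A similarity-based reinforcement learning model of the kind described can be sketched as a standard delta rule whose update is broadcast across stimuli through a Gaussian similarity kernel. The sketch below is illustrative (grid, kernel width, and learning rate are invented, not the paper's fitted model), but it reproduces the peak shift: the maximum of the learned value gradient is displaced from the rewarded stimulus away from the unrewarded one:

```python
import numpy as np

# One stimulus dimension sampled on a fine grid; all values start at zero.
stimuli = np.linspace(0.0, 10.0, 101)
values = np.zeros(stimuli.size)
alpha, width = 0.3, 1.5                 # learning rate, generalization width
cs_plus, cs_minus = 40, 60              # indices of x = 4 (rewarded), x = 6

def kernel(idx):
    """Gaussian similarity of every stimulus to the trained one."""
    return np.exp(-((stimuli - stimuli[idx]) ** 2) / (2.0 * width ** 2))

for _ in range(200):                    # interleaved CS+/CS- training
    for idx, reward in ((cs_plus, 1.0), (cs_minus, 0.0)):
        delta = reward - values[idx]    # prediction error at the trained cue
        values += alpha * kernel(idx) * delta   # generalized value update

peak = stimuli[np.argmax(values)]       # gradient peak, shifted below x = 4
```

Because updates at the unrewarded stimulus suppress values on its side of the CS+, the learned gradient peaks slightly away from the rewarded stimulus, which is the peak-shift phenomenon.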

Estimating the value of potential actions is crucial for learning and adaptive behavior. We know little about how the human brain represents action-specific value outside of motor areas. This is partly because the neural correlates of value are difficult to detect with conventional (region of interest) functional magnetic resonance imaging (fMRI) analyses, as value may be represented in a distributed fashion. We address this limitation by applying a recently developed multivariate decoding method to high-resolution fMRI data in subjects performing an instrumental learning task. We found evidence for action-specific value signals in circumscribed regions, specifically ventromedial prefrontal cortex, putamen, thalamus, and insula cortex. In contrast, action-independent value signals were more widely represented across a large set of brain areas. Using multivariate Bayesian model comparison, we formally tested whether value-specific responses are spatially distributed or coherent. We found strong evidence that both action-specific and action-independent value signals are represented in a distributed fashion. Our results suggest that a surprisingly large number of classical reward-related areas contain distributed representations of action-specific values, representations that are likely to mediate between reward and adaptive behavior.

Animals respond to changing contingencies to maximize reward. The orbitofrontal cortex (OFC) is important for flexible responding when established contingencies change, but the underlying cognitive mechanisms are debated. We tested rats with sham or OFC lesions in radial maze tasks that varied the frequency of contingency changes and measured both perseverative and non-perseverative errors. When contingencies were changed rarely, rats with sham lesions learned quickly and performed better than rats with OFC lesions. Rats with sham lesions made fewer non-perseverative errors, rarely entering non-rewarded arms, and made more win–stay responses, returning to recently rewarded arms, than rats with OFC lesions. When contingencies were changed rapidly, however, rats with sham lesions learned more slowly, made more non-perseverative errors and fewer lose–shift responses, and returned more often to non-rewarded arms than rats with OFC lesions. The results support the view that the OFC integrates reward history and suggest that the availability of outcome expectancy signals can either improve or impair adaptive responding depending on reward stability.

Poor individuals often engage in behaviors, such as excessive borrowing, that reinforce the conditions of poverty. Some explanations for these behaviors focus on personality traits of the poor. Others emphasize environmental factors such as housing or financial access. We instead consider how certain behaviors stem simply from having less. We suggest that scarcity changes how people allocate attention: It leads them to engage more deeply in some problems while neglecting others. Across several experiments, we show that scarcity leads to attentional shifts that can help to explain behaviors such as overborrowing. We discuss how this mechanism might also explain other puzzles of poverty.

Primates are remarkably adept at ranking each other within social hierarchies, a capacity that is critical to successful group living. Surprisingly little, however, is understood about the neurobiology underlying this quintessential aspect of primate cognition. In our experiment, participants first acquired knowledge about a social and a nonsocial hierarchy and then used this information to guide investment decisions. We found that neural activity in the amygdala tracked the development of knowledge about a social, but not a nonsocial, hierarchy. Further, structural variations in amygdala gray matter volume accounted for interindividual differences in social transitivity performance. Finally, the amygdala expressed a neural signal selectively coding for social rank, whose robustness predicted the influence of rank on participants’ investment decisions. In contrast, we observed that the linear structure of both social and nonsocial hierarchies was represented at a neural level in the hippocampus. Our study implicates the amygdala in the emergence and representation of knowledge about social hierarchies and distinguishes the domain-general contribution of the hippocampus.

Intelligent agents balance speed of responding with accuracy of deciding. Stochastic accumulator models commonly explain this speed-accuracy tradeoff by strategic adjustment of response threshold. Several laboratories have identified specific neurons in prefrontal and parietal cortex with this accumulation process, yet no neurophysiological correlates of the speed-accuracy tradeoff have been described. We trained macaque monkeys to trade speed for accuracy on cue during visual search and recorded the activity of neurons in the frontal eye field. Unpredicted by any model, we discovered that the speed-accuracy tradeoff is accomplished through several distinct adjustments. Visually responsive neurons modulated baseline firing rate, sensory gain, and the duration of perceptual processing. Movement neurons triggered responses with activity modulated in a direction opposite to model predictions. Thus, current stochastic accumulator models provide an incomplete description of the neural processes accomplishing speed-accuracy tradeoffs. The diversity of neural mechanisms was reconciled with the accumulator framework through an integrated accumulator model constrained by requirements of the motor system.
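The competing accumulator accounts of the speed-accuracy tradeoff can be sketched with a single-boundary accumulator: the classic account moves the response threshold, whereas the neural data point to adjustments of baseline activity and sensory gain. A minimal illustration with invented parameters:

```python
import random

def accumulate(drift, gain=1.0, baseline=0.0, threshold=20.0,
               noise=1.0, dt=0.01, seed=0):
    """Single-boundary stochastic accumulator; returns time to threshold.
    Speed-accuracy settings can be realized by moving the threshold (the
    classic account) or by changing baseline and gain, as the frontal eye
    field data suggest. All parameter values here are invented."""
    rng = random.Random(seed)
    x, t = baseline, 0.0
    while x < threshold:
        # Euler step: gain-scaled drift plus scaled Gaussian noise.
        x += gain * drift * dt + rng.gauss(0.0, noise) * dt ** 0.5
        t += dt
    return t

fast = accumulate(drift=5.0, baseline=5.0, gain=1.5)  # speed emphasis
slow = accumulate(drift=5.0)                          # accuracy emphasis
```

With the same noise stream, a raised baseline and higher gain produce earlier threshold crossings, mimicking the speed-stress condition without any change in the threshold itself.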

We often have to make risky decisions between alternatives with outcomes that can be better or worse than the outcomes of safer alternatives. Although previous studies have implicated various brain regions in risky decision making, it remains unknown which regions are crucial for balancing whether to take a risk or play it safe. Here, we focused on the anterior insular cortex (AIC), the causal involvement of which in risky decision making is still unclear, although human imaging studies have reported AIC activation in various gambling tasks. We investigated the effects of temporarily inactivating the AIC on rats' risk preference in two types of gambling tasks, one in which risk arose in reward amount and one in which it arose in reward delay. As a control within the same subjects, we inactivated the adjacent orbitofrontal cortex (OFC), which is well known to affect risk preference. In both gambling tasks, AIC inactivation decreased risk preference whereas OFC inactivation increased it. In risk-free control situations, AIC and OFC inactivations did not affect decision making. These results suggest that the AIC is causally involved in risky decision making and promotes risk taking. The AIC and OFC may be crucial for the opposing motives of whether to take a risk or avoid it.

The basal ganglia play a pivotal role in reward-oriented behavior. The striatum, an input channel of the basal ganglia, is composed of subdivisions that are topographically connected with different cortical and subcortical areas. To test whether reward information is differentially processed in the different parts of the striatum, we compared reward-related neuronal activity along the dorsolateral–ventromedial axis in the caudate nucleus of monkeys performing an asymmetrically rewarded oculomotor task. In a given block, a target in one position was associated with a large reward, whereas the other target was associated with a small reward. The target position–reward value contingency was switched between blocks. We found the following: (1) activity that reflected the block-wise reward contingency emerged before the appearance of a visual target, and it was more prevalent in the dorsal, rather than central and ventral, caudate; (2) activity that was positively related to the reward size of the current trial was evident, especially after reward delivery, and it was more prevalent in the ventral and central, rather than dorsal, caudate; and (3) activity that was modulated by the memory of the outcomes of the previous trials was evident in the dorsal and central caudate. This multiple reward information, together with the target-direction information, was represented primarily by individual caudate neurons, and the different reward information was represented in caudate subpopulations with distinct electrophysiological properties, e.g., baseline firing and spike width. These results suggest parallel processing of different reward information by the basal ganglia subdivisions defined by extrinsic connections and intrinsic properties.

Humans frequently make real-world decisions based on rapid evaluations of minimal information; for example, should we talk to an attractive stranger at a party? Little is known, however, about how the brain makes rapid evaluations with real and immediate social consequences. To address this question, we scanned participants with functional magnetic resonance imaging (fMRI) while they viewed photos of individuals whom they subsequently met at real-life “speed-dating” events. Neural activity in two areas of dorsomedial prefrontal cortex (DMPFC), namely the paracingulate cortex and the rostromedial prefrontal cortex (RMPFC), was predictive of whether each individual would ultimately be pursued for a romantic relationship or rejected. Activity in these areas was attributable to two distinct components of romantic evaluation: consensus judgments about physical beauty (paracingulate cortex) and individualized preferences based on a partner's perceived personality (RMPFC). These data identify novel computational roles for these regions of the DMPFC in even very rapid social evaluations. Even a first glance, then, can accurately predict romantic desire, but that glance involves a mix of physical and psychological judgments that depend on specific regions of DMPFC.

Wednesday, October 31, 2012

We often perform movements and actions on the basis of internal motivations and without any explicit instructions or cues. One common example of such behaviors is our ability to initiate movements solely on the basis of an internally generated sense of the passage of time. In order to isolate the neuronal signals responsible for such timed behaviors, we devised a task that requires nonhuman primates to move their eyes consistently at regular time intervals in the absence of any external stimulus events and without an immediate expectation of reward. Despite the lack of sensory information, we found that animals were remarkably precise and consistent in timed behaviors, with standard deviations on the order of 100 ms. To examine the potential neural basis of this precision, we recorded from single neurons in the lateral intraparietal area (LIP), which has been implicated in the planning and execution of eye movements. In contrast to previous studies that observed a build-up of activity associated with the passage of time, we found that LIP activity decreased at a constant rate between timed movements. Moreover, the magnitude of activity was predictive of the timing of the impending movement. Interestingly, this relationship depended on eye movement direction: activity was negatively correlated with timing when the upcoming saccade was toward the neuron's response field and positively correlated when the upcoming saccade was directed away from the response field. This suggests that LIP activity encodes timed movements in a push-pull manner by signaling both for saccade initiation toward one target and for prolonged fixation on the other target. Thus, timed movements in this task appear to reflect competition between local populations of task-relevant neurons rather than a global timing signal.

fMRI research suggests that both the posterior parietal cortex (PPC) and dorsolateral prefrontal cortex (DLPFC) help individuals select better long-term monetary gains during intertemporal choice. Previous neuromodulation research has demonstrated that disruption of the DLPFC interferes with this ability. However, it is unclear whether the PPC performs a similarly important function during intertemporal choice, and whether the functions performed by either region impact choices involving losses. In the current study, we used low-frequency repetitive transcranial magnetic stimulation to examine whether the PPC and DLPFC both normally facilitate selection of gains and losses with better long-term value than alternatives during intertemporal choice. We found that disruption of either region in the right hemisphere led to greater selection of both gains and losses that had better immediate, but worse long-term value than alternatives. This indicates that activity in both regions helps individuals optimize long-term value relative to immediate value in general, rather than being specific to choices involving gains. However, there were slightly different patterns of effects following disruption of the right PPC and right DLPFC, suggesting that each region may perform somewhat different functions that help optimize choice.

Newly experienced events are often remembered together with how rewarding the experiences are personally. Although the hippocampus is a candidate structure where subjective values are integrated with other elements of episodic memory, it is uncertain whether and how the hippocampus processes value-related information. We examined how activity of dorsal CA1 and dorsal subicular neurons in rats performing a dynamic foraging task was related to reward values that were estimated using a reinforcement learning model. CA1 neurons carried significant signals related to action values before the animal revealed its choice behaviorally, indicating that the information on the expected values of potential choice outcomes was available in CA1. Moreover, after the outcome of the animal's goal choice was revealed, CA1 neurons carried robust signals for the value of chosen action and they temporally overlapped with the signals related to the animal's goal choice and its outcome, indicating that all the signals necessary to evaluate the outcome of an experienced event converged in CA1. On the other hand, value-related signals were substantially weaker in the subiculum. These results suggest a major role of CA1 in adding values to experienced events during episodic memory encoding. Given that CA1 neuronal activity is modulated by diverse attributes of an experienced event, CA1 might be a place where all the elements of episodic memory are integrated.
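The abstract does not spell out the reinforcement learning model used to estimate values, but the standard incremental update typically fit to dynamic foraging data can be sketched as follows (the learning rate and the choice-reward sequence here are hypothetical):

```python
def update_action_values(q, action, reward, alpha=0.1):
    """One Rescorla-Wagner / Q-learning step: move the chosen action's
    estimated value toward the received reward by a fraction alpha."""
    q = dict(q)  # keep the update side-effect free
    q[action] += alpha * (reward - q[action])
    return q

# Hypothetical two-goal foraging sequence of (chosen goal, reward) pairs.
history = [("left", 1), ("left", 1), ("right", 0), ("left", 1), ("right", 0)]
q = {"left": 0.0, "right": 0.0}
for action, reward in history:
    q = update_action_values(q, action, reward)
print(q)  # the repeatedly rewarded goal accrues higher estimated value
```

Signals correlated with quantities like `q[action]` before the choice (action value) and after the outcome (chosen value) are what the study looked for in CA1 and subiculum.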

Using neuroimaging in combination with computational modeling, this study shows that decision threshold modulation for reward maximization is accompanied by a change in effective connectivity within corticostriatal and cerebellar–striatal brain systems. Research on perceptual decision making suggests that people make decisions by accumulating sensory evidence until a decision threshold is crossed. This threshold can be adjusted to changing circumstances, to maximize rewards. Decision making thus requires effectively managing the amount of accumulated evidence versus the amount of available time. Importantly, the neural substrate of this decision threshold modulation is unknown. Participants performed a perceptual decision-making task in blocks with identical duration but different reward schedules. Behavioral and modeling results indicate that human subjects modulated their decision threshold to maximize net reward. Neuroimaging results indicate that decision threshold modulation was achieved by adjusting effective connectivity within corticostriatal and cerebellar–striatal brain systems, the former being responsible for processing of accumulated sensory evidence and the latter being responsible for automatic, subsecond temporal processing. Participants who adjusted their threshold to a greater extent (and gained more net reward) also showed a greater modulation of effective connectivity. These results reveal a neural mechanism that underlies decision makers' abilities to adjust to changing circumstances to maximize reward.
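The logic of threshold modulation for reward maximization can be made concrete with the standard closed-form expressions for a symmetric drift-diffusion model; the specific parameter values below are illustrative, not those of the study.

```python
import math

def ddm_accuracy(drift, threshold, noise=1.0):
    """Analytic accuracy of an unbiased, symmetric drift-diffusion model."""
    return 1.0 / (1.0 + math.exp(-2.0 * drift * threshold / noise**2))

def ddm_mean_rt(drift, threshold, noise=1.0):
    """Analytic mean decision time of the same model."""
    return (threshold / drift) * math.tanh(drift * threshold / noise**2)

def reward_rate(threshold, drift=0.1, iti=50.0, penalty=0.0):
    """Expected reward per unit time: correct trials pay 1, errors pay
    `penalty`; each trial costs decision time plus a fixed intertrial interval."""
    p = ddm_accuracy(drift, threshold)
    t = ddm_mean_rt(drift, threshold)
    return (p * 1.0 + (1.0 - p) * penalty) / (t + iti)

# Sweep thresholds: reward rate peaks at an intermediate setting, so neither
# the fastest nor the most cautious policy maximizes net reward.
thresholds = [t / 2.0 for t in range(1, 61)]
best = max(thresholds, key=reward_rate)
print(best, reward_rate(best))
```

Subjects who shift their threshold toward this interior optimum under a given reward schedule are behaving as the modeling results in the abstract describe.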

The social and neural sciences share a common interest in understanding the mechanisms that underlie human behaviour. However, interactions between neuroscience and social science disciplines remain strikingly narrow and tenuous. We illustrate the scope and challenges for such interactions using the paradigmatic example of neuroeconomics. Using quantitative analyses of both its scientific literature and the social networks in its intellectual community, we show that neuroeconomics now reflects a true disciplinary integration, such that research topics and scientific communities with interdisciplinary span exert greater influence on the field. However, our analyses also reveal key structural and intellectual challenges in balancing the goals of neuroscience with those of the social sciences. To address these challenges, we offer a set of prescriptive recommendations for directing future research in neuroeconomics.

Every day people make new choices between alternatives that they have never directly experienced. Yet, such decisions are often made rapidly and confidently. Here, we show that the hippocampus, traditionally known for its role in building long-term declarative memories, enables the spread of value across memories, thereby guiding decisions between new choice options. Using functional brain imaging in humans, we discovered that giving people monetary rewards led to activation of a preestablished network of memories, spreading the positive value of reward to nonrewarded items stored in memory. Later, people were biased to choose these nonrewarded items. This decision bias was predicted by activity in the hippocampus, reactivation of associated memories, and connectivity between memory and reward regions in the brain. These findings explain how choices among new alternatives emerge automatically from the associative mechanisms by which the brain builds memories. Further, our findings demonstrate a previously unknown role for the hippocampus in value-based decisions.

Wednesday, October 17, 2012

Humans form beliefs asymmetrically; we tend to discount bad news but embrace good news. This reduced impact of unfavorable information on belief updating may have important societal implications, including the generation of financial market bubbles, ill preparedness in the face of natural disasters, and overly aggressive medical decisions. Here, we selectively improved people’s tendency to incorporate bad news into their beliefs by disrupting the function of the left (but not right) inferior frontal gyrus using transcranial magnetic stimulation, thereby eliminating the engrained “good news/bad news effect.” Our results provide an instance of how selective disruption of regional human brain function paradoxically enhances the ability to incorporate unfavorable information into beliefs of vulnerability.

Thursday, October 11, 2012

Modeling work in neuroscience can be classified using two different criteria. The first one is the complexity of the model, ranging from simplified conceptual models that are amenable to mathematical analysis to detailed models that require simulations in order to understand their properties. The second criterion is that of direction of workflow, which can be from microscopic to macroscopic scales (bottom-up) or from behavioral target functions to properties of components (top-down). We review the interaction of theory and simulation using examples of top-down and bottom-up studies and point to some current developments in the fields of computational and theoretical neuroscience.

Tuesday, October 9, 2012

The time of reward and the temporal structure of reward occurrence fundamentally influence behavioral reinforcement and decision processes [1,2,3,4,5,6,7,8,9,10,11]. However, despite knowledge about timing in sensory and motor systems [12,13,14,15,16,17], we know little about temporal mechanisms of neuronal reward processing. In this experiment, visual stimuli predicted different instantaneous probabilities of reward occurrence that resulted in specific temporal reward structures. Licking behavior demonstrated that the animals had developed expectations for the time of reward that reflected the instantaneous reward probabilities. Neurons in the amygdala, a major component of the brain's reward system [18,19,20,21,22,23,24,25,26,27,28,29], showed two types of reward signal, both of which were sensitive to the expected time of reward. First, the time courses of anticipatory activity preceding reward delivery followed the specific instantaneous reward probabilities and thus paralleled the temporal reward structures. Second, the magnitudes of responses following reward delivery covaried with the instantaneous reward probabilities, reflecting the influence of temporal reward structures at the moment of reward delivery. In being sensitive to temporal reward structure, the reward signals of amygdala neurons reflected the temporally specific expectations of reward. The data demonstrate an active involvement of amygdala neurons in timing processes that are crucial for reward function.
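The "instantaneous probability of reward occurrence" referred to here is the hazard rate of the reward-time distribution: the chance that reward arrives now, given that it has not arrived yet. A minimal sketch, using a made-up discrete distribution:

```python
def hazard_rates(p_reward_at_t):
    """Convert a discrete reward-time distribution into instantaneous
    (hazard) probabilities: h(t) = p(t) / sum of p(s) for s >= t."""
    hazards = []
    remaining = sum(p_reward_at_t)
    for p in p_reward_at_t:
        hazards.append(p / remaining if remaining > 0 else 0.0)
        remaining -= p
    return hazards

# Hypothetical distribution: reward equally likely in any of four time bins.
h = hazard_rates([0.25, 0.25, 0.25, 0.25])
print(h)  # hazard rises as time passes without reward: 0.25, 1/3, 0.5, 1.0
```

Anticipatory activity that tracks `h` rather than the raw distribution is the signature of temporally specific reward expectation described in the abstract.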

Repeated performance of visual tasks leads to long-lasting increased sensitivity to the trained stimulus, a phenomenon termed perceptual learning. A ubiquitous property of visual learning is specificity: performance improvement obtained during training applies only for the trained stimulus features, which are thought to be encoded in sensory brain regions [1,2,3]. However, recent results show performance decrements with an increasing number of trials within a training session [4,5]. This selective sensitivity reduction is thought to arise due to sensory adaptation [5,6]. Here we show, using the standard texture discrimination task [7], that location specificity is a consequence of sensory adaptation; that is, it results from selective reduced sensitivity due to repeated stimulation. Observers practiced the texture task with the target presented at a fixed location within a background texture. To remove adaptation, we added task-irrelevant (“dummy”) trials with the texture oriented 45° relative to the target’s orientation, known to counteract adaptation [8]. The results indicate location specificity with the standard paradigm, but complete generalization to a new location when adaptation is removed. We suggest that adaptation interferes with invariant pattern-discrimination learning by inducing network-dependent changes in local visual representations.

Thursday, October 4, 2012

Regions within the prefrontal cortex are thought to process beliefs about the world, but little is known about the circuit dynamics underlying the formation and modification of these beliefs. Using a task that permits dissociation between the activity encoding an animal’s internal state and that encoding aspects of behavior, we found that transient increases in the volatility of activity in the rat medial prefrontal cortex accompany periods when an animal’s belief is modified after an environmental change. Activity across the majority of sampled neurons underwent marked, abrupt, and coordinated changes when prior belief was abandoned in favor of exploration of alternative strategies. These dynamics reflect network switches to a state of instability, which diminishes over the period of exploration as new stable representations are formed.

In monkeys deciding between alternative saccadic eye movements, lateral intraparietal (LIP) neurons representing each saccade fire at a rate proportional to the value of the reward expected upon its completion. This observation has been interpreted as indicating that LIP neurons encode saccadic value and that they mediate value-based decisions between saccades. Here, we show that LIP neurons representing a given saccade fire strongly not only if it will yield a large reward but also if it will incur a large penalty. This finding indicates that LIP neurons are sensitive to the motivational salience of cues. It is compatible neither with the idea that LIP neurons represent action value nor with the idea that value-based decisions take place in LIP neurons.

Persons with autism spectrum disorders (ASD) are known to have difficulty with eye contact (EC), which may make face-to-face communication difficult for their partners. To elucidate the neural substrates of live inter-subject interaction in ASD, we conducted hyper-scanning functional MRI with 21 subjects with ASD, each paired with a typically developed (normal) subject, and with 19 pairs of normal subjects as controls. Baseline EC was maintained while subjects performed a real-time joint-attention task. The task-related effects were modeled out, and inter-individual correlation analysis was performed on the residual time-course data. ASD–Normal pairs were less accurate at detecting gaze direction than Normal–Normal pairs; performance was impaired both in ASD subjects and in their normal partners. Activation of the left occipital pole (OP) by gaze processing was reduced in ASD subjects, suggesting that deteriorated eye-cue detection in ASD is related to impaired early visual processing of gaze. Their normal partners, on the other hand, showed greater activity in the bilateral occipital cortex and the right prefrontal area, indicating a compensatory workload. The inter-brain coherence in the right IFG observed in Normal–Normal pairs during EC (Saito et al., 2010) was diminished in ASD–Normal pairs. Intra-brain functional connectivity between the right IFG and the right superior temporal sulcus (STS) was also reduced in normal subjects paired with ASD subjects compared with Normal–Normal pairs, and this connectivity was positively correlated with the normal partners' performance on eye-cue detection. Considering the integrative role of the right STS in gaze processing, inter-subject synchronization during EC may be a prerequisite for eye-cue detection by the normal partner.

A leading hypothesis to explain the social dysfunction in people with autism spectrum disorders (ASD) is that they exhibit a deficit in reward processing and motivation specific to social stimuli. However, there have been few direct tests of this hypothesis to date. Here we used an instrumental reward learning task that contrasted learning with social rewards (pictures of positive and negative faces) against learning with monetary reward (winning and losing money). The two tasks were structurally identical except for the type of reward, permitting direct comparisons. We tested 10 high-functioning people with ASD (7M, 3F) and 10 healthy controls who were matched on gender, age, and education. We found no significant differences between the two groups in overall behavioral ability to discriminate positive from negative slot machines, reaction times, or valence ratings. However, there was a specific impairment in the ASD group in learning to choose social rewards compared to monetary rewards: they had a significantly lower cumulative number of choices of the most rewarding social slot machine, and a significantly slower initial learning rate for the socially rewarding slot machine, compared to the controls. The findings show a deficit in reward learning in ASD that is greater for social rewards than for monetary rewards, and support the hypothesis of a disproportionate impairment in social reward processing in ASD.

Human choice is not free—we are bounded by a multitude of biological constraints. Yet, within the various landscapes we face, we do express choice, preference, and varying degrees of so-called willful behavior. Moreover, it appears that the capacity for choice in humans is variable. Empirical studies aimed at investigating the experience of “free will” will benefit from theoretical disciplines that constrain the language used to frame the relevant issues. The combination of game theory and computational reinforcement learning theory with empirical methods is already beginning to provide valuable insight into the biological variables underlying capacity for choice in humans and how things may go awry in individuals with brain disorders. These disciplines operate within abstract quantitative landscapes, but have successfully been applied to investigate strategic and adaptive human choice guided by formal notions of optimal behavior. Psychiatric illness is an extreme, but interesting arena for studying human capacity for choice. The experiences and behaviors of patients suggest these individuals fundamentally suffer from a diminished capacity of willful choice. Herein, I will briefly discuss recent applications of computationally guided approaches to human choice behavior and the underlying neurobiology. These approaches can be integrated into empirical investigation at multiple temporal scales of analysis including the growing body of experiments in human functional magnetic resonance imaging (fMRI), and newly emerging sub-second electrochemical and electrophysiological measurements in the human brain. These cross-disciplinary approaches hold promise for revealing the underlying neurobiological mechanisms for the variety of choice capacity in humans.

Neural processing faces three rather different, and perniciously tied, communication problems. First, computation is radically distributed, yet point-to-point interconnections are limited. Second, the bulk of these connections are semantically uniform, lacking differentiation at their targets that could tag particular sorts of information. Third, the brain's structure is relatively fixed, and yet different sorts of input, forms of processing, and rules for determining the output are appropriate under different, and possibly rapidly changing, conditions. Neuromodulators address these problems by their multifarious and broad distribution, by enjoying specialized receptor types in partially specific anatomical arrangements, and by their ability to mold the activity and sensitivity of neurons and the strength and plasticity of their synapses. Here, I offer a computationally focused review of algorithmic and implementational motifs associated with neuromodulators, using decision making in the face of uncertainty as a running example.

To decide effectively, information must not only be integrated from multiple sources, but it must also be distributed across the brain if it is to influence structures such as motor cortex that execute choices. Human participants integrated information from multiple, but only partially informative, cues in a probabilistic reasoning task in an optimal manner. We tested whether lateralization of alpha- and beta-band oscillatory brain activity over sensorimotor cortex reflected decision variables such as the sum of the evidence provided by observed cues, a key quantity for decision making, and whether this could be dissociated from an update signal reflecting processing of the most recent cue stimulus. Alpha- and beta-band activity in the electroencephalogram reflected the logarithm of the likelihood ratio associated with each piece of information witnessed, and the same quantity associated with the previous cues. Only the beta-band activity, however, reflected the most recent cue in a manner suggesting an updating process associated with cue processing. In a second experiment, transcranial magnetic stimulation-induced disruption was used to demonstrate that the intraparietal sulcus plays a causal role both in decision making and in the appearance of sensorimotor beta-band activity.
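The decision variable probed here, the summed log likelihood ratio across independent cues, can be illustrated in a few lines; the cue likelihoods below are invented for the example.

```python
import math

def cue_log_lr(p_cue_given_A, p_cue_given_B):
    """Log likelihood ratio contributed by one cue toward hypothesis A."""
    return math.log(p_cue_given_A / p_cue_given_B)

def posterior_from_cues(cues, prior_A=0.5):
    """Combine independent cues by summing their log likelihood ratios,
    then convert the total back into a posterior probability for A."""
    total = math.log(prior_A / (1.0 - prior_A))
    total += sum(cue_log_lr(pa, pb) for pa, pb in cues)
    return 1.0 / (1.0 + math.exp(-total))

# Hypothetical cue set: two cues favoring A, one favoring B.
cues = [(0.7, 0.3), (0.6, 0.4), (0.4, 0.6)]
print(posterior_from_cues(cues))
```

Additivity in the log domain is what makes "the sum of the evidence" a single quantity that oscillatory activity could track across a sequence of cues.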

Memories become labile when recalled. In humans and rodents alike, reactivated fear memories can be attenuated by disrupting reconsolidation with extinction training. Using functional brain imaging, we found that, after a conditioned fear memory was formed, reactivation and reconsolidation left a memory trace in the basolateral amygdala that predicted subsequent fear expression and was tightly coupled to activity in the fear circuit of the brain. In contrast, reactivation followed by disrupted reconsolidation suppressed fear, abolished the memory trace, and attenuated fear-circuit connectivity. Thus, as previously demonstrated in rodents, fear memory suppression resulting from behavioral disruption of reconsolidation is amygdala-dependent also in humans, which supports an evolutionarily conserved memory-update mechanism.

Cognitive regulation is often used to influence behavioral outcomes. However, the computational and neurobiological mechanisms by which it affects behavior remain unknown. We studied this issue using an fMRI task in which human participants used cognitive regulation to upregulate and downregulate their cravings for foods at the time of choice. We found that activity in both ventromedial prefrontal cortex (vmPFC) and dorsolateral prefrontal cortex (dlPFC) correlated with value. We also found evidence that two distinct regulatory mechanisms were at work: value modulation, which operates by changing the values assigned to foods in vmPFC and dlPFC at the time of choice, and behavioral control modulation, which operates by changing the relative influence of the vmPFC and dlPFC value signals on the action selection process used to make choices. In particular, during downregulation, activation decreased in the value-sensitive region of dlPFC (indicating value modulation) but not in vmPFC, and the relative contribution of the two value signals to behavior shifted toward the dlPFC (indicating behavioral control modulation). The opposite pattern was observed during upregulation: activation increased in vmPFC but not dlPFC, and the relative contribution to behavior shifted toward the vmPFC. Finally, ventrolateral PFC and posterior parietal cortex were more active during both upregulation and downregulation, and were functionally connected with vmPFC and dlPFC during cognitive regulation, which suggests that they help to implement the changes to the decision-making circuitry generated by cognitive regulation.
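The distinction drawn between value modulation and behavioral control modulation can be sketched as a weighted mixture of two value signals feeding a logistic choice rule; every numeric value here is hypothetical, and the mapping of regions to signals is only the abstract's own labeling.

```python
import math

def choice_prob(v_vmpfc, v_dlpfc, w_dlpfc, temperature=1.0):
    """Probability of choosing an item when choice is driven by a weighted
    mixture of two value signals.  w_dlpfc in [0, 1] is the relative
    behavioral weight of the dlPFC signal.  Behavioral control modulation
    corresponds to shifting w_dlpfc; value modulation would instead change
    v_vmpfc or v_dlpfc themselves."""
    v = (1.0 - w_dlpfc) * v_vmpfc + w_dlpfc * v_dlpfc
    return 1.0 / (1.0 + math.exp(-v / temperature))

# Hypothetical craved food: vmPFC assigns it high value, dlPFC a lower one.
natural = choice_prob(v_vmpfc=2.0, v_dlpfc=-1.0, w_dlpfc=0.5)
downreg = choice_prob(v_vmpfc=2.0, v_dlpfc=-1.0, w_dlpfc=0.9)  # weight shifts to dlPFC
print(natural, downreg)  # shifting control toward dlPFC lowers choice probability
```

The study's claim is that regulation uses both knobs at once: the value inputs change (value modulation) and so does the mixing weight (behavioral control modulation).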

Background
Uncertainty shapes our perception of the world and the decisions we make. Two aspects of uncertainty are commonly distinguished: uncertainty in previously acquired knowledge (prior) and uncertainty in current sensory information (likelihood). Previous studies have established that humans can take both types of uncertainty into account, often in a way predicted by Bayesian statistics. However, the neural representations underlying these parameters remain poorly understood.

Results
By varying prior and likelihood uncertainty in a decision-making task while performing neuroimaging in humans, we found that prior and likelihood uncertainty had quite distinct representations. Whereas likelihood uncertainty activated brain regions along the early stages of the visuomotor pathway, representations of prior uncertainty were identified in specialized brain areas outside this pathway, including putamen, amygdala, insula, and orbitofrontal cortex. Furthermore, the magnitude of brain activity in the putamen predicted individuals' personal tendencies to rely more on either prior or current information.

Conclusions
Our results suggest different pathways by which prior and likelihood uncertainty map onto the human brain and provide a potential neural correlate for higher reliance on current or prior knowledge. Overall, these findings offer insights into the neural pathways that may allow humans to make decisions close to the optimum defined by a Bayesian statistical framework.
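For Gaussian prior and likelihood, the Bayes-optimal combination assumed by this framework reduces to precision weighting of the two means; a minimal sketch with invented numbers:

```python
def combine_gaussian(prior_mean, prior_var, like_mean, like_var):
    """Bayes-optimal fusion of a Gaussian prior and a Gaussian likelihood:
    the posterior mean is a precision-weighted average of the two means,
    and the posterior precision is the sum of the two precisions."""
    w_prior = (1.0 / prior_var) / (1.0 / prior_var + 1.0 / like_var)
    post_mean = w_prior * prior_mean + (1.0 - w_prior) * like_mean
    post_var = 1.0 / (1.0 / prior_var + 1.0 / like_var)
    return post_mean, post_var

# Reliable sensory evidence (small likelihood variance) pulls the estimate
# toward the current input; noisy evidence leaves it near the prior.
reliable, _ = combine_gaussian(prior_mean=0.0, prior_var=1.0, like_mean=10.0, like_var=0.1)
noisy, _ = combine_gaussian(prior_mean=0.0, prior_var=1.0, like_mean=10.0, like_var=10.0)
print(reliable, noisy)
```

Individual differences in a weight like `w_prior` are the behavioral counterpart of the putamen result: some people lean on prior knowledge, others on current input.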

Adaptive success in social animals depends on an ability to infer the likely actions of others. Little is known about the neural computations that underlie this capacity. Here, we show that the brain models the values and choices of others even when these values are currently irrelevant. These modeled choices use the same computations that underlie our own choices, but are resolved in a distinct neighboring medial prefrontal brain region. Crucially, however, when subjects choose on behalf of a partner instead of themselves, these regions exchange their functional roles. Hence, regions that represented values of the subject’s executed choices now represent the values of choices executed on behalf of the partner, and those that previously modeled the partner now model the subject. These data tie together neural computations underlying self-referential and social inference, and in so doing establish a new functional axis characterizing the medial wall of prefrontal cortex.

Cooperation is central to human social behaviour [1–9]. However, choosing to cooperate requires individuals to incur a personal cost to benefit others. Here we explore the cognitive basis of cooperative decision-making in humans using a dual-process framework [10–18]. We ask whether people are predisposed towards selfishness, behaving cooperatively only through active self-control; or whether they are intuitively cooperative, with reflection and prospective reasoning favouring ‘rational’ self-interest. To investigate this issue, we perform ten studies using economic games. We find that across a range of experimental designs, subjects who reach their decisions more quickly are more cooperative. Furthermore, forcing subjects to decide quickly increases contributions, whereas instructing them to reflect and forcing them to decide slowly decreases contributions. Finally, an induction that primes subjects to trust their intuitions increases contributions compared with an induction that promotes greater reflection. To explain these results, we propose that cooperation is intuitive because cooperative heuristics are developed in daily life where cooperation is typically advantageous. We then validate predictions generated by this proposed mechanism. Our results provide convergent evidence that intuition supports cooperation in social dilemmas, and that reflection can undermine these cooperative impulses.

Tuesday, September 18, 2012

The debate about the origins of human prosociality has focused on the presence or absence of similar tendencies in other species, and, recently, attention has turned to the underlying mechanisms. We investigated whether direct reciprocity could promote prosocial behavior in brown capuchin monkeys (Cebus apella). Twelve capuchins tested in pairs could choose between two tokens, with one being “prosocial” in that it rewarded both individuals (i.e., 1/1), and the other being “selfish” in that it rewarded the chooser only (i.e., 1/0). Each monkey’s choices with a familiar partner from its own group were compared with its choices when paired with a partner from a different group. Capuchins were spontaneously prosocial, selecting the prosocial option at the same rate regardless of whether they were paired with an in-group or out-group partner. This indicates that interaction outside of the experimental setting played no role. When the paradigm was changed, such that both partners alternated making choices, prosocial preference significantly increased, leading to mutualistic payoffs. As no contingency could be detected between an individual’s choice and their partner’s previous choice, and choices occurred in rapid succession, reciprocity seemed of a relatively diffuse nature akin to mutualism. Having the partner receive a better reward than the chooser (i.e., 1/2) during the alternating condition increased the payoffs of mutual prosociality, and prosocial choice increased accordingly. The outcome of several controls made it hard to explain these results on the basis of reward distribution or learned preferences, and rather suggested that joint action promotes prosociality, resulting in so-called attitudinal reciprocity.

Major cognitive and emotional faculties are dominantly lateralized in the human cerebral cortex. The mechanism of this lateralization has remained elusive owing to the inaccessibility of human brains to many experimental manipulations. In this study we demonstrate the hemispheric lateralization of observational fear learning in mice. Using unilateral inactivation as well as electrical stimulation of the anterior cingulate cortex (ACC), we show that observational fear learning is controlled by the right but not the left ACC. In contrast to the cortex, inactivation of either left or right thalamic nuclei, both of which are in reciprocal connection to ACC, induced similar impairment of this behavior. The data suggest that lateralization of negative emotions is an evolutionarily conserved trait and mainly involves cortical operations. Lateralization of the observational fear learning behavior in a rodent model will allow detailed analysis of cortical asymmetry in cognitive functions.

Although the lateral prefrontal cortex (lPFC) and dorsal premotor cortex (PMd) are thought to be involved in goal-directed behavior, the specific roles of each area still remain elusive. To characterize and compare neuronal activity in two sectors of the lPFC [dorsal (dlPFC) and ventral (vlPFC)] and the PMd, we designed a behavioral task for monkeys to explore the differences in their participation in four aspects of information processing: encoding of visual signals, behavioral goal retrieval, action specification, and maintenance of relevant information. We initially presented a visual object (an instruction cue) to instruct a behavioral goal (reaching to the right or left of potential targets). After a subsequent delay, a choice cue appeared at various locations on a screen, and the animals could specify an action to achieve the behavioral goal. We found that vlPFC neurons amply encoded object features of the instruction cues for behavioral goal retrieval and, subsequently, spatial locations of the choice cues for specifying the actions. By contrast, dlPFC and PMd neurons rarely encoded the object features, although they reflected the behavioral goals throughout the delay period. After the appearance of the choice cues, the PMd held information for action throughout the specification and preparation of reaching movements. Remarkably, lPFC neurons represented information for the behavioral goal continuously, even after the action specification as well as during its execution. These results indicate that area-specific representation and information processing at progressive stages of the perception–action transformation in these areas underlie goal-directed behavior.

Humans take into account their own movement variability as well as potential consequences of different movement outcomes in planning movement trajectories. When variability increases, planned movements are altered so as to optimize expected consequences of the movement. Past research has focused on the steady-state responses to changing conditions of movement under risk. Here, we study the dynamics of such strategy adjustment in a visuomotor decision task in which subjects reach toward a display with regions that lead to rewards and penalties, under conditions of changing uncertainty. In typical reinforcement learning tasks, subjects should base their subsequent strategy on an estimate of the mean outcome (e.g., reward) in recent trials. In contrast, in our task, strategy should be based on a dynamic estimate of recent outcome uncertainty (i.e., squared error). We find that subjects respond to increased movement uncertainty by aiming movements more conservatively with respect to penalty regions, and that the estimate of uncertainty they use is well characterized by a weighted average of recent squared errors, with higher weights given to more recent trials.
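The uncertainty estimate described above — a weighted average of recent squared errors, with heavier weights on recent trials — can be sketched as an exponentially weighted average. The decay value here is purely illustrative, not the weighting fitted by the authors:

```python
import numpy as np

def uncertainty_estimate(errors, decay=0.7):
    """Exponentially weighted average of squared errors.

    `decay` (hypothetical value) sets how strongly recent trials
    dominate: a trial k steps back from the most recent one gets
    weight decay**k.
    """
    errors = np.asarray(errors, dtype=float)
    steps_back = np.arange(len(errors))[::-1]   # 0 for the newest trial
    w = decay ** steps_back
    return np.sum(w * errors**2) / np.sum(w)

# A large error on the most recent trial raises the estimate more
# than it would under an unweighted mean of squared errors.
recent_spike = uncertainty_estimate([0.1, 0.1, 0.1, 1.0])
plain_mean = np.mean(np.square([0.1, 0.1, 0.1, 1.0]))
```

Because the weights decay geometrically, a single recent outlier dominates the estimate, which is what allows the strategy to track changing uncertainty from trial to trial.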

Previous neurophysiological studies of perceptual decision-making have focused on single-unit activity, providing insufficient information about how individual decisions are accomplished. For the first time, we recorded simultaneously from multiple decision-related neurons in parietal cortex of monkeys performing a perceptual decision task and used these recordings to analyze the neural dynamics during single trials. We demonstrate that decision-related lateral intraparietal area neurons typically undergo gradual changes in firing rate during individual decisions, as predicted by mechanisms based on continuous integration of sensory evidence. Furthermore, we identify individual decisions that can be described as a change of mind: the decision circuitry was transiently in a state associated with a different choice before transitioning into a state associated with the final choice. These changes of mind reflected in monkey neural activity share similarities with previously reported changes of mind reflected in human behavior.

Tuesday, September 11, 2012

Punishment can help maintain cooperation by deterring free-riding and cheating. Of particular importance in large-scale human societies is third-party punishment, in which individuals punish a transgressor or norm violator even when they themselves are not affected. Nonhuman primates and other animals aggress against conspecifics with some regularity, but it is unclear whether this is ever aimed at punishing others for noncooperation, and whether third-party punishment occurs at all. Here we report an experimental study in which one of humans' closest living relatives, chimpanzees (Pan troglodytes), could punish an individual who stole food. Dominants retaliated when their own food was stolen, but they did not punish when the food of third parties was stolen, even when the victim was related to them. Third-party punishment as a means of enforcing cooperation, as humans do, might therefore be a derived trait in the human lineage.

The emergence of complex cultural practices in simple hunter-gatherer groups poses interesting questions on what drives social complexity and what causes the emergence and disappearance of cultural innovations. Here we analyze the conditions that underlie the emergence of artificial mummification in the Chinchorro culture in the coastal Atacama Desert in northern Chile and southern Peru. We provide empirical and theoretical evidence that artificial mummification appeared during a period of increased coastal freshwater availability and marine productivity, which caused an increase in human population size and accelerated the emergence of cultural innovations, as predicted by recent models of cultural and technological evolution. Under a scenario of increasing population size and extreme aridity (with little or no decomposition of corpses) a simple demographic model shows that dead individuals may have become a significant part of the landscape, creating the conditions for the manipulation of the dead that led to the emergence of complex mortuary practices.

The evolution of cooperation in nature and human societies depends crucially on how the benefits from cooperation are divided and whether individuals have complete information about their payoffs. We tackle these questions by adopting a methodology from economics called mechanism design. Focusing on reproductive skew as a case study, we show that full cooperation may not be achievable due to private information over individuals’ outside options, regardless of the details of the specific biological or social interaction. Further, we consider how the structure of the interaction can evolve to promote the maximum amount of cooperation in the face of the informational constraints. Our results point to a distinct avenue for investigating how cooperation can evolve when the division of benefits is flexible and individuals have private information.

Wednesday, September 5, 2012

Given a noisy sensory world, the nervous system integrates perceptual evidence over time to optimize decision-making. Neurophysiological accumulation of sensory information is well-documented in the animal visual system, but how such mechanisms are instantiated in the human brain remains poorly understood. Here we combined psychophysical techniques, drift-diffusion modeling, and functional magnetic resonance imaging (fMRI) to establish that odor evidence integration in the human olfactory system enhances discrimination on a two-alternative forced-choice task. Model-based measures of fMRI brain activity highlighted a ramp-like increase in orbitofrontal cortex (OFC) that peaked at the time of decision, conforming to predictions derived from an integrator model. Combined behavioral and fMRI data further suggest that decision bounds are not fixed but collapse over time, facilitating choice behavior in the presence of low-quality evidence. These data highlight a key role for the orbitofrontal cortex in resolving sensory uncertainty and provide substantiation for accumulator models of human perceptual decision-making.
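The collapsing-bound idea can be illustrated with a toy drift-diffusion simulation: evidence accumulates noisily toward one of two bounds, and the bounds shrink over time, forcing a choice even when the evidence is of low quality. All parameter values below are illustrative, not the authors' fitted parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

def ddm_trial(drift=0.1, noise=1.0, dt=0.01, b0=1.0, tau=2.0, t_max=10.0):
    """One drift-diffusion trial with exponentially collapsing bounds.

    The bound at time t is b0 * exp(-t / tau); as it shrinks, weak
    evidence is eventually sufficient to trigger a choice.
    Returns (choice, reaction_time); choice is 0 if no bound is hit.
    """
    x, t = 0.0, 0.0
    while t < t_max:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
        bound = b0 * np.exp(-t / tau)
        if x >= bound:
            return +1, t
        if x <= -bound:
            return -1, t
    return 0, t_max

choices, rts = zip(*(ddm_trial() for _ in range(500)))
```

With fixed bounds, low-drift trials can stall indefinitely; the collapse trades some accuracy for guaranteed, timely responses, which is the behavioral signature the fMRI analysis was testing for.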

Forming place-reward associations critically depends on the integrity of the hippocampal–ventral striatal system. The ventral striatum (VS) receives a strong hippocampal input conveying spatial-contextual information, but it is unclear how this structure integrates this information to invigorate reward-directed behavior. Neuronal ensembles in rat hippocampus (HC) and VS were simultaneously recorded during a conditioning task in which navigation depended on path integration. In contrast to HC, ventral striatal neurons showed low spatial selectivity, but rather coded behavioral task phases toward reaching goal sites. Outcome-predicting cues induced a remapping of firing patterns in the HC, consistent with its role in episodic memory. VS remapped in conjunction with the HC, indicating that remapping can take place in multiple brain regions engaged in the same task. Subsets of ventral striatal neurons showed a “flip” from high activity when cue lights were illuminated to low activity in intertrial intervals, or vice versa. The cues induced an increase in spatial information transmission and sparsity in both structures. These effects were paralleled by an enhanced temporal specificity of ensemble coding and a more accurate reconstruction of the animal's position from population firing patterns. Altogether, the results reveal strong differences in spatial processing between hippocampal area CA1 and VS, but indicate similarities in how discrete cues impact this processing.

In contrast to the well-established roles of the striatum in movement generation and value-based decisions, its contributions to perceptual decisions lack direct experimental support. Here, we show that electrical microstimulation in the monkey caudate nucleus influences both choice and saccade response time on a visual motion discrimination task. Within a drift-diffusion framework, these effects consist of two components. The perceptual component biases choices toward ipsilateral targets, away from the neurons’ predominantly contralateral response fields. The choice bias is consistent with a nonzero starting value of the diffusion process, which increases and decreases decision times for contralateral and ipsilateral choices, respectively. The nonperceptual component decreases and increases nondecision times toward contralateral and ipsilateral targets, respectively, consistent with the caudate’s role in saccade generation. The results imply a causal role for the caudate in perceptual decisions used to select saccades that may be distinct from its role in executing those saccades.

Perceptual decision making is believed to be driven by the accumulation of sensory evidence following stimulus encoding. More controversially, some studies report that neural activity preceding the stimulus also affects the decision process. We used a multivariate pattern classification approach for the analysis of the human electroencephalogram (EEG) to decode choice outcomes in a perceptual decision task from spatially and temporally distributed patterns of brain signals. When stimuli provided discriminative information, choice outcomes were predicted by neural activity following stimulus encoding; when stimuli provided no discriminative information, choice outcomes were predicted by neural activity preceding the stimulus. Moreover, in the absence of discriminative information, the recent choice history primed the choices on subsequent trials. A diffusion model fitted to the choice probabilities and response time distributions showed that the starting point of the evidence accumulation process was shifted toward the previous choice, consistent with the hypothesis that choice priming biases the accumulation process toward a decision boundary. This bias is reflected in prestimulus brain activity, which, in turn, becomes predictive of future decisions. Our results provide a model of how non-stimulus-driven decision making in humans could be accomplished on a neural level.
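The starting-point account of choice priming can be shown with a toy diffusion simulation: under zero drift (no discriminative stimulus information), a starting point shifted toward the boundary of the previous choice makes that choice more likely to repeat. Parameter values are hypothetical, not the fitted model from the study:

```python
import numpy as np

rng = np.random.default_rng(1)

def biased_ddm(start=0.0, drift=0.0, bound=1.0, noise=1.0, dt=0.01, t_max=20.0):
    """One diffusion trial whose starting point is shifted toward the
    boundary of the previous choice (start > 0 primes choice +1).
    Returns +1 or -1 (0 in the rare case no bound is reached)."""
    x, t = start, 0.0
    while t < t_max:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
        if x >= bound:
            return +1
        if x <= -bound:
            return -1
    return 0

# With no discriminative information (drift = 0), a start of +0.3
# between bounds at ±1 repeats choice +1 well above chance.
n = 2000
primed = sum(biased_ddm(start=0.3) == 1 for _ in range(n)) / n
```

For a driftless diffusion between symmetric bounds, the hit probability is set entirely by the starting point, so the shift alone reproduces the above-chance repetition of the previous choice without any stimulus evidence.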

A considerable body of previous research on the prefrontal cortex (PFC) has helped characterize the regional specificity of various cognitive functions, such as cognitive control and decision making. Here we provide definitive findings on this topic, using a neuropsychological approach that takes advantage of a unique dataset accrued over several decades. We applied voxel-based lesion-symptom mapping in 344 individuals with focal lesions (165 involving the PFC) who had been tested on a comprehensive battery of neuropsychological tasks. Two distinct functional-anatomical networks were revealed within the PFC: one associated with cognitive control (response inhibition, conflict monitoring, and switching), which included the dorsolateral prefrontal cortex and anterior cingulate cortex; and a second associated with value-based decision-making, which included the orbitofrontal, ventromedial, and frontopolar cortex. Furthermore, cognitive control tasks shared a common performance factor related to set shifting that was linked to the rostral anterior cingulate cortex. By contrast, regions in the ventral PFC were required for decision-making. These findings provide detailed causal evidence for a remarkable functional-anatomical specificity in the human PFC.