Assistant Professor of Marketing, Rotman School of Management, University of Toronto

Research

Published Papers

Abstract:
Humans are often inconsistent (irrational) when choosing among simple bundles of goods, even without any particular changes to framing or context. However, the neural computations that give rise to such inconsistencies are still unknown. Similar to sensory perception and motor output, we propose that a substantial component of inconsistent behavior is due to variability in the neural computation of value. Here, we develop a novel index that measures the severity of inconsistency of each choice, enabling us to directly trace its neural correlates. We find that the BOLD signal in the vmPFC, ACC, and PCC is correlated with the severity of inconsistency on each trial and with the subjective value of the chosen alternative. This suggests that deviations from rational choice arise in the regions responsible for value computation. We offer a computational model of how variability in value computation is a source of inconsistent choices.

Abstract: When making decisions, people tend to look back and forth between the alternatives until eventually making a choice. Eye-tracking research has established that these shifts in attention are strongly linked to choice outcomes. A predominant framework for understanding the dynamics of the choice process, and thus the effects of attention, is sequential sampling of information. However, existing methods for estimating the attention parameters in these models are computationally costly and overly flexible, and yield estimates with unknown precision and bias. Here we propose an estimation method that relies on a link between sequential sampling models and random utility models (RUM). This method uses familiar econometric tools, i.e. logit regression, and yields estimates that appear to be unbiased and relatively precise, while requiring a small fraction of the usual computation time. The RUM approach thus appears to be a very useful tool for estimating the effects of attention on choice.
Read the paper (PDF) →

Abstract: The random utility model is the standard empirical framework for modelling stochastic choice behaviour in applied settings. Though the distribution of stochastic choice has important implications for both testing behavioural theories and predicting behaviour, the theoretical and empirical foundations of this distribution are not well understood. Moreover, the random utility framework has so far been agnostic about the dynamics of the decision process that are of considerable interest in psychology and neuroscience, captured by a class of bounded accumulation models which relate decision times to stochastic behaviour. This article demonstrates that a random utility model can be derived from a general class of bounded accumulation models, in which particular features of this dynamic process restrict the form of the relationship between observables and the distribution of stochastic choice.
Read the paper (PDF) →

Abstract: We assess whether a cardinal model can be used to relate neural observables to stochastic choice behaviour. We develop a general empirical framework for relating any neural observable to choice prediction, and propose a means of benchmarking their predictive power. In a previous study, measurements of neural activity were made while subjects considered consumer goods. Here, we find that neural activity predicts choice behaviour, with the degree of stochasticity in choice related to the cardinality of the measurement. However, we also find that current methods have a significant degree of measurement error, severely limiting their inferential and predictive performance.

Abstract: Empirical decision-making in diverse species deviates from the predictions of normative choice theory, but why such suboptimal behavior occurs is unknown. Here, we propose that deviations from optimality arise from biological decision mechanisms that have evolved to maximize choice performance within intrinsic biophysical constraints. Sensory processing utilizes specific computations such as divisive normalization to maximize information coding in constrained neural circuits, and recent evidence suggests that analogous computations operate in decision-related brain areas. These adaptive computations implement a relative value code that may explain the characteristic context-dependent nature of behavioral violations of classical normative theory. Examining decision-making at the computational level thus provides a crucial link between the architecture of biological decision circuits and the form of empirical choice behavior.
Read the paper (PDF) →
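The divisive normalization computation discussed in this line of work can be sketched in a few lines. This is an illustrative textbook form (the function name, the semi-saturation constant `sigma`, and the uniform weights are assumptions, not the paper's specification), showing how the represented value of each option depends on the whole choice set:

```python
import numpy as np

def divisive_normalization(values, sigma=1.0, weights=None):
    """Divisively normalize a vector of option values.

    Each option's represented value is divided by a semi-saturation
    constant plus a weighted sum over the entire choice set, so the
    code for any one option is context-dependent.
    """
    values = np.asarray(values, dtype=float)
    if weights is None:
        weights = np.ones_like(values)
    denom = sigma + np.dot(weights, values)
    return values / denom

# Adding a third, lower-valued option shrinks the normalized values of
# the original two and compresses the difference between them, which is
# one way context effects can enter choice.
two = divisive_normalization([10.0, 8.0])          # denom = 1 + 18
three = divisive_normalization([10.0, 8.0, 6.0])   # denom = 1 + 24
```

With noisy downstream comparison, this compression makes errors between the top two options more likely as the set grows, in the spirit of the context-dependent violations described above.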

Abstract: It is well established that neural imaging technology can predict preferences for consumer products. However, the applicability of this method to consumer marketing research remains uncertain, partly because of the expense required. In this article, the authors demonstrate that neural measurements made with a relatively low-cost and widely available measurement method—electroencephalography (EEG)—can predict future choices of consumer products. In the experiment, participants viewed individual consumer products in isolation, without making any actual choices, while their neural activity was measured with EEG. At the end of the experiment, participants were offered choices between pairs of the same products. The authors find that neural activity measured from a midfrontal electrode, specifically an increased N200 component and weaker theta-band power, correlates with a more preferred product. Using recent techniques for relating neural measurements to choice prediction, they demonstrate that these measures predict subsequent choices. Moreover, the accuracy of prediction depends on both the ordinal and cardinal distance of the EEG data; the larger the difference in EEG activity between two products, the better the predictive accuracy.
Read the paper (PDF) →

Abstract: Normalization is a widespread neural computation, mediating divisive gain control in sensory processing and implementing a context-dependent value code in decision-related frontal and parietal cortices. Although decision-making is a dynamic process with complex temporal characteristics, most models of normalization are time-independent and little is known about the dynamic interaction of normalization and choice. Here, we show that a simple differential equation model of normalization explains the characteristic phasic-sustained pattern of cortical decision activity and predicts specific normalization dynamics: value coding during initial transients, time-varying value modulation, and delayed onset of contextual information. Empirically, we observe these predicted dynamics in saccade-related neurons in monkey lateral intraparietal cortex. Furthermore, such models naturally incorporate a time-weighted average of past activity, implementing an intrinsic reference-dependence in value coding. These results suggest that a single network mechanism can explain both transient and sustained decision activity, emphasizing the importance of a dynamic view of normalization in neural coding.
Read the paper (PDF) →

Abstract: Normalization is a widespread neural computation in both early sensory coding and higher-order processes such as attention and multisensory integration. It has been shown that during decision-making, normalization implements a context-dependent value code in parietal cortex. In this paper we develop a simple differential equations model based on presumed neural circuitry that implements normalization at equilibrium and predicts specific time-varying properties of value coding. Moreover, we show that when parameters representing value are changed, the solution curves change in a manner consistent with normalization theory and experiment. We show that these dynamic normalization models naturally implement a time-discounted normalization over past activity, implying an intrinsic reference-dependence in value coding of a kind seen experimentally. These results suggest that a single network mechanism can explain transient and sustained decision activity as well as reference dependence through time discounting, emphasizing the importance of a dynamic rather than static view of divisive normalization in neural coding.
Read the paper (PDF) →
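One plausible formulation of such a dynamic normalization circuit can be simulated directly. The specific equations below are an assumption for illustration (output units driven by value inputs and divisively suppressed by a lagging gain pool), not the paper's exact model; at equilibrium they reduce to divisive normalization, and the lagging gain produces the phasic-then-sustained transient discussed above:

```python
import numpy as np

def simulate_dynamic_normalization(inputs, sigma=1.0, tau=0.1,
                                   dt=0.001, T=1.0):
    """Euler simulation of a simple dynamic normalization circuit.

    tau * dR_i/dt = -R_i + V_i / (sigma + sum_j G_j)
    tau * dG_j/dt = -G_j + R_j

    Gain units G track the outputs R with a lag, so at equilibrium
    R_i = V_i / (sigma + sum_j R_j): divisive normalization by the
    network's own total activity.
    """
    V = np.asarray(inputs, dtype=float)
    R = np.zeros_like(V)
    G = np.zeros_like(V)
    traj = []
    for _ in range(int(T / dt)):
        dR = (-R + V / (sigma + G.sum())) / tau
        dG = (-G + R) / tau
        R = R + dt * dR
        G = G + dt * dG
        traj.append(R.copy())
    return np.array(traj)

# Early on the gain pool is near zero, so activity transiently overshoots
# before the divisive feedback settles it to the normalized value.
traj = simulate_dynamic_normalization([10.0, 8.0])
```

Because the gain pool integrates past output, the steady state inherits a time-weighted average of recent activity, which is the intuition behind the reference-dependence result.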

Abstract: In learning models of strategic game play, an agent constructs a valuation (action value) over possible future choices as a function of past actions and rewards. Choices are then stochastic functions of these action values. Our goal is to uncover a neural signal that correlates with the action value posited by behavioral learning models. We measured activity from neurons in the superior colliculus (SC), a midbrain region involved in planning saccadic eye movements, while monkeys performed two saccade tasks. In the strategic task, monkeys competed against a computer in a saccade version of the mixed-strategy game “matching-pennies”. In the instructed task, saccades were elicited through explicit instruction rather than free choices. In both tasks neuronal activity and behavior were shaped by past actions and rewards with more recent events exerting a larger influence. Further, SC activity predicted upcoming choices during the strategic task and upcoming reaction times during the instructed task. Finally, we found that neuronal activity in both tasks correlated with an established learning model, the Experience Weighted Attraction model of action valuation (Camerer and Ho, 1999). Collectively, our results provide evidence that action values hypothesized by learning models are represented in the motor planning regions of the brain in a manner that could be used to select strategic actions.
Read the paper (PDF) →

Working Papers

Abstract: Biology places a resource constraint on the neural computations that ultimately characterize stochastic choice behaviour. We incorporate one such computation, divisive normalization, into a neuroeconomic choice model and predict that the composition and size of the choice set will adversely influence choice. Evidence for novel violations of the IIA axiom is provided from two behavioural experiments, and the model more accurately captures observed behaviour compared to existing econometric models. Finally, we demonstrate that normalization implements an efficient means for the brain to represent valuations given neurobiological constraints, yielding the fewest choice errors possible.
Read the paper →

Abstract:
A number of recent studies have used the random utility framework to examine whether neural data can assess and predict demand for consumer products, both within and across individuals. However, the effectiveness of this methodology has been limited by the large degree of measurement error in neural data. The resulting “error-in-variables” problem severely biases the estimates of the relationship between neural measurements and choice behaviour, thus limiting the role such data can play in assessing marginal contributions to utility. In this article, we propose a method for controlling for this large degree of measurement error in value regions of the brain. We propose that additional observations can serve as “proxies” for the measurement error in these value regions, substantially alleviating the bias in model estimates. We also demonstrate that standard methods for dealing with the error-in-variables problem, like additional instrumental variables, are limited in the context of neural data. We demonstrate the feasibility of our proposed method on an existing dataset of fMRI measurements and consumer choices.
Read the paper →
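The error-in-variables problem, and the way a second noisy observation can correct it, can be shown in a stylized linear simulation. This is an illustration of the general attenuation-bias logic, not the paper's method or data: all quantities below (the latent value `v`, the two noisy measurements, the linear outcome) are assumed for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
beta = 2.0

v = rng.normal(size=n)                         # latent value signal
m1 = v + rng.normal(scale=1.0, size=n)         # noisy neural measurement
m2 = v + rng.normal(scale=1.0, size=n)         # second, independent measurement
y = beta * v + rng.normal(scale=0.5, size=n)   # behavioural outcome

# Regressing on the noisy measurement attenuates the slope toward zero
# by the reliability ratio var(v)/var(m1) = 1/2, so ~1.0 instead of 2.0.
ols = np.cov(y, m1)[0, 1] / np.var(m1)

# Using the second measurement as a proxy/instrument removes the bias,
# because its noise is independent of the noise in m1.
iv = np.cov(y, m2)[0, 1] / np.cov(m1, m2)[0, 1]
```

The same logic carries over to discrete-choice settings, though the bias correction there is more involved than this two-line instrumental-variables ratio.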

Abstract: We propose a theory of multi-attribute choice with foundations in the neuroscience of sensory perception. The theory relies on within-attribute pairwise comparisons by means of divisive normalization, a neural computation widely observed across many sensory modalities and species. The theory captures and unifies a range of phenomena observed in the empirical literature, including the asymmetric dominance effect, the compromise effect, the similarity effect, “majority rule” transitivity violations, subadditive attribute weighting, and comparability biases. Our analysis also demonstrates how pairwise attribute normalization can implement diminishing sensitivity (i.e. Weber’s Law), revealing a link between our theory and the previously established concept of attribute salience found in the economics literature. A general formulation of our theory contains some canonical microeconomic preference representations as special cases, including CES (constant elasticity of substitution) and Cobb-Douglas utility.
Read the paper →

Combining Choices and Response Times in the Field: a Drift-Diffusion Model of Mobile Advertisements
Chiong, K., Shum, M., Webb, R. and Chen, R.

Abstract:

We study how choice and response time data can be combined to estimate the effectiveness of manipulating attention to advertisements. We utilize the class of drift-diffusion models — originally developed in psychology and neuroeconomics to jointly explain subjects’ choices and response times in laboratory experiments — to model users’ responses to video advertisements on mobile devices. The combination of response time with choice data allows separate identification of the diffusion processes characterizing users’ preferences when the ad is playing, as well as when users face a subsequent decision to click through on the ad.

Using our estimates, we address the counterfactual of whether users should be permitted to skip part or all of a video advertisement before making a choice. Overall, we find that allowing users to skip the ad after 10 seconds yields roughly the same revenue as forcing them to view the entire 30-second ad, thus rationalizing the practice of some platforms (e.g. YouTube) where users can skip an ad after 5 or 10 seconds. However, the effects are very heterogeneous across users. Ad revenue could be higher if the “skip-ability” of the ad were targeted and individualized according to users’ demographics.
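The drift-diffusion process underlying this class of models can be sketched with a minimal simulation. This is a generic textbook DDM (symmetric bounds, start point at zero, parameters chosen arbitrarily), not the paper's estimated specification:

```python
import numpy as np

def simulate_ddm(drift, threshold=1.0, noise=1.0, dt=0.001,
                 max_t=10.0, rng=None):
    """Simulate one drift-diffusion trial.

    Evidence starts at 0 and accumulates with the given drift plus
    Gaussian noise until it crosses +threshold (choice = 1, e.g. a
    click-through) or -threshold (choice = 0).
    Returns (choice, response_time).
    """
    if rng is None:
        rng = np.random.default_rng()
    x, t = 0.0, 0.0
    while t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
        if x >= threshold:
            return 1, t
        if x <= -threshold:
            return 0, t
    return (1 if x > 0 else 0), max_t  # censor at the time limit
```

A stronger drift produces both more frequent and faster upper-bound choices, which is why choice frequencies and response times jointly identify the process parameters.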

A Neural Model of Stochastic Choice in a Mixed Strategy Game
Webb, R., Dorris, M.C., (2014).

Abstract: In strategic games with a unique mixed strategy equilibrium, players face both an incentive to best-respond to valuations and to act unpredictably. We developed a model of how neural circuitry represents a balance between these two incentives in the course of a decision. Choice is modelled as the result of the interaction between action value input from upstream brain areas and the noise inherent in neuronal networks: large differences in action valuations between options lead to reliable best-response choices whereas small differences result in a choice selection process dominated by noise. Action value input was measured in superior colliculus activity while monkeys played a saccade version of matching pennies. We found that model simulations based on these measures exhibit choice biases similar to those found in the behavioural data. Deviations from the mixed equilibrium strategy were predicted by the action value measurements within the model. This yields a neural choice mechanism that is capable of implementing both critical aspects of equilibrium formation in strategic games: best-response and stochastic behaviour.
Read the paper →

Does Neuroeconomics Have a Role in Explaining Choice?
Webb, R. (2011).

Abstract: The central question that has confronted neuroeconomists is whether understanding how a decision maker chooses will help us explain what he will choose. Is there anything to be gained from neural sources of data? Gul and Pesendorfer (2008) have argued in the negative, that economic models are falsified on choices alone, and Bernheim (2009) notes that a model of the neural decision process is ultimately equivalent to a model which uses only choice data. Instead of this equivalence being a weakness of neuroeconomics, we argue it is its greatest strength. Expanding on Bernheim’s theoretical framework, we demonstrate the role neural data can play in building better behavioural models through this equivalence. We then survey the current neuroeconomic literature and provide three examples of this process at work.
Read the paper →