Additional Information

Jennifer M. Logg

Jennifer Logg is a Post-Doctoral Fellow in the Negotiation, Organizations & Markets Unit at Harvard Business School. She studies how people can improve the accuracy of their decisions. Specifically, her work examines when people are most likely to leverage the power of algorithms to improve their accuracy. Her other work tests factors that exacerbate overconfidence (people's overly positive beliefs about themselves) and whether unrealistic optimism helps performance as much as people think it does. She received her Ph.D. in Management of Organizations from the Haas School of Business at the University of California, Berkeley.

A series of experiments investigated why people value optimism and whether they are right to do so. In Experiments 1A and 1B, participants prescribed more optimism for someone implementing decisions than for someone deliberating, indicating that people prescribe optimism selectively, when it can affect performance. Furthermore, participants believed optimism improved outcomes when a person's actions had considerable, rather than little, influence over the outcome (Experiment 2). Experiments 3 and 4 tested the accuracy of this belief: optimism improved persistence, but it did not improve performance as much as participants expected. Experiments 5A and 5B found that participants overestimated the relationship between optimism and performance even when their focus was not exclusively on optimism. In summary, people prescribe optimism when they believe it can improve the chance of success; unfortunately, people may be overly optimistic about just how much optimism can do.

Algorithms—scripts for mathematical calculations—are powerful. Even though algorithms often outperform human judgment, people resist allowing a numerical formula to make decisions for them (Dawes, 1979). Nevertheless, people increasingly depend on algorithms to inform their decisions. Eight experiments examined trust in algorithms. Experiments 1A and 1B found that advice influenced participants more when they thought it came from an algorithm than when they thought it came from other people. This effect was robust to presenting the advisors jointly or separately (Experiment 2). Experiment 3 tested a moderator: excessive confidence in one's own knowledge attenuated reliance on algorithms. These tests are important because participants could improve their accuracy by relying more on algorithms (Experiment 4). Experiments 5 and 6 tested a mechanism for reliance: the subjectivity of the decision. Participants preferred algorithmic advice for objective decisions and advice from people for subjective decisions. Experiment 6 also tested the interaction between subjectivity and the availability of expert advice; participants preferred an expert to an algorithm, regardless of the domain. Experiment 7 examined how decision makers' own expertise influenced reliance on algorithms. Experts in national security, who regularly make forecasts, relied less on algorithmic advice than lay people did. These results shed light on the important question of when people rely on algorithmic advice over advice from other people and have implications for the use of algorithms in practice.