Most judgments and decisions are based, to some extent, on forecasts about future states or outcomes. It has been asserted that the accuracy of these forecasts, and the confidence placed in them, are affected by four factors: (1) environmental uncertainty, (2) outcome desirability, (3) expertise, and (4) time horizon. Some studies have addressed the effects of one or two of these variables, but no study has investigated how all four interact. The two studies reported here were designed to address this gap.

In Study I, twenty-seven experts (paid political consultants) and forty nonexperts provided forecasts related to the 1992 U.S. presidential election. The experts made their first forecast in July; then experts and nonexperts made predictions simultaneously three more times, beginning in early September and ending just before the election. Results showed that when uncertainty was moderate, experts were generally unbiased and outperformed nonexperts. But when uncertainty was high, experts were frequently biased, and their performance suffered accordingly. Surprisingly, both experts and nonexperts were generally underconfident.

In Study II, fifty-four students from five Big Ten schools predicted the outcomes of Big Ten basketball games. Measures of environmental uncertainty, desirability, and expertise were collected. Unlike Study I, there was no relationship between expertise and performance. Bias was unrelated to the level of uncertainty, and participants demonstrated typical levels of overconfidence.

The conflicting results are interpreted in light of a reexamination of previously published data. Theoretical implications for forecasting research are discussed. A possible refinement in our understanding of the overconfidence phenomenon is presented, and the limited usefulness of a time-horizon taxonomy is addressed.
Potential areas of application also are considered, particularly with respect to the use of experts as judgmental forecasters.