There are numerous sophisticated forecasting methods developed over time, including time-series analyses, decomposition analyses and filtering approaches. But, time and again, forecasts are erroneous, and quite often, materially so.

What are the reasons? The errors come from many sources, yet consumers and decision-makers rely on point estimates. We have come to believe the world is deterministic when it is really stochastic. Human beings change their preferences, societies and economies do not necessarily behave as we imagine, and events take place that are beyond our control. So error is inherent in any forecast, because things change; and when the ecosystem is ripe for such changes, we should be even more careful. Errors also creep in from mistakes in data collection.

Therefore, we should not rely on a point estimate but on a range. That is why good forecasts always provide a standard error or margin of error. For instance, suppose we estimate the demand for a service to be 43 units, with a standard error of 3 units.

That simply means that we can say with about 68 percent confidence that the demand is likely to be somewhere between 40 and 46 units (one standard error on either side of the estimate). What if we want about 95 percent confidence in our estimate of demand? In that case, we can only state that the demand is likely to be somewhere between 37 and 49 units (two standard errors on either side), a stunning spread of 12 units. That’s the point. But as decision-makers and policy-makers, we time and again forget this simple fact and make misjudgments.
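The arithmetic above can be sketched in a few lines. This is an illustrative calculation only, using the hypothetical figures from the text (a forecast of 43 units with a standard error of 3 units) and assuming normally distributed forecast errors, under which the interval of k standard errors carries a coverage probability of erf(k/√2):

```python
import math

point, se = 43.0, 3.0  # hypothetical demand forecast and its standard error

for k in (1, 2):  # intervals of one and two standard errors
    low, high = point - k * se, point + k * se
    # probability mass within k standard errors of a normal distribution
    coverage = math.erf(k / math.sqrt(2))
    print(f"{k} SE: {low:.0f} to {high:.0f} units (~{coverage:.0%} confidence)")
# 1 SE: 40 to 46 units (~68% confidence)
# 2 SE: 37 to 49 units (~95% confidence)
```

Doubling the confidence from roughly 68 percent to roughly 95 percent doubles the width of the interval, which is exactly the trade-off decision-makers tend to forget.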

Here, I use the recent US Presidential election as the subject matter to illustrate the challenges of forecasting preferences and choices. Please read on.

As a student reminded me, it was believed that Trump’s odds were very good even as early as January-February (when Trump was not yet the Republican Party nominee and was only one among the 16-17 Republican candidates). See here: https://twitter.com/Kalyanaram_G/status/796385149399273472