AER’s “amazingly accurate” winter outlook

The AER winter forecast has now been correct for four years in a row (Read Press Release). Even before that, we had published a paper documenting that the AER model was the most skillful, i.e., most accurate, winter seasonal forecast model that could be verified. Yet before every new forecast, winter or summer, there is great pressure to be correct and a need to validate the accuracy and usefulness of our techniques. To make things more difficult this past winter, December 2012 got off to a blazing warm start, and the consensus quickly became that this winter would simply be a repeat of last year’s record mild winter. But our predictors remained bullish for cold from January onward, and we stayed the course.

It is very satisfying, now that the winter is over, to know that the warm start in December was an aberration and that the remainder of the winter was indeed cold and, in many cities, very snowy, including our hometown of Boston (though pity the people of Duluth, who had their snowiest month ever this April!). Perseverance has paid off, and it is nice to be recognized for another correct forecast.

Throughout my career I have heard that we are not good, just lucky. But getting the forecast correct even once is extremely difficult, and we have produced the best forecast four years in a row, including this winter; a result that is highly improbable simply by chance. The correct forecast covers not just the US but the entire Northern Hemisphere.

AER Forecast Proves Exceptionally Accurate for 2013 U.S. Heating Demand: the forecast from autumn 2012 outperformed all public forecast benchmarks, achieving an average error of 1°F (0.52°C) for the U.S. and 1.5°F (0.87°C) for the Northern Hemisphere.

Forecast Skill

One metric of forecast skill, or accuracy, is the pattern correlation between the predicted and observed temperatures. A perfect forecast would produce a value of 1, and any value above 0 is considered skillful. The pattern correlations for both December through February (Dec–Jan–Feb, or DJF) and January through March (JFM) were comparable, at 0.60 and 0.65 respectively. The skill for Eurasia was also comparable: 0.67 for DJF and 0.64 for JFM. But the best overall score was for North America for the January–March period, with a pattern correlation of 0.79.
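To make the metric concrete, here is a minimal sketch of a centered pattern (anomaly) correlation in Python. The function name and example data are my own illustration, not AER's verification code; an operational calculation over a latitude–longitude grid would also weight each point by the cosine of its latitude.

```python
import numpy as np

def pattern_correlation(forecast, observed):
    """Centered pattern correlation between two gridded anomaly fields.

    Both inputs are arrays of temperature anomalies on the same grid.
    A perfect forecast scores 1; values above 0 count as skillful.
    (Illustrative sketch only -- no area weighting applied.)
    """
    f = forecast - forecast.mean()
    o = observed - observed.mean()
    return float((f * o).sum() / np.sqrt((f ** 2).sum() * (o ** 2).sum()))

# A forecast identical to the observations scores a perfect 1;
# a field of the opposite sign scores -1.
obs = np.array([[1.0, -0.5], [0.3, 2.0]])
print(pattern_correlation(obs, obs))
print(pattern_correlation(obs, -obs))
```

This is the standard "centered" form: subtracting each field's mean first means the score rewards getting the spatial pattern of warm and cold regions right, rather than the overall mean temperature.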

Another metric of skill is the root mean square error (RMSE), the square root of the squared error averaged over every point, which measures the typical size of the error across the grid. For the US, the area-averaged RMSE was less than 1°F.
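A minimal sketch of the RMSE calculation, under the same caveats: the names are illustrative, and an area-averaged score on a latitude–longitude grid would pass cosine-of-latitude weights.

```python
import numpy as np

def rmse(forecast, observed, weights=None):
    """Root mean square error between forecast and observed fields.

    `weights` optionally gives an area-weighted average, e.g.
    cos(latitude) for each point of a regular lat-lon grid.
    (Illustrative sketch, not AER's verification code.)
    """
    sq_err = (np.asarray(forecast) - np.asarray(observed)) ** 2
    return float(np.sqrt(np.average(sq_err, weights=weights)))

# Errors of +1 degree F and -1 degree F at two points give an RMSE of 1 degree F.
print(rmse([70.0, 68.0], [69.0, 69.0]))  # → 1.0
```

Note that because the errors are squared before averaging, RMSE penalizes a few large misses more heavily than many small ones.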

"Job Well Done!"

Both of these metrics are not only great scores for a three-month forecast; they would be great scores for a three-day forecast!

The accurate forecast has not gone unnoticed, and congratulations on a job well done have come from clients, fellow scientists, and even the media, including Harvey Leonard, the chief meteorologist at WCVB. But the most effusive praise so far has been from the Washington Post’s Capital Weather Gang, whose blog title says it all: “AER’s Judah Cohen produces amazingly accurate winter outlook.” Jaw-dropping seems to have been the most common reaction to the verification of the forecast; it was Jason Samenow’s reaction as written in the article, and a client’s as well.

Here at AER we believe that we produce the best seasonal forecast, even with less manpower and fewer resources at our disposal than the government forecast centers. A big reason is that we decided from the beginning not to follow the herd and obsess over ENSO as the only seasonal forecast predictor. Instead, our focus has been on the Arctic Oscillation and, of course, Siberian snow cover; focusing on these predictors has paid off for our clients. And as Jason concludes in his blog: “Perhaps it’s time … government centers begin to incorporate Cohen’s methodology into the preparation of their outlooks.”

It is very gratifying when others acknowledge our leadership role in seasonal forecasting. But there is no time to rest on our laurels, and Jason Furtado, Justin Jones, and I are already sweating the summer forecast.