
Lessons from election forecasting



Impacto Social Consultores

By Juan Manuel Puyana

A brief history of forecasting

Election forecasting has been at the center of major elections since 1916, when Literary Digest correctly predicted Woodrow Wilson’s election as US president. The publication went on to correctly predict the next four US presidential elections, until 1936, when it forecast a landslide victory for Kansas Governor Alf Landon. That year, Franklin Roosevelt won with over 60% of the popular vote, discrediting the publication, which went out of business within a few years (Crossen, 2006).

Faulty polling methods were to blame, and it was around this time that George Gallup developed the first demographically representative poll, achieving a far more accurate prediction with a much smaller sample (Squire, 1988). Polling methods have evolved ever since, and with them the ambition to accurately forecast election results. The first econometric forecasting model was developed by Ray Fair in 1970, using macroeconomic data going back to 1916 (Fair, 2002). These models effectively introduced econometrics and regression analysis into election forecasting.

Most common forecasting models

Since Ray Fair, several statistical methods have been used to forecast elections. The most widely used are incumbency models (or regression models) and the pooling of polls. The first kind extends Fair’s model; incumbency status has been found to be a particularly strong predictor (Gelman & Huang, 2008). These models often include popularity ratings, macroeconomic variables and governability measures.
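
An incumbency-style regression can be sketched in a few lines. The data below is entirely invented for illustration (the real models are fit on decades of actual election results); the structure — vote share regressed on economic growth, approval and an incumbency indicator — follows the description above.

```python
import numpy as np

# Hypothetical incumbency-model sketch: the incumbent party's popular-vote
# share regressed on GDP growth, presidential approval, and whether the
# incumbent is running. All numbers are made up for demonstration only.
growth    = np.array([ 2.1, -0.3,  3.5,  1.2,  0.8,  2.9])   # GDP growth (%)
approval  = np.array([55.0, 42.0, 60.0, 48.0, 45.0, 57.0])   # approval (%)
incumbent = np.array([   1,    0,    1,    1,    0,    1])   # incumbent runs?
vote      = np.array([53.2, 46.1, 57.8, 50.4, 48.0, 54.9])   # vote share (%)

# Design matrix with an intercept column, fit by ordinary least squares.
X = np.column_stack([np.ones_like(growth), growth, approval, incumbent])
beta, *_ = np.linalg.lstsq(X, vote, rcond=None)

# Forecast for a new election year under assumed conditions:
# 1.5% growth, 44% approval, no incumbent on the ballot.
forecast = np.array([1.0, 1.5, 44.0, 0.0]) @ beta
print(f"coefficients: {beta.round(3)}")
print(f"forecast vote share: {forecast:.1f}%")
```

The point is structural: once the coefficients are fixed by past elections, the forecast depends only on a handful of fundamentals, which is both the model’s strength and, as discussed below, its blind spot.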

The second kind of model uses different weights to average results from media and polling-agency polls (Jackman, 2005). The weights often rely on subjective ratings of each poll. These models have recently become popular thanks to their simplicity and predictive power.
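
A minimal sketch of poll pooling, assuming invented polls and an arbitrary weighting scheme (real aggregators like FiveThirtyEight use far more elaborate pollster ratings and decay functions):

```python
# Toy poll-pooling sketch: a weighted average where each poll's weight
# combines a (subjective) pollster rating, recency, and sample size.
# All ratings and numbers are invented for illustration.
polls = [
    # (clinton %, trump %, pollster rating 0-1, days old, sample size)
    (45.0, 41.0, 0.9,  2, 1200),
    (43.0, 43.0, 0.6,  5,  800),
    (47.0, 40.0, 0.8, 10, 1500),
]

def weight(rating, days_old, n):
    # Better-rated, newer, larger polls count more; the decay is arbitrary.
    recency = 0.9 ** days_old
    return rating * recency * n ** 0.5

total   = sum(weight(r, d, n) for _, _, r, d, n in polls)
clinton = sum(c * weight(r, d, n) for c, _, r, d, n in polls) / total
trump   = sum(t * weight(r, d, n) for _, t, r, d, n in polls) / total
print(f"pooled estimate: Clinton {clinton:.1f}%, Trump {trump:.1f}%")
```

Note that the pooled estimate is only as good as the weighting choices, which is exactly where the subjective ratings mentioned above enter.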

An alternative method is Dynamic Bayesian Forecasting, which combines information from regression models and polls. These models are more commonly used close to the election date, since their statistical assumptions make earlier predictions heavily dependent on subjective priors. Linzer (2013) developed such a model to analyze the 2008 US presidential election, finding that, given the information known at the time, the election could have been easily predicted in favor of Barack Obama.
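
The core Bayesian idea can be sketched with a simple conjugate update (this is not Linzer’s actual model, and all numbers are hypothetical): start from a regression-based prior on the candidate’s vote share, then update it as polls arrive. With few polls the prior dominates; as polls accumulate, the data takes over.

```python
import math

# Normal-normal Bayesian update (known variance), illustrating how a
# regression prior is gradually overridden by polling data.
prior_mean, prior_sd = 50.0, 3.0   # hypothetical regression forecast (%)

def update(mean, sd, poll_mean, poll_sd):
    # Posterior precision is the sum of prior and data precisions;
    # the posterior mean is the precision-weighted average.
    prec, poll_prec = 1 / sd**2, 1 / poll_sd**2
    post_prec = prec + poll_prec
    post_mean = (mean * prec + poll_mean * poll_prec) / post_prec
    return post_mean, math.sqrt(1 / post_prec)

mean, sd = prior_mean, prior_sd
for poll in [48.5, 49.0, 48.0]:    # invented poll readings, sd ~2 each
    mean, sd = update(mean, sd, poll, 2.0)
    print(f"posterior: {mean:.2f}% +/- {sd:.2f}")
```

Each update shrinks the posterior uncertainty and pulls the estimate toward the polls, which is why early-cycle forecasts from such models lean so heavily on the prior.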

What the models say about the 2016 US presidential election

Pooling the polls has undoubtedly become the most common forecasting approach, popularized by analysts like Nate Silver and outlets like The Huffington Post and The New York Times. Each outlet uses its own weights to average the polls, based on previous accuracy, time since the poll was released, sample size and method, among other factors. This yields different estimated shares of the popular vote for the likely nominees, Hillary Clinton (DNC) and Donald Trump (GOP). As of July 16th, 2016, The New York Times predicts 41% vs 39% in favor of Hillary Clinton. The Huffington Post predicts 43% for Clinton to Trump’s 40%. FiveThirtyEight gives 46.9% to Clinton and 43.1% to Trump.

These results translate into a high probability of Hillary Clinton clinching the presidency, estimated by FiveThirtyEight at around 65% as of July 16th, 2016. This estimate takes state-by-state polling results into account.

This presents a stark contrast with the most recent incumbency-model results. Ray Fair’s model gives Trump a 60% chance of winning the presidency (McManus, 2016); Abramowitz’s model (Abramowitz, 2012), a prominent model in election forecasting, predicts Trump as the winner when applied to current economic and political conditions (Matthews, 2016); and Helmut Norpoth, a professor at Stony Brook University, gives Trump a 97% chance of winning based on his performance in the primaries (Cameron, 2016).

Why is there such a difference?

How can these models, which have previously been found to be extremely accurate, yield such different outcomes? The answer lies in their assumptions, which at the same time reflect their particular flaws.

Regression models rely heavily on past observations. At their core, they treat every election as a regular election. They assign fixed weights to incumbency status and the current president’s popularity. They ignore, in many cases, the character of the nominees. They overlook criminal investigations and business failures, previous experience and controversies. They strive to use past information to maximize predictive power, leaving certain important events out of consideration.

On the other hand, averaging the polls sets aside any lesson from the past, and suffers from immense variation over the election cycle. Just 15 days ago, FiveThirtyEight’s probability of a Clinton win stood at 80%, 15 percentage points higher than where it currently sits. This raises the question: are the polls simply converging toward the incumbency models?

This contrast suggests that we are not dealing with a regular election. Never have two such unpopular candidates run for office (Zitner & Wolfe, 2016), and many more controversies and shifts in the opinion polls can be expected, as has already been the norm this election.

What can we take from these models?

This election cycle will be an opportunity to test these two kinds of models, their assumptions and their power. Come November there will be many lessons to learn and adjustments to make in this branch of econometrics. For now, it is crucial to continue estimating these models rigorously.

Finally, the eventual result of the election will neither prove nor disprove either model. Since statistical models mainly produce probability estimates, at the end of the road anything can happen. Roughly speaking, the latest FiveThirtyEight estimate says that if the election were held 20 times, Trump would win 7 of them. Every model has its faults and merits, and more than ever, we are facing an exciting time for the development of forecasts.
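
The "7 out of 20" reading is just the complement of the headline probability:

```python
# A 65% Clinton win probability leaves 35% for Trump,
# and 0.35 * 20 elections = 7 Trump wins.
p_clinton = 0.65
trump_wins_in_20 = round((1 - p_clinton) * 20)
print(trump_wins_in_20)  # 7
```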


·       Zitner, A., & Wolfe, J. (2016, May 24). Donald Trump and Hillary Clinton’s Popularity Problem. Retrieved July 16, 2016.

·       Abramowitz, A. (2012). Forecasting in a Polarized Era: The Time for Change Model and the 2012 Presidential Election. PS: Political Science & Politics, 45(04), 618–619.

·       Cameron, C. (2016, February 23). Political science professor forecasts Trump as general election winner.

·       Crossen, C. (2006, October 2). Fiasco in 1936 Survey Brought “Science” To Election Polling. Wall Street Journal.

·       McManus, D. (2016, May 15). Election forecasting in the age of Trump. Retrieved July 16, 2016.

·       Matthews, D. (2016, June 14). One of the best election models predicts a Trump victory. Its creator doesn’t believe it. Retrieved July 16, 2016.

·       Fair, R. C. (2002). Predicting presidential elections and other things. Stanford, CA: Stanford University Press.

·       Gelman, A., & Huang, Z. (2008). Estimating Incumbency Advantage and Its Variation, as an Example of a Before–After Study. Journal of the American Statistical Association, 103(482), 437–446.

·       Jackman, S. (2005). Pooling the polls over an election campaign. Australian Journal of Political Science, 40(4), 499–517.

·       Linzer, D. A. (2013). Dynamic Bayesian Forecasting of Presidential Elections in the States. Journal of the American Statistical Association, 108(501), 124–134.

·       Squire, P. (1988). Why the 1936 Literary Digest poll failed. Public Opinion Quarterly, 52(1), 125–133.