
# The election prediction game: The winners and the losers

## The Times asked a few pundits and prognosticators to talk about how they arrived at their predictions and why they got them right — or wrong.

November 11, 2012

For all that, I flipped my prediction at the last minute, seeing that Ohio would stay in the Obama camp. I was wrong, but less wrong.

WINNERS:

Sam Wang is an associate professor of molecular biology and neuroscience at Princeton University and founder of the Princeton Election Consortium. Obama 303 / Romney 235.

I promised my readers that if I was wrong in my prediction about the outcome of the presidential race, I would eat a bug. I didn't have to pay up. I called 50 out of 50 races correctly, as well as the popular vote and 10 out of 10 close Senate races.

I did this by analyzing polls, relying on the fact that individual pollsters may make small errors but, as a group, they are wise. Applying the right statistical tools collects their wisdom to give a sharp picture of one race — or of the electoral college. For example, if we at the consortium have three polls for Ohio showing Obama up by 3, Obama up by 2 and Romney up by 3, then the middle value — the median — is likely to be closest to the true result. This kind of information can be used to calculate the odds that a candidate is ahead. Combining the probabilities requires more advanced math.
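
Wang's Ohio example can be sketched in a few lines of Python. This is an illustrative simplification, not the consortium's actual code: the win probability here comes from a normal approximation using the median margin and an estimated standard error, which is one common way to turn a handful of poll margins into odds.

```python
import statistics
from math import erf, sqrt

def median_margin(margins):
    """Median of poll margins (positive = Obama lead, in points)."""
    return statistics.median(margins)

def prob_ahead(margins):
    """Rough probability that the median leader is truly ahead:
    a normal approximation from the median margin and an estimated
    standard error (illustrative only, not Wang's method)."""
    med = statistics.median(margins)
    sem = statistics.stdev(margins) / sqrt(len(margins))
    z = med / sem
    return 0.5 * (1 + erf(z / sqrt(2)))  # standard normal CDF at z

# Wang's example: Obama up 3, Obama up 2, Romney up 3 (Obama -3)
polls = [3, 2, -3]
print(median_margin(polls))  # 2, i.e. Obama +2
```

Combining such per-state probabilities into electoral college odds is the "more advanced math" Wang alludes to, since the states must be treated jointly rather than one at a time.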

We used similar approaches to pinpoint the race's pivotal events. Michelle Obama's speech at the Democratic convention, for example, improved roughly 20 million Americans' opinions of her husband's job performance overnight. And the largest swing in candidate preference occurred after the first debate, when Romney nearly closed the gap with the president. However, this change did not last long.

Why did experienced pundits such as Karl Rove fail in their predictions? Those who expected Romney to win were selectively questioning polls that they found disagreeable. When evaluating hard data, it is essential to avoid such reasoning errors, whether with polls or with evidence for climate change. On Tuesday we saw an example of the consequences.

Drew Linzer is an assistant professor of political science at Emory University. Obama 332 / Romney 206.

On Nov. 6, I predicted that Obama would win 332 electoral votes, with 206 for Romney. But I also predicted the exact same outcome on June 23, and the prediction barely budged through election day.

How is this possible? Statistics. I did it by systematically combining information from long-term historical factors — economic growth, presidential popularity and incumbency status — with the results of state-level public opinion polls. The political and economic "fundamentals" of the race indicated at the outset that Obama was on track to win reelection. The polls never contradicted this, even after the drop in support for Obama following the first presidential debate. In fact, state-level voter preferences were remarkably stable this year, varying by no more than 2 or 3 percentage points over the entire campaign (compared with the 5% to 10% swings in 2008).
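
Linzer's model is considerably more sophisticated, but the core idea of merging a fundamentals-based forecast with polling can be illustrated by a precision-weighted (normal-normal) average, a standard Bayesian building block. The function and the numbers below are hypothetical, not taken from votamatic.org:

```python
from math import sqrt

def combine(prior_mean, prior_sd, poll_mean, poll_sd):
    """Precision-weighted combination of a fundamentals-based prior
    with a state poll average (normal-normal Bayesian update)."""
    w_prior = 1 / prior_sd ** 2   # precision of the fundamentals forecast
    w_poll = 1 / poll_sd ** 2     # precision of the poll average
    mean = (w_prior * prior_mean + w_poll * poll_mean) / (w_prior + w_poll)
    sd = sqrt(1 / (w_prior + w_poll))
    return mean, sd

# Hypothetical state: fundamentals say Obama 51% +/- 3,
# the poll average says 52% +/- 1.5
mean, sd = combine(51, 3, 52, 1.5)
```

Early in the race the fundamentals dominate; as polls accumulate and their combined uncertainty shrinks, the estimate is pulled toward them, which is why a June forecast and a November forecast can agree when the polls never contradict the fundamentals.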

The actual mechanics of my forecasts were performed using a statistical model that I developed and posted on my website, votamatic.org. While quantitative election forecasting is still an emerging area, many analysts were able to predict the result on the day of the election by aggregating the polls. The challenge remains to improve estimates of the outcome early in the race, and use this information to better understand what campaigns can accomplish and how voters make up their minds.

Markos Moulitsas is the founder and publisher of Daily Kos. Obama 332 / Romney 206.

This election delivered a triumph to data junkies — those of us who view politics through numbers as opposed to ideological conceits or biases. At Daily Kos, we have long prided ourselves on our slavish devotion to that data. How can we move the nation toward a more progressive path unless we accurately understand the public?

We partnered with the pollsters at Public Policy Polling, which was just declared the most accurate pollster of 2012 in a Fordham University study. But I never rely on any single point of data. At the standard 95% confidence level, five out of every 100 polls will land outside their stated margin of error, and the more polling responses you aggregate, the smaller the margin of error. So I did what the smartest political prognosticators did: lump all the polling together and average it out.
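
The shrinking margin of error follows from the standard formula for a proportion, MOE = z * sqrt(p * (1 - p) / n): pooling respondents grows n. A small sketch (the poll sizes here are hypothetical):

```python
from math import sqrt

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error, in percentage points, for a
    proportion p estimated from a sample of size n."""
    return z * sqrt(p * (1 - p) / n) * 100

# one poll of 800 respondents vs. five such polls pooled
print(round(margin_of_error(800), 2))      # 3.46
print(round(margin_of_error(5 * 800), 2))  # 1.55
```

Simple averaging assumes the polls are independent and unbiased; in practice aggregators also worry about house effects, which is one reason partnering with an accurate pollster still matters.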

I then predicted the vote differentials in the nine battleground states and the national vote. I was within 2 percentage points of the final results in eight of the 10. None of this required any fancy insider sources — just a realization that political campaigns aren't magic, and a handy-dandy calculator.

Larry Sabato is the director of the University of Virginia Center for Politics and editor of the Crystal Ball newsletter. Obama 290 / Romney 248.