Tuesday, November 22, 2011

The nerd war goes on

If you thought the debate over election forecast models was over, you thought wrong. Sean Trende has pushed back against my pushback. Well, fair enough. And he does make some important points in there, chief among which is that none of us need to be debating against straw men. Almost no journalists think the economy irrelevant to elections, just as almost no political scientists think the campaign irrelevant. To that extent, we largely agree.

Nonetheless, Trende goes on to question the value of economic forecast models of elections. He notes several presidential and congressional elections in which political science forecasts were wide of the mark, and suggests that we can't ever really know ahead of time whether we're going into an election in which the classic models will work well or not.

I want to respond a bit to that. First, I don't think it's fair to lump congressional elections into this argument; as any election modeler will concede, those elections turn on local as well as national factors and are much harder to predict. Second, yes, we can certainly cherry-pick some presidential election forecasts that missed, but they still, on the whole, tend to come quite close.

Note this collection of forecasts (PDF) submitted roughly two months prior to the 2008 presidential election. The median of the nine forecasts had McCain getting 48 percent of the two-party vote, just one percentage point more than he actually received. Seven of the nine forecasts came within three points of the actual vote. The most accurate forecast was made 99 days before the election.
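To make the summary above concrete, here is a minimal sketch of how a collection of forecasts gets boiled down to a median and an accuracy count. The numbers are invented for illustration; they are not the actual 2008 submissions, and the "actual" figure is only McCain's approximate two-party share.

```python
# Hypothetical two-party vote-share forecasts for McCain (percent).
# Illustrative numbers only -- not the actual nine 2008 submissions.
forecasts = [46.5, 47.2, 47.8, 47.9, 48.0, 48.9, 49.2, 50.1, 52.0]
actual = 46.3  # roughly McCain's two-party share in 2008

# With an odd number of forecasts, the median is the middle sorted value.
median = sorted(forecasts)[len(forecasts) // 2]

# How many forecasts landed within three points of the outcome?
within_three = sum(1 for f in forecasts if abs(f - actual) <= 3)

print(f"median forecast: {median:.1f}")
print(f"within 3 points: {within_three} of {len(forecasts)}")
```

With these made-up inputs, the median is 48.0 and seven of nine forecasts land within three points, mirroring the pattern described above.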

Note also Nyhan and Montgomery's collection of forecast models for elections since 1976. Sure, some individual forecasts miss by quite a bit, but the "ensemble" forecasting model rarely deviates from the outcome by more than a point or two.
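The logic behind an ensemble's reliability is simple: individual models tend to miss in different directions, so averaging them cancels much of the error. A toy sketch, using invented numbers rather than Nyhan and Montgomery's actual data:

```python
# Toy illustration of ensemble forecasting: each "model" gives a
# two-party vote-share forecast; the ensemble is their simple mean.
# All numbers are invented for illustration.
elections = {
    "A": {"actual": 53.0, "models": [50.5, 54.8, 55.2, 51.9]},
    "B": {"actual": 48.2, "models": [46.0, 50.5, 49.1, 47.6]},
}

for name, e in elections.items():
    ensemble = sum(e["models"]) / len(e["models"])
    errors = [abs(m - e["actual"]) for m in e["models"]]
    print(f"{name}: ensemble error {abs(ensemble - e['actual']):.2f}, "
          f"mean individual error {sum(errors) / len(errors):.2f}")
```

In both toy cases the ensemble misses by about a tenth of a point while the average individual model misses by a point or two, which is the pattern the post describes: some individual forecasts miss by quite a bit, but the ensemble rarely does.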

Now, my posse and I (yeah, I've got a posse) could keep going back and forth with Trende and his posse like this, but I'm not sure how productive that would be. I think instead it might be more useful to address the question of just what these forecasts bring to our understanding of elections.

Forecasts are certainly entertaining. They can also be lucrative. But their real value for political science is that they allow us to test theories about elections. This is why modelers spend a lot of time "predicting" elections that have already happened, a task that might seem silly to some. We're trying to understand just what drives elections. We have theories about the importance of the economy, even about different measures of the economy. We have theories about ideology, about wars, and other things. We also have theories, as John Sides notes, about how the campaigns take advantage of these features of the political environment and make them matter to voters. When we make a forecast, we're attempting an empirical test of our theories. We're trying to figure out just what matters in elections and how much it matters. And each new election improves our understanding of the fundamentals of elections.

In the comments on my last post on this subject, Jay Cost wrote in to say the following:
I would say that I have learned so much more from the history than from the political science. If somebody asked me "How do I understand the 1968 election?" I'd point them to Ted White's Making of the President before any quantitative study in the APSR.
If we're talking about comparing an in-depth study of campaign actors with a forecast model, then I agree entirely. These models tell us very little about any single election. A thick study like White's will offer quite a few reasons why one candidate won and another lost, from the economy to the Vietnam War to the presence of a divisive third-party candidate. What our forecasting models add is evidence about which aspects of the political environment tend to drive elections and which ones don't.

Okay, enough for now -- I've got to go deal with some Thanksgiving stuff.

Update: Hans Noel has a great post up at the Monkey Cage further explaining how forecast models are used to test theories.
