Tuesday, October 20, 2009

What do political scientists do?

Well, Sen. Coburn's attack on political scientists has had at least one upside -- people outside the discipline are noticing us. It's even prompted a pretty interesting piece in the NY Times. I like most of what's in there -- it discusses the Perestroika movement within poli sci from a few years back, it seems to get that there is a vast diversity of methods and approaches within the discipline today, it notes that there is a debate within poli sci about how involved we should be in policy disputes, etc. All relevant stuff.

But let me briefly disagree with this statement:
Rogers Smith, a political scientist at the University of Pennsylvania who has been active in the “Perestroika” movement, said that the question should determine the method. If you want to test cause and effect, “quantitative methods are the preferred way to go,” he said, but they can’t tell “how political phenomena should be understood and interpreted” — whether a protest, for instance, is the result of a genuine social movement or an interest group, whether it is religious or secular.
Actually, quantitative methods are pretty terrible at testing causality. They are great at examining relationships, but they can't tell us what is causing what. For example, quantitative methods might help us see that there is a strong relationship between ice cream sales and drowning deaths, but it takes interviews, reading, or human intuition to figure out that neither is causing the other, and that temperature is causing both. This is why I favor a mixed-methods approach, although it can sometimes be challenging to get that published.
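
To put the ice cream example in concrete terms, here's a minimal simulation sketch in Python. Every number in it is invented for illustration: both variables are driven by temperature, so they correlate strongly with each other, yet the correlation vanishes once temperature is controlled for. The regression is doing exactly what it's good at -- finding the relationship -- while remaining silent about what causes what.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# The confounder: daily temperature drives both variables.
temp = rng.normal(70, 15, n)

# Neither variable causes the other; both respond to temperature.
ice_cream = 2.0 * temp + rng.normal(0, 10, n)
drownings = 0.5 * temp + rng.normal(0, 5, n)

# The raw correlation looks strong even with no causal link.
print(np.corrcoef(ice_cream, drownings)[0, 1])  # roughly 0.8

def residuals(y, x):
    # Strip out the part of y explained by x (a simple linear fit).
    slope, intercept = np.polyfit(x, y, 1)
    return y - (slope * x + intercept)

# Correlate what's left after controlling for temperature:
# the "relationship" between ice cream and drowning disappears.
print(np.corrcoef(residuals(ice_cream, temp),
                  residuals(drownings, temp))[0, 1])  # near 0
```

Of course, the simulation only knows temperature is the confounder because I wrote it that way; nothing in the output tells you which variable to control for. That's the part that requires theory.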

6 comments:

Anonymous said...

They are great at examining relationships, but they can't tell us what is causing what.

What? That's not the job of any testing, that's the job of theory.

Yeah, if you regress everything against everything you might find that ice cream eating and drowning deaths are correlated. But regressing everything against everything is a silly thing to do.

But any researcher worth his salt* would stop and think, "Is there any theoretical reason to expect that eating ice cream would cause more drowning deaths?" And, since there isn't, look for alternate theories that would explain the same data (and more data besides).

Even if you did somehow have a theory that eating ice cream caused people to drown, it's virtually certain that the other implications of that (insane) theory would fail to be borne out by the data (whether examined quantitatively or qualitatively).

So you don't need to interview or read lots or do any sort of mixed methods; you just need a theory whose implications you test, and you need to avoid casual empiricism (whether that's founded in quantitative or qualitative stuff).

*Salt? Ice cream? Get it?

Eric Rubin said...

hippie!!

Joe said...

The old ice cream/drowning coverup. Where I grew up, the big kids held you underwater until you surrendered your ice cream. People died. When they got caught they used the old "heat is to blame" excuse.

Seth Masket said...

Yes, theory is the key. I was lumping that in with intuition, but theory is the better term.

Eric, what still?

Rogers Smith said...

Seth, you are not grasping the emphasis on "testing"! It's true that quantitative methods are not the sources of theories of causality; they are not best at discerning mechanisms of causality; they are not best at judging in advance what possible mechanisms are most likely. But if a causal relationship exists, then it must be discoverable across a set of appropriate cases. Quantitative methods are often the best way to examine systematically whether such relationships exist (just as you say)--and that's crucial to testing causal claims. Which was my point -- best, Rogers Smith

Seth Masket said...

Ah, point taken.