Thursday 17 February 2011

An upcoming engagement

Coming up next week at The Christchurch Press:
Eric Crampton, Senior Lecturer in Economics at the University of Canterbury, will present an educational workshop for editorial staff on understanding "Basic stats, inference, and the social sciences". In particular, he will pick apart press releases to query how accurate their assumptions are.

Eric's challenge to us is: "The Press has played pretty close to the researchers' press releases, especially when those press releases make claims that aren't really substantiated by the underlying studies. There are a few simple(ish) things to watch out for and a few easy questions that reporters can ask that can really help them to sort out whether the press releases match what the researchers have found. I know that journalists work under pretty tight time pressure and that it's pretty unlikely that they'd be reading the original source papers. However, we've got to be able to do better than quotes from the press releases followed up by reaction quotes from lobby groups in the area."

This is a great opportunity for us to learn skills that will provide better, more accurate stories for our readers.
I'm giving a three-hour session in two sections. Here's my draft outline. Do you have any better suggestions for examples I should cover? I don't think I can cover more material - I have about an hour and a quarter for each half, with a break in the middle. I'd far sooner take a sledgehammer to two big problems than throw pebbles at a bunch of smaller ones.

Causation and inference
  • The basics of correlation
  • Ways that correlation isn't causation:
    • Reverse causation
    • Spurious correlation: an underlying omitted causal variable
  • How can you tell if the relationship is causal?
    • Methods that get closer to showing causality (don't worry - intuitive explanations only, no real statistics - just enough to know what the terms mean if somebody uses them):
      • Panel fixed-effects studies
      • Instrumental variable studies
      • Regression discontinuity design
      • Natural experiment
    • If the press release doesn't say anything about how causality was determined, ask the researcher before going out for the reaction quotes! Look at how Reuters did it with Sally Casswell's latest piece.
    • If all there is is correlation, think of all the ways that correlation isn't causation. Is there something big and obvious lurking around that could be driving the results?
      • Pokies and crime rates: could results here be due to pokie machines being situated in bars? Other neighbourhood characteristics? Let's look at the press release and find the clues that they really haven't established causality.
      • Marriage and mental health: if married people are less likely to show onset of new mental illnesses, is that because marriage prevents mental illness? Or, is it because folks who have had a prior mental illness are more likely to develop new ones and are less likely to get married?
      • Does watching TV really kill you? Or, does having a more sedentary lifestyle correlate both with TV watching and increased mortality? (A simulated version of this one appears in the sketch just after this list.)
      • Substance abuse and crime: what does "alcohol-related" or "drug-related" mean in the police stats? About as much as "Oxygen-related": the offender had consumed the substance prior to the offence.
  • Workshop time: I propose some correlations: you tell me whether they seem plausible or if something else could be driving things.
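A quick illustrative sketch of the TV example (every number below is invented for demonstration; "sedentary" just stands in for whatever lurking lifestyle variable you like): a simulated confounder drives both TV hours and mortality risk, so the two correlate strongly even though neither causes the other - and the correlation vanishes once the confounder is controlled for.

# Illustrative simulation only: all numbers are made up.
# A lurking variable (sedentary lifestyle) drives both TV hours and
# mortality risk; TV itself has no effect on mortality at all.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

sedentary = rng.normal(0, 1, n)                        # the omitted variable
tv_hours = 2.0 + 1.5 * sedentary + rng.normal(0, 1, n)
mortality = 0.1 + 0.5 * sedentary + rng.normal(0, 1, n)

def control_for(y, x):
    # OLS residual of y after regressing out x
    slope, intercept = np.polyfit(x, y, 1)
    return y - (slope * x + intercept)

print(np.corrcoef(tv_hours, mortality)[0, 1])          # ~0.37: "TV kills!"
print(np.corrcoef(control_for(tv_hours, sedentary),
                  control_for(mortality, sedentary))[0, 1])  # ~0.00

The caveat, as ever: controlling only helps when the right lurking variable has actually been measured, and controlling for confounds is not, on its own, sufficient to establish causality.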

Social costs and social benefits
Folks wanting to have a pet project subsidized, or a pet hate regulated or taxed, will commission social benefit and social cost studies to help push their case. They're usually terrible. How?
  • Obvious dodginess: ridiculously crazy assumptions. Example: PricewaterhouseCoopers on Adult and Continuing Education. Is it really plausible that anyone taking a night course in Indian cooking enjoys a fifty percent reduction in his likelihood of committing any crime?
  • Social versus private costs and benefits: why the difference matters
  • Are the stated benefits gross or net? Do they confuse stocks and flows? Do they account for the deadweight costs of the taxes needed to fund the project? Do they rely on and abuse multiplier effects?
  • Also be careful that the study counts everything it ought to count. If a study claims health benefits of $(big number) from banning, say, tricycles (kids have accidents on them), has it weighed the cost to kids of not being able to ride around on tricycles? In other words, does it tally only the benefits on one side while ignoring the costs of implementing the policy - not just the costs to the government, but also the costs falling on those subject to the regulation? This is especially prevalent in public health research that counts as zero all the fun people have doing somewhat risky things.
  • How to guard against being abused:
    • Ask about the biggest contributors to the costs and benefits. Do they seem plausible? Are they benefits that mostly accrue to the public at large, or to the person undertaking the activity?
    • Have a sense of scale: what does a $4.8 billion cost or a $6.3 billion benefit really mean in per capita terms? Does it seem plausible? Remember that there are only about 4 million people in the country, so any number around $4 billion means about $1000 for every man, woman and child (see the sketch after this list). If the benefits are that high, how is it that we're not already doing it? Your first reaction on seeing a really huge number shouldn't be "Wow, this is a really big problem!" It should be "That number smells bad."
    • Have there been prior estimates of the same thing? Are the current numbers very different? Why? Example: tobacco.
    • Treasury's handbook on cost-benefit analysis is an excellent resource. Their section on "Tips and Traps" ought to be essential reading for journalists writing stories based on cost-benefit analyses.
  • In a perfect world, we'd have only disinterested researchers. In the current world, we have commissioned consultants selling products, academics whose universities' PR offices are trying to sell stories to papers to get the university's name out, and bureaucracies that are trying to build public support for their preferred policies. Guard against all of them!
  • Workshop time: I'll give press releases for example cost or benefit studies, you tell me what's wrong with them.
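A back-of-the-envelope version of the scale check above, as a sketch (the population constant and the two headline figures are just the illustrative numbers from the bullet, not from any particular study):

# Sanity check: what does a headline dollar figure mean per person?
NZ_POPULATION = 4_000_000  # roughly four million people

def per_capita(headline_dollars):
    return headline_dollars / NZ_POPULATION

for label, figure in [("claimed cost", 4.8e9), ("claimed benefit", 6.3e9)]:
    print(f"{label}: ${figure:,.0f} is ${per_capita(figure):,.0f} per person")
# claimed cost: $4,800,000,000 is $1,200 per person
# claimed benefit: $6,300,000,000 is $1,575 per person

If a claimed benefit works out to north of $1,500 for every man, woman and child, the obvious follow-up applies: why aren't we already doing it?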
Anything I'm obviously missing?

10 comments:

  1. You should turn this into an Internet-based course and offer it as adult education. It would have a 300% private and 400% social return for the target market, according to my sample of 1.

  2. Excellent! This sounds like the book I have long thought someone should have written ages ago. Any plans? This would also be of high value to students. The methods texts I have read all focus too much on the details while neglecting the big picture. They don't teach how to think about a research problem so as to arrive at a useful study design.

    I know you've said you can't cover more; here are a few ideas anyway (perhaps for future installments):

    - Plausibility is not to be underestimated as a criterion for establishing (or disestablishing) causality. (It seems you're only covering this in the cost-benefit section.) Possible materials: The parachute parody (BMJ?) and xkcd's famous "Correlation" comic.

    - Before getting into IVs etc., I would start with the ideal of the randomized lab experiment, explain why it is the ideal, and cover the other methods as departures from it.

    - The main method for trying to get closer to causality is still simply controlling for other variables. Should that be left entirely uncovered? (Extension: the dangers of overcontrol.)

    But those are minor quibbles. Making this kind of thing obligatory for everyone who wants to write about studies in a paper would be a regulation I could endorse (externalities!).

  3. @Dave: I really hope that it goes well enough that there's demand from the other papers. I don't mind a bit of one-day travel when the expected social returns are high (and I get a lot of utils from stopping stupid).

    @Lemmus: One of my undergrad profs had a chart up on his door showing the number of tattoos on a person as a function of the number of missing teeth, purporting to show that getting tattoos causes you to lose teeth. That example wouldn't work so well in NZ - the relationship doesn't hold. But still funny. Thanks for the xkcd reminder. You're right on the other stuff too. But the problem is that NZ journos seem to think that controlling for confounds is sufficient to establish causality.

    You think there'd be sufficient demand for that kind of book? I suppose it would be an update / upgrade of the Joel Best stuff, targeted more specifically at social sciences.

  4. Eric, what would it take to get your courses filmed like the Econ 223 course?

  5. @Anon: The university determining that the IP in such recordings belongs to the lecturer, not to the university. I always have the nagging fear that they'll record the lectures and then fire the lecturer. We're on deck to fire half a dozen people in the College of Business and Economics this year due to budget cuts.

    I also worry that if lectures are recorded, class attendance would drop.

  6. Good on you, Eric. There seems to be a lack of what I would call decent investigative journalism these days; the media too often either can't or won't question those purporting to be in authority. Whether this is from a lack of real understanding of correlation vs causation I'm not sure; it may be that today's crop of journos is a bit lazy, or simply not asking the hard questions for fear of being refused further contact.

    Whatever the cause, it is high time the mainstream media took to task those responsible for policy and decision making.

  7. Eric,

    I'd never heard of Joel Best. I've just read through the PR texts for two of his books, and although they seem interesting, they seem to be mainly variations on the insight that correlation ain't causation. One might instead consider doing something more like a dumbed-down, more fun version of Mostly Harmless Econometrics. Of course, it would depend on the core audience one has in mind. Ah, lots of factors to consider, and I'm no expert on the market for books. There's one thing I know about it, though: if you write a book in English and manage to sell more than 10,000 copies, it gets translated into German. It's a law.

  8. The Angrist book is nice, isn't it? But a book for policy laypersons - journos and wonks - might not be a crazy idea.

  9. What I really want to know is, if night classes in Indian cooking are so effective at reducing the likelihood of an individual committing a crime, then why are we bothering with boot camp?

  10. The difference between statistical significance and practical significance.
    Is there a meaningful mechanism to explain the correlation between the variables?
    How much variation does the model explain? If you still have 80% unexplained, do you really care about the model?
    Be careful about insisting too much on randomized controlled trials: http://wmbriggs.com/blog/?p=2741
