A recent set of essays in the Boston Review looks at the role of behavioral economics in development. The collection gives a good introduction to the impact of randomized controlled trials (RCTs) on development thinking in recent years.

The lead essay comes from Rachel Glennerster and Michael Kremer, followed by a series of responses and criticisms from others. For example, Eran Bendavid calls attention to how development RCTs have failed to fully learn from the history of medical trials. In another critique, Chloe O’Gara argues that RCTs are more practical for health interventions than for education, especially if you’re concerned with educational quality in addition to access.

Two of the response essays touch on political factors. Jishnu Das, Shantayanan Devarajan, and Jeffrey S. Hammer hint at political complications by noting that RCTs can tell us what impact a particular program might have, but give little guidance on whether a government (or donor) should or would run that program. To answer the “should” question, you need to know more about the (opportunity) costs of running the program and the extent of its benefits in the particular context at hand. To answer the “would” question, you get into political economy and the incentives facing powerful actors in the system.

Daniel Posner points out the importance of the level of analysis. Most of the landmark RCTs analyze outcomes at the level of the individual or the household. However, many important development impacts occur at the community level. Furthermore, we may find that individual- and household-level outcomes don’t make sense apart from community-level effects. Posner ties this into perhaps the most common criticism of RCTs: the importance of context and the difficulty of generalizing results (aka external validity). Individual behavior is shaped by our peers and communities. Posner gives a few examples, such as the way peer pressure increases the use of chlorine in drinking water, or the way someone might decide whether to attend a community meeting or join a collective activity based on an assessment of whether others would do so. He writes:

If these assessments [of others’ participation] vary with the characteristics of the community—its density of networks, its periodicity of interactions, its ethnic homogeneity, its social and cultural norms—then we cannot hope to understand individual behavior apart from the characteristics of the community itself, and our research designs must take account of this. Randomization across individuals or households but not community-level characteristics—often held constant in experiments—leaves our findings vulnerable to the omission of consequential causal factors. What we need, therefore, are not just more randomized studies of all sorts, but replications of existing studies, carried out in settings selected for the variation they offer in community-level characteristics. This way we can target research to contextual factors that we hypothesize will shape the peer effects that, in turn, affect the individual behaviors we hope to alter.

Chris Blattman responds on his own blog, saying specifically of Posner that:

I think Dan could have gone farther: context and complexity may be everything in the realm of politics. I suspect a half dozen deworming experiments or a half dozen vocational training program evaluations will yield somewhat consistent and generalizable results, at least on the same continent (assuming anyone ever gets around to serious and consistent replication). But experiments with community driven development programs? Corruption control? Electoral reform? Even if done well (most experiments are not) I expect inconsistent and erratic results.

I agree with Blattman’s comments. Context and complexity muddle RCT results on political issues. But I would go even further: context and complexity muddle all RCT results. Why? The short version: because all development is political.

The long version of the explanation will come in two follow-up posts. The first will address what context and politics mean for measuring an intervention’s impacts. The second will address the implications of context for the execution of an intervention.

It’s worth noting that Glennerster and Kremer’s rebuttal cites several RCTs under the heading “Politics,” but these focus more on issues of government administration, such as teacher incentives, nurse absenteeism, or user fees for public services. The closest they come to issues I would call political include tackling corruption (I assume this refers to community monitoring programs, public release of audit results, and the like) and community control of schools (not sure what this one was).

  10. I’m really surprised to see RCTs adapted for this type of research. It doesn’t seem like it could possibly be worth the investment. It sounds more like an attempt to borrow credibility from a “gold standard” research method that is only valid in limited contexts, and this is not one of them. RCTs are overused in health research because research students are taught to privilege RCT results over other research designs. There are suitable contexts for doing so, but there is an awful lot of shoddy research published in the health field, and I would think you could find better methods by looking to other fields of research for model applied-research processes and theories of knowledge. Above all, RCTs are very expensive, and that is a huge limitation. When you short-change them you lose much of the benefit, and when you invest in a poor design there is a lot of pressure to overstate the success rate to justify the effort instead of admitting it was bad planning.

    Papers on multi-level modeling in the health field might highlight some conceptual challenges in population-level behavioral research for you, but multi-level modeling doesn’t tend to work out too well either. It’s an interesting approach to reframing the gaps in the existing evidence base, but probably not a promising solution to closing the gaps.

    For impressive work on applied behavior change intervention research, I would look into the paper “Tales of Shit” for goal-oriented process documentation of what is and isn’t working in rural sanitation.


    The Barefoot Guide to Working with Organizations and Social Change is also very good.


