A recent set of essays in the Boston Review looks at the role of behavioral economics in development. The collection gives a good introduction to the impact of randomized controlled trials (RCTs) on development thinking in recent years.
The opening essay comes from Rachel Glennerster and Michael Kremer, followed by a series of responses and criticisms from others. For example, Eran Bendavid calls attention to how development RCTs have failed to fully learn from the history of medical trials. In another critique, Chloe O’Gara argues that RCTs are more practical for health interventions than for education, especially if you’re concerned with educational quality in addition to access.
Two of the response essays touch on political factors. Jishnu Das, Shantayanan Devarajan and Jeffrey S. Hammer hint at political complications by noting that RCTs can tell us what impact a particular program might have, but that they give little guidance on whether a government (or donor) should or would run that program. To answer the "should" question, you need to know more about the (opportunity) costs of running the program and the extent of the benefits in the particular context at hand. To answer the "would" question, you get into political economy and the incentives facing powerful actors in the system.
Daniel Posner points out the importance of the level of analysis. Most of the landmark RCTs analyze at the level of the individual or the household. However, many important development impacts occur at the level of the community. Furthermore, we may find that the individual/household-level outcomes don’t make sense apart from the community-level effects. Posner ties this into perhaps the most common criticism of RCTs, namely the importance of context and the difficulty of generalizing results (aka external validity). Individual behavior is shaped by peers and communities. Posner gives a few examples, such as the way that peer pressure increases the use of chlorine in drinking water, and the way that someone might decide whether to attend a community meeting or join some collective activity based on an assessment of whether others would do so.
Chris Blattman responds on his own blog, commenting specifically on Posner's essay.
I agree with Blattman’s comments. Context and complexity muddle RCT results on political issues. But I would go even further: context and complexity muddle all RCT results. Why? The short version: because all development is political.
The long version of the explanation will come in two follow-up posts. One will address what context and politics mean for measuring an intervention’s impacts; the second will address what they mean for executing an intervention.
It’s worth noting that Glennerster and Kremer’s rebuttal cites several RCTs under the heading “Politics” — but these focus more on issues of government administration, such as teacher incentives, nurse absenteeism, or user fees for public services. The closest they come to issues I would call political is tackling corruption (I assume this refers to community monitoring programs, public release of audit results, and the like) and community control of schools (not sure what this one was).