Pritchett, feedback loops, and the accountability conundrum

Last week I attended a seminar given by Lant Pritchett at the Center for Global Development where he discussed his new working paper (co-authored with Salimah Samji and Jeffrey Hammer), “It’s all about MeE: Using Structured Experiential Learning (‘e’) to Crawl the Design Space.” Thoughts and ideas have been bouncing around in my mind since, so thanks to Dave for giving me some space on his blog so I can get them out of my head and move on with my life.

The first three-quarters or so of the paper could be considered a detailed problem statement. Pritchett and his co-authors note that even development projects that seem simple have numerous design parameters. Using the example of conditional cash transfers (CCT), they identify 11 different parameters, including who is eligible, the size of the transfer, the frequency of the transfer, etc. And they rightly note that these different parameters can interact with each other, sometimes in non-linear ways.

They argue this is a problem for rigorous impact evaluation (RIE) because an evaluation can only tell you whether a specific combination of design parameters is effective, but doesn’t necessarily tell you anything about other programs designed using a different combination of parameters. Since RIEs are expensive and slow, they can only be used to test a very small number of the possible combinations.
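
To get a feel for the scale of the problem, here is a minimal back-of-the-envelope sketch. The parameter names and option counts below are my own illustrative assumptions, not the paper’s actual list of 11, but the arithmetic makes the point: the design space explodes well before you reach 11 parameters.

```python
from math import prod

# Hypothetical CCT design parameters and the number of alternatives for each.
# These names and counts are illustrative assumptions, not Pritchett's list.
design_space = {
    "eligibility_rule": 3,     # e.g. poverty score, geographic, categorical
    "transfer_size": 4,
    "transfer_frequency": 3,   # monthly, quarterly, lump sum
    "condition_monitored": 3,  # school attendance, clinic visits, both
    "payment_channel": 2,      # cash vs. mobile money
}

combinations = prod(design_space.values())
print(f"{combinations} distinct program designs")  # 3 * 4 * 3 * 3 * 2 = 216

# Even five parameters yield hundreds of candidate designs; a rigorous
# impact evaluation typically tests only one or two of them.
```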

To address this problem, Pritchett lays out a process called structured experiential learning (“e”) that is designed to provide rapid feedback to a program team on which combinations of design parameters are working and which are not. This would allow the team to “crawl the program design space” in order to find the most effective implementation strategies. (Caveat: There’s a lot more to the paper, and I am simplifying a great deal.)

So far so good. I am a big believer in creating more rapid feedback loops to inform program implementation. I have told my own colleagues on many occasions that what we need on a given project is a lot more “M” and not to worry so much about the “E”. Other terms like “evaluative thinking” and the older term “reflective practice” get at similar ideas.

The paper rightly notes that this sets up a tension between the desire to allow programs to learn and adapt on the one hand, and the need for accountability on the other. One person’s adaptation is another person’s flailing about doing whatever they want. Pritchett tries to square this circle through a seven-step process that describes what “e” might look like in practice. In Step 4 of this process, the implementer would identify key design parameters ex ante. He uses the example of a teacher training program with three key design parameters: 1) location of the training; 2) content of the training; 3) frequency of the follow-up. In the example, each of these parameters has two alternatives. As the project is implemented, monitoring information would be analyzed to decide which alternative is superior for each of these parameters, and the program would shift implementation toward that alternative.
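
To make Step 4 concrete, here is a toy sketch of how that monitor-and-shift loop might look in code. This is my own reading of the step, not Pritchett’s specification; the outcome metric, exploration rate, and update rule are all illustrative assumptions.

```python
import random
from collections import defaultdict

# The teacher-training example: three ex ante design parameters,
# each with two alternatives.
PARAMETERS = {
    "location": ["central", "school-based"],
    "content": ["pedagogy", "subject-matter"],
    "follow_up": ["monthly", "quarterly"],
}

# Running outcome totals per (parameter, alternative), fed by monitoring data.
totals = defaultdict(float)
counts = defaultdict(int)

def record_round(design: dict, outcome: float) -> None:
    """Log one round's monitored outcome against each design choice used."""
    for param, choice in design.items():
        totals[(param, choice)] += outcome
        counts[(param, choice)] += 1

def next_design(explore_rate: float = 0.2) -> dict:
    """Mostly pick the better-performing alternative for each parameter; sometimes explore."""
    design = {}
    for param, choices in PARAMETERS.items():
        untried = [c for c in choices if counts[(param, c)] == 0]
        if untried or random.random() < explore_rate:
            design[param] = random.choice(untried or choices)
        else:
            design[param] = max(choices, key=lambda c: totals[(param, c)] / counts[(param, c)])
    return design
```

Note that this sketch scores each parameter independently, which is precisely the assumption that non-linear interactions between parameters call into question.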

This is where I get troubled. My concern is that Pritchett is shying away from the implications of his own argument. As noted, in the CCT example he identified 11 different parameters that can interact with each other in non-linear ways. It is hard to imagine how an ex ante specification of key design choices could do justice to this complexity (even for a relatively simple program). More importantly, the whole approach is contingent on key design choices being knowable ex ante. The very metaphor of “crawling” a design space implies exploration. We may only learn which design parameters matter as the program is being implemented. Moreover, the design space itself is not static. As a concrete example: a grantmaking program I designed for Sudan had to be radically re-designed as a result of new violence in South Kordofan. The important point here is not that this violence was impossible to predict; many did predict it. Rather, the issue is that the new violence was not a design parameter. It was a complex reordering of the context in which the project was being implemented.

At the seminar I raised this issue, and Pritchett responded that the recommendation to identify key design parameters ex ante should be seen as a practical compromise, a way for donors to allow some exploration while retaining enough assurance that they are still holding implementers accountable. Fair enough. Progress requires compromise. But a nagging feeling remains that if this strategy were actually adopted, it would just set up another artificial exercise that fails to acknowledge the real complexity of project contexts and that remains out of sync with the way programs are actually implemented.

I have zeroed in on this issue of adaptation versus accountability because it is one I wrestle with every day. I work in peacebuilding where, as my Sudan example illustrates, adaptation is necessary. But funders of peacebuilding programs still require accountability. When I work with my colleagues on program design, I do push them to develop a clear story regarding how they think the project will work. This sometimes takes the form of a logframe, but the format is less important than the fact that a clear logic has been worked out. I also tell my colleagues that this project logic should not be seen as a prediction of the future, but instead as a distillation of all their current knowledge and assumptions (this is not an original idea, but I have forgotten where I first heard it). As their knowledge changes and as their assumptions are disrupted, their project logic and their project design should change as well.

But the issue remains: how can accountability be maintained in these circumstances? My proposal is that we give up on ex ante efforts to create structures for future accountability, such as logframes that organizations are held accountable to at the end of a program, and shift to providing evidence-based, ex post accounts of program implementation. That we shift from “we did what we promised” to “what we did makes sense and here’s why.” Interestingly, this brings us closer to the original meaning of accountability – the requirement that one provide an account of one’s actions.

The question arises here: what remains constant? For a project to be a project, to have an identity, there must be some constant that remains as the project adapts and evolves. Pritchett’s paper is a follow-up to his earlier work on Problem Driven Iterative Adaptation (PDIA). I would argue that it is the problem that remains constant as the project evolves. Reducing corruption in the police force in South Sudan, for instance, is a well-defined problem. This problem can stay constant as a project evolves over time in an effort to learn the best ways to address corruption.

Under the strategy I am proposing, the project team would then provide an evidence-based account of how the anti-corruption project evolved, including why that evolution produced superior results for reducing corruption compared to alternative approaches. Requirements for rigor under this strategy would be the same as under existing strategies; it’s just that the evidence collected would be deployed in different ways.

I am just beginning to think through the implications of such a strategy and I’m sure there are many, many issues that I have not considered. I am very interested in what others think: both how feasible such a strategy might be and whether there are examples of such a strategy being implemented already, either formally or through more informal interactions between donors and implementers. In other words, is this already happening, but just can’t be openly acknowledged?

Andy Blum is the Director of Learning and Evaluation at the US Institute of Peace. He can be reached at ablum@usip.org or @alb202.

8 thoughts on “Pritchett, feedback loops, and the accountability conundrum”

  1. Interesting post Andy. I was also at the talk last week, which I had different concerns about. But first regarding your doubts, I don’t think your proposal is too far off from Lant’s argument. It seems that you don’t necessarily disagree with the exercise of developing a clear hypothesis and causal chain regarding how a project will reach its end goals, but that you’re opposed to funders rigidly holding implementing organizations to this framework. Instead, you argue that “as [the program staff's] knowledge changes and as their assumptions are disrupted, their project logic and their project design should change as well.” Isn’t this the whole point of Lant’s little “e”, to approach the project adaptation process in a structured way? Maybe I’m missing something…

    On the accountability issue, your argument is nearly identical to Lant’s (he just didn’t emphasize it all that much in his talk). See here for more (slides 34 and 39 in particular): http://www.hks.harvard.edu/var/ezp_site/storage/fckeditor/file/The%20Folk%20and%20the%20Formula_WIDER_Annual_Lecture_2012_Weds.pdf.

    What you call “we did what we promised”, Lant calls thin accountability (or “accounting”), and what you call “what we did makes sense and here’s why”, Lant calls thick accountability (or “the account”).

    Because the focus of the presentation (and the work broadly) is on public sector implementation, one of my main concerns is how public sector organizations can develop the capability to implement a well-functioning MeE system (or even a regular old M&E system), particularly because Lant and others advocate for MeE in precisely those places with low state capability.

    (Full disclosure: I’m one of Lant’s students).

    • Kartik,

      Thanks for your thoughts and the other links. You’re correct that I fully support “e”. Structured adaptation is what I think we all need to get better at. I took a look at the slides and I’m not sure what I’m calling for is the same as Lant’s “thick accountability”. It’s hard to say because it’s just one slide, but that sounds a bit more like a narrative that creates and sustains organizational identity. What I am suggesting is something more pedestrian: an evidence-based account that will make the donor happy. I would be interested in what else Lant has written on this type of ex post accountability process.

      -Andy

  2. I like your formulation and here are a few quick thoughts.

    Project design should anticipate adaptation in dynamic circumstances, and your proposal for accountability based on an accounting of that adaptation makes good sense. However, this should happen within a framework of ongoing dialogue with the project design, a reflection on what new understanding implies for the original logic. Without revisiting the underlying knowledge and assumptions upon which the intervention rests, there could be some tendency to get caught up in the daily and weekly flow of events. A step back to recall the deeper dynamics at play, a luxury perhaps, helps ensure that adaptation is not taking the intervention away from the original leverage points around which the project was formulated.

  3. Hi Andy, thanks for this thoughtful post. I would tend to agree with your formulation as the most effective way to maintain accountability usefully around interventions in complex space. However, I think the biggest challenge is to square that approach to accountability with the principle of competition that underlies most development donors’ solicitation processes. For a given responsible party, accounting ex post for how the program adapted is sensible. However, comparing multiple organizations’ proposals to undertake work necessarily requires looking at their forward-looking plans. An often unstated rationale behind the ex ante form of accountability within development is to allow donors to “objectively” evaluate an award holder, in anticipation of comparing them with peers when the next solicitation is out.

    This doesn’t mean that ex post accountability can’t be used, but it must be translated into the practices of evaluating proposals and comparing offerors’ past performance in ways that will be seen as fair and that do not set too high a barrier to entry for new organizations competing for funding, if it is to be grafted onto the current aid architecture.

    • David, thanks for the comment; funny that you raise this issue. I had it at the front of my mind as well when I was writing the post and was going to try to address it, but the post was too long already. I was a grants officer for three years, so I am very aware of the issues my ideas would raise with review processes. There’s a long answer here, but the short answer is that I think the donor would review proposals based on the organization’s initial plan as well as its demonstrated capacity to gather information and adapt its programming. This would change the nature of proposals, but I am not sure they would be less evaluatable. I have to think more about whether such a system would present higher barriers to entry for new organizations than current proposal systems do. It’s not obvious to me that it would, especially since for many donors (cough. . .cough. . .USAID) the current barriers are quite high.

      • I like to think that if the donor weighted corporate systems and past performance more than “the plan” for the project, it might more easily move in this direction. After all, the most convincing evidence of adaptation is having done so. Just having language on adaptiveness and responsiveness in the criteria for the technical proposal would help, though, even as a starting point. But I still think that the tension with competitive processes won’t be going away any time soon…

  4. When I read Lant’s paper some time ago, I liked its recognition of the complex design space that even relatively “simple” development interventions occupy (e.g., cash transfers), and the problem this poses for RCTs, which typically test only one or a very few of the design-space configurations that can exist (hence their frequent problems with limited external validity). However, I disliked the solution, specifically the ex ante nature of the design of the Structured Experiential Learning process. My reason is that it is often simply unnecessary: design variation typically arises free of charge in many large (multi-location) development programs, as a result of both accident and intention (especially where there is a degree of decentralisation). But usually it is politely ignored and smoothed over by stats that focus on averages and totals.

    The trick is to identify and capitalise on this design variation, and the best way of doing so, I suspect, is more investment in forms of monitoring (not evaluation) that seek to identify and analyse the diversity of interventions and outcomes. One way of doing this sort of analysis is through the use of relatively simple data mining tools such as Decision Trees (see http://mande.co.uk/blog/wp-content/uploads/2013/01/Decision-Trees-and-ToCs-Vs-20121227-NPW1-1.docx), which can recognise multiple causal configurations at work. Doing so is entirely consistent with your admirable reformulation of accountability (the requirement that one provide an account of one’s actions) in the form of an ex post description of reality.
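
    To illustrate the decision-tree idea above, here is a minimal sketch. It assumes a hypothetical monitoring dataset with made-up column names and a binary outcome flag, and it is a generic illustration rather than the specific approach described in the linked document.

    ```python
    import pandas as pd
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Hypothetical monitoring records: naturally occurring design variation
    # across sites, plus a binary outcome flag. All values are made up.
    monitoring = pd.DataFrame({
        "location": ["central", "school-based", "school-based",
                     "central", "school-based", "central"],
        "follow_up": ["monthly", "monthly", "quarterly",
                      "quarterly", "monthly", "monthly"],
        "good_outcome": [0, 1, 0, 0, 1, 1],
    })

    # One-hot encode the design choices and fit a shallow tree to surface
    # which configurations are associated with good outcomes.
    X = pd.get_dummies(monitoring[["location", "follow_up"]])
    y = monitoring["good_outcome"]
    tree = DecisionTreeClassifier(max_depth=3).fit(X, y)

    # Print the learned decision rules as text.
    print(export_text(tree, feature_names=list(X.columns)))
    ```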
