Pritchett, feedback loops, and the accountability conundrum

Last week I attended a seminar given by Lant Pritchett at the Center for Global Development where he discussed his new working paper (co-authored with Salimah Samji and Jeffrey Hammer), “It’s all about MeE: Using Structured Experiential Learning (‘e’) to Crawl the Design Space.” Thoughts and ideas have been bouncing around in my mind since then, so thanks to Dave for giving me some space on his blog so I can get them out of my head and move on with my life.

The first three-quarters of the paper or so could be considered a detailed problem statement. Pritchett and his co-authors note that even development projects that seem simple have numerous design parameters. Using the example of conditional cash transfers (CCT), they identify 11 different parameters, including who is eligible, the size of the transfer, the frequency of the transfer, etc. And they rightly note that these parameters can interact with each other, sometimes in non-linear ways.

They argue this is a problem for rigorous impact evaluation (RIE) because an evaluation can only tell you whether a specific combination of design parameters is effective, but doesn’t necessarily tell you anything about other programs designed using a different combination of parameters. Since RIEs are expensive and slow, they can only be used to test a very small number of the possible combinations.
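A quick back-of-the-envelope calculation shows why. The sketch below is my own illustration, not something from the paper: it assumes, purely hypothetically, that each of the 11 CCT parameters has only two or three alternatives, and simply counts the distinct designs that result.

```python
from math import prod

# Purely hypothetical: the paper identifies 11 CCT design parameters but does
# not enumerate their alternatives. Suppose each parameter had only two or
# three options.
alternatives_per_parameter = [2, 3, 2, 2, 3, 2, 2, 3, 2, 2, 3]  # 11 parameters

print("Distinct program designs:", prod(alternatives_per_parameter))  # 10368
print("Even with just 2 options each:", 2 ** 11)                      # 2048
```

Even under these modest assumptions there are thousands of distinct designs, which is why a handful of RIEs can only ever illuminate a sliver of the space.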

To address this problem, Pritchett lays out a process called structured experiential learning (“e”) that is designed to provide rapid feedback to a program team on what combinations of design parameters are working and not working. This would allow the team to “crawl the program design space” in order to find the most effective implementation strategies. (Caveat: There’s a lot more to the paper, and I am simplifying a great deal.)

So far, so good. I am a big believer in creating more rapid feedback loops to inform program implementation. I have told my own colleagues on many occasions that what we need on a given project is a lot more “M” and not to worry so much about the “E”. Other terms, like “evaluative thinking” and the older “reflective practice,” get at similar ideas.

The paper rightly notes that this sets up a tension between the desire to allow programs to learn and adapt on the one hand, and the need for accountability on the other. One person’s adaptation is another person’s flailing about doing whatever they want. Pritchett tries to square this circle through a seven-step process that describes what “e” might look like in practice. In Step 4 of this process, the implementer would identify key design parameters ex ante. He uses the example of a teacher training program with three key design parameters: 1) the location of the training; 2) the content of the training; and 3) the frequency of the follow-up. In the example, each of these parameters has two alternatives. As the project is implemented, monitoring information would be analyzed to decide which alternative is superior for each parameter, and the program would shift implementation toward that alternative.
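To make Step 4 concrete, here is a minimal sketch of what this bookkeeping might look like. The three parameters and their two alternatives come from the teacher-training example above; the alternative labels, the monitoring scores, and the simple "average the scores for each alternative and pick the higher one" rule are my own hypothetical illustration, not the paper's method.

```python
from itertools import product
from collections import defaultdict

# The three ex ante design parameters from the teacher-training example;
# the specific alternative labels are hypothetical.
design_parameters = {
    "training_location": ["central", "school_based"],
    "training_content": ["subject_focused", "pedagogy_focused"],
    "followup_frequency": ["monthly", "quarterly"],
}

# The full design space: 2 x 2 x 2 = 8 possible implementations.
design_space = [dict(zip(design_parameters, combo))
                for combo in product(*design_parameters.values())]
print(len(design_space), "possible designs")

# Hypothetical monitoring data: an outcome score for each variant actually
# tried in the field. In practice these would come from the program's own
# monitoring system.
monitoring_scores = {
    ("central", "subject_focused", "monthly"): 0.52,
    ("school_based", "subject_focused", "monthly"): 0.61,
    ("central", "pedagogy_focused", "quarterly"): 0.47,
    ("school_based", "pedagogy_focused", "quarterly"): 0.58,
}

def preferred_alternatives(scores):
    """For each parameter, average the scores of the variants that used each
    alternative and return the higher-scoring alternative."""
    totals = defaultdict(lambda: [0.0, 0])
    for combo, score in scores.items():
        for param, alt in zip(design_parameters, combo):
            totals[(param, alt)][0] += score
            totals[(param, alt)][1] += 1
    best = {}
    for param, alts in design_parameters.items():
        averages = {alt: totals[(param, alt)][0] / totals[(param, alt)][1]
                    for alt in alts if totals[(param, alt)][1] > 0}
        best[param] = max(averages, key=averages.get)
    return best

print(preferred_alternatives(monitoring_scores))
# -> {'training_location': 'school_based',
#     'training_content': 'subject_focused',
#     'followup_frequency': 'monthly'}
```

The point of the sketch is only that everything hinges on the parameters named up front: anything that was not listed, like the contextual shock described below, simply has no column in this table.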

This is where I get troubled. My concern is that Pritchett is shying away from the implications of his own argument. As noted, in the CCT example he identified 11 different parameters that can interact with each other in non-linear ways. It is hard to imagine how an ex ante specification of key design choices could do justice to this complexity, even for a relatively simple program. More importantly, the whole approach is contingent on the key design choices being knowable ex ante. The very metaphor of “crawling” a design space implies exploration: we may only learn which design parameters matter as the program is being implemented. Moreover, the design space itself is not static. As a concrete example, a grantmaking program I designed for Sudan had to be radically re-designed as a result of new violence in South Kordofan. The important point here is not that this violence was impossible to predict; many did predict it. Rather, the issue is that the new violence was not a design parameter. It was a complex reordering of the context in which the project was being implemented.

At the seminar I raised this issue, and Pritchett responded that the recommendation to identify key design parameters ex ante should be seen as a practical compromise: a way to allow implementers some exploration while giving donors enough assurance that they are still holding implementers accountable. Fair enough. Progress requires compromise. But a nagging feeling remains that if this strategy were actually adopted, it would just set up another artificial exercise, one that fails to acknowledge the real complexity of project contexts and remains out of sync with the way programs are actually implemented.

I have zeroed in on this issue of adaptation versus accountability because it is one I wrestle with every day. I work in peacebuilding where, as my Sudan example illustrates, adaptation is necessary. But funders of peacebuilding programs still require accountability. When I work with my colleagues on program design, I do push them to develop a clear story regarding how they think the project will work. This sometimes takes the form of a logframe, but the format is less important than the fact that a clear logic has been worked out. I also tell my colleagues that this project logic should not be seen as a prediction of the future, but instead as a distillation of all their current knowledge and assumptions (this is not an original idea, but I have forgotten where I first heard it). As their knowledge changes and as their assumptions are disrupted, their project logic and their project design should change as well.

But the issue remains: how can accountability be maintained in these circumstances? My proposal is that we give up on ex ante efforts to create structures for future accountability, such as logframes that organizations are held to at the end of a program, and shift to providing evidence-based, ex post accounts of program implementation. That we shift from “we did what we promised” to “what we did makes sense, and here’s why.” Interestingly, this brings us closer to the original meaning of accountability: the requirement that one provide an account of one’s actions.

The question arises here: what remains constant? For a project to be a project, to have an identity, there must be some constant that remains as the project adapts and evolves. Pritchett’s paper is a follow-up to his earlier work on Problem Driven Iterative Adaptation (PDIA), and I would argue that it is the problem that remains constant as the project evolves. Reducing corruption in the police force in South Sudan, for instance, is a well-defined problem. That problem can stay constant as a project evolves over time in an effort to learn the best ways to address it.

Under the strategy I am proposing, the project team would then provide an evidence-based account of how the anti-corruption project evolved, including why that evolution produced better results for reducing corruption than alternative approaches would have. Requirements for rigor under this strategy would be the same as under existing approaches; it is just that the evidence collected would be deployed in different ways.

I am just beginning to think through the implications of such a strategy, and I’m sure there are many, many issues that I have not considered. I am very interested in what others think: how feasible such a strategy might be, and whether there are examples of it being implemented already, either formally or through more informal interactions between donors and implementers. In other words, is this already happening, but just can’t be openly acknowledged?

Andy Blum is the Director of Learning and Evaluation at the US Institute of Peace. He can be reached at ablum@usip.org or @alb202.