Several interesting reports related to peacebuilding evaluation have crossed my inbox (and RSS reader) in the past few weeks. The first two reports below resulted from a series of meetings held by the United States Institute of Peace and the Alliance for Peacebuilding; meeting participants came from a range of NGOs, government funders, private foundations, and other agencies. The last two reports deal more with methodological issues.
United States Institute of Peace
The report skips over evaluation methodologies, which have seen significant development over the past decade, and instead focuses on the systemic changes needed to ensure that the existing methodologies are used. The peacebuilding field suffers from several structural problems that undermine evaluation practices. Combating these problems will require systemic-level efforts, such as:
- strengthening norms that would create expectations for effective evaluation practices;
- building consensus around what constitutes effective peacebuilding practices and evaluation practices, including a small number of core peacebuilding outcomes;
- disrupting current practice and creating alternatives, for example, through a demonstration project that utilizes collaborative monitoring and evaluation mechanisms, or through the identification of failure stories.
Several of the ideas for addressing these systemic issues involved creating space for peacebuilding bloggers, whether as a way to help the field build consensus or as a way to share failure stories. Huzzah for blogging.
Alliance for Peacebuilding
This report resulted from the same series of meetings as the USIP report. As with the previous report, this one skips over the methodological issues and instead addresses the organizational- and individual-level forces that resist improved evaluation. These forces include myths, such as the belief that evaluations must be conducted by social scientists, and unrealistic expectations, such as the assumption that every project must succeed. The report debunks each of these myths in turn, and several examples illustrate how various organizations have overcome these challenges.
Norwegian Peacebuilding Resource Centre
This policy brief dives into the issue of measurement. It cites several obstacles to good measurement in peacebuilding, such as data quality problems, political interference in the selection of indicators, and insufficient adaptation of universal indicators to a specific context. The brief closes with a few recommendations: organizations should improve their M&E capacity; organizations should promote a monitoring culture that accounts for diversity, uncertainty and risk; and peacebuilding requires data collection and databases designed specifically for this field.
CDA Collaborative Learning Projects
RPP was a three-year research project that involved over two hundred peace organizations around the world in an effort to describe the principles of effective peace practice (see more here). RPP can be considered an early effort at building the consensus called for in the USIP report mentioned above.
This report describes how the RPP principles have surfaced in recent program evaluations. After reviewing 20 peacebuilding program evaluations, the study finds that RPP tools and concepts were often used as a standard against which to measure the program, or as a framework for guiding the assessment.
[UPDATE: See M&E News for more resources on peacebuilding evaluation.]
Postscript: Evaluation of peacebuilding, as compared to other types of interventions
Several of the above reports mention that the broader international aid/development sector has moved ahead of peacebuilding in terms of evaluation rigor. At the same time as they encourage learning from other sectors, the reports caution against blindly applying inappropriate methodologies. As noted in the USIP report, methodologies exist for peacebuilding evaluation, but various systemic factors constrain their use.
These comments reminded me of Ben Ramalingam’s recent proposal for a portfolio approach to measuring results in international aid/development work. His basic point was that different kinds of work require different approaches. He drew a 2-by-2 matrix to describe the relevant differences. See below.
On the horizontal axis, the intervention can range from simple to complex. On the vertical axis, the context can range from stable/identical to dynamic/diverse. Which quadrant you're in will guide the strategic approach and the method of measuring results. Notably, the "gold standard" RCT belongs in the upper-left quadrant of the matrix. This is a simple illustration of a point that I made at length in an earlier series of posts. In order to understand results across all four quadrants, you need a portfolio of approaches.
Peacebuilding sits squarely in the lower-right quadrant: it’s a complex intervention, and it’s extremely dependent on the dynamic and idiosyncratic (diverse) elements of a context.