Measurement at the agency level, part 2: Mercy Corps case study

Last week I discussed why NGOs might want to measure results at the agency level, rather than just at the program/project level, in order to improve management decisions or better make the case for the agency’s impact. This is harder than simply aggregating program-level metrics. An effort to measure at the agency level will face challenges due to the variety of an NGO’s programming, the range of contexts it operates in, and the inherent difficulty of managing an organization-wide measurement process.

I wanted to bring in a case study of how one NGO approaches measurement at the agency level, in the hope that their experience would provide lessons for other efforts. This post will describe how Mercy Corps tackles this issue. Due to my previous work with the agency, I happened to know a bit about how their system works. I contacted Barbara Willett (Mercy Corps’ Senior Technical Advisor for Design, Monitoring & Evaluation) to learn more. She was kind enough to share several documents and to answer my many questions.

1. Mercy Corps’ Mission Metrics

When it comes to measuring agency-wide results, Mercy Corps faces many of the challenges outlined in my previous post: over 300 programs on issues ranging from agriculture to microfinance to youth, run by 42 country offices that have a fair amount of autonomy. A further complication arises from the agency’s variety of revenue streams — individual donors, private foundations, bilateral and multilateral aid agencies, and more — which each come with different reporting standards.

Yet in the face of this, Mercy Corps has developed and implemented a system called Mission Metrics to cover it all. This system provides the agency with a relatively small set of Mission Indicators through which to “funnel” program results. These indicators give internal decision-makers a clearer picture of the agency’s impact, allowing them to improve that impact through better strategy and management decisions.

Several elements of the program are worth highlighting.

1.1 Long development period

Mercy Corps has been developing Mission Metrics over the past four years. It started with a senior-level commitment in 2007. This was followed by a facilitated effort to define the general measures and terms, develop them into themes, implement field tests, and eventually roll out the system across the agency in 2011. The development period was marked by numerous online trainings, presentations, and meetings. This year will see the first full review of the effort.

This long process allowed all parts of the organization to help shape the final product. That involvement has been critical to creating a system that’s appropriate for Mercy Corps and has buy-in from those who make use of it.

1.2 Aligned with the mission

As the name suggests, the Mission Metrics are about measuring performance against the mission. This means a focus on programmatic impact. Operational issues like efficiency or finances were left off the original roll-out, though these could be incorporated later.

The 20 Mission Indicators fall under Mercy Corps’ three-pronged mission of promoting secure, productive, and just communities. For example, one of the indicators under “productive” is: “Number and percentage of households reporting greater prosperity.” Another reads: “Number of jobs created.” Under the mission element of promoting “just” communities, one of the indicators is: “Mechanisms to manage conflict are established or strengthened.” Another reads: “Marginalized populations play a role in community decision-making.”

If those seem vague, then that’s a perfect segue to…

1.3 Mission Indicators serve as baskets

The indicators don’t prescribe specific measures. Rather, they serve as categories. As part of the development process, Willett’s team created a 50-page document with definitions and sample indicators. Programs are meant to align key program indicators with the Mission Indicators. This allows flexibility for tailoring the system to local conditions and even capturing unintended consequences.

As a result of the Mission Metrics, it’s now possible to sift through results from hundreds of programs in a systematic way. Previously, tracking down the information on disparate programs would have been far too time-consuming. Now, an executive or manager can aggregate results at the Mission Indicator level, and conduct some broad analysis of how Mercy Corps’ work is balanced across different program areas.
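To make the “basket” idea a bit more concrete, here is a minimal sketch (in Python) of how program-level results tagged with a Mission Indicator could be rolled up for that kind of broad analysis. The program names, indicator labels, and figures are invented for illustration only; this is not Mercy Corps’ actual data or system.

```python
# Hypothetical program results, each aligned with a Mission Indicator "basket".
from collections import defaultdict

program_results = [
    {"program": "Agriculture - Country A", "mission_indicator": "Jobs created", "value": 120},
    {"program": "Microfinance - Country B", "mission_indicator": "Jobs created", "value": 85},
    {"program": "Youth - Country C", "mission_indicator": "Households reporting greater prosperity", "value": 430},
]

# Roll results up to the Mission Indicator level.
rollup = defaultdict(int)
for result in program_results:
    rollup[result["mission_indicator"]] += result["value"]

for indicator, total in rollup.items():
    print(f"{indicator}: {total}")
```

The point of the sketch is simply that once every program tags its key indicators against a common set of categories, the aggregation step becomes trivial; the hard work is in defining the categories and getting programs to align with them.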

1.4 Improving data quality and supporting M&E

Submissions to the system include not just data, but also descriptions of data collection methodologies. When the Mission Metrics information comes in, support staff can do a bit of a “data quality audit” to understand M&E practices across the organization. This has created an opportunity to highlight good M&E, identify training needs, or provide extra support where it’s needed. In turn, this has promoted some standardization of measurement across the agency.
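As a rough illustration of that idea, and assuming a hypothetical submission format rather than Mercy Corps’ actual schema, a submission that carries methodology metadata alongside the data could be given a basic completeness check like this:

```python
# A hypothetical submission record: the data point plus how it was collected.
submission = {
    "program": "Microfinance - Country B",
    "mission_indicator": "Jobs created",
    "value": 85,
    "collection_method": "household survey",
    "sample_size": 310,
    "collection_period": "2011-Q3",
}

# A trivial check of the sort a "data quality audit" might start from:
# flag submissions that arrive without methodology details.
required_fields = ["collection_method", "sample_size", "collection_period"]
missing = [field for field in required_fields if not submission.get(field)]

if missing:
    print(f"Flag for follow-up: missing {', '.join(missing)}")
else:
    print("Methodology metadata complete; ready for review.")
```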

2. Recommendations for others

In a presentation last fall, Willett offered several recommendations based on Mercy Corps’ experience. A few of them stood out to me. First, it’s important that such efforts aim for genuine learning and management – rather than PR. It’s also important to connect the system with real use as soon as possible. Finally, as mentioned above, the development of these systems takes time. All of these factors were important to ensuring that Mercy Corps’ internal stakeholders could see the system’s value.

I find these recommendations interesting because they focus more on the process of developing Mission Metrics than on what the final system looks like. This is the key lesson for other agencies: develop something appropriate for your agency and work. Mercy Corps’ system probably wouldn’t work as well if you just dropped it into a different agency. But the process and principles behind its development can be applied more broadly.

Mercy Corps has also seen value in having those conversations about what the mission means to the agency. The process led them to define and understand key terms that had been left ambiguous. Everyone involved came away with a clearer sense of what the agency’s work is about, as well as what role they each play in it.

3. A final cautionary note

There’s one aspect of agency-level measurement that might make you a bit uncomfortable. If you work at the ground level, you know that most information about your program can’t be captured by agency-level metrics. Heck, a lot of impact can’t even be captured by program-level metrics. Stripping away nuance is inherent to this exercise.

So the cautionary note is this: even as we strip nuance away, we need to make a mental note that the aggregate numbers don’t tell the full story. There will always be more to the agency’s impacts than the numbers reveal.

____________________________

Many thanks to Barbara Willett of Mercy Corps for her help with this post. Full disclosure: I have previously been employed by Mercy Corps, but currently have no formal relationship with the agency.
