This is the third and (probably) final post in a series on measurement at the agency level. Part 1 discussed the basic idea and the challenges involved. Part 2 looked at how Mercy Corps has dealt with these issues.
This installment covers a very different organization: One Acre Fund. While Mercy Corps manages a wide range of programs in over 40 countries on every continent, One Acre Fund works only in agricultural development and operates in just a few countries. As a result, its programs are far more homogeneous and focused on a particular set of results. They’re also relatively young, so they’re growing with a good metrics system already in place.
I’ll start with an overview of their model, then move on to talk metrics.
1. One Acre Fund in brief
One Acre Fund supports rural subsistence farmers with a “market bundle” of services. The bundle starts with bringing farmers together to form producer groups. One Acre Fund field officers then educate the groups on better planting techniques, and deliver improved seed varieties and fertilizer. The field officers also help the groups with post-harvest crop handling, storage and market access, and provide crop insurance.
None of this is a gift. Farmers repay the program fees out of their increased incomes. This has allowed One Acre Fund to recover 80% of its field costs through program fees. Further fine-tuning of the model will bring it closer to full sustainability. Their newsletter points out that no piece of the program is new or innovative. But the combination of them into a sustainable business model could be revolutionary.
The successes so far have allowed the organization to expand rapidly. It has established operations in Kenya, Rwanda and Burundi, while trials are underway in Ghana, Tanzania and Cambodia. From the current reach of about 75,000 farm families, One Acre Fund plans to expand to reach a million by 2020.
2. Talking metrics
One Acre Fund describes itself as “data-driven.” Judging from the website and casual conversations I’ve had with staff, I’d say that’s true.
What’s really interesting — from the perspective of measurement at the agency level — is their program dashboard. It lists seven performance metrics under three categories: scale, impact and sustainability. The dashboard tracks progress on each metric over the past three years, with targets shown for upcoming years. The graphic below is pulled straight from their site.
Their semi-annual performance reports include a few more metrics under each category. Some of these metrics can be drawn from simple program records. For example, it’s pretty easy to know how many districts you’re working in and how many employees you have.
The impact metric, on the other hand, requires a bit more effort to nail down. The organization’s M&E staff compares the harvests and incomes of farmers already in the program (treatment) with those who have just joined but have not yet completed a planting season with the “market bundle” (comparison). Because both groups are measured in the same year, outside effects (like weather or changing crop prices) hit both and largely cancel out. The similarity between the two groups also helps to reduce selection bias.
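To make the logic of that comparison concrete, here is a minimal sketch in Python. The numbers are entirely made up for illustration (One Acre Fund’s actual survey data and estimation method are not public in this post); the point is only the structure of the estimate: a difference in means between two cohorts measured in the same season.

```python
# Illustrative sketch of a same-season cohort comparison.
# All yield figures below are HYPOTHETICAL, not One Acre Fund data.
from statistics import mean

# Maize harvest (kg per acre) for surveyed farm families:
# "treatment" completed a season with the market bundle;
# "comparison" just enrolled and has not yet planted with it.
treatment_yields = [820, 760, 905, 680, 840, 790]
comparison_yields = [610, 655, 590, 700, 580, 640]

def impact_estimate(treatment, comparison):
    """Difference in mean yields between the two cohorts.

    Because both cohorts are measured in the same season,
    season-wide shocks (weather, crop prices) affect both
    and largely cancel out of the difference.
    """
    return mean(treatment) - mean(comparison)

def percent_gain(treatment, comparison):
    """Impact expressed relative to the comparison group's mean."""
    return 100 * impact_estimate(treatment, comparison) / mean(comparison)

print(f"Estimated gain: {impact_estimate(treatment_yields, comparison_yields):.0f} kg/acre")
print(f"Relative gain: {percent_gain(treatment_yields, comparison_yields):.1f}%")
```

The remaining weakness, as the post notes, is selection: the two groups differ in when they chose to join, so the design leans on their similarity rather than on randomization.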
3. Management, strategy … and PR?
My earlier post distinguished two uses of aggregate metrics: you can design your system to improve strategy and management decisions, or to provide material for public relations and fundraising. Which use you pursue shapes how you build the system, and I presented the distinction as if you had to choose one or the other.
However, One Acre Fund has blurred that line with a deft move: they make the former a selling point for the latter. They use their metrics to fine-tune the model and manage staff performance, and they advertise that fact in their promotional materials. This won’t open the wallet of a casual donor who responds mainly to gut-wrenching photos of starving children, but it has attracted a lot of attention in the growing “impact-focused” donor community.
4. Could others do it?
If we’re lucky, that “impact-focused” donor community will continue to grow, and more organizations will follow in One Acre Fund’s footsteps by increasing the rigor and transparency of their metrics.
However, as always, a word of caution is due. Not everything fits into neat quantifiable boxes. Certain issues resist metrics and so also resist “proof.” Think about community empowerment, policy change, peacebuilding, and human rights protection — to name a few. Just because one intervention isn’t proven doesn’t make it less deserving than another one that is. Don’t confuse a lack of proof of impact with a lack of impact.
Of course, within a given class of interventions that can all be assessed in the same way, by all means fund the one that’s proven.