Lies, damned lies, and ranking lists: The Top 100 Best NGOs

Sometimes I don’t feel like being clever or coy with the headline. Sometimes I just want to make the point up front. This is one of those times. Allow me to underline it:

Ranking lists are great publicity for both the rankers and the ranked — but they usually involve bad analysis and mislead the audience.

This intentionally inflammatory statement comes in response to the inaugural “Top 100 Best NGOs” list from the Global Journal. The list includes both relatively new players like Ushahidi (#10) and established juggernauts like Oxfam (#3). GJ’s editors took a broad definition of “NGO” – a wise move, in my opinion, given the blurred lines between NGOs, nonprofits, and social enterprises – but they restricted the list to organizations that are “operational or advocacy focused.” This led to some interesting choices. For example, I don’t think of TED (#71) as an NGO. The list excluded the Gates Foundation (because it focuses on grant-making rather than running programs), yet included the Open Society Foundations (#46).

But my disagreement with GJ is not over which organizations were eligible for inclusion, or even over the final results. Most of these NGOs are, to the best of my knowledge, quite good. My big disagreement is with GJ’s ranking methodology. And with the fact that they created this list at all. Let’s start with the methodology.

How did they decide the rankings? Good question!

I’m not really sure what the methodology was. They briefly describe their use of “qualitatively measured metrics” such as innovation, effectiveness, impact, efficiency, transparency/accountability, sustainability, strategic/financial management, and peer review. They emphasize that “there is no science in measuring” and rhetorically ask the following:

How does one – after all – compare the fundamental societal impact of an organization like the Wikimedia Foundation, with the tangible outputs of a well oiled humanitarian machine?

How indeed. I contacted the editors for more information. Alexis Kalagas was kind enough to describe their process. The data sources for the rankings included organizational websites, annual reports, external evaluations, and conversations with practitioners and donors. No word on who they talked to, how many people, how they were selected, or how the conversations were structured.

Kalagas also shared more detail on which of the “metrics” mattered most: innovation, impact, and effectiveness were given the greatest consideration. Furthermore, the editors limited their scope to the past five years. On GJ’s Facebook page, they replied to one comment to say that the ranking “did not take into account longer-term impacts.” Just mull over that one for a moment.

Ultimately, it sounds like the methodology was: we browsed the web, talked to a couple people, then sat around the conference table arguing among ourselves. Here’s the result.

Sorry, guys, but that just doesn’t cut it. That’s not a methodology.

Would a more “rigorous” and “quantitative” ranking of NGOs be better? (Hint: No.)

The obvious alternative to this process would be something more transparent and rooted in metrics. Sadly, many people still think that the overhead ratio is an appropriate way to judge NGOs. (It’s not.) You could, however, try a more balanced approach: multiple measures combined in a weighted formula.

This might look something like the U.S. News rankings of American colleges and universities. They use a weighted formula built on a long list of metrics, ranging from acceptance rates to academic reputation. It all creates the impression of being rigorous and data-driven. But there’s nothing scientific about the rankings. Schools argue furiously over whether the metrics are appropriate and whether the formula makes sense. Some schools have even chosen not to participate.
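For the sake of concreteness, here is a minimal sketch of what a weighted-formula ranking looks like under the hood. Every metric name, weight, and score below is invented for illustration – none of it comes from U.S. News or the Global Journal.

```python
# Toy sketch of a weighted-formula ranking. All metric names, weights, and
# scores are hypothetical -- they are not actual U.S. News or GJ inputs.

WEIGHTS = {
    "reputation": 0.40,   # e.g. a peer/practitioner survey score, 0-100
    "impact": 0.35,       # some normalized outcome measure, 0-100
    "efficiency": 0.25,   # e.g. cost per beneficiary, inverted to 0-100
}

organizations = {
    "Org A": {"reputation": 90, "impact": 70, "efficiency": 80},
    "Org B": {"reputation": 75, "impact": 95, "efficiency": 60},
    "Org C": {"reputation": 80, "impact": 80, "efficiency": 85},
}

def weighted_score(metrics: dict) -> float:
    """Collapse normalized metrics into a single score via fixed weights."""
    return sum(WEIGHTS[name] * value for name, value in metrics.items())

# Sort by composite score, highest first, to produce the "ranking".
ranking = sorted(organizations.items(),
                 key=lambda item: weighted_score(item[1]),
                 reverse=True)

for rank, (name, metrics) in enumerate(ranking, start=1):
    print(f"{rank}. {name}: {weighted_score(metrics):.1f}")
```

Notice where all the contested judgment calls end up: in the weights, and in however each metric was normalized in the first place. Nudge either one and the order flips – which is exactly what the schools fight about.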

In a weird way, it’s actually to U.S. News’ credit that the rankings are so disputed. We should be able to argue over methodology. The GJ ranking, on the other hand, came out of a black box. It provides no set of data that can be reanalyzed by others who want to tweak the weightings. It’s just a list of opinions.

So could we apply that metric/formula approach to NGOs? I don’t think so. As GJ points out, there’s no easy way to compare impacts across social sectors. At least universities are all doing basically the same thing (they educate students, conduct research, run athletic programs, etc.) and are structured in basically the same ways. But Wikimedia Foundation, Ashoka, TED, Search for Common Ground, and MSF? I could not think of a more diverse group of organizations in terms of missions, methods, or structures. How would you ever craft a set of metrics that would apply to all of these, let alone a formula that spits out a number to fairly rank them?

Even if a more methodologically sound ranking were created, it would suffer from false precision. To extend the U.S. News analogy: what does it really mean that one school sits a single spot above another? Harvard (#1) might be better than Podunk State — but is it really better than Yale (#3) this year, or even Brown (#15)? I would suggest taking any such rankings with an error bar of at least plus-or-minus 20 places. So why create the impression that the individual placements mean something more?
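To make the false-precision point concrete, here is a toy simulation – made-up scores and an arbitrary noise level, nothing drawn from the actual list – in which organizations whose underlying scores differ by fractions of a point swap places every time you “re-measure” them.

```python
# Toy illustration of false precision: if each organization's "true" score is
# measured with even modest noise, positions a few places apart swap freely.
import random

random.seed(0)

# 20 hypothetical organizations with closely spaced underlying scores.
true_scores = {f"Org {i:02d}": 90 - i * 0.5 for i in range(1, 21)}

def rank_once(noise_sd: float = 2.0) -> list:
    """Add measurement noise to every score and return the resulting order."""
    noisy = {name: s + random.gauss(0, noise_sd) for name, s in true_scores.items()}
    return sorted(noisy, key=noisy.get, reverse=True)

# "Re-measure" the same organizations five times and watch the order shuffle.
for trial in range(5):
    order = rank_once()
    print(f"Trial {trial + 1}: top five = {order[:5]}")
```

Run it a few times and the top five reshuffles from trial to trial, even though nothing about the organizations themselves has changed.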

So I think these rankings suck. But why do I care?

I am two things: a development professional, and a blogger. As a development professional, I want to see a more efficient market for funding social causes. That’s an economics-y way of saying that I want funds to flow to those NGOs that can best convert them into positive social impact. As a blogger, I’m especially interested in how imperfect information distorts those funding flows. That’s not the only problem with funding markets, but it’s a big one, and it’s the one that I can (maybe?) influence as a blogger.

Regardless of the methodology, this kind of ranking throws an enormous chunk of imperfect information into the market. Several organizations on the list have already started touting their rankings. I don’t blame them, of course. They do it for the same reason that universities advertise their rankings: it’s good for recruitment, fundraising, and more. Meanwhile, the Global Journal gets a lot of new hits on its website.

Most people consuming these rankings will not take the time to critically analyze them. They’ll assume that someone else already has. They may not use the rankings explicitly to make decisions, but hearing about an NGO’s rank will undoubtedly shape a donor’s opinion.

My suggestion for next year’s list: Don’t do it.

Seriously. If you want to highlight good work and inspire readers, go with case studies of individual NGOs. Or pick a sub-sector (say, reproductive health, or peacebuilding, or human rights) and write features on how the major players differ in their approaches. That would be interesting, it would inspire, and it would stimulate debate. And most importantly: it would give you the space to actually explore what makes a great organization great.