Lies, damned lies, and ranking lists: The Top 100 Best NGOs

Sometimes I don’t feel like being clever or coy with the headline. Sometimes I just want to make the point up front. This is one of those times. Allow me to underline it:

Ranking lists are great publicity for both the rankers and the ranked — but they usually involve bad analysis and mislead the audience.

This intentionally inflammatory statement comes in response to the inaugural “Top 100 Best NGOs” list from the Global Journal. The list includes both relatively new players like Ushahidi (#10) and established juggernauts like Oxfam (#3). GJ’s editors took a broad definition of “NGO” – a wise move, in my opinion, given the blurred lines between NGOs, nonprofits, and social enterprises – but they restricted their list to those organizations that are “operational or advocacy focused.” This led to some interesting choices. For example, I don’t think of TED (#71) as an NGO. The list excluded the Gates Foundation (because it focuses on grant-making rather than running programs), yet the Open Society Foundations (#46) were included.

But my disagreement with GJ is not over which organizations got a chance to be included, or even the final results. Most of these NGOs are, to the best of my knowledge, quite good. My big disagreement is with GJ’s ranking methodology. And the fact that they created this list at all. Let’s start with the methodology.

How did they decide the rankings? Good question!

I’m not really sure what the methodology was. They briefly describe their use of “qualitatively measured metrics” such as: innovation, effectiveness, impact, efficiency, transparency/accountability, sustainability, strategic/financial management, and peer review. They emphasize that “there is no science in measuring” and rhetorically ask the following:

How does one – after all – compare the fundamental societal impact of an organization like the Wikimedia Foundation, with the tangible outputs of a well oiled humanitarian machine?

How indeed. I contacted the editors for more information. Alexis Kalagas was kind enough to describe their process. The data sources for the rankings included organizational websites, annual reports, external evaluations, and conversations with practitioners and donors. No word on who they talked to, how many people, how they were selected, or how the conversations were structured.

Kalagas also shared more detail on which of the “metrics” were most important to the decisions: innovation, impact, and effectiveness were given the most consideration. Furthermore, the editors limited their scope to the past five years. On GJ’s Facebook page, they replied to one comment to say that the ranking “did not take into account longer-term impacts.” Just mull over that one for a moment.

Ultimately, it sounds like the methodology was: we browsed the web, talked to a couple people, then sat around the conference table arguing among ourselves. Here’s the result.

Sorry, guys, but that just doesn’t cut it. That’s not a methodology.

Would a more “rigorous” and “quantitative” ranking of NGOs be better? (Hint: No.)

The obvious alternative to this process would be something more transparent and rooted in metrics. Sadly, many people still think that the overhead ratio is an appropriate way to judge NGOs. (It’s not.) You could try a more balanced approach though, with multiple measures in a weighted formula.

This might look something like the U.S. News rankings of American colleges and universities. They use a weighted formula built on a long list of metrics, ranging from acceptance rates to academic reputation. It all creates the impression of being rigorous and data-driven. But there’s nothing scientific about the rankings. Schools argue furiously over whether the metrics are appropriate and whether the formula makes sense. Some schools have even chosen not to participate.
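
To make the contrast concrete, here’s a minimal sketch in Python of what a weighted-formula ranking looks like under the hood. To be clear, this is not the actual U.S. News formula: every metric name, weight, and score below is made up purely for illustration.

```python
# Toy weighted-formula ranking, in the spirit of the U.S. News approach.
# All metric names, weights, and scores here are hypothetical.

WEIGHTS = {
    "reputation": 0.40,    # e.g. peer assessment score, 0-100
    "selectivity": 0.35,   # e.g. 100 minus acceptance rate
    "resources": 0.25,     # e.g. normalized spending per student
}

schools = {
    "School A": {"reputation": 92, "selectivity": 94, "resources": 88},
    "School B": {"reputation": 90, "selectivity": 95, "resources": 85},
    "School C": {"reputation": 70, "selectivity": 60, "resources": 65},
}

def composite(scores):
    """Weighted sum of the published metrics."""
    return sum(WEIGHTS[metric] * scores[metric] for metric in WEIGHTS)

# Rank by composite score, highest first.
ranking = sorted(schools, key=lambda name: composite(schools[name]), reverse=True)
for rank, name in enumerate(ranking, start=1):
    print(f"#{rank} {name} ({composite(schools[name]):.1f})")
```

The virtue of this approach is not that the formula is right; it’s that the formula is visible. Anyone can swap in different weights, rerun the ranking, and see whether the order holds, which is exactly the kind of argument a black-box list forecloses.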

In a weird way, it’s actually to U.S. News’ credit that the rankings are so disputed. We should be able to argue over methodology. The GJ ranking, on the other hand, came out of a black box. It provides no set of data that can be reanalyzed by others who want to tweak the weightings. It’s just a list of opinions.

So could we apply that metric/formula approach to NGOs? I don’t think so. As GJ points out, there’s no easy way to compare impacts across social sectors. At least universities are all doing basically the same thing (they educate students, conduct research, run athletic programs, etc.) and are structured in basically the same ways. But Wikimedia Foundation, Ashoka, TED, Search for Common Ground, and MSF? I could not think of a more diverse group of organizations in terms of missions, methods, or structures. How would you ever craft a set of metrics that would apply to all of these, let alone a formula that spits out a number to fairly rank them?

Even if a more methodologically sound ranking were created, it would suffer from the problem of false precision. A further analogy to the U.S. News rankings: What does it really mean that a school sits one spot higher than another? Harvard (#1) might be better than Podunk State — but is it really better than Yale (#3) this year, or even Brown (#15)? I would suggest taking any such rankings with a margin of error of at least plus-or-minus 20 places. So why create the impression that the individual placements mean something more?

So I think these rankings suck. But why do I care?

I am two things: a development professional, and a blogger. As a development professional, I want to see a more efficient market for funding social causes. That’s an economics-y way of saying that I want funds to flow to those NGOs that can best convert them into positive social impact. As a blogger, I’m especially interested in how imperfect information distorts those funding flows. That’s not the only problem with funding markets, but it’s a big one, and it’s the one that I can (maybe?) influence as a blogger.

Regardless of the methodology, this kind of ranking represents an enormous chunk of imperfect information being thrown out into the market. Several organizations on this list have already started touting their ranking. I don’t blame them, of course. They do it for the same reason that universities advertise their rankings: it’s good for recruitment, fundraising, and more. Meanwhile, the Global Journal gets a lot of new hits on their website.

Most people consuming these rankings will not take the time to critically analyse them. They’ll assume that someone else has already done that. They may not use these rankings to explicitly make decisions, but hearing about an NGO’s rank will undoubtedly influence a donor’s opinion.

My suggestion for next year’s list: Don’t do it.

Seriously. If you want to highlight good work and inspire readers, go with case studies of individual NGOs. Or pick a sub-sector (say, reproductive health, or peacebuilding, or human rights) and write features on how the major players differ in their approaches. That would be interesting, it would inspire, and it would stimulate debate. And most importantly: it would give you the space to actually explore what makes a great organization great.

41 thoughts on “Lies, damned lies, and ranking lists: The Top 100 Best NGOs”

  1. Blog is not God

    I love bloggers, especially the ones that present themselves as the great defenders of Truth opposed to Lies. “Lies, Damned Lies…” That’s no title. That is pure invective, and libel. Who’s looking for clicks here?

    Mr. Algoso blames The Global Journal for not having ranked the Bill & Melinda Gates Foundation. Indeed, you do not come to the rescue of a brave new (and ignored) NGO carrying new ideas against the big blockbusters of the non-profit sector, a smart NGO that you would like to stand for. Indeed, there are quite a lot of them, and we feel bad about not having all of them in the TOP 100 – they’ll make it soon enough. We, as journalists, can discuss what has happened to Bill Gates, his caprice with TED, Davos, or the arrogance that always comes with billions. Gates, a very respected man, has many reasons to feel good; his foundation has become a kind of global social central bank. And we love his sense of piraterie. So the Foundation didn’t make it into the TOP 100. Alexis Kalagas gave his views on that, clearly enough.

    Then came the methodology question.

    Do bloggers have a methodology? Do they distinguish between being a reporter and a rapporteur? Or is journalism, in their eyes, already at the cemetery? We have an ethic, and a strong belief that journalism is itself part of the methodology.

    We, as an independent media outlet, piling working hours upon working hours, are very faithful to our ‘religion’. Journalism, in this TOP 100 NGOs case, is not about counting the kilos, the tons, the injections, the liters, the SUVs, the dollars, the employees, the unpaid interns, the volunteers… It is a journalistic approach, not an academic or a mathematical one; an approach that understands a simple fact. Profit has a metric: money. How do you measure Solidarity? How do you measure healing, or suffering? Do you believe such a ranking has anything to do with the S&P, the NYSE and other financial indices?

    When you think about TIME’s Man of the Year, what do you say about their methodology? What do you say about the Oscars? Are the results fair? Methodology, from the French méthodologie, comes in reference to a logical system, a discipline. To my amazement, we spent almost two months setting out our logic and means, and getting organized. Enough to my editorial taste. What is more the concern here is the overview that such a TOP 100 brings. As a kid, I loved to look at the charts when rock bands were shaking our ears, our cities and our societies. Top lists would help me get familiar with these bands, and I would have, like you Dave, my upset moments about this band coming after that other one. At the end of the day, The GLOBAL JOURNAL TOP 100 NGOs sheds light on many fascinating NGOs attracting amazing people. If, in a couple of years, you and your anti-ranking friends could tell us, without a doubt, the names of the leaders of the first 10 of the TOP 100 NGOs, I would just feel great. So as I read your prose, I understand that I’ll have to wait before rejoicing. Until the next edition in January 2013, we will live with such comments.

    Jean-Christophe Nothias
    Editor
    The Global Journal

    • Mr. Nothias:

      To attack a blogger on “methodology” is to assume that blogging has a methodology. It does not.

      However, rankings do. Or they should. Yours most assuredly did not. Like this blogger, the first thing I checked when I came across your list (after whether my favorite NGO was there; it was) was the link to your methodology. Your one-line nod to your method was unsatisfying to me, but, unlike this blogger, I just decided to go about my day.

      Kudos to Mr. Algoso for digging further. This is the sort of thing that NEEDS to be done.

      And I agree with him. Please don’t publish next year’s list. Or, if you do, please formalize your methodology in a way that those of us who care about things like this will be able to discuss.

    • Jean-Christophe’s rant-response to a critical but measured blog post makes Dave’s critique more powerful and valuable, and I’m now convinced that he’s probably spot-on in identifying the right problems. It also reveals a lot about journalism, although it may be more appropriate to use the term in inverted commas with reference to The Global Journal. Defending a flawed approach by pointing at other flawed rankings and events, and hiding behind an abstract notion of journalism that only insiders can understand, diminishes not only the value of the ranking but ultimately the reputation of the organisations featured in it, many of which do excellent work (I don’t think anybody disagrees with that). The signal that the Global Journal is not a learning organisation and will simply carry on with its work (‘See you in 2013!’) is also the exact opposite of what organisations, bloggers and journalists usually consider best professional practice. I encourage the Global Journal to reach out to academics, to practitioners who work on indices such as UNDP’s Human Development Index, to critical bloggers, and to other experts who could improve the ranking in the future. Until then, there will only be more criticism, some of it probably sparked by Jean-Christophe’s reply, and responding to it along the lines of a sulking child is not appropriate.

    • Wow…I did not expect that The Global Journal’s response was going to further lower my opinion of their rankings, but, there you go.

    • I’ve actually never heard of the GJ, but if the intellectual level and objectivity of this response reflects the publication, I’d probably categorize them right next to Us Weekly and the Enquirer.

    • When I first saw this come out, I was hugely optimistic that perhaps this exercise had brought a fresh lens and perspective that the many charity-rating agencies had overlooked. But when the editor above says they left out Gates because they felt he has been rewarded enough, then any sense of objectivity, or of letting the facts lead where they may, seems to have gone out the window. By all means exclude the Gates Foundation for other reasons, but that reasoning is surely irrelevant to a rational examination of the impact of an agency.

      So really, harden up, Jean-Christophe, and take it on the chin. People who work in or pontificate upon aid, development and developing countries are accustomed to dealing with indices that have an awful lot of statistics and grunt work behind them (the Human Development Index, Transparency International’s Corruption Perceptions Index, the Commitment to Development Index, etc. ad infinitum). They are also accustomed to transparency and accountability about the exact methodological approach. If you want to make a ranking and hold this sector in judgement, be prepared for that ranking to be judged harshly itself. And a two-month exercise by a couple of people, without clarity of methodology, has so far come across as a bit woolly. Sorry.

      If next year you had a rethink and still aimed to highlight 100 NGOs, but divided them into ten groups of the 10 best/most promising on a per-sector basis, well, I think you’d have a lot more to work with. And ten times as many #1s to tweet about your magazine from the rooftops. Win-win!

    • I work for a very large NGO that didn’t make your list. You can try to figure out which.

      Rankings of NGOs according to objective criteria are welcome. There are many such rankings already. For NGOs registered in the United States, sites like CharityNavigator.org do a very important public service by researching and publishing how effectively NGOs use publicly donated funds. That’s just one part of the picture: financial accountability. Other initiatives, like the Humanitarian Accountability Project and the INGO Accountability Charter, have enlisted dozens and dozens of international aid agencies to increase their accountability to the people actually receiving aid.

      No one doubts the NGO sector could do with some increased objectivity. I hope you will receive these comments constructively and do everything in your power to hold yourself accountable next year to the same standard of objectivity and transparency that you expect of the NGOs you ranked.

      • M Scott: I work for a very large NGO that did very well on this list. Fundamentally I don’t like this list and our place on it because I think it risks being a misguided & underinformed source of complacency for us. Bah, humbug.

    • I sympathise with Dave’s frustration with the Global Journal’s ‘top 100’ list: it doesn’t sound like it was compiled very thoughtfully at all (like, why 100, why not 101 or 45 or 3? And how much better is 1 than 2 than 100 than 101…?! How have they measured effectiveness?), and then you have to ask, what is the point of it? And what potential harm could it do (distracting attention from the actually really stand-out amazing charities, some of which are highlighted by GiveWell and Giving What We Can, for example)?

      The thoughtlessness of the Global Journal’s methodology and purpose seems to be borne out in the actual results they get. #1 Wikimedia Foundation: OK, probably a pretty good charity. #2 Partners in Health: a charity that airlifts people from Haiti to Boston for treatment, in a world where 50-cent deworming treatments and $5 malarial bednets are under-supplied. It’s uncomfortable to think how many more people they would help if they focused that money elsewhere; certainly thousands. #3 Oxfam: recently impact-evaluated its own projects and found that the majority have none of the impacts they intend whatsoever. And none of the charities that GiveWell and Giving What We Can find most cost-effective even get a mention, despite the fact that they save lives for around $300. How on earth can Oxfam and Partners in Health be doing better than that, given the above?

      Dave’s also right that it’s very hard to make lists like this, that try to say one charity is better than another despite both operating in very different sectors. How do you (properly) compare providing a guide dog for a blind person with getting more children into school, for example?

      I have two points on this. One is that, although it’s hard, if we really do want our money to do as much good as possible (which I do), it’s a judgement I just have to make: look at the costs and outcomes of a program, and decide which of those outcomes I think is most important.

      The second is that some comparisons are actually fairly easy. For example, guide dogs for the blind cost $45,000, while deworming costs 50 cents per child (and really importantly improves children’s cognitive development, increases school attendance dramatically, and increases child survival and general health). If I have $45,000 to give away and I choose guide dogs, I can help one person. If I choose deworming, I can help 90,000. To use that sort of information to make a public ranking of an effective deworming charity above an effective guide dog charity would be fair, I think.

      But to make a public ranking in a thoughtless way is just very irresponsible. If you think your advice will change how people donate, you must take it really seriously. If you don’t think it will change how people will donate, why write it?

      Because the cost-effectiveness of charities is normally distributed, you do have just a very few charities that really and obviously stand out above the rest in the sort of way the guide-dog versus deworming example illustrates. It’s surprising these charities (see http://www.givingwhatwecan.org/where-to-give/recommended-charities) don’t make it onto the list.

      What do you think?

  2. This is a new beginning, in the sense that it can generate a competition to get into the top 100 or the top 10. But judging organizations’ work must be done by following a scientific method. Then again, I would say that working in the field, whether in development or in emergency response, is not mathematics, where the result is always the same.
    I would say this can create a good debate among the players and other concerned institutions, so in that sense it is a good endeavor that may create something new and make things come up cleaner.
    Regards
    Nasir Sajjad
    Pakistan

    • It sounds like plenty of NGOs on this list made it because they are sexy and in the news, not because of any impact they have demonstrated. I think Ushahidi is great and has enormous potential- but so far their impact on the ground (in helping aid workers make more informed and efficient decisions, or bringing to light new abuses that can be prosecuted) is unknown. What has the impact, as separate from outputs (maps, data, widespread media coverage), actually been? Nobody could really tell you. I don’t think an org should be on a list because of what they might accomplish. But again, back to having some sort of focused methodology so we can measure and debate impact.

  3. Thanks, Dave, for such critical insight, and for bringing further credibility to the concept of ‘blogging’ as a new and legitimate form of inquiry. I do not have much to add to the comments already posted, except to say that critically looking at methodology, whether it be for public rankings or academic articles, is key to uncovering misleading and/or false conclusions upon which decisions are made.

  4. Very disappointing, esoteric rant from the editor. Poor taste. Mr. Algoso (the heathen blogger) is actually asking the hard-hitting questions that your faithful team of journalistic purists failed to ask. You’re comparing apples to oranges here. Even a list that ranks development NGOs alongside relief organizations is a major stretch – throw in non-profits like the Wikimedia Foundation and there is absolutely no coherence. I think Mr. Algoso makes an excellent suggestion, should these rankings continue next year: focus on sub-sectors and organizations that have some relevance to each other. Otherwise, you might as well include your Man of the Year/Oscar nominees in the list while you’re at it – I’m sure we can fit Angelina Jolie, Sean Penn and Bono in there somewhere.

    Also, the last thing we need is more founderitis in the non-profit/NGO sphere. I’ll leave canonization to the church.

  5. “Most people consuming these rankings will not take the time to critically analyse them. They’ll assume that someone else has already done that.”

    I think most people don’t read the Global Journal……

  6. Thanks for all the supportive comments, and for sharing this post around. I agree with Matt (and others) that the Global Journal isn’t exactly on every newsstand. I almost ignored their list altogether. Then some of the “top 100” NGOs started publicizing their position in emails and blog posts. I thought there needed to be a critical rebuttal. Glad to see that so many people agree with me!

    Well, not *everyone* agrees with me. But I don’t think the comment from GJ’s editor (above) really requires much of a response.

    By the way: kudos to Innovations for Poverty Action, which publicized their inclusion on the list while also questioning the methodology: http://www.poverty-action.org/node/5032

  7. I’m going to take the moral low-ground here and guffaw at the following bit of Jean-Christophe’s reply:
    “Methodology, from the French méthodologie”
    Good bit of journalism there. LOL.

  8. I wanted to share my views on the selection process by getting in touch with the editor of the Global Journal. I was an editor of Catalyst, a magazine for the NGO movement in India (www.afhd.org), and had collected a lot of information on the leading NGOs connected with and operating in India.

    Though I have no problem with the NGOs that were selected from India, I was surprised by some of the “bombastic numbers” used by GJ to show why they were selected. My problem also lies with those who did not make the list, in comparison to the ones selected. I felt there were some unintended systemic biases. I share many of the doubts expressed very eloquently by Dave Algoso.

    The list of criteria given by GJ is very impressive. But when I started evaluating the NGOs selected by GJ using those criteria, I began to have doubts as to how GJ might have incorporated them in its analysis. Just to give one example: strategic and financial management is one of the criteria. This is indeed a very high-sounding criterion. While an organization may have good financial management, which can be easily and even objectively assessed, strategic management is extremely difficult to assess.

    Making a judgmental error, say in selecting music or a picture (which also requires multiple criteria), does not make much difference in the scheme of things. However, selecting the top 100 NGOs of the world sends some very powerful messages, just like selecting Nobel Prize winners. Even the organization awarding the Nobel Prizes does not try to defend its choices the way GJ is defending this list.

    My own prediction is that even the NGOs that received pretty high ranks may not want to quote them, because doing so may call their own credibility into question, given the comments of the editor. Often it is the awardee that gives credibility to the organization giving the award. By not giving the Nobel Prize to Mahatma Gandhi, it was the Nobel Prize that lost some credibility, not Gandhi. In the same way, the deserving NGOs that could have made up a more scientifically developed list are not the losers here. It is GJ, as time will tell.

    Bhamy V Shenoy
    Former Editor, Catalyst.

  9. Pingback: la vidaid loca
  10. As a psychometrician/statistician, I know how difficult “ranking” can be. If GJ can’t take the heat of a full methodological disclosure, with every step carefully detailed and fully supported, they shouldn’t even try to claim this as a valid ranking, as they can do more harm than good. “Face validity” appears to be the support used for their claims, i.e. “The opinions of experts” (themselves). This is the same sort of “validity” that led to the use of the Malleus Maleficarum to search out witches.
