[Note: This was an early post on the blog — from June 2010. I’ve been thinking about this issue recently and might post on it again soon. In the meantime, I thought I’d give this a re-post since the blog’s readership was in the single digits back when it first ran. Date references have not been updated. -DA]

Last year brought a deluge of online charity contests. It wasn’t a new concept but somehow it had suddenly gained traction. Innovative and hip, they were everywhere. Chase Community Giving Challenge was one of the first to make a splash, and Pepsi’s Refresh Everything seems to be the big one right now. Ideablob sought to fund both for-profit and nonprofit ideas (though it was shut down rather abruptly in November). Also worth mentioning is the twist that the Case Foundation put on its contest: America’s Giving Challenge sought to promote small philanthropy by judging competing nonprofits on the number of individual donations (of any size) they could gather, rather than simple votes.

There were also critiques. The Chase competition got a lot of flak for its lack of transparency. Kjerstin Erickson posted on Social Edge last year with some sharp analysis of what the lack of information meant for the efficiency of entering contests — from the perspective of the nonprofits competing for funds. The costs of winning are high, both in staff time and in demands on an organization’s network. Of course, it can pay off for some. Atlas Service Corps is one example; it has even gained a reputation for its ability to win contests. (Incidentally, its founder, Scott Beale, will be taking part in a Case Foundation chat about online contests later this week.) But I think concerns over efficiency are misdirected if they focus only on the costs to the nonprofits in the competition. From a nonprofit’s perspective, fundraising is efficient or inefficient only in comparison to other methods of fundraising. There is always uncertainty in fundraising, forcing organizations to make educated guesses about how to spend their time.

I’m more interested in the impact of these contests on the allocative efficiency of the nonprofit market as a whole. This is a perpetual concern: while for-profit capital seeks the highest monetary returns, nonprofit funds attempt to generate non-monetary returns that are not easily comparable. How does the impact of an environmental conservation organization compare to the impact of an after-school sports program? This sort of problem makes it very difficult to allocate funds.

So the question becomes: Do such contests allocate funds efficiently? That is, are the best organizations with the best ideas really winning the contest? This isn’t meant as a critique of the winners. I’ve met Scott and many of the Atlas Corps fellows, and I agree that it’s a great organization. But it seems to me that contest winners are those organizations that are best at “getting out the vote” and rallying whatever supporters they have. It also helps to have an issue that resonates with the young, internet-savvy crowd that is predisposed to using Facebook and voting in such contests (I don’t recall many elder care organizations winning contests). This leads me to doubt that this form of crowd-sourcing can generate better decisions than a traditional philanthropic model in which program officers review grant applications.

Crowd-sourcing is only the stated justification for these contests. There are other incentives at play. For corporate philanthropy, contests put the brand name in front of thousands of voters. And for donors of all kinds, contests make decision-making easier: you don’t have to answer the tough question of which causes deserve money if you just let the people decide. The risk is that this weakens accountability, because neither the donors nor the voters are evaluating any real evidence of a candidate’s impact.

Am I missing something about these contests? Perhaps someone who is more intimately familiar with the philanthropic world can let me know what’s really going on here. I’d be especially curious to hear how the practice has evolved recently. With the exception of the Pepsi campaign, I feel like I haven’t heard much about these contests recently.

  1. […] resources? We can start by admitting that not all competition is inherently good (for example, charity contests inject competition, but it’s not clear that they actually improve allocation decisions) and the same goes for […]


  2. Dave, OK, I tend to agree with you if we’re talking about competitions where the criterion is the number of votes, i.e. an org’s ability to rally others around its cause. This biases toward well-established, tech-savvy organizations over an individual who may be interested, for whatever reason, in entering a competition.

    But there is a big difference, I think, between these and challenges where the ‘crowd’ is asked to come up with a solution that is measured against well-defined criteria (e.g. we’re running one at the moment to come up with an energy solution for 3,000 war-returnee families in BiH with no access to energy — http://www.nesta.org.uk/areas_of_work/challengeprizes/assets/features/undp_prize). So it is in this context that I found the Dump the Prizes article (ht.ly/ocdym) somewhat off-putting (despite several good points in it, e.g. the need to focus on implementation, not invention).



  3. Fair enough. I think the Dump the Prizes piece (http://www.ssireview.org/blog/entry/dump_the_prizes) nods toward the fact that there are many types of contests, though it gives slim treatment to differences and potential benefits of some.

    What I really like about Starr’s piece is the point that no one is critically analyzing or evaluating these differences. The many cheerleaders of prizes make it easy for a foundation, CSR donor, etc. to pursue this avenue without much thought. Starr is pushing back. Hopefully some thought will go into finding the analytical frameworks, principles, etc. that distinguish the good approaches from the bad.


  4. […] Relatedly, it should be acknowledged that grant writing, and funding more generally, implies costs of time and resources for grant-seeking organizations. Each funding opportunity entails its own cost-benefit analysis by organizations. As the GPSA moves toward applications that require much more in-depth analysis and careful framing, this implies higher investment by applicants. Given that only 22 of over 600 applications have been funded, CSOs must evaluate how much effort to dedicate to this process, and the quality of the analysis of context and framing of the intervention may reflect, to some extent, the time and resources invested in crafting the proposal according to each organization’s assessment of the cost-benefit of that investment. Others have made this point regarding prize competitions for non-profit organizations. […]

