Yesterday I gave a guest lecture to John Gershman’s politics of development course at NYU’s Wagner School (mostly MPA students). The topic: how the development sector puts complexity thinking into practice. Prepping and giving the lecture helped me put together some thoughts on how the topic has evolved since I took that very same course about six years ago.

In recent years, there have been at least three major books that address complexity in development work: Ben Ramalingam’s Aid on the Edge of Chaos (2013); Danny Burns and Stuart Worsley’s Navigating Complexity in International Development (2015); and Jean Boulton, Peter Allen, and Cliff Bowman’s Embracing Complexity (2015). I’m currently working on reviews of the latter two, but in the meantime, I drew from all three for the lecture. If nothing else, that volume of literature suggests the topic’s increasing interest and relevance.

All three books mix theoretical frameworks and practical cases. For the most part, the theory draws from fields outside aid/development work. That’s not surprising, given that complexity thinking has roots and applications across a wide range of disciplines: ecology, physics, mathematics, etc. Naturally, that thinking and the accompanying toolkit are way ahead of what the development sector puts to use. (Agent-based modeling, anyone?) I’ve heard more than one development professional remark that complexity concepts still seem to be struggling to have a major impact on development practice.
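For readers who haven’t seen one, here’s roughly what the simplest agent-based model looks like. This is a toy sketch of my own (not drawn from any of the three books), in which agents adopt a new practice once enough of their peers have:

```python
# Toy agent-based model: agents adopt a practice once enough of their
# randomly assigned peers have adopted. Simple local rules like this can
# produce the tipping-point dynamics that complexity writers describe.
import random

def run_model(n_agents=500, threshold=0.25, seed_fraction=0.05, steps=20):
    random.seed(42)
    adopted = [random.random() < seed_fraction for _ in range(n_agents)]
    # Each agent watches a fixed handful of ten random "peers."
    peers = [random.sample(range(n_agents), 10) for _ in range(n_agents)]
    for step in range(steps):
        new_adopted = list(adopted)
        for i in range(n_agents):
            if not adopted[i]:
                peer_rate = sum(adopted[j] for j in peers[i]) / 10
                if peer_rate >= threshold:
                    new_adopted[i] = True  # peer pressure tips this agent
        adopted = new_adopted
        print(f"step {step:2d}: {sum(adopted)} adopters")

run_model()
```

Depending on the threshold and the initial seed, adoption either fizzles out or cascades through the whole population: emergent, nonlinear behavior from trivially simple rules. That’s the kind of tool the sector rarely touches.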

However, I would argue that on the practice side, there’s actually quite a bit happening that aligns with complexity thinking but that isn’t put in those terms. You might call it complexity-relevant, though not complexity-aware, practice. That comes out in the books: e.g. Burns/Worsley use complexity to describe the effectiveness of community-led total sanitation, even though that approach wasn’t designed as explicitly complexity-informed; same goes for the positive deviance approach, the history of which Ramalingam discusses in his book.

My lecture included case studies of projects that I’ve worked on directly, and I also touched on another project outside my own experience. Complexity thinking can help to explain the successes and struggles of these projects, even though few (if any) of the people working on the projects were thinking in those terms at the time. (The “Doing Development Differently” case studies are great examples of this.)

It seems that complexity theory and complexity practice are out of sync in the development sector: theory got out ahead with a boost from the unrelated disciplines where it first developed, but it turns out that practitioners are muddling their way to approaches that can be explained by the theory. Practitioners in the sector are simply responding to the complexity they encounter in their work, even if they lack the analytical frameworks for it; they are also incorporating the complexity concepts that have made their way into popular intellectual culture (e.g. tipping points, feedback loops).

Complexity thinking has gained its toeholds in development by explaining some of this complexity-relevant-but-unaware practice (both successes and failures). If there’s a next stage, it may be the explicit application of the theory to create new practice, or at least to significantly adapt current practice.

I suspect that requires an institutional home: a place where the practice can be developed intensively enough for it to evolve. Although a range of smart people work in this space, it seems like they do so from within larger institutions (donors, think tanks, NGOs, etc.) that aren’t equipped to give it the focus needed.

Or maybe I’m wrong? Is there some organization or team out there explicitly translating this thinking into practice? And will I have something more concrete to share with the next crop of students who are subjected to my rambling thoughts?

I’ve fallen off the blogging wagon in recent months. Whenever that happens, I find the best way to get back on is to post something small and easy. Fortunately, I had a bit of inspiration from a client’s recent blog post on intrinsic/normative arguments (“X is good in itself”) versus extrinsic/instrumentalist arguments (“X is good because it accomplishes Y”)—specifically as they relate to open governance and progress in that sector. It sparked some thoughts on what arguments from intrinsic value and arguments from extrinsic utility can each accomplish.

So as part of my effort to get back in the habit of blogging, here’s a simple thought for today: Principled arguments about the intrinsic value of something—whether that’s open governance, rights, justice, etc—can only make sense at a vague level. Extrinsic, practical, empirically grounded arguments are needed to define the details.

For example: There’s a principled argument to be made for progressive taxation, but there’s no principled argument to be made for a top marginal tax rate of 40% v. 60%. That argument has to be utilitarian, and can be at least modeled if not tested. Likewise, there’s a principled argument for equal access to education, but there’s no principled argument for the structure of private v. public provision of education services. That argument can only be made empirically, perhaps by comparing education outcomes in various institutional, economic, and political contexts to draw useful conclusions.
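To illustrate what “modeled” might mean here, consider a back-of-the-envelope sketch. Every number below is hypothetical (a toy income distribution, a made-up bracket, a guessed behavioral elasticity); a real model would use actual tax data and estimated elasticities:

```python
# Hypothetical sketch: compare total revenue under a 40% vs. 60% top
# marginal rate. All figures are invented for illustration only.

def revenue(top_rate, incomes, bracket=250_000, base_rate=0.25, elasticity=0.2):
    total = 0.0
    for income in incomes:
        # Crude behavioral assumption: reported income above the bracket
        # shrinks as the top rate rises.
        top_income = max(income - bracket, 0) * (1 - elasticity * top_rate)
        total += base_rate * min(income, bracket) + top_rate * top_income
    return total

# Toy distribution: 80 low earners, 15 middle, 5 high.
incomes = [40_000] * 80 + [150_000] * 15 + [1_000_000] * 5
for rate in (0.40, 0.60):
    print(f"top rate {rate:.0%}: revenue = ${revenue(rate, incomes):,.0f}")
```

The point isn’t the output; it’s that the 40% v. 60% question is answerable (or at least arguable) in these utilitarian terms, in a way that “taxation should be progressive” never is.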

Policy conversations get muddled when we mix the two, e.g. when policy details are grounded in normative rather than empirical arguments (see the Republican presidential candidates’ tax plans). And the thorniest political challenges emerge when there are arguments about conflicting intrinsic values (e.g. freedom v. security). At that point, empirical arguments are largely useless and progress is nearly impossible. This has confounded many an evidence-based advocacy effort.

What about self-interest and in-group-interest? They may seem unprincipled, but these are actually a form of intrinsic argument: they are rationally self-contained and require no justification in outside principles. So again, you can’t overcome them with facts. You only overcome them by changing the context, i.e. with power.

There’s more to be said about the political interplay of intrinsic and extrinsic arguments—especially as they relate to issues like rising inequality or white privilege, which are grounded in self-interest—but let’s leave it at that for now. Here’s to a 2016 of increased writing.

 

I got to spend a few days last week at the third TA LEARN workshop, hosted by the Transparency and Accountability Initiative (TAI). Around 70 practitioners, researchers, funders, and the occasional consultant gathered to assess and advance the state of practice on transparency, accountability, open governance, and related issues. Here’s the third in a series of three takeaways.

In the two previous posts, I wrote about how learning is adaptation and learning must be user-owned. As obvious as the second point may be, it’s often undermined by power and funding structures in our sector. “Learning” is often oriented toward the extraction of knowledge for use elsewhere, and practitioners only see it reflected back to them through future RFPs, rather than being able to generate, own, and act upon it themselves.

Righting the balance depends a lot on how the various actors work together, which brings me to the final point.

Takeaway #3: Learning and adaptation depend on relationships.

Not data. Not the brilliance of the practitioner. Not a lack of funder bureaucracy. Rather, the personal relationships among the various partners are the critical enablers of learning and adaptation. Those relationships shape the open sharing needed to gather insights, the joint interpretation needed to decide on changes in direction, and the collaboration needed to put that new direction into action.

The funder-implementer relationship is the most noticeable and the most controversial. Although that relationship is shaped by formal processes of grant applications and reporting, those merely set the guidelines and constraints. In example after example shared during the TA LEARN workshop, the personal relationships between the individuals at various organizations seemed to be more important than the formal elements.

This echoed a theme from the various conversations on “doing development differently” and “adaptive management”: savvy coalitions can work adaptively even within projects funded by bureaucratic institutions like the World Bank and USAID. Trust and communication among the actors can create the willingness to massage institutional barriers and navigate constraints.

Think about it this way: Adaptation often means changing the project’s results framework, timeline, budget, and more. That requires the funder’s program officer to overcome some amount of bureaucratic inertia. Is the program officer more likely to do that for: project A, whose leadership she knows personally, has met on several occasions, and has chatted casually with over coffee outside the confines of official meetings; or project B, which has no tangible meaning to her beyond a quarterly report in her inbox? All other things being equal, human psychology suggests A.

Fortunately, there are also policy shifts that can loosen those constraints more generally, making it easier for more projects to fall into category A. Earlier this year, Jenny Ross of INTRAC (with support from TAI and the Hewlett Foundation) did some great research on how grantmaking practice supports or hinders grantee learning, highlighting challenges of grant timelines, project silos, inflexible reporting, and lack of prioritization.

Of course, the funder-implementer relationship is not the only one that matters. The TA LEARN conversations also turned to the differing roles played by NGOs, community-based organizations, grassroots activists, and social movements. The personal relationships involved there matter greatly as well, especially as the various individuals find ways to work together over time.


Closing thoughts on TA LEARN

We ended the third day with some discussion of TA LEARN’s future trajectory. In line with the above, I think the relationships built last week (and in the previous workshops) may be this space’s greatest contribution to improving the sector’s practices. Transparency and accountability actors can only improve if we adapt, can only adapt if we learn, and can only learn if we know one another.

Looking beyond our little sub-sector, I think the transparency and accountability community may be ahead of the curve in some ways. Compared to the rest of the development sector, I’d argue that this particular corner better understands adapting programs, navigating complexity, and tolerating ambiguity—simply because the nature of this work leaves it no other choice.

For more reactions to TA LEARN:

I got to spend a few days last week at the third TA LEARN workshop, hosted by the Transparency and Accountability Initiative (TAI). Around 70 practitioners, researchers, funders, and the occasional consultant gathered to assess and advance the state of practice on transparency, accountability, open governance, and related issues. Here’s the second in a series of three takeaways.

Yesterday, I wrote about how learning is adaptation. The development sector increasingly talks about creating “learning organizations.” The best indicator that an organization has learned is not whether individuals within that organization have learned anything, but rather, whether the organization itself has adapted.

Which suggests a related question: What sort of individual learning lends itself to organizational learning and adaptation?

Takeaway #2: Learning must be user-owned.

During one breakout session, it became clear that real-time learning in programs hinges on putting the “users” of that learning in the driver’s seat. If organizational learning is adaptation, then the users of learning are the ones who must implement that adaptation. That certainly includes program staff, but probably also a range of partners. These learning users are the best positioned to gather various types of data, interpret and understand it, decide what to do next, and put those decisions into action.

User-owned learning is another lens for thinking about local knowledge. It’s not enough to simply respect local knowledge (aka just “knowledge”—full stop). That’s merely the first stage of user-owned learning enlightenment. The second stage ensures that lessons drawn from that knowledge are reflected back to those who provide it, rather than extracted for use elsewhere and never heard from again. The third, most enlightened stage creates space for, and encourages, that (local) knowledge to develop into its own learning and adaptation, led by the same actors who brought the knowledge to the table.

This may seem obvious, until you remember that much formal “learning” in our sector is outsourced to donor-dispatched researchers or fly-in-fly-out consultants who have limited engagement with program staff or partners. Unfortunately, there’s no such thing as learning for other people. Hiring someone else to learn for you is like paying another student to write your essay: you might get away with it at first, but the professor’s questions during class will reveal that you didn’t put in the work. Likewise with organizational learning.

To the extent that researchers or consultants are involved, they’ll better facilitate user-owned learning (and perhaps be users themselves) if they have ongoing relationships with the other actors.

Which brings me to the final takeaway from TA LEARN: learning and adaptation depend on relationships. More on that point soon.

Sorry for the radio silence in recent weeks. Take it as a sign that I haven’t quite figured out the secret to freelance work/life/blogging balance. Fortunately, I had a chance to take a break last week. I headed down to Rio, glanced briefly at the beach, and then spent three days in a hotel conference room. Huzzah. (On the plus side, it was one of the least conference-room-like conference rooms I’ve been in.)

I was there for the third TA LEARN workshop, hosted by the Transparency and Accountability Initiative (TAI). Around 70 practitioners, researchers, funders, and the occasional consultant gathered to assess and advance the state of practice on transparency, accountability, open governance, and related issues.

In particular, the workshop focused on learning in transparency and accountability work. While that learning takes many forms, I would broadly place it in two categories:

  1. Learning about transparency and accountability. I.e. what’s the state of practice, what’s generally working or not, what evidence do we have or (more often) lack. This sector confounds many of the standard tools of development evaluation and learning (indicators, attribution, etc.), so having dedicated space to dive into the nuances of what we’re learning is critical.
  2. Learning for transparency and accountability. This is the more concrete, real-time, and context-dependent learning that happens in the midst of programs or campaigns. This learning drives changes of direction, but offers fewer lessons about broader practice. Though useful in any sector, there’s a strong case to be made that this form of learning is an order of magnitude more important in accountability work: the unknowable/shifting aspects of governance and the political nature of change limit the utility of advance planning, thereby increasing the need for ongoing learning. In other words: when broader learning is harder, ongoing learning is critical.

The workshop tackled these forms of learning with varying success. I feel like we hit the mid-range on both of the above: we exchanged solid ideas and experiences, but we neither crafted a grand manifesto nor traded tools and tips at the most concrete, tactical level.

For me, the most interesting conversations were around the second category: ongoing, real-time learning. I had three main takeaways. In the interest of digestibility, I’ll save #2 and #3 for later posts.

Takeaway #1: Learning is adaptation.

What does it mean for an organization to learn? This question was posed by a fellow participant one evening over dinner (where the most interesting conversations at many conferences occur).

In one sense, learning happens at multiple levels: individuals learn new skills; an organization learns that its strategy needs to change; the sector learns to support or abandon particular approaches. But in another sense, learning only happens at one level: the individual. Databases of PDFs and “best practices” are not learning. Organizations are made up of people, so an organization only learns if its people learn and are able to put that learning into practice in the organization.

For example: A program manager or campaign organizer may learn that a certain mobilization approach doesn’t resonate with their constituents. If she’s able to ditch that approach and switch to another, we can say that the organization learned. However, if something forces the organization to continue using that approach—e.g. because the project indicators are unchangeable or the approach utilizes a flashy tech tool that’s great PR—then individual learning fails to translate into organizational adaptation. At that point, what sense would it make to say that the organization has learned anything?

So the sine qua non of organizational learning is adaptation. And as a corollary, adaptation is the best indicator that learning has occurred. If you’re going to hold someone accountable for learning, look for their adaptation.


See also reflections from Global Integrity’s Alan Hudson and TAI’s Brendan Halloran.

And stay tuned for takeaway #2: learning must be user-owned.

Entrepreneurship is the kind of endeavor that we place on a pedestal. It carries a mystique. Just as being an artist, joining the clergy, or writing a book marks a unique career path, creating a company seems to transcend normal livelihood choices.

Being an entrepreneur sets someone apart in our rhetoric, but not in practice. Far from being a solitary activity, it is a career more integrated with the world around it than most. In building teams, finding customer bases, navigating regulators, forging supply chains, and much more, entrepreneurs interface with more aspects of society than most other professions do.

These interactions make entrepreneurship incredibly dependent on its context. How an entrepreneur builds a company is shaped by political institutions, cultural norms, talent pools, and customer expectations. While technological or design breakthroughs may spring from brilliant minds in relative vacuums, entrepreneurs build businesses in quite the opposite conditions.

This dynamic is well illustrated in a new book, From the Other Side of the World: Extraordinary Entrepreneurs, Unlikely Places. The author, Elmira Bayrasli, profiles seven entrepreneurs building businesses far from the accelerator programs, business schools, and venture capital funds that typify technology entrepreneurship in the United States.

Each of the seven cases is crafted around an entrepreneur, a context, and a challenge. The first focuses on how Bülent Çelebi built a technology company called AirTies in Istanbul. The major challenge he faced was developing a talent pool and company culture that would be willing to take risks and innovate. These characteristics are in the groundwater in Silicon Valley, but he had to dig deeper to find them in Turkey.

In a separate case, Shaffi Mather created a private ambulance service called Dial 1298 in Mumbai. He faced the challenges of corruption, from the moment he tried to register the phone number through to managing the service’s drivers.

In Pakistan, Monis Rahman created two collaborative spaces that challenge the global perceptions of Pakistan: a matchmaking site called Naseeb, and a job site called Rozee. The latter was initially created simply to fulfill the recruitment needs of the former, before taking off in its own right.

Other chapters profile an energy efficiency company in Mexico, a technology manufacturer in China, and a mobile payments platform in Nigeria.

A bit of an outlier case focuses on Yana Yakovleva, who co-founded a chemicals company in Russia in the mid-1990s. In 2006, she ended up in jail after resisting extortion efforts by the police. After her release, she moved from entrepreneur to activist, creating an organization to protect the rights of businesses and entrepreneurs. Though a key enabler of economic growth in most countries, the rule of law is under constant threat in Russia.

Each story is an instance of Schumpeterian entrepreneurship—i.e. the kind that causes creative destruction and leads to new markets. The smaller-scale entrepreneurship of local businesses and self-employed hustlers is deliberately left out. This is a critical analytical choice. We muddy our understanding of entrepreneurship’s importance when we blur the lines between local businesses and industry-changing enterprises. Bayrasli focuses on the latter.

What results is a picture of entrepreneurship that highlights the personal histories of the individuals involved in each company, but with more nuance than the cults of personality surrounding many tech titans. The teams immediately surrounding the entrepreneurs get their due treatment, as do the broader networks. In fact, the importance of the returning diaspora bringing networks and skills shows clearly in several of the chapters.

The emphasis also lands clearly on the institutional and historical contexts for each enterprise. With a style that makes this as much a foreign affairs book as a business book, Bayrasli takes detours into China’s economic history, levels of mobile access in Nigeria, and public service expectations in India. The larger forces driving such contextual factors can dictate the fortunes of entrepreneurs. Even when creating something new, history matters.

Where the book falls short is in extrapolating broader lessons and trends about entrepreneurship around the world. This is admittedly a tricky balance: the move from evidence to recommendations trips up many nonfiction authors. Still, some amount of insight for policymakers, investors, or entrepreneurs themselves would have been welcome. Bayrasli leaves readers to make their own inferences.

That minor complaint aside, From the Other Side of the World makes a worthy addition to our understanding of how entrepreneurship happens. Putting a human face on it and placing it in context allows us to take the activity off the pedestal and to better understand its role in the world. For those wanting to promote it or pursue it, read this book.

Several folks reached out in response to the last post, on learning and adaptation. It seems like establishing and promoting learning processes within organizations is on many people’s minds. In that light, a few followup thoughts:

1. Types of organizational learning

We should all be familiar with the idea of individual learning styles—visual, auditory, tactile, etc. There are parallels in organizations: some learn by seeing proof and incorporating academic evidence into organization-wide processes (analogous to visual learning); others learn in more fluid conversation with the sector, with lessons absorbed by individual staff members as part of their own professional development (call that auditory); and others learn by doing, drawing lessons primarily from their own work and the feedback loops within it (tactile).

Variations and combinations of these abound, but the broader points about learning styles apply: different organizations learn in different ways; and different types of learning serve different organizational priorities.

The term “learning” may actually be a bit problematic, if we conceive of a hierarchical relationship between the “knowers” and the “learners”—the professional analogue of schoolchildren at desks. This model might apply to the incorporation of academic findings (on, say, the effectiveness of humanitarian cash transfers) into programmatic work or policies. In such cases, there are “knowers” separate from the “learners”; your challenge as an organization is to learn from those who know.

However, the most critical learning in organizations involves discoveries that aren’t verified by any sort of external authority. This learning is inherently internal. It is uncertain (managers must make daily decisions based on incomplete evidence, with less “rigor” than academics); tacitly held by staff (often without mechanism or opportunity to be stated explicitly); and relevant only to that organization’s context and work.

This suggests a corollary to the classic distinction between known-unknowns and unknown-unknowns: in organizational learning, there are the somebody-else-knows and the nobody-else-knows. We bring the former in from the outside, and generate the latter internally. Learning what others know is a necessary but not sufficient condition for being a learning organization. You have to generate new learning internally as well.

2. Beyond indicators: Iterative visualizations and aggregated narratives

In August, I had a chance to attend a good conversation at USAID about measuring systemic change. Measuring systemic change is central to understanding the impact we’re having, but systems are so hard to measure at a single point in time that tracking change with any kind of rigor can seem impossible.

One of the insights from the USAID event (hosted by the agency’s Local Solutions team) was that approaches to measuring systemic change fall into three broad buckets. One is the use of relevant indicators. Important, but not as interesting as the other two:

  • Iterative visualizations. Showing systems graphically helps us to understand them more quickly. It also allows a certain flexibility and uncertainty through the visual presentation of relationships and processes—in contrast to words, which often lead us to specify elements more precisely than we’re able (or couch them in caveats). Similarly, iterating on visualizations over time shows us how the system changes (again, without the pressure to articulate that change precisely). Imagine how a social network analysis might change over time; a minimal sketch of that idea follows this list. (More examples from the event here.)
  • Aggregated narratives. Drawing on methodologies like Most Significant Change (MSC), the insight here is that the individuals impacted are their own best judge of how and whether they’ve been impacted. MSC is a participatory and indicator-free way to have program participants (beneficiaries, users, etc.) articulate what changed for them personally; those insights can be aggregated to form a more complete picture.
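Here’s the social network analysis idea in minimal form. This is my own sketch, not something presented at the event: draw the same (hypothetical) stakeholder network at two points in time and compare, rather than trying to reduce the system to indicators.

```python
# Sketch of an iterative visualization: the same hypothetical stakeholder
# network drawn at two points in time, side by side for comparison.
import matplotlib.pyplot as plt
import networkx as nx

snapshots = {
    "baseline": [("Donor", "NGO"), ("NGO", "Ministry"), ("NGO", "Community")],
    "year two": [("Donor", "NGO"), ("NGO", "Ministry"), ("NGO", "Community"),
                 ("Community", "Ministry")],  # a new direct relationship
}

fig, axes = plt.subplots(1, len(snapshots), figsize=(10, 4))
for ax, (label, edges) in zip(axes, snapshots.items()):
    g = nx.Graph(edges)
    # Fixed layout seed keeps node positions roughly comparable across drawings.
    nx.draw(g, pos=nx.spring_layout(g, seed=1), ax=ax, with_labels=True,
            node_color="lightsteelblue", node_size=2000, font_size=8)
    ax.set_title(label)
plt.tight_layout()
plt.show()
```

Even a crude pair of drawings like this communicates a change (the community now deals with the ministry directly) that would be awkward to capture in an indicator.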

For any organization pursuing broader systemic impacts, learning about how the system has actually changed (and learning to influence it more effectively) requires some combination of these.

Coincidentally, I just came across a great primer on visual thinking that is especially good for the non-visually inclined.

(Visual thinking sketch notes from XPLANE.)

3. Connection between learning and Theories of Change

ODI’s Craig Valters put out an excellent blog post and report on theories of change in development. It resonated with themes from the DDD conversations: e.g. the idea that the specific tool you use isn’t as important as how you use it. It doesn’t much matter whether you’re filling out a ToC, logframe, or something else, so long as you’re doing it in a reflective, inclusive, and iterative process.

The full report, “Theories of Change: Time for a radical approach to learning in development”, is available at http://www.odi.org/sites/odi.org.uk/files/odi-assets/publications-opinion-files/9835.pdf.

Similarly, the outcome of the process isn’t as important as the process itself. The practice of regular theorizing is more important than the theory created at each stage—with obvious parallels to the idea that plans are useless but planning is essential. That said, future learning and adaptation over time hinge on how well you capture the theory at each stage for future assessment (perhaps through iterative visualizations).

Third, Valters offers a useful breakdown of learning that parallels the questions we should always ask when someone talks about accountability: learning for what, by whom, and in what ways? In particular, we need to avoid the tendency to think that learning is done at some central location, using data gathered from the fringes and then distributing implications outward. The most important learning may be that done by those at the front line of any given effort, and efforts to capture or measure it might actually undermine it.

Finally, there’s an obvious tension between the accountability of the results agenda (often confused with accountancy, as Valters notes) and the prerequisites of learning (openness, willingness to fail, flexibility, and so on). The two are not necessarily in conflict, and the political imperatives are such that learning is unlikely to supplant results in any case. However, the two could co-exist more effectively than they do. This will be helped by a growing body of evidence demonstrating the importance of learning for improving results.

4. Adaptive learning for accountability and open governance

Global Integrity has put the idea of adaptive learning at the heart of its strategy. Over the past several months, the aforementioned Alan Hudson led the team at Global Integrity through a very open strategy process—to the point where he actually tweeted a link to their evolving strategy in Google Docs and invited the world to comment. This aligns well with the values of an organization committed to accountability and transparency, but it’s still an uncommon move.

In any case, it looks like I’ll get to spend some time working with them over the next few months in refining what that strategy means in practice. That’s where the rubber really hits the road: I can opine all I want on a self-hosted blog, but the market dictates that I also help actual organizations do actual things. Yikes!

I had a chance to catch up with Alan Hudson yesterday, and the conversation brought me around to an idea that I’ve been trying to articulate for a while. Though admittedly still a bit abstract, here’s the idea.

We’ve made monitoring and evaluation—grouped together as M&E—a core expectation of any social or development effort. As I’ve argued before, M&E essentially serves a management function by supporting decisions at either the program/project level (for monitoring) or at the policymaker/donor level (for evaluation).

At any level, the practice is synonymous with merely tracking and measuring. You operationalize the practice by creating an M&E plan, hiring an M&E officer, and producing M&E reports. And that’s all well and good—I should know, because I’ve been that officer and written those reports.

However, both in M and in E, what we really care about is not the numbers that show up in the report. What we care about is what the numbers tell us and what we do with the numbers. This started to come out at last year’s M&E Tech conference, which focused heavily on the idea of feedback loops. It was an important recognition that the whole point is the changes that come about from M&E.

Put another way: what we really care about with M&E is what we learn and how we adapt in response to that learning. The next logical step would be to de-emphasize monitoring and evaluation as operational functions, replacing them with learning and adaptation: moving from M&E to L&A.

To make this concrete—perhaps for someone looking to apply complexity thinking, DDD, and related agendas in their work:

  • First, jettison the requirements for an M&E section in your project proposals and plans; replace that with an L&A plan.
  • Second, frame that plan around organizational processes and culture, rather than the research methods and indicators.
  • And finally, don’t hire M&E officers who think like researchers; instead, hire L&A advisors who think like consultants.

An important caveat is that L&A in practice would only create a first-order feedback loop, where information becomes available to inform a self-motivated change. A second-order feedback loop occurs when the information actually compels the change through some form of accountability. LA&A: learning, adaptation, and accountability. But I suspect that may be a bridge too far for much of the social and development sector.

Progress requires action. Action in any form—political, commercial, charitable, religious, etc.—requires crossing the divisions that exist in the world. Crossing divisions requires trust. Trust requires empathy.

Therefore: Progress requires empathy.

Roughly, that’s the thinking behind the use of empathy as an analytical category and practical instrument in the social impact space. Empathy appears in a variety of ways: fundraisers use it to unlock donations; designers build it with end-users; advocates leverage it to spur political action. In all cases, it aims to create a connection between individuals across some kind of divide.

On its face, empathy is a good thing. The world is better off when you consider the pain someone else suffers alongside your own. We become less nationalistic, less parochial, and more willing to cooperate.

However, empathy can be misapplied. A crude use of empathy leverages caricatures and misrepresentations—think photos of emaciated children—to provoke emotional responses. These are effective for raising money, but they undermine long-term understanding. Positive caricatures do their own form of damage, as when designers make heroic assumptions about their end-users.

These misapplications have sometimes been called false empathy, but as in most of life, it’s a matter of degrees. Understanding the nuances of empathy, in its more and less honest forms, requires disaggregation along two axes: symmetry and blend.

Symmetry: Unilateral v. reciprocal empathy

Meaningful empathy requires some amount of two-way connection. It requires both give and take. Contrast two versions: 1) unilateral empathy experienced when viewing a heart-wrenching photo of a natural disaster or even reading a detailed news story, where the “objects” of empathy do not even know they are playing that role; and 2) a reciprocal conversation you have directly with someone impacted by the same events. While neither gives you complete understanding, you are moved and drawn in by the latter in a much more honest way.

Social media has lowered the barriers to all of these connections, but broadcasts (think Kony 2012) still have more reach than conversational exchanges. Attempts to simulate a two-way exchange (e.g. giving the audience a proxy voice in the story, or providing letters from sponsored children) can spark more empathy, but it’s questionable whether it involves better empathy.

Better empathy means allowing some form of feedback loop—especially critical in design processes—and also leaving a piece of yourself behind. Exchange programs, living in unfamiliar places, and other immersive experiences allow the building of lasting relationships that are the epitome of reciprocal empathy.

Blend: Emotional v. intellectual empathy

It might seem obvious that empathy is always emotional, but there is a less stirring form of understanding that deserves equal billing. Intellectual empathy is the ability to comprehend someone else’s viewpoints and opinions, even when their premises, methods of reasoning, and conclusions may be very different from your own.

“I don’t see how anyone could possibly think…” is the surest indicator that intellectual empathy is missing, and partisan politics is its largest black hole. Otherwise intelligent people find themselves unable to wrap their heads around the logic of political opponents. However externally flawed someone’s analysis may be, there is always some amount of internal logic to it. Intellectual empathy is the ability to see that logic.

While we might take the symmetry of reciprocal empathy as unequivocally better than the unilateral form, the emotional v. intellectual divide calls for a balance. Emotional-only empathy lacks self-reflection and facilitates simplistic responses. Intellectual-only empathy risks a form of cold calculation that de-humanizes; it’s the empathy of early ethnographers, studying their subjects but unwilling to relate with them as equals.

Efficient empathy

Empathy doesn’t occur naturally in the quantities needed to make the social progress we want. So the sector manufactures it. However, when manufactured or induced for a specific purpose, empathy struggles to be honest. It gets caught in the goals for which it was manufactured, leaning heavily toward the emotional and unilateral variety. The manufacturers will point to market pressures, the demands of efficient production, and the fact that reciprocal empathy doesn’t scale. These are valid points, as far as they go.

In the case of environmental pollutants, we use regulation to reduce externalities. Is it time to regulate empathy pollution? Or maybe organize a consumer (donor) boycott of inorganic empathy? At a minimum, we can look to empathy innovators who demonstrate the market demand for more honest forms.

Of course, there’s no such thing as perfect empathy. Reciprocity and a balance of emotional/intellectual won’t fully bridge divides. We never truly understand someone else’s experiences. But we can get a lot closer than we currently do.

My SSIR post seemed to get a fair amount of attention, which is always nice. But of all the retweets and shares, my hat goes off to Jennifer Lentfer for picking out one of the most critical lines of the piece. To wit:

It’s a point worth underlining. A social enterprise can drive better management and process improvements by capitalizing on its ability to focus—an ability that’s hard to find in larger organizations with broader agendas. If we’re thinking like capitalists or economists, this is a competitive advantage for the social enterprise. Anyone interested in furthering social impact might conclude that social enterprises are the best investment. (And this is more or less what’s happened in the past decade, though rarely put in those terms.)

But if we’re thinking about the social impact space as an evolving ecosystem—creating variations, selecting them, and replicating them for further variation—then we’d be more interested in the roles played by a focused social enterprise in the broader system. That might make us more critical of investments in social enterprise, except to the extent that they serve those roles.

I replied to Lentfer’s tweet and then Ian Quick chimed in with the logical extension of that point:

What if, indeed.

There’s an analogy to be made between the social impact funding space and crude natural resource management. Sometimes we use too much fertilizer, which seeps into the waterways and causes algae blooms. Sure, we grew the plants we wanted, but we also ended up with all this other stuff and it’s killing the fish.

You may notice a parallel to my post a few weeks ago about social movements v. social entrepreneurs. When we consider only the single thing we’re funding or supporting right now, we miss the bigger picture. This results in both unintended negative consequences and missed positive opportunities.