Several folks reached out in response to the last post, on learning and adaptation. It seems like establishing and promoting learning processes within organizations is on many people’s minds. In that light, a few followup thoughts:

1. Types of organizational learning

We should all be familiar with the idea of individual learning styles—visual, auditory, tactile, etc. There are parallels in organizations: some learn by seeing proof and incorporating academic evidence into organization-wide processes (analogous to visual learning); others learn in more fluid conversation with the sector, with lessons absorbed by individual staff members as part of their own professional development (call that auditory); and others learn by doing, drawing lessons primarily from their own work and the feedback loops within it (tactile).

Variations and combinations of these abound, but the broader points about learning styles apply: different organizations learn in different ways; and different types of learning serve different organizational priorities.

The term “learning” may actually be a bit problematic, if we conceive of a hierarchical relationship between the “knowers” and the “learners”—the professional analogue of schoolchildren at desks. This model might apply to the incorporation of academic findings (on, say, the effectiveness of humanitarian cash transfers) into programmatic work or policies. In such cases, there are “knowers” separate from the “learners”; your challenge as an organization is to learn from those who know.

However, the most critical learning in organizations involves discoveries that aren’t verified by any sort of external authority. This learning is inherently internal. It is uncertain (managers must make daily decisions based on incomplete evidence, with less “rigor” than academics); tacitly held by staff (often without mechanism or opportunity to be stated explicitly); and relevant only to that organization’s context and work.

This suggests a corollary to the classic distinction between known-unknowns and unknown-unknowns: in organizational learning, there are the somebody-else-knows and the nobody-else-knows. We bring the former in from the outside, and generate the latter internally. Learning what others know is a necessary but not sufficient condition for being a learning organization. You have to generate new learning internally as well.

2. Beyond indicators: Iterative visualizations and aggregated narratives

In August, I had a chance to attend a good conversation at USAID about measuring systemic change. Measuring systemic change is central to understanding the impact we’re having, but systems are so hard to measure at a single point in time that tracking change with any kind of rigor can seem impossible.

One of the insights from the USAID event (hosted by the agency’s Learning Lab, naturally) was that approaches to measuring systemic change fall into three broad buckets. One is the use of relevant indicators. Important, but not as interesting as the other two:

  • Iterative visualizations. Showing systems graphically helps us to understand them more quickly. It also allows a certain flexibility and uncertainty through the visual presentation of relationships and processes—in contrast to words, which often lead us to specify elements more precisely than we’re able (or couch them in caveats). Similarly, iterating on visualizations over time shows us how the system changes (again, without the pressure to articulate that change precisely). Imagine how a social network analysis might change over time. (More examples from the event here.)
  • Aggregated narratives. Drawing on methodologies like Most Significant Change (MSC), the insight here is that the individuals impacted are their own best judge of how and whether they’ve been impacted. MSC is a participatory and indicator-free way to have program participants (beneficiaries, users, etc.) articulate what changed for them personally; those insights can be aggregated to form a more complete picture.

For any organization pursuing broader systemic impacts, learning about how the system has actually changed (and learning to influence it more effectively) requires some combination of these.

Coincidentally, I just came across a great primer on visual thinking that is especially good for the non-visually inclined.

[Embedded: “Visual Thinking Sketch Notes”]

3. Connection between learning and Theories of Change

ODI’s Craig Valters put out an excellent blog post and report on theories of change in development. It resonated with themes from the DDD conversations, e.g. the idea that the specific tool you use isn’t as important as how you use it. It doesn’t much matter whether you’re filling out a ToC, a logframe, or something else, so long as you’re doing it through a reflective, inclusive, and iterative process.

[Embedded report: “Theories of Change: Time for a radical approach to learning in development”]


Similarly, the outcome of the process isn’t as important as the process itself. The practice of regular theorizing is more important than the theory created at each stage—with obvious parallels to the idea that plans are useless but planning is essential. That said, future learning and adaptation over time hinge on how well you capture the theory at each stage for future assessment (perhaps through iterative visualizations).

Third, Valters offers a useful breakdown of learning that parallels the questions we should always ask when someone talks about accountability: learning for what, by whom, and in what ways? In particular, we need to avoid the tendency to think that learning is done at some central location, using data gathered from the fringes and then distributing implications outward. The most important learning may be that done by those at the front line of any given effort, and attempts to capture or measure it might actually undermine it.

Finally, there’s an obvious tension between the accountability of the results agenda (often confused with accountancy, as Valters notes) and the prerequisites of learning (openness, willingness to fail, flexibility, and so on). The two are not necessarily in conflict, and the political imperatives are such that learning is unlikely to supplant results in any case. However, the two could co-exist more effectively than they do. This will be helped by a growing body of evidence demonstrating the importance of learning for improving results.

4. Adaptive learning for accountability and open governance

Global Integrity has put the idea of adaptive learning at the heart of its strategy. Over the past several months, the aforementioned Alan Hudson led the team at Global Integrity through a very open strategy process—to the point where he actually tweeted a link to their evolving strategy in Google Docs and invited the world to comment. This aligns well with the values of an organization committed to accountability and transparency, but it’s still an uncommon move.

In any case, it looks like I’ll get to spend some time working with them over the next few months in refining what that strategy means in practice. That’s where the rubber really hits the road: I can opine all I want on a self-hosted blog, but the market dictates that I also help actual organizations do actual things. Yikes!

  1. Dave, thanks for the shout-out on the discussion USAID convened on measuring systems change and capacity development. Two points of clarification. First, the event was sponsored by USAID’s Local Systems Community and Local Solutions team. Second, we posited that indicators can be used as a third way to measure systems change (in addition to iterated visualizations and aggregated narratives). Of course they need to be used carefully: selecting measures that are locally meaningful, and using them in conjunction with a framework that helps you identify which dimensions of a system are of interest. Indicators may be old school, but I do think they have a place.


  2. Readers may be interested: in addition to the Local Systems and Local Solutions work Tjip notes, USAID is also integrating Collaborating, Learning & Adapting (CLA) throughout our field programs in a way that is highly customized to country context and program, and that supports reflection and iterative course correction on our part and with/by the organizations we fund to implement our programs. Learn more at .

