Sorry for the radio silence in recent weeks. Take it as a sign that I haven’t quite figured out the secret to freelance work/life/blogging balance. Fortunately, I had a chance to take a break last week. I headed down to Rio, glanced briefly at the beach, and then spent three days in a hotel conference room. Huzzah. (On the plus side, it was one of the least conference-room-like conference rooms I’ve been in.)
I was there for the third TA LEARN workshop, hosted by the Transparency and Accountability Initiative (TAI). Around 70 practitioners, researchers, funders, and the occasional consultant gathered to assess and advance the state of practice on transparency, accountability, open governance, and related issues.
In particular, the workshop focused on learning in transparency and accountability work. While that learning takes many forms, I would broadly place it in two categories:
- Learning about transparency and accountability. I.e. what’s the state of practice, what’s generally working or not, what evidence do we have or (more often) lack. This sector confounds many of the standard tools of development evaluation and learning (indicators, attribution, etc.) so having dedicated space to dive into the nuances of what we’re learning is critical.
- Learning for transparency and accountability. This is the more concrete, real-time, and context-dependent learning that happens in the midst of programs or campaigns. This learning drives changes of direction, but offers fewer lessons about broader practice. Though useful in any sector, there’s a strong case to be made that this form of learning is an order of magnitude more important in accountability work: the unknowable/shifting aspects of governance and the political nature of change limit the utility of advance planning, thereby increasing the need for ongoing learning. In other words: when broader learning is harder, ongoing learning is critical.
The workshop tackled these forms of learning with varying success. I feel like we hit the mid-range on both of the above: we exchanged solid ideas and experiences, but we neither crafted a grand manifesto nor did we trade tools and tips at the most concrete, tactical level.
For me, the most interesting conversations were around the second category: ongoing, real-time learning. I had three main takeaways. In the interest of digestibility, I’ll save #2 and #3 for later posts.
Takeaway #1: Learning is adaptation.
What does it mean for an organization to learn? This question was posed by a fellow participant one evening over dinner (where the most interesting conversations at many conferences occur).
In one sense, learning happens at multiple levels: individuals learn new skills; an organization learns that its strategy needs to change; the sector learns to support or abandon particular approaches. But in another sense, learning only happens at one level: the individual. Databases of PDFs and “best practices” are not learning. Organizations are made up of people, so an organization only learns if its people learn and are able to put that learning into practice in the organization.
For example: A program manager or campaign organizer may learn that a certain mobilization approach doesn’t resonate with their constituents. If she’s able to ditch that approach and switch to another, we can say that the organization learned. However, if something forces the organization to continue using that approach—e.g. because the project indicators are unchangeable or because the approach relies on a flashy tech tool that’s great PR—then individual learning fails to translate into organizational adaptation. At that point, what sense would it make to say that the organization has learned anything?
So the sine qua non of organizational learning is adaptation. And as a corollary, adaptation is the best indicator that learning has occurred. If you’re going to hold someone accountable for learning, look for their adaptation.
And stay tuned for takeaway #2: learning must be user-owned.