Commenting on a paper about monitoring, Rick Davies notes:

Complexity theory writers seem to give considerable emphasis to the idea of constant change and substantial unpredictability of complex adaptive systems (e.g. most human societies). Yet surprisingly enough we find more writings on complexity and evaluation than we do on complexity and monitoring. …

This imbalance is strange because monitoring typically happens more frequently and looks at smaller units of time than evaluation. You would think its use would be more suited to complex projects and settings.

Good point. Davies goes on to say that big businesses have started using data mining and analytical tools to better understand customer behavior and preferences. The implicit question is: Why not development organizations? Surely, we face the same level of complexity as, say, a bank or an online retailer.

There’s a lot of value in the idea that Big Data will transform how organizations analyze, understand, and respond to their context. But in the low-connectivity, low-computer-literacy contexts where many NGOs and social enterprises operate, taking advantage of Big Data requires prohibitively high levels of organizational investment. We’re years away from making use of it for daily or even annual decision-making. If we want to apply complexity thinking to development implementation — rather than just research and evaluation — then we need to learn how to monitor and establish feedback loops in a context of complexity but Little Data.

For some insight on that, the paper that Davies posted includes this quote from another source:

The key to learning is not analytical method, but organisational process; and the central methodological concern is not with the isolation of variables or the control of bureaucratic deviations from plans, but with effectively engaging with the necessary participation of system members in contributing to the collective knowledge of the system.

(Emphasis added.)

That line was written in 1981, long before all our talk about complexity in development contexts. It’s still relevant today.

  1. Dave, great post! You really struck a very important note: there is a lot of talk nowadays on feedback loops and evaluation, much less on monitoring (which should enable a more effective evaluation, no?). We at UNDP are starting to experiment with using narratives for both monitoring and evaluation together with Cognitive Edge. Let me expand on that: we are running 6 projects in parallel where we’d like to test continuous story capture for identifying issues (research), monitoring the effectiveness of various interventions (running quick, workable policy solutions in the field and using stories to test their impact), and evaluating already performed work. We’ll be writing up our experience and lessons learned as we go along. Thanks for your post again!



