Forget about roots and embrace the webs: What complexity means for our traditional views on causality

Our tools for identifying cause and effect in the world are matched to a particular view of how causality works. Ideas from complexity theory are forcing us to update our views on causality, so our tools must be updated as well. Before getting to the updates, I want to start with some of the tools used under a more traditional, linear, and simple view of causality.

Old tools 1: Root causes and growth diagnostics

There’s a process used by many consultants called root cause analysis. The basic structure works like so: Start with whatever problem you want to solve, then break it into constituent parts and causes. Go down another level, breaking those causes into their own causes. Keep going until you hit the “root cause” — and you’ve found the thing you need to address.
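
To make the structure concrete, here's a minimal sketch in Python (the problem, causes, and labels are all invented for illustration) that represents the breakdown as a tree and walks down to the leaves, i.e. the candidate root causes:

```python
# A toy root-cause tree: each problem maps to its contributing causes.
# Leaves (causes with no further breakdown) are the candidate root causes.
tree = {
    "declining revenue": ["fewer repeat customers", "weak new sales"],
    "fewer repeat customers": ["slow customer support"],
    "slow customer support": [],    # a leaf: candidate root cause
    "weak new sales": ["understaffed sales team"],
    "understaffed sales team": [],  # another leaf
}

def root_causes(problem: str) -> list[str]:
    """Walk down the tree and collect the leaves beneath a problem."""
    causes = tree.get(problem, [])
    if not causes:
        return [problem]
    leaves = []
    for cause in causes:
        leaves.extend(root_causes(cause))
    return leaves

print(root_causes("declining revenue"))
# ['slow customer support', 'understaffed sales team']
```

Note that the whole method assumes the structure really is a tree: every analysis terminates at some leaf. That assumption will matter later.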

Many years ago, I worked for a consulting firm that applied this analysis to particular organizations and their management’s concerns. At this relatively simple and self-contained level, it’s a useful tool. The inputs were both qualitative (e.g. interviews with executives) and quantitative (e.g. benchmarking data). The analytical process was largely inductive. The output was displayed visually as a tree diagram. It looked something like this:

[Image: a tree diagram captioned “Root cause or growth diagnostics?” (Source.)]

Except that this particular diagram came from a slightly different sort of analysis.

A few years after that consulting work, I found myself in grad school learning about growth diagnostics in international development. This approach shares a fundamental insight with the root cause analysis that I had used before, which is this: there may be many shortcomings in an economic system, but growth faces certain binding constraints which should get priority for reforms. The concept of binding constraints isn’t revolutionary — think of bottlenecks in a production process — but creating a framework for applying it to entire economies turns it into a very powerful tool.
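
The bottleneck intuition is easy to state in code. In this minimal sketch (the production stages and capacities are invented), total throughput is set by the lowest-capacity stage, and relieving any other stage changes nothing:

```python
# Hypothetical production line: units per hour each stage can handle.
capacities = {"cutting": 120, "assembly": 45, "painting": 90, "packing": 80}

throughput = min(capacities.values())
bottleneck = min(capacities, key=capacities.get)
print(f"Throughput: {throughput}/hr; binding constraint: {bottleneck}")
# Throughput: 45/hr; binding constraint: assembly

capacities["painting"] = 150          # relieve a non-binding constraint...
print(min(capacities.values()))       # ...and throughput is still 45
```

Growth diagnostics applies the same logic to an economy: find the constraint that actually binds, and direct reform efforts there.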

One strength of growth diagnostics is that it considers each case individually. The approach is often held in contrast to the one-size-fits-all policy prescriptions of the Washington Consensus. Like root cause analysis, growth diagnostics brings order to the problem in question and guides our thinking on possible solutions.

However, it is still largely a one-way, top-down exercise. Dani Rodrik described a successful diagnostics exercise as “moving downwards in the decision tree, rather than upwards or sideways.” And in a growth diagnostics handbook, Hausmann et al. noted that:

Growth diagnostics goes from a very aggregate outcome, such as the growth rate of an economy, to its potential causes. As such, it is a top-down approach. (20)

Keep in mind that this is a root cause analysis diagram, and as such should be read from top to bottom, and not side to side or from the bottom up. (56)

(Emphasis mine.)

It starts from the big problem, then moves down to its causes. Only then are solutions considered. Another paper Rodrik wrote on the topic encapsulated this thinking in the title: “Diagnostics before Prescription.”

This top-down thinking is a strength and a weakness. It’s a strength because it orders the analysis in a convenient and ultimately powerful way. However, there’s no reason to think that a binding constraint, once identified, will yield to any policy or programmatic efforts. I saw the same thing when applying root cause analysis to organizations.

At the scale of national economies, political interests are frequently responsible for the binding constraints. Hausmann et al. acknowledged this, but offered little guidance on how the diagnostics approach should deal with it. Diagnostics is generally focused on economic factors. You could extend the same type of analysis into political factors, but then it would be individuals and interest groups who are at fault. The analysis could no longer remain technocratic as the conclusions become contentious and (surprise) political.

For example: Suppose you traced one cause of low growth to high transportation costs, which are due to poor infrastructure in most areas. And further suppose that the poor infrastructure was due to political elites favoring other areas, or corrupt officials/contractors siphoning off money. Those problems won’t yield to mere policy fixes. And I guarantee that analysis won’t be gratefully received by national leaders who were simply waiting to have their eyes opened to the problem.

These are the weaknesses of growth diagnostics: lack of policy guidance and inability to grapple with politics.

Old tools 2: Intervention points and RCTs

Fortunately, the backers of growth diagnostics don’t claim that it’s the final word in analysis. The Hausmann et al. piece called diagnostics a “natural complement” to the more bottom-up approach of cost-benefit analysis on particular projects or policies. And in another piece, Rodrik called for pluralism, admonishing development economists for often believing in the “one right way” — whether a universal fix or a universal way of learning.

For example, he described how the “macro” of growth diagnostics relates to the “micro” of randomized controlled trials:

Ideally, diagnostics and randomized experiments should be complementary; in particular, diagnostics should guide the choice of which random experiments are worth undertaking. Any developmental failure has hundreds of potential causes. If the intervention that is evaluated is not a candidate for remedying the most important of these causes, it does not pass a simple test of relevance. (16)

I like how Rodrik framed the relationship between the two methods. Diagnostics starts with the big problem and works downwards, seeking root causes and areas for possible solutions. On the other hand, RCTs start with interventions and work upwards, rigorously testing their impacts on bigger problems. If growth diagnostics seeks root causes, then the parallel concept for RCTs could be called intervention points.

When the dust settles, those two should be exactly the same: we’re looking for interventions at the root causes of poverty, poor health outcomes, hunger, and more. As a framework, intervention points strikes me as more intuitively useful than root causes — ultimately we’re interested in impact, bettering lives, doing things — but let’s keep in mind that these are basically one and the same.

The strength of RCTs is that they pin down these intervention points with a large degree of certainty. The method does this by controlling for all factors other than the intervention being tested. Those other factors are stripped away in the analysis. That leaves us with a pretty clear idea about the causality for that intervention point. However, causality isn’t the same as explanation. While establishing an intervention’s causality is helpful in some regards, it doesn’t tell us much about whether we should replicate that intervention in another context. To answer that question, we need to understand how and why it worked. We need an explanation. Some RCT proponents claim that repeating the study in different contexts will bolster the external validity of the results, but there’s increasing recognition that RCTs must be matched with other methods.
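
A stylized simulation makes the logic visible (the numbers and the “unobserved factor” are invented; this is a sketch of randomization in general, not of any actual trial). Because treatment is assigned at random, the unobserved factor is balanced across the two arms, and a simple difference in means recovers the true effect:

```python
import random

random.seed(0)
TRUE_EFFECT = 2.0  # the effect the trial should recover

def outcome(treated: bool) -> float:
    # An unobserved factor (say, household wealth) drives most of the outcome.
    unobserved = random.gauss(10, 3)
    return unobserved + (TRUE_EFFECT if treated else 0.0)

# Random assignment balances the unobserved factor across the two arms.
treatment = [outcome(True) for _ in range(5000)]
control = [outcome(False) for _ in range(5000)]

estimate = sum(treatment) / len(treatment) - sum(control) / len(control)
print(f"Estimated effect: {estimate:.2f}")  # close to 2.0
```

Notice what the simulation does not tell us: why the effect is 2.0, or whether it would hold somewhere the unobserved factors are distributed differently. That is exactly the external validity gap.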

Political factors pose a major challenge to RCTs as well. Even for relatively straightforward interventions, local politics have the potential to cause unintended consequences that would confound measurement of the results. For explicitly political interventions, the situation is even worse, as contextual factors become central to the program’s execution and impacts.

So even if we pin down one causal consequence at an intervention point, we still lack certainty about other consequences and what they mean for the intervention’s applicability to another context.

The challenge: Politics, complexity, and the rootlessness of causality

Both RCTs and growth diagnostics share a blind spot when it comes to politics. Maybe that shouldn’t be surprising, as these methods are promoted by economists. There’s also a much deeper shortcoming that these two approaches share. The complementary nature of diagnostics and RCTs lies in their symmetry (top-down vs. bottom-up) as well as their simplifying tendency: both strip away the complexities of reality in an effort to isolate certain factors.

This is highlighted in the very term root cause. We use this term all the time, but no one really believes that the causality behind something can be traced back to a single root. Not for a specific event, and certainly not for complex social phenomena. Not only are causes multiple, but feedback loops make them circular: poverty is caused by lack of education is caused by government failure is caused by low government capacity is caused by lack of tax base is caused by poverty. We went from poverty back to poverty in five steps. That chain could also include health outcomes, agricultural production, violent conflict, or countless other factors. (NB: The negative framing doesn’t matter to this. You could do a positive version too: increased earning potential is caused by better nutrition is caused by new seed varieties — and so on.)
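
To make the circularity concrete, here's a minimal sketch that encodes the chain above as a directed graph and searches for a loop (the edges simply restate the chain; in a true tree, this search would always come up empty):

```python
# The causal chain above as a directed graph: edge A -> B means "A causes B".
causes = {
    "poverty": ["lack of education"],
    "lack of education": ["government failure"],
    "government failure": ["low government capacity"],
    "low government capacity": ["lack of tax base"],
    "lack of tax base": ["poverty"],
}

def find_cycle(graph: dict, start: str) -> list[str] | None:
    """Depth-first walk; return a path that loops back to its start, if any."""
    stack = [(start, [start])]
    while stack:
        node, path = stack.pop()
        for nxt in graph.get(node, []):
            if nxt == start:
                return path + [nxt]
            if nxt not in path:
                stack.append((nxt, path + [nxt]))
    return None

print(" -> ".join(find_cycle(causes, "poverty")))
# poverty -> lack of education -> government failure ->
#   low government capacity -> lack of tax base -> poverty
```

Once the graph has cycles, “walk down to the root” is no longer a well-defined instruction: the walk never terminates.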

If you were to display this visually, it wouldn’t be a tree with roots. It would be a web. Actually, it would be a multidimensional mish-mash of overlapping feedback loops and tenuous but very real causal links between countless ill-defined nodes.

In fact, it might look something like this:

[Image: the “Afghanistan Stability / COIN Dynamics” slide, a dense network diagram of causal arrows. (Source.)]

In 2010, this chart quickly became emblematic of the US military’s reliance on PowerPoint. The New York Times ran it under the headline, “We Have Met the Enemy and He Is PowerPoint.”

More importantly, for our purposes, it underscored the complexity of US engagement in Afghanistan. The full version of the slide deck is even more bewildering. General McChrystal saw it and noted: “When we understand that slide, we’ll have won the war.” We all laughed about this muddled depiction of the Afghanistan conflict, although in our honest moments we silently worried that even this diagram was a simplification.

Yet none of this stops academics, advocates and journalists alike from discussing social problems as if causality were linear and identifiable. Find your own examples: just google the phrase, “root cause of…” followed by your favorite social issue or topical news story (e.g. HIV epidemic, financial crisis, Syrian conflict, whatever). Our language gets especially confused when we refer to something as being “both a root cause and a consequence.”

As I said above, what we’re really looking for are intervention points rather than root causes. But while that framing is more future-oriented, it still assumes that an intervention will have known consequences. In a complex causal web, that’s not a valid assumption. In the real world, causality doesn’t work like that.

Of course, there’s utility in the simplifying approaches of growth diagnostics and RCTs. All methodologies simplify the world to make it understandable, just as all narratives emphasize certain elements of the story while excluding others. In some situations, that simplification offers us enough to act. The critical step is to be cognizant of the limits of our knowledge — to know what we do not know. Methodological pluralism is needed for that. It must extend beyond diagnostics, RCTs, and even economics. We need other tools as well.

New tools: Grappling with a complex causal web

The world is incredibly complex. I could tell you that it’s “more complex” or “changing more rapidly” than ever before, but I don’t think that’s true. Pundits and consultants wave their hands and say “increasing complexity!” because it sounds cool and because it frightens audiences and clients into coughing up the cash. I’ve seen no evidence that this idea is anything more than chronocentrism at work.

Yet the world is still incredibly complex. It’s just as complex as it’s always been. The difference now is that we have more tools to grapple with that web of complex causality. We are complexity-enabled in ways that we never were before.

The tools fall into five broad categories:

1. Availability of data: Digital interactions have dramatically increased the amount of data available for analysis. Major corporations that do a lot of business online have the most, due to customer purchases and behavior. Other companies and organizations will catch up eventually. A lot of claims have been made for how Big Data will revolutionize analysis. However, it doesn’t seem like there are accepted methodologies for analyzing Big Data yet, so availability alone might not be enough. Big Data might lead to research risks like cherry-picking, false precision, stripped caveats, or a technocratic veneer on deeply political results. Still, Big Data has potential.

2. Processing power: I won’t belabor this point, since growth in computing power is a well-known phenomenon. The complexity-enabling aspect is that researchers can process the Big Data, and also build more detailed and nuanced models of reality. There is also a human side to increased processing power due to communication systems. Just as better computer chips allow faster digital data processing, better communication systems allow faster human data processing. Learning ultimately results from human and organizational processes. Computers crunch numbers but only people can give them meaning. We do that best through discourse with other people, and communications technologies are making that easier.

3. Interdisciplinary approaches: Methodological pluralism within economics is gaining ground, and so are interdisciplinary approaches. Whereas previously we relied on the different disciplines to strip away complexity on their own narrow topics of interest, now we see that the walls between disciplines are very porous and that this allows us to grapple more directly with complexity. Physicists are helping to explain traffic jams, design thinking is tackling international development problems, and the entire Freakonomics franchise is built around the application of economic methods to other topics. Collaboration across fields is yielding new possibilities for understanding the complexity we face (though improvements in university structures and funding could accelerate this).

4. Analytical frameworks: Complexity science itself offers a powerful lens. Concepts like emergent properties, feedback loops, and non-linearity help us understand events like the Arab Spring or shifts in ecosystems (a minimal sketch of non-linearity follows this list). As the complexity lens is applied to more problems and situations, we may see new analytical frameworks that incorporate complexity concepts. We’re already seeing this in disaster preparedness/recovery with the idea of resilience, which may have its own frameworks as it matures. In development more generally, maybe we’ll see a replacement for the dominant logical framework, which is so ill-suited for describing complex programs.

5. Organizational and operational models: The last piece of this will be new ways of doing things — which is what this was all about in the first place. Some early efforts are underway. For example, Owen Barder and Ben Ramalingam describe cash-on-delivery aid as a complexity-aware approach; though I disagree with them on whether COD aid qualifies, the point here is that complexity thinking can change the way we address problems. Another example: promoting resilience may also involve new ways of organizing government or civic associations, or new funding mechanisms for recovery.
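
On the non-linearity mentioned in point 4: the standard toy example is the logistic map, where two nearly identical starting points end up far apart after a few dozen steps. This sketch is textbook material rather than anything specific to development; it simply shows why small differences in initial conditions defeat linear extrapolation:

```python
# Logistic map: x' = r * x * (1 - x). At r = 3.9 the dynamics are chaotic,
# so tiny differences in the starting point compound at every step.
def trajectory(x: float, r: float = 3.9, steps: int = 40) -> float:
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

a, b = trajectory(0.2000), trajectory(0.2001)
print(a, b, abs(a - b))  # typically a large gap from a 0.0001 difference
```

A system like this can be fully deterministic and still unpredictable in practice, which is exactly the property that makes “an intervention with known consequences” a shaky assumption.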

Some of these tools are in place, while others are still developing. We’re leaving behind the days when a simple view of causality was the norm. These changes will cut across sectors. In international development in particular, a new paradigm is emerging that integrates elements of the old one with influences from other fields. We’re developing the tools that will open our eyes and let us see the world as it is. Once we do that, there’s no telling what it will mean for our understanding and our impact.


10 thoughts on “Forget about roots and embrace the webs: What complexity means for our traditional views on causality”

  1. I came across this fascinating post and your blog thanks to a link on Twitter. I am also looking at this type of question as a historian and was wondering whether you were familiar with Robert Geyer and Samir Rihani’s Complexity and Public Policy: A New Approach for Politics, Policy and Society (London, Routledge, 2012), which may be of great interest to you if you aren’t.

  2. Thanks for the interesting post Dave, and you’re spot on about the “things are increasingly complex” line; I’ve heard that too many times.

    Having worked a bit on complexity science, and then on something close to growth diagnostics, I hadn’t made this link before, but I think you make a good point about the reductionist underpinnings. What I tried to do was to look for constraints that play an important role in the system and would have a number of knock-on or catalytic effects if relieved; I’d say that kind of approach would be supported by systems theory, perhaps. Addressing all problems at once isn’t a possibility, so some choices have to be made!

    If you or your readers are interested, please see some of our work at the ODI on complexity:
    – Our working paper from 2007 tried to explain some of the main concepts of the complexity ‘lens’ as you put it: http://www.odi.org.uk/resources/docs/833.pdf
    – and my recent follow-up tries to synthesise the main implications for policy and practice along 3 dimensions: http://www.odi.org.uk/sites/odi.org.uk/files/odi-assets/publications-opinion-files/6811.pdf

    Cheers

    Harry

    • Harry, good point about systems theory informing growth diagnostics. That’s why I like the metaphor of the “lens” (which I can’t claim as my own – I think I stole it from something Ben Ramalingam wrote). A lot of the more reductionist forms of analysis could find new life with a different framing.

      Thanks for the links to the ODI pieces. Much appreciated. I think I saw the shorter one before but not the longer one. I’ll check it out shortly.

  3. Excellent post and synopsis. What do you think of some of the approaches that do try to grapple with politics, such as the Drivers of Change or other political economy analyses used by some donors?

    Harry, thanks too for the ODI briefing papers, very helpful to be reminded of them.

    More broadly, it seems that there is an underlying point about the strategic direction of what development actors are trying to do that relates to their choice of how. We are reaching a broader consensus with different meanings for development that themselves embed different results – such as the ECDPM work on capacity development, for example, which defines it as both a pathway to impact and an impact in itself. By defining development more broadly, in other words, we require more tools, and less reductionist approaches, to see how to get it; our use of more tools also accelerates our broadening definitions. From a distance, the story seems to be one of the shifting power dynamics within the development industry, though whether that’s more about changes in relative strength (growth of BRICs, value of remittances) or changes in ideology (post-Cold War/transition reducing the primacy of the lens of liberal free markets versus state-planned) is hard to say.

  4. I would worry about building on this optimistic assessment of multivariate analysis in a RCT: “The method does this by controlling for all factors other than the intervention being tested. Those other factors are stripped away in the analysis.” I think the only reason medical research asserts that this claim is valid about RCTs is lack of interest in the accuracy of this sort of generalization. Multivariate analysis in an everything-but-the-kitchen-sink model is not going to provide this kind of insight at all. The justification for doing it is that it hypothetically could, but in model design the limitations of the statistical process and the data going in tend not to be taken into account. Qualitative research and common sense would do more good than a model of this kind, to be blunt. The way the statistically significant results are winnowed down to a few of the measured variables when there are too many variables in the model is a fluke. It reflects on poor design, and the design can’t really be salvaged for health research purposes.

    Look at it this way. Temporality is a huge problem with the validity of survey data being used to model cause-and-effect processes in health research. And some of the secondary variables thrown in to correct for their effects on the variable of interest will be evaluated in such a sloppy way in real world research that you won’t know anything about temporality from the way the results are reported. They just aren’t taking rigorous cause-and-effect research seriously when they use this type of model. I would put a lot more confidence in root cause analysis than in an RCT.

    Large trials with randomization can provide useful data, but it’s important not to overstate the level of complexity this method can support in an analysis. Population-based research is interesting, but very problematic, and RCTs breeze over the problems because they’re too difficult to solve, and they don’t want to get bogged down in a serious attempt.

  5. Reblogged this on Think. Write. Act. and commented:
    Some good thoughts on applying a lens of complexity to issues. Such interconnected and deeply rooted issues like conflict and SGBV in DRC cannot necessarily be explained linearly or even simply. They are complex, interconnected and a mashup of all sorts of ideas, challenges, narratives and constructs.
