Most theories of change we inherit at the start of an evaluation engagement are flowcharts. Boxes for activities, outputs, outcomes, impact; arrows that move from left to right; a colour scheme that signals coherence. The diagram is on slide nine of the proposal deck. It has been approved by a board.
It is also, in most cases, useless as a guide to evaluation, because it does not commit the programme to anything that evidence could disconfirm.
The decorative TOC has a tell. Every arrow points up and to the right. There are no branches, no failure modes, no assumptions stated specifically enough to be wrong. The boxes describe categories — "improved livelihoods," "strengthened systems" — rather than mechanisms. Read at face value, the diagram is committed only to the claim that doing things will produce other things, which is true of most activity in the universe.
A useful theory of change does something different. It names the mechanism through which the activities are supposed to produce the outcome, identifies the assumptions on which that mechanism depends, and specifies what evidence would tell you the assumption is false. The test is uncomfortable: can you, before fieldwork, write down a finding that would force you to revise the theory? If the answer is no, what you have is a programme description, not a theory.
The mechanism question is where most TOCs collapse. A livelihoods programme's flowchart that reads "training → enterprise formation → income gains" hides at least four assumptions doing the actual causal work: that participants have access to start-up credit, that local markets can absorb additional supply at non-trivial margins, that household labour allocation permits sustained enterprise time, and that the trained skill is the binding constraint on enterprise formation rather than something else. Each of those assumptions is testable. None of them appears in the diagram.
This matters because the TOC is, or should be, the pre-specification of what evidence would update the programme's beliefs. Evaluators arriving after baseline are doing one of two things: testing the theory, or producing post-hoc narratives that fit whatever the data showed. The difference between those two activities is whether the theory was specific enough beforehand to be in tension with at least some possible findings.
Contribution analysis, when it is taken seriously, imposes the discipline that flowcharts evade. The method requires writing down the alternative explanations for any observed outcome — secular trends, rival programmes, selection effects, measurement artefacts — and specifying what evidence would distinguish them from the programme's claimed contribution. The exercise is uncomfortable for programme teams because it forces an admission that other explanations are plausible. The discomfort is the point.
There is a political function to decorative TOCs that is worth naming. They signal coherence to funders without committing the programme to anything specific. A board approves the diagram more easily than it would approve a paragraph that begins, "We are betting that the binding constraint on women's enterprise formation in this district is a credit gap rather than a market-demand gap, and if we are wrong about that, the programme will fail." The flowchart's vagueness is load-bearing.
The practical advice we give programme teams is straightforward. Write the theory of change as prose paragraphs first, then draw the diagram. Prose forces commitment to mechanisms; diagrams smooth them over. The paragraphs should name the binding constraint the programme believes it is addressing, the assumption set under which addressing that constraint produces the outcome, and at least two findings from baseline or early implementation that would force a revision.
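The discipline described above can be made concrete as a checklist. The sketch below is purely illustrative: the class, its field names, and the example content (which borrows the credit-gap bet from the earlier paragraph) are assumptions for the sake of the example, not part of any standard contribution-analysis tooling.

```python
# Minimal sketch: a theory of change as a structure that must commit
# to a mechanism, its assumptions, and what would disconfirm it.
from dataclasses import dataclass


@dataclass
class TheoryOfChange:
    binding_constraint: str            # the constraint the programme bets it is addressing
    mechanism: str                     # how addressing it is supposed to produce the outcome
    assumptions: list[str]             # conditions the mechanism depends on
    disconfirming_findings: list[str]  # baseline/early findings that would force revision

    def is_falsifiable(self) -> bool:
        # The test from the text: at least two possible findings
        # that would force the team to revise the theory.
        return len(self.disconfirming_findings) >= 2


toc = TheoryOfChange(
    binding_constraint="credit gap, not market-demand gap, limits enterprise formation",
    mechanism="training plus credit access -> enterprise formation -> income gains",
    assumptions=[
        "participants can access start-up credit",
        "local markets absorb additional supply at non-trivial margins",
    ],
    disconfirming_findings=[
        "baseline shows most participants already hold unused credit lines",
        "trained non-participants form enterprises at the same rate as participants",
    ],
)
assert toc.is_falsifiable()
```

A decorative TOC, encoded this way, would have an empty `disconfirming_findings` list and fail the check; that is the whole point of writing the paragraphs first.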
This is more time-consuming than producing a flowchart. The time investment is what makes the exercise valuable. A theory that took fifteen minutes to write down embodies fifteen minutes' worth of thinking. A theory that survived three iterations of argument among the programme team embodies three iterations' worth, which is meaningfully more.
One caveat. Theories of change written as falsifiable claims are easier to evaluate but harder to fund. Funders who say they want rigorous evaluation often respond to specific, falsifiable theories with discomfort, because such theories make programme failure legible. This is a real tension. We do not have a clean answer to it. But the choice between a TOC that can be wrong and a TOC that cannot be evaluated is not actually a methodological choice — it is a choice about whether the programme intends to learn from evidence at all.
Useful background reading: BetterEvaluation on contribution analysis, John Mayne's original CGIAR paper on contribution analysis, and the ActKnowledge ToC Basics primer.