Composite indices have become the lingua franca of development policy in India and across the region: vulnerability indices, district performance indices, climate readiness scores, gender development rankings, aspirational district composite scores. They have several attractions: they reduce a multi-dimensional reality to a single number, they produce ordered rankings that journalists can quote, and they look like the output of a technical procedure.
The trouble is the third attraction. The composite index is not the output of a technical procedure. It is the output of a sequence of policy choices — about which dimensions to include, how to normalise them, and how to weight them — presented as if it were a calculation. The seams are usually invisible to the people the index governs.
The most consequential of those choices is weighting. Most published indices use equal weights and describe this choice as "transparent" or "neutral." Equal weighting is neither. It embeds the substantive claim that a unit of, say, heat-vulnerability matters as much to overall vulnerability as a unit of livelihood-vulnerability or a unit of healthcare access. Whether that claim is true is a question about how the constructed concept of vulnerability is supposed to map to lived experience or to programme priority. It is not a question that an arithmetic mean can answer.
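To make the arithmetic concrete, here is a minimal sketch in Python of what a typical equal-weighted composite actually computes: min-max normalisation followed by a plain mean. The district values and the three sub-indicators are invented for illustration, and all columns are assumed to be oriented so that higher means more vulnerable.

```python
import numpy as np

# Rows: three hypothetical districts. Columns: heat, livelihood, and
# healthcare-access vulnerability sub-indicators on their raw scales
# (all invented, all oriented so that higher = more vulnerable).
raw = np.array([
    [38.2, 0.61, 142.0],
    [41.5, 0.48,  97.0],
    [35.9, 0.72, 180.0],
])

# Min-max normalisation maps each column onto [0, 1].
norm = (raw - raw.min(axis=0)) / (raw.max(axis=0) - raw.min(axis=0))

# "Equal weighting" is this line. It asserts that one unit of
# normalised heat vulnerability matters exactly as much as one unit
# of livelihood or healthcare vulnerability: a substantive claim,
# not a neutral default.
weights = np.full(raw.shape[1], 1 / raw.shape[1])

composite = norm @ weights
print(composite)
```

The value judgement lives entirely in the `weights` line; everything else is bookkeeping.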
The alternatives have their own problems. Principal components analysis chooses weights to maximise variance explained, which optimises for statistical separation between units rather than for any policy-relevant notion of importance. Expert-judgement weighting (Delphi panels, AHP) makes the value-laden nature of the choice explicit but introduces consensus dynamics that often pull weights toward whatever seems defensible at meeting time. Data envelopment analysis lets each unit choose its most favourable weights, which makes the index almost meaningless as a comparator.
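A small sketch, again with invented data, of what PCA-derived weights actually reward. On standardised indicators, the first principal component loads on whichever indicators happen to correlate with each other, so redundant indicators absorb most of the weight while an independent one is marginalised, regardless of policy importance.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200  # hypothetical districts

# Two indicators are noisy restatements of the same underlying thing;
# the third is independent of them.
base = rng.normal(size=n)
X = np.column_stack([
    base + 0.3 * rng.normal(size=n),
    base + 0.3 * rng.normal(size=n),
    rng.normal(size=n),
])

# Standardise, as PCA-weighting practice typically does.
X = (X - X.mean(axis=0)) / X.std(axis=0)

# Loadings of the first principal component via SVD.
_, _, vt = np.linalg.svd(X, full_matrices=False)
w = np.abs(vt[0])
w /= w.sum()
print(w)  # the two correlated indicators absorb most of the weight
```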
What is rarely communicated — and is the empirical heart of the matter — is how sensitive rank order is to weight choice. In practice, when one runs the same index under five reasonable weighting schemes, the top decile and bottom decile tend to be stable. The middle of the distribution swings substantially. A district at rank 47 under equal weighting can be at rank 23 or rank 71 under defensible alternative weights. The headlines are never about the top and bottom of the distribution. They are almost always about whether the policymaker's district moved up or down. The middle is where the action is.
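That sensitivity check is cheap to run. Below is a sketch with invented data: score the same normalised indicator matrix under five defensible weighting schemes (the scheme names and weights are hypothetical) and report the range of ranks each district can occupy. With data like this, expect the extreme bands to hold roughly steady while the middle band swings.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 100, 6
norm = rng.random((n, k))  # invented, already-normalised indicators

# Five weighting schemes a reasonable panel might each defend.
schemes = {
    "equal":         np.full(k, 1 / k),
    "health-heavy":  np.array([0.30, 0.30, 0.10, 0.10, 0.10, 0.10]),
    "climate-heavy": np.array([0.10, 0.10, 0.30, 0.30, 0.10, 0.10]),
    "income-heavy":  np.array([0.10, 0.10, 0.10, 0.10, 0.30, 0.30]),
    "tapered":       np.array([0.30, 0.25, 0.20, 0.15, 0.05, 0.05]),
}

# Rank 1 = lowest composite score under each scheme.
all_ranks = np.stack([(norm @ w).argsort().argsort() + 1
                      for w in schemes.values()])

# For each district: the spread of ranks it occupies across schemes.
spread = all_ranks.max(axis=0) - all_ranks.min(axis=0)

eq = all_ranks[0]  # ranks under the "equal" scheme (first in the dict)
for lo, hi in ((1, 10), (46, 55), (91, 100)):
    sel = (eq >= lo) & (eq <= hi)
    print(f"equal-weight ranks {lo}-{hi}: "
          f"mean swing {spread[sel].mean():.1f} places")
```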
The downstream consequence is that index-driven policy — "we will support the bottom 50 districts on this index" — is making allocation decisions in the part of the distribution where weight choice dominates the data. The optics suggest that allocation is following evidence; the substance is that allocation is following whichever weight scheme the index designers happened to choose, often without sensitivity testing.
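Continuing the same sketch, the stability of a "bottom 50" allocation set can be checked directly: which districts are in the bottom 50 under every scheme, and which drift in or out with the weights. The data and schemes are the same invented ones as above.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 100, 6
norm = rng.random((n, k))  # same invented data as the previous sketch

weights = [
    np.full(k, 1 / k),
    np.array([0.30, 0.30, 0.10, 0.10, 0.10, 0.10]),
    np.array([0.10, 0.10, 0.30, 0.30, 0.10, 0.10]),
    np.array([0.10, 0.10, 0.10, 0.10, 0.30, 0.30]),
    np.array([0.30, 0.25, 0.20, 0.15, 0.05, 0.05]),
]

# Take "bottom 50" to mean the 50 lowest composite scores per scheme.
bottoms = [set(np.argsort(norm @ w)[:50]) for w in weights]

always_in = set.intersection(*bottoms)
ever_in = set.union(*bottoms)
print(f"{len(always_in)} districts are in the bottom 50 under every scheme;")
print(f"{len(ever_in) - len(always_in)} drift in or out with the weights.")
```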
Goodhart's law arrives shortly after the index does. Once a composite is a target, its components become targets, which means they are reported with a strategic eye on their effect on the rank. Sub-component manipulation is rarely outright fraud. It is more often selective compliance with reporting protocols, prioritisation of components that are visible in the index, and de-prioritisation of activity that is hard to translate into a sub-indicator. The index becomes the policy in a way that the index designers usually did not intend.
NITI Aayog's Aspirational Districts Programme is a useful case study because the index, the rankings, and the methodology documents are all public. The programme deserves credit for openness; it also illustrates the dynamics. The composite delta-ranking system rewards improvement on the index, which means districts have an incentive to focus on whichever indicators move most quickly under available levers, not necessarily whichever indicators reflect the most binding development constraints. This is not a critique of the programme's intent, which is serious. It is a remark about what happens when an index is upgraded from analytical tool to allocation rule.
What is the better practice? First, treat composite indices as exploratory and communicative tools, not as targets. Second, publish component-level data alongside the composite, and visualise it in a form that does not collapse to a rank. Heatmaps and small multiples carry more information than ordered lists. Third, publish weight sensitivity analysis — if rankings shift substantially under reasonable alternative weights, say so. Fourth, and most importantly, name the weights for what they are: a value choice about what matters, made by named people, defensible on grounds that should be stated.
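As a sketch of the second recommendation: the component-level view can be as simple as a heatmap of normalised sub-indicators per district, rather than a single ranked column. The district labels, indicator names, and values below are invented.

```python
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(3)
districts = [f"District {i + 1}" for i in range(12)]
indicators = ["health", "education", "nutrition",
              "infrastructure", "livelihoods"]
norm = rng.random((len(districts), len(indicators)))  # invented scores

fig, ax = plt.subplots(figsize=(6, 5))
im = ax.imshow(norm, aspect="auto", cmap="viridis", vmin=0, vmax=1)
ax.set_xticks(range(len(indicators)))
ax.set_xticklabels(indicators, rotation=45, ha="right")
ax.set_yticks(range(len(districts)))
ax.set_yticklabels(districts)
fig.colorbar(im, ax=ax, label="normalised score")
fig.tight_layout()
plt.show()
```

Unlike a ranked list, the heatmap does not force a single ordering, so a reader can see which components drive a district's position before any weights are applied.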
The aesthetic appeal of a single number is also its political danger. A district ranked 47th cannot easily argue with the rank without accepting the methodology, and accepting the methodology requires accepting the weights, which were never up for argument. Reframing the index as one of several possible orderings, each defensible under particular value commitments, is more honest. It is also harder to put on a slide.
Useful references: the OECD/JRC Handbook on Constructing Composite Indicators remains the standard treatment of weighting and aggregation; NITI Aayog's Aspirational Districts dashboard publishes component-level data alongside the composite and is worth comparing under alternative weighting schemes; and Lant Pritchett's CGD essay on folk and formula in development measurement is a useful cross-reference.