
Why quantitative surveys in India need Human-Centred Design

When we began designing the baseline survey instruments for the India Health and Climate Resilience Fund's district learning architecture, we started where most quantitative studies start: with a literature review, a conceptual framework, and a set of constructs we wanted to measure. What we found, once we took those instruments into the field, was that the questions we had written and the questions respondents were actually answering were not the same thing.

This is not an unusual finding. It is just rarely documented honestly.

The four districts in the IHCRF architecture — Chamarajanagar in Karnataka, Dhubri in Assam, and Khunti and West Singhbhum in Jharkhand — span radically different ecological zones, health system structures, and livelihood calendars. We were measuring climate-health intersections: heat exposure, water access, illness patterns, health-seeking behaviour, and how these connected to household economic shocks. The constructs are well established in the global literature. The instruments that carry them, however, are usually not designed for the specific rhythms of these communities.

Three problems came up repeatedly in piloting, each surfaced by HCD methods rather than standard cognitive pre-testing.

Response categories that collapsed real distinctions. A question on water source access offered options that made administrative sense — borewell, handpump, piped water, open well, river/stream — but obscured the dimension respondents actually used to evaluate their own water security: whether the source was reliable across seasons. In Dhubri, a handpump and a river can both be primary sources, but one dries to saline in March and one floods in July. Both carry disease risk at different points in the year. A single-point water source question, asked without seasonal anchoring, produced data that looked clean and was functionally misleading.

Recall windows mismatched to lived time. Standard household survey recall periods — "in the last 30 days," "in the last 12 months" — are designed around calendar time. In agricultural communities structured around sowing, harvesting, and lean seasons, the meaningful unit of time is the agricultural cycle, not the Gregorian month. Asking about healthcare expenditure "in the last 30 days" during the harvest season in Khunti returned a very different picture than the same question asked three months later. Frontline workers told us this during the HCD iteration sessions; no enumerator training protocol would have surfaced it.

Scales that respondents answered politely but did not use discriminatingly. Likert-scale questions on women's decision-making autonomy — a staple of empowerment measurement — returned heavily compressed distributions. Most women marked the highest positive response across nearly every item. This is not because autonomy is uniformly high. It is because the social context of a survey interview, conducted with a stranger in or near the home, activates social desirability in a specific direction. When we restructured the same questions as decision scenarios — "your child is ill and your husband is at work; who decides whether to take her to the health centre, and how?" — the answers were far more variable and far more informative.

HCD applied to instrument design is not about making surveys friendlier or more participatory in a philosophical sense. It is a method for narrowing the gap between the construct you intend to measure and the measure you actually obtain. That gap is where most measurement error hides, and it is systematically underdiscussed in development research — partly because it only shows up clearly in the space between instrument design and fieldwork, and most studies treat that space as logistics rather than epistemology.

The process we used involved two iterations with frontline health workers before the instrument was finalised, and one round of cognitive interviewing with community members in each district. Total additional time: roughly three weeks. The resulting instruments produced distributions that matched the qualitative patterns we already had from the district base papers — which is not proof of validity, but is meaningful corroboration.

The IHCRF district base papers for all four districts are available through the Fund. The National Family Health Survey and India's Health Management Information System provide district-level comparison data for health system indicators; both were used as anchor points during the instrument design process.