Facility assessments don't tell the full story

The evaluation covered primary health care delivery across three districts in Sri Lanka. The system we were assessing had been through significant restructuring in the years following the end of the civil conflict — new facilities built, staffing increased, supply chains improved. Donor investment had been substantial, and the Ministry of Health had put considerable resources into process monitoring systems. When we began, the facility data looked reasonable.

Facility data, in health system evaluation, typically covers inputs and processes: staffing levels, essential medicine availability, equipment functionality, waiting times as recorded by facility staff, patient volumes, and compliance with clinical protocols. These are measurable, verifiable, and auditable. But they are only a partial picture of what a health system is doing.

The patient experience data told a different story.

Exit interviews across the three districts showed a consistent pattern: patients reported long waits relative to what facility records indicated, poor communication from providers about diagnosis and treatment, and low trust in referral pathways to secondary facilities. Patients referred upward often did not follow through. When we probed why, the answers were not primarily about distance or cost — they were about the referral process itself. Patients described receiving referral slips without explanation of what the referral was for, what they should expect at the facility they were being sent to, or how to navigate the appointment system. Some had gone, waited, and returned without receiving the intended service.

Health workers, in separate interviews, identified the disconnect themselves. They knew the metrics that were reported upward — patient volumes, medicine availability, protocol adherence — and they knew these were not the dimensions of care that determined whether patients experienced the health system as trustworthy or effective. The system had been designed to be accountable to its funders' reporting requirements. It was less systematically accountable to patients.

This is a structural problem in health system evaluation that Sri Lanka did not create and will not solve alone. Most health system monitoring frameworks were built around supply-side indicators because those are more measurable: you can count medicine stockouts, you can count staffed facilities, you can verify that a clinical protocol is written on the wall. You cannot easily count patient trust, or the quality of a provider's explanation of a diagnosis, or whether a referral pathway functions for the patient navigating it rather than for the administrator tracking it.

The implication for evaluation design is that single-perspective assessments — however rigorously conducted — will reliably miss the dimensions that determine patient experience. A multi-perspective design that combines supply-side facility data with demand-side patient experience data is not more complex than it needs to be. It is the minimum necessary to evaluate what a health system is actually doing.

The operational challenge is that patient experience data is more expensive to collect and harder to standardise. Exit interviews require trained interviewers who can build rapport quickly. Sampling requires access to patients as they leave facilities, which some facility administrators resist. And patient-reported measures — particularly in post-conflict contexts where trust in institutions varies sharply by community — require careful instrument validation.

In this evaluation, a sub-sample of health workers identified one additional dynamic worth noting: they were, in some cases, actively optimising for the reported metric rather than the underlying goal it was meant to proxy. Waiting time was recorded from registration to first contact with a health worker. Patients who waited after that contact — for test results, for medicine dispensing, for a follow-up consultation — did not appear in the waiting time data. The metric was accurate. It was not informative.
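To make that measurement gap concrete, here is a minimal sketch in Python. The timestamps and field names are hypothetical, invented for illustration rather than drawn from the evaluation's actual instruments; the point is only that a registration-to-first-contact metric can look healthy while the patient's full journey through the facility takes several times longer.

```python
from datetime import datetime

# Hypothetical timestamps for one patient visit (field names are
# illustrative, not from the evaluation's data collection tools).
visit = {
    "registration":       datetime(2023, 3, 14, 8, 5),
    "first_contact":      datetime(2023, 3, 14, 8, 40),   # first seen by a health worker
    "lab_results_back":   datetime(2023, 3, 14, 10, 55),
    "medicine_dispensed": datetime(2023, 3, 14, 11, 30),
}

# The reported metric: wait from registration to first clinical contact.
recorded_wait = visit["first_contact"] - visit["registration"]

# What the patient actually experienced: registration until leaving
# with medicine in hand, including all post-contact waiting.
total_journey = visit["medicine_dispensed"] - visit["registration"]

print(f"Recorded waiting time: {recorded_wait}")   # 0:35:00
print(f"Total patient journey: {total_journey}")   # 3:25:00
```

Both numbers come from the same visit. Only the second reflects the wait the patient remembers, and only the first appears in the reporting system.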

The WHO Service Availability and Readiness Assessment (SARA) methodology provided the supply-side framework. Sri Lanka Ministry of Health annual health statistics and WHO Global Health Observatory data offered district-level comparison points. Patient experience measurement drew on the People-Centred Health Care framework literature.