How we see research

Research begins where instruments fail. The gap between the question a researcher writes and the question a respondent actually answers is where most measurement error lives. Survey design assumes shared frames of reference. Field reality is messier — a question about “household income” lands differently in a household where income is irregular, shared with extended kin, or earned in kind.

We build instruments by iterating with frontline workers and community members before enumerator training, not after. The point is not to make data “better” in the abstract — it is to narrow the distance between the question we wrote and the question respondents heard.

Diagram: the gap between the question written and the question answered
The measurement gap — where most error hides.

The size of the practice

Pinpoint Ventures is small on purpose. We take a limited number of engagements each year because each one needs the kind of attention that scales poorly. A research design we revise once is a research design we have not lived with long enough.

Working at this size means each project shapes the others. The aquaculture welfare work informs how we think about indicator vocabularies in emergent sectors. The climate-health work shapes how we read administrative data. The discrimination work refines how we measure what is officially uncounted. The cumulative effect — not the volume — is what makes the practice useful.

Diagram: depth of attention versus number of engagements
Depth scales poorly — and that is the point.

From field to frame to policy

The path from a finding to a policy decision is rarely a straight line. The most useful research outputs are not findings but frames — ways of seeing a problem that survive after the report is filed.

Our work moves through five stages: field, method, evidence, framework, policy. Each stage shapes the next, and the framework usually loops back to refine what counts as field. The “learning layer” is the part most evaluation contracts ignore — and the part that determines whether the work actually changes anything.

Flow diagram: field → method → evidence → framework → policy, with a learning loop back.

Frameworks that last

Most evaluation reports are read once. Most theories of change are pinned to office walls and forgotten. Both are normal — and both reflect how the development sector mistakes deliverables for outcomes.

A useful framework outlives the engagement. It survives staff turnover. It informs the next funding round. It changes the way an organisation argues about its own work. We design MEL systems, indicator architectures, and learning layers with this longevity in mind. If our deliverable is useful only while we are in the room, we have failed.

Diagram: reports get filed, frameworks get used
What gets filed vs. what gets used.

Methodologically plural

The fight between qualitative and quantitative research is mostly an artefact of academic departments. In the field, the question decides the method.

We use RCTs and quasi-experimental designs where the counterfactual matters. We use photovoice and life-history interviews where lived experience cannot be aggregated into a number. We use Human-Centred Design for instrument iteration. We use historical and policy analysis to read structural conditions. The discipline is in choosing — not in defending one method against another.

Diagram: a wheel of research methods feeding one central question
One question. Many ways in.

Want to see this in action?

Our work is where this philosophy meets the field.

Start a conversation