By Howard White, CEO, Campbell Collaboration
The historian E. H. Carr wrote that when Oxford dons state that standards of living are falling, they mean that they themselves can no longer afford domestic servants. Carr offered this example to illustrate social relativity. History is not the collection and presentation of objective ‘facts’. Which data we collect, how we analyse them, and how we present them are all subjective choices.
At one level, it is impossible to disagree with this view. An object looks different depending on where you view it from. This is as true of social phenomena as physical ones. Social change – and social policies – have winners and losers. Unsurprisingly, the losers see these changes rather differently than do the winners. This obvious point is missed by anyone who dismisses people who disagree with them as wrong, misguided or stupid.
The point is also lost in the focus on average treatment effects. If effects are heterogeneous, then the average treatment effect is rarely the most interesting or relevant finding. A few years ago, I was evaluating a group loan scheme in India. Women borrowed money for petty trading, small rice mills or livestock – usually goats. The small positive net income most women earned was offset by the huge losses incurred by women whose goats died. The zero average treatment effect was extremely misleading. Exploring heterogeneity exposed how, by supporting livestock purchase in the absence of livestock insurance, the project was further impoverishing already poor women.
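The arithmetic behind this is worth making concrete. The sketch below uses entirely made-up numbers (not the actual evaluation data) to show how a zero average treatment effect can coexist with most participants gaining modestly while a minority loses heavily:

```python
# Hypothetical net income changes for ten borrowers (illustrative only).
# Eight earn a small positive return; two whose goats died lose heavily.
incomes = [50, 40, 60, 55, 45, 50, 40, 60, -200, -200]

ate = sum(incomes) / len(incomes)  # average treatment effect across all borrowers
gainers = [x for x in incomes if x > 0]
losers = [x for x in incomes if x < 0]

print(f"Average treatment effect: {ate:.0f}")                       # → 0
print(f"Mean gain among gainers:  {sum(gainers)/len(gainers):.0f}")  # → 50
print(f"Mean loss among losers:   {sum(losers)/len(losers):.0f}")    # → -200
```

Reported alone, the headline figure of zero would suggest the scheme did nothing; disaggregating by subgroup reveals who gained and who was harmed.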
So it is very important to write the What Works question in full: What Works, For Whom, In What Context, and At What Cost? As I have written recently, evidence-based policy is not a blueprint approach. Nor is it a one-size-fits-all approach. When the socially excluded respond that such a policy, programme or practice isn’t for the likes of them, they may well be right. Average treatment effects won’t convince me otherwise and shouldn’t be used to support programmes for the wrong people in the wrong context.
Good systematic reviews are sensitive to these questions. But it is difficult to explore heterogeneity with just a few included studies. Critics condemn Campbell and Cochrane reviews as too demanding in their standards of evidence. I condemn those who continue to run untested programmes without collecting evidence of their effectiveness. Every new programme conducted without a rigorous evaluation is a lost opportunity to learn. We need to test, and keep testing, so we can know that money is being well spent.
The testing agenda is a political agenda. Professions can stand opposed to change, opposed to finding out existing programmes and practice are ineffective, at least for some people in some circumstances. Evaluators know well the opposition we face in conducting evaluations and the difficulty in finding a receptive audience for negative findings. This is part of the politics of effect sizes. The other part is that effects vary, that one size doesn’t fit all. We need to be clear who gains and who doesn’t. This is what Better Evidence for a Better World means.