First, evidence-based policy is not a blueprint approach. We should not say, "this worked in California, let’s do it in Cardiff or in Conakry". Context matters: what works in one place may not work in another. This has been the case for the Nurse Family Partnership, which has been effective in the US but not in the UK. So we need to identify approaches which have worked elsewhere and seem implementable in our context, and then pilot and test them.
Second, use high-quality evidence from evidence synthesis, not single studies. Mandatory arrest for domestic violence became common in the US on the strength of a single study in Minneapolis, even though studies in other cities have found it to be no more, and sometimes less, effective than existing practice. Systematic reviews summarize the findings from all the highest-quality evidence. Some things, like Scared Straight and abstinence approaches to teen pregnancy, have never worked and are best avoided. In other cases, like interventions to stop children dropping out of school, a range of approaches are effective. Pick those which seem most applicable to your context to implement and test.
Third, focus on cost effectiveness, not statistical significance. Academics studying effectiveness focus on statistical significance, and this matters. But for policy purposes, practical significance matters more, and that is best captured by cost effectiveness. How much does it cost to keep a juvenile from crime, to lift a family out of poverty, and so on? A statistically significant effect may be too small to be worth the money. This is why useful evidence portals, like the Education Endowment Foundation’s Teaching and Learning Toolkit, clearly indicate the cost of an intervention on their evidence dashboard.
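The cost-effectiveness comparison is, at its core, simple arithmetic: total cost divided by total outcomes achieved. A minimal sketch of the calculation, with entirely invented figures (neither intervention nor any number below comes from a real evaluation):

```python
def cost_per_outcome(total_cost, participants, effect_per_participant):
    """Cost to achieve one unit of outcome,
    e.g. one juvenile diverted from crime."""
    total_effect = participants * effect_per_participant
    return total_cost / total_effect

# Hypothetical intervention A: large trial, statistically
# significant, but a tiny effect per participant.
a = cost_per_outcome(total_cost=500_000,
                     participants=10_000,
                     effect_per_participant=0.01)

# Hypothetical intervention B: smaller trial,
# larger effect per participant.
b = cost_per_outcome(total_cost=200_000,
                     participants=1_000,
                     effect_per_participant=0.10)

print(f"A: {a:,.0f} per outcome")  # 5,000 per outcome
print(f"B: {b:,.0f} per outcome")  # 2,000 per outcome
```

On significance alone, A might look like the stronger candidate; on cost per outcome, B delivers the same result at less than half the price, which is the comparison that matters for a budget-constrained policymaker.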
Fourth, address second-generation design questions. First-generation questions ask "does it work?". Second-generation questions ask "who does it work for, and how?", so as to improve programme performance. The Campbell review of conditional cash transfers shows that they have a stronger effect on school attendance the better the conditions are monitored and enforced. The review of school feeding shows it improves nutrition, with a bigger impact the better the targeting and supervision.
And, finally, just keep testing. Testing of social policy and practice should be modelled more closely on clinical practice, with its stage 1, 2 and 3 trials. Doing so gives the evidence-driven project (or policy or practice) cycle shown in the figure. Starting at the top (blue circle), we consult the evidence base to identify a suitable programme for our context, then conduct formative evaluation to fine-tune the design before moving to a formal efficacy trial. If the pilot is successful, we go to scale. But just keep testing: there are countless examples of things which worked in the pilot but don’t work at scale. Every stage is iterative, and we may need to go back to the start. Add all this evidence to the global database (in a Campbell review, of course) to inform future policy and practice.
This last step – adding to the evidence base – is vital. The evidence revolution is here, but it can fail, to be replaced by policy driven by faddism and favour. Building up a global repository of knowledge of what works – that is the Campbell journal – gives us a bastion from which to defend the revolution in the interests of better policy and practice, and better lives for the poor and disadvantaged.