Meta-analysis needed to assess how effective programmes are

By Howard White

It ain’t what you do, it’s the way that you do it

In a recent presentation I mentioned the finding from the Campbell review of Payment for Environmental Services (PES) that such schemes have only a very small effect and so are very cost-ineffective. No, one of my discussants objected, there is a new review commissioned by EBA in Sweden which shows that PES works.

So I took a look at the EBA review, which does indeed say that PES increases forest cover. There are two main reasons why these two systematic reviews of the same topic come to different conclusions.

First, they don’t come to different conclusions! The Campbell review also concludes that PES works. But the effect is very small. The EBA review uses goal scoring, also known as vote counting, i.e. tallying the number of studies finding a significant effect against those that don’t. This is not the correct way to synthesize effect size data. If you doubt this, read Chapter 1 of Hunter and Schmidt’s textbook and you will doubt no longer. So the EBA review of effectiveness does not actually review how effective the programmes are. For that, we need meta-analysis, as presented in the Campbell review. And that analysis shows very small effects.
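To see why the method matters, here is a minimal sketch in Python, using invented effect sizes purely for illustration, of how vote counting and inverse-variance meta-analysis can summarize the very same studies quite differently:

```python
import math

# Hypothetical standardized effect sizes and standard errors for five
# studies of the same intervention -- illustrative numbers only.
effects = [0.05, 0.08, 0.03, 0.40, 0.06]
ses     = [0.02, 0.03, 0.01, 0.18, 0.02]

# Vote counting: tally studies whose 95% confidence interval excludes zero.
significant = sum(1 for d, se in zip(effects, ses) if abs(d) > 1.96 * se)
print(f"Vote count: {significant}/{len(effects)} studies find 'an effect'")

# Fixed-effect (inverse-variance) meta-analysis: pool the effect sizes.
weights   = [1 / se ** 2 for se in ses]
pooled    = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))
print(f"Pooled effect: {pooled:.3f} (SE {pooled_se:.3f})")
```

With these made-up numbers every study is statistically significant, so a vote count says the intervention ‘works’; the pooled effect of about 0.04 standard deviations says it barely works at all – which is precisely the difference between the two reviews.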

The second reason is the quality of the evidence. It is a general rule that the weaker the study design, the more likely it is to find an effect. For example, a simple before-versus-after comparison finds ‘an effect’ because it fails to control for any confounders, incorrectly attributing all change in the outcome to the intervention. Studies which don’t control for selection bias will find an effect where there is none if selection bias is indeed present. And so on. So reviews which lower the bar on inclusion criteria – often in response to pressure from funders to ‘not throw away evidence’ – introduce the very bias which good systematic reviews avoid. The EBA review, which employs what the authors call ‘generous screening criteria’, included studies which did not meet the Campbell inclusion criteria.
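A toy simulation makes the point about before-versus-after designs concrete. The numbers are invented, and difference-in-differences stands in here for any design with a credible comparison group:

```python
import random
from statistics import mean

random.seed(1)

TRUE_EFFECT = 0.0   # the intervention does nothing, by construction
TREND = 2.0         # background change affecting everyone

# Treated group measured before and after the intervention, plus an
# untreated comparison group observed over the same period.
before         = [random.gauss(50, 5) for _ in range(1000)]
after          = [y + TREND + TRUE_EFFECT + random.gauss(0, 1) for y in before]
control_before = [random.gauss(50, 5) for _ in range(1000)]
control_after  = [y + TREND + random.gauss(0, 1) for y in control_before]

# A before-versus-after comparison attributes the whole trend to the
# intervention, even though the true effect is zero.
print("Before/after 'effect':", round(mean(after) - mean(before), 2))

# Differencing against the comparison group strips the trend back out.
did = (mean(after) - mean(before)) - (mean(control_after) - mean(control_before))
print("Difference-in-differences:", round(did, 2))
```

The before/after estimate recovers the background trend, roughly 2.0, as a spurious ‘effect’; the comparison-group estimate comes out close to the true value of zero.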

And then, this week, I came across yet another systematic review of PES, this one commissioned by the Natural Environment Research Council. This one also says that PES works. In addition to the above issues, the study illustrates a common problem of reporting bias. The authors present their findings that ‘one-third of studies demonstrated a notable reduction in the degree of agricultural intensity … with almost half of these suggesting shifts towards… the development of timber plantations and forest management or protection’. That presentation leaves the reader with a favourable impression. But another way of putting it is that ‘over 80 per cent of studies found no increase in forest management or protection’. That is, the large majority of studies find no effect. But of course, what we need is meta-analysis, both to estimate the size of the effect and to explore the reasons for variation in effectiveness.
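The reframing is just arithmetic. A two-line check, reading ‘almost half’ as one half for simplicity:

```python
# 'One-third of studies' reduced agricultural intensity, and 'almost half
# of these' shifted towards forest management or protection.
shifted = (1 / 3) * (1 / 2)   # roughly 17% of all studies
print(f"Shifted towards forestry: ~{shifted:.0%}")
print(f"No such shift:            ~{1 - shifted:.0%}")
```

Half of a third is about 17 per cent of studies, which is how ‘almost half of one-third’ becomes ‘over 80 per cent found no increase’.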

As my discussant said, ‘What are readers to think, when there are different systematic reviews giving different conclusions?’ I agree this is a very unsatisfactory state of affairs. Systematic reviews have become very popular in international development, with more and more agencies commissioning them. We often have many reviews of the same topic. There is a double waste of resources here: doing multiple reviews of the same topic, and doing sub-standard reviews. Wouldn’t it be good if there were an agency which could coordinate this demand for reviews, hold them to a high quality standard, and put them in the public domain in one easily found place?

Oh, wait, there is such an agency. It’s called the Campbell Collaboration.
