How should we summarize bodies of evidence? The emerging evidence architecture for knowledge brokering.

By Howard White, CEO, The Campbell Collaboration

Some people stress the difference between evidence and knowledge. I am not big on semantics. But I do see that the different forms in which evidence exists or is reported vary in the degree to which that knowledge has been brokered or translated for use in policy and practice. Evidence-based organizations need to think strategically about what sort of products they will produce and the skills required to do so. Here I discuss the range of evidence platforms; I will discuss skills in another blog.

As shown in the pyramid figure, there is a spectrum of ways in which evidence may be presented: the underlying data, the studies produced analysing those data, and systematic reviews of those studies. All of that is part of the conventional evidence pyramid, which lies in the domain of researchers. We know that, in general, policy makers and practitioners do not read academic papers, so publishing in peer-reviewed journals alone is not an effective channel for putting research into use.


There are well known ways of making single studies more accessible, such as policy briefs like Campbell’s Plain Language Summaries. I am more interested here – and indeed more interested in general – in ways in which we can institutionalize the use of bodies of evidence. I have blogged before about different institutional arrangements in different countries for doing this. This blog is about evidence platforms which adopt systematic approaches to drawing on bodies of rigorous evidence.

As shown in the pyramid, these platforms include databases, evidence maps, evidence portals, guidance and guidelines, and checklists. Each level is more heavily brokered than the last: databases simply make certain types of evidence available, whereas checklists tell you 'do this'.

Databases are collections of studies on a particular topic, and possibly of a particular type. Epistemonikos is a database of primary studies of effects (impact evaluations) and systematic reviews for health. The 3ie database is a similar collection for international development. ERIC is a database of education studies of any kind. The advantage of using a database over, say, Google Scholar is that the studies have been pre-screened, so you know they are relevant; this is particularly important for databases of particular types of study. In addition, most databases include a database record summarizing the paper, possibly in a more systematic or complete manner than the paper's abstract. And a good database allows study details to be exported into reference management software.

Evidence maps go a step beyond databases by presenting the evidence in a structured manner. 3ie evidence and gap maps present the evidence in a matrix: the rows are intervention categories and the columns outcome categories, with each cell containing a link to studies reporting data on the impact of that intervention on that outcome. The evidence-based policing matrix is shown in three dimensions related to the intervention: scope, specificity and proactivity. Each point in the 3D space is a single study, with a key showing whether the effects on crime are positive or negative. And Veterans' Affairs publishes 5D bubble plots such as this one on mindfulness (scroll down to p7 in the link). The five dimensions are shown by the two axes (direction and size of effect, and size of the literature), the number of bubbles, the colour of each bubble (intervention category), and the size of each bubble (number of reviews).

The map will usually link to summaries of the studies it includes. For example, the 3ie evidence and gap maps link to the study records in the 3ie database. Some maps give an indication of the direction and size of effect, but most mapping does not do this.

The structured presentation in a map gives an overview of the landscape of the evidence. This can be useful to researchers in situating their studies in the existing literature or identifying bodies of literature ready for synthesis; to research funders in identifying evidence gaps needing to be filled; and to programme managers and strategy writers in seeing the size of the body of evidence in their areas of interest. But in the end, using the map still requires policy makers to read the original academic papers, so the degree of knowledge brokering is not that high. The evidence has been organized in a helpful manner rather than brokered for use.

Evidence portals go that next step. Evidence portals provide summaries of evidence about the effectiveness of particular programmes in a way which is accessible to policy makers and programme managers. The leading examples of evidence portals are the Teaching and Learning Toolkit of the Education Endowment Foundation (EEF) in the UK and the What Works Clearinghouse managed by the Institute of Education Sciences in the US. Portals are clearly oriented toward policy makers and practitioners, providing evidence on programme effectiveness in a clear, accessible way, with links to more information about the programmes. The What Works Clearinghouse also has a nice feature showing the context in which the evidence for an intervention is most relevant. Most importantly, the summaries of what works are based on systematic reviews: not traditional literature reviews, and not single studies.

But portals still leave interpretation about what to do with the evidence to the policy maker or practitioner. Guidance and guidelines go a step further, giving clear evidence-based recommendations. The use of evidence-based guidelines is best developed in the health sector. At the international level the World Health Organization (WHO) has a Guidelines Review Committee which oversees the regular publication and updating of WHO Guidelines on a wide range of topics such as nutrition, immunization and deworming. Bodies such as the National Institute for Health and Care Excellence (NICE) in the UK play this role at the national level. State or county-level health agencies may also issue guidance.

More recently we have seen national evidence-based guidance being published in education. In the US, the What Works Clearinghouse publishes evidence-based practice guides. For example, the practice guide on 'Teaching Secondary Students to Write Effectively' recommends that teachers 'explicitly teach appropriate writing strategies using a Model-Practice-Reflect instructional cycle', a recommendation it rates as having strong evidence. In the UK, the EEF publishes Guidance Reports. For example, the Guidance Report on 'Making Best Use of Teaching Assistants' states that 'they should not be used as an informal teaching resource for low-attaining pupils'.

Whilst guidance documents provide recommendations, checklists provide a list of instructions to be followed. Atul Gawande's book The Checklist Manifesto has compelling examples of the power of checklists to transform performance in a range of settings. Checklists put the best available global evidence to work in local settings through a series of simple instructions. Guidelines should be evidence-based: WHO's guidelines on writing guidelines state explicitly that guidelines must be based on high-quality systematic reviews.

There are legitimate concerns over the role of deliberation: see the discussion in Munro et al., Improving Child Safety: deliberation, judgement, and empirical research. But, given the choice between expecting a social worker to read an academic paper and conveying the message of that research in an evidence-based checklist, I'd opt for the checklist.

But, of course, this infrastructure of knowledge brokering is not a list of alternatives. Each layer rests on the foundation of the one below it: studies need data; systematic reviews and databases need studies; evidence maps report reviews and studies; and evidence portals are based on reviews, as are guidance and checklists.

The components of this infrastructure vary across sectors. It is best established in health, in large part because of the work of Cochrane and the role of WHO. There is growing awareness of the need for systematic reviews rather than reliance on single studies or traditional literature reviews. But the development of the full infrastructure making well-brokered evidence available to policy makers and practitioners is still lagging in most sectors in most countries.
