Interview with Terri Pigott

By Ciara Keenan
This article was originally posted on the Meta-evidence website on 25 April 2018.

We are honoured to welcome Professor Terri Pigott to the blog this week. Terri is a specialist in meta-analysis methods and teaches at Loyola University Chicago, where she is the Associate Provost for Research. She is also co-editor of the Methods Coordinating Group of the Campbell Collaboration and the former chair of the American Educational Research Association's Special Interest Group on Systematic Review and Meta-Analysis.

Terri's scholarship is very impressive and a testament to her understanding of meta-analysis. An early user of the technique with a strong grounding in statistics, she was well placed to address some of its unresolved issues. During her PhD research under the supervision of Larry Hedges, Terri developed innovative methods for dealing with the problem of missing data. Since then, she has continued to demonstrate her statistical expertise, publishing papers on outcome reporting bias, individual participant data in meta-analysis, and power analysis.

Terri often shares her methodological insight by providing resources and courses for those who want to conduct a meta-analysis.

How do we know which questions should be addressed through meta-analysis?

I have two answers to this question, one about mapping of evidence and one about meta-analysis.

I am excited about the spread of evidence and gap maps in a number of scientific fields. Evidence and gap maps follow systematic review techniques to identify and map the existing primary studies and systematic reviews in a given area. For example, 3ie has supported the creation of a number of evidence and gap maps in international development (see here for 3ie maps). The Collaboration for Environmental Evidence (see here) also has systematic maps in its library. These maps can help us see areas of the literature where sufficient studies exist to warrant a systematic review, and areas where we need more primary research. The 3ie evidence and gap maps can also include systematic reviews, so that the field can target its resources in areas where reviews do not exist. The Campbell Collaboration is currently working on standards for these maps and will host them on its website in the near future.


A small section of an EGM conducted by 3ie

Sometimes what researchers really want to know is how many studies are needed to do a meta-analysis. My colleague Jeff Valentine at the University of Louisville usually says, "Two". The reason for this answer is that with just two studies, we otherwise tend to fall back on the simplest method of comparing results – counting the number of studies whose results do or do not support our hypothesis. And, as we know, just counting the number of results for or against our hypothesis is not the best way to synthesize results, given the lack of power of statistical tests in studies with small sample sizes.

Just counting the number of results for or against our hypothesis is not the best way to synthesize results.
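To make the vote-counting concern concrete, here is a minimal sketch of the power of a single small study. The true effect size and sample sizes below are assumptions for illustration, not figures from the interview; the point is that when each primary study is underpowered, most "votes" come out non-significant even though a real effect exists.

```python
# Illustrative only: assumed true effect and sample sizes, not data from the interview.
from scipy.stats import norm

d = 0.3            # assumed true standardized mean difference
n_per_group = 30   # assumed participants per group in each primary study
alpha = 0.05

# Approximate sampling variance of d for two equal groups of size n
var_d = 2 / n_per_group + d**2 / (4 * n_per_group)
z_crit = norm.ppf(1 - alpha / 2)
lam = d / var_d**0.5                      # noncentrality of the single-study z-test
power = (1 - norm.cdf(z_crit - lam)) + norm.cdf(-z_crit - lam)

print(f"Power of one study to detect d = {d}: {power:.2f}")
# Roughly 0.2 here, so a tally of significant vs. non-significant studies
# would look like evidence against a real effect.
```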

To address the question of how many studies are needed for a meta-analysis, we have to talk about power for meta-analysis. Power for statistical tests in meta-analysis depends on a number of factors: the number of studies in the meta-analysis, the sample sizes within the included studies, the mean effect size of interest, the variance of the effect size of interest, and the type of model used (fixed or random). While we know how to compute power for meta-analysis, we still need more accessible methods to help researchers anticipate the power of a planned meta-analysis. I suspect that many meta-analysis tests, particularly those for examining models of effect size heterogeneity, are underpowered. My former student Joshua Polanin is working on ways to make the computation of power in meta-analysis more accessible.
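As a rough sketch of how those factors enter the calculation, in the spirit of Hedges & Pigott (2001), the power of the test of the overall mean effect in a fixed-effect model can be computed as below. The effect size, study sizes, and number of studies are assumptions for illustration; a random-effects version would also add the between-study variance to each study's sampling variance.

```python
# A minimal, illustrative power calculation for the fixed-effect mean in a meta-analysis.
# The effect size, study sizes, and number of studies are assumptions, not real data.
from scipy.stats import norm

def smd_variance(d, n1, n2):
    """Approximate sampling variance of a standardized mean difference."""
    return (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))

def fixed_effect_power(d, n_per_group, k, alpha=0.05):
    """Power of the two-sided z-test that the weighted mean effect is zero."""
    v = smd_variance(d, n_per_group, n_per_group)   # same variance assumed in every study
    se_mean = (v / k) ** 0.5                        # SE of the inverse-variance weighted mean
    lam = d / se_mean                               # noncentrality parameter
    z_crit = norm.ppf(1 - alpha / 2)
    return (1 - norm.cdf(z_crit - lam)) + norm.cdf(-z_crit - lam)

# Example: 10 studies with 20 participants per group and a true effect of d = 0.2
print(f"{fixed_effect_power(d=0.2, n_per_group=20, k=10):.2f}")   # about 0.5: underpowered
```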

People seem suspicious of meta-analysis – why is that?

My hunch here is that reports on large-scale systematic reviews that include a meta-analysis are lengthy with much technical detail. Of course, the technical details are necessary since systematic reviews should be transparent and replicable. Reading through all the process details of a systematic review can be difficult, and some readers may feel that this detail detracts from understanding the main implications of the review.

Studies assessing social interventions are never exact replications of each other.

I also conjecture that some readers are uncomfortable with the synthesis of results from studies that are not exact replications of one another. Studies of social interventions (my area of experience) use different samples, different procedures, and different measures, yet we use synthesis techniques to examine patterns across all of their results. All statistical models are simplifications of reality, and meta-analysis is no exception. We use meta-analysis to look at patterns of effects across studies. Our summary of these results cannot capture the nuance of each study, and this may be one cause for concern.

All statistical models are simplifications of reality, and meta-analysis is no exception.

I think the community that produces meta-analyses, and those who consume their results, need to keep in mind the types of questions that meta-analysis can address. Meta-analysis can help us understand the variation in estimates of effect magnitude across studies. For example, are differences in the types of patients or participants receiving a treatment related to differences in effect magnitude across studies? Or, does the strength of the association between socio-economic status and achievement vary with the age of the student? Meta-analysis cannot address questions of why one treatment is more effective than another, or how implementation might affect treatment efficacy. These questions about how and why are critical to understanding an intervention, but they cannot be answered by the statistical techniques of meta-analysis. As a meta-analysis community, we should keep our conclusions close to our data and acknowledge that there are some questions we cannot address with meta-analysis.

What are you thinking about these days?

I am thinking about a number of things these days – some of them echoed in earlier interviews on this blog. One issue for me is the interpretation of models of effect size heterogeneity. As Emily Tanner-Smith mentions in her interview, the most important contribution of a meta-analysis is the exploration of potential correlates of effect size heterogeneity. However, we often find that studies do not report the characteristics we want to use as predictors in an effect size model – a problem of missing data. In addition, our models may be underpowered to detect associations between our predictors of heterogeneity and our effect sizes – back to power, as I mentioned above. And, as others have pointed out, we need to be careful in interpreting effect size models: relationships between study characteristics and effect size magnitude hold at the study level only, and they are only associations.

Should we still estimate models of effect size? Yes, but with care. We should be thinking about pre-specified meta-analysis models based on theory, and distinguishing those from exploratory analyses. As Beth Tipton says in her interview, we should also stop using the "shifting units of analysis" strategy, where we fit models with whichever studies happen to report our predictors of interest. We also need to pay attention to multiplicity issues, as my former student Joshua Polanin has written.
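As a concrete, entirely hypothetical illustration of a pre-specified moderator model, the sketch below fits a simple fixed-effect meta-regression by weighted least squares with a single study-level covariate. The data and the "age" moderator are invented, and the resulting slope would be a study-level association only, not a causal or participant-level effect.

```python
# Hypothetical data: effect sizes, sampling variances, and one pre-specified moderator.
import numpy as np

effects = np.array([0.10, 0.25, 0.32, 0.18, 0.40])    # study effect sizes (invented)
variances = np.array([0.02, 0.03, 0.01, 0.04, 0.02])  # their sampling variances (invented)
age = np.array([8.0, 10.0, 12.0, 9.0, 14.0])          # pre-specified study-level moderator

w = 1.0 / variances                                   # inverse-variance weights
X = np.column_stack([np.ones_like(age), age])         # intercept + moderator

# Weighted least squares: beta = (X'WX)^{-1} X'Wy
XtWX = X.T @ (w[:, None] * X)
XtWy = X.T @ (w * effects)
beta = np.linalg.solve(XtWX, XtWy)
se = np.sqrt(np.diag(np.linalg.inv(XtWX)))            # fixed-effect standard errors
z = beta / se                                         # z-tests for intercept and slope

print("intercept, slope:", beta)
print("z statistics:", z)  # study-level associations only, and often underpowered
```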

Plain language summaries ensure that the main implications of the review do not get lost in technical detail.

Another issue of importance to our community is how to report the results of systematic reviews and meta-analyses in accessible ways, as Peter Neyroud highlights in his remarks. I recently heard a journal editor lament that many submitted meta-analyses were too full of technical information and tables, putting the onus of interpretation on the reader. We need to find ways to strike a balance between full transparency and interpretability. Organizations like the Campbell Collaboration publish plain language summaries of their systematic reviews. I am not sure how we could implement "plain language" summaries in traditional journals, though I think it should be a goal for our community.

We need to find ways to strike a balance between full transparency and interpretability.

Finally, I have been reflecting on the fact that systematic review and meta-analysis is a team sport. Producing a high-quality systematic review and meta-analysis requires a range of expertise: a literature search strategist, a substantive knowledge expert, a meta-analyst, and a policy specialist. A collaborative team approach ensures that the systematic review and meta-analysis is well grounded in theory, uses a thorough search strategy, employs appropriate statistical techniques, and presents results in an accessible way with an emphasis on policy and practice. I think this is why I love working in systematic review and meta-analysis – I love working on teams where we each bring our skills to create a product greater than the sum of its parts – a somewhat bad metaphor for systematic review!

References

Hedges, L. V., & Pigott, T. D. (2001). Power analysis in meta-analysis. Psychological Methods, 6, 203-217.

Hedges, L. V., & Pigott, T. D. (2004). The power of statistical tests for moderators in meta-analysis. Psychological Methods, 9, 426-445.

Polanin, J., & Pigott, T. D. (2015). The use of meta-analytic statistical significance testing. Research Synthesis Methods, 6, 63-73.

Valentine, J. C., Pigott, T. D., & Rothstein, H. (2010). How many studies do you need? A primer on statistical power for meta-analysis. Journal of Educational and Behavioral Statistics, 35, 215-247.
