Written by Alexander Krauss, Postdoctoral Research Fellow, London School of Economics, and Adjunct Lecturer, University College London
All randomised controlled trials are to some degree biased. While trials are often used to inform decisions in public health and social policy, I argue in the study Why all randomised controlled trials produce biased results* that a degree of bias inevitably arises in any trial. Some share of the people recruited refuse to participate (leading to sample bias); some degree of partial blinding or unblinding of those involved in the trial generally occurs (leading to selection bias); and participants generally take the treatment for different lengths of time and in different dosages (leading to measurement bias) – among many other issues.
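The first of these mechanisms, sample bias from refusal, can be illustrated with a small simulation. This is a minimal sketch under an assumed (hypothetical) relationship: people who would benefit more from the treatment are more willing to consent, so the consenting sample over-represents high-benefit individuals and the trial's estimate drifts away from the population average effect. The numbers and the consent rule are illustrative, not taken from any actual trial.

```python
import random

random.seed(0)

# Hypothetical population: each person has an individual treatment
# effect drawn around 1.0. We assume willingness to consent rises
# with that effect (e.g. greater need for the treatment).
population = []
for _ in range(100_000):
    effect = random.gauss(1.0, 0.5)  # true individual effect
    consent_prob = min(1.0, 0.2 + 0.3 * max(effect, 0.0))
    consents = random.random() < consent_prob
    population.append((effect, consents))

true_avg = sum(e for e, _ in population) / len(population)
consenters = [e for e, c in population if c]
trial_avg = sum(consenters) / len(consenters)

# The trial can only ever measure the effect among consenters,
# which here is systematically larger than the population average.
print(f"population average effect: {true_avg:.2f}")
print(f"effect among consenters:   {trial_avg:.2f}")
```

Because consent is correlated with the individual effect, the gap between the two averages is structural: no amount of randomisation *after* recruitment removes it, which is the point about sample bias arising before randomisation even begins.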
The ten most cited RCTs worldwide, which are assessed in the study, suffer from these general issues. But they also suffer from a range of other problems that affect their estimated outcomes: participants’ characteristics (such as age, health status, and level of need for the treatment) are often unevenly distributed between trial groups, and trials often neglect alternative factors contributing to their main reported outcome, among others. Some of these issues cannot be avoided in trials – but they nonetheless affect the validity of their results and conclusions.
The methodological study is an analysis of the RCT method itself and not any particular RCTs. While the ten most cited RCTs analysed in the study happen to be trials in the field of general medicine, the insights outlined in this study are equally useful and important for researchers using RCTs in economics, psychology, agriculture and the like.
Assumptions and biases generally increase at each step when conducting trials
Overall, the assumptions and biases underlying any given trial’s results usually increase at each step: from how we construct our variables, select our initial sample, and randomise participants into trial groups; to how we analyse the data for participants who took the treatment for different lengths of time and in different amounts; to how we try to ensure that everyone involved is fully blinded before the trial begins and throughout its entire implementation – among many other steps before, between, and after these.
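One of the analysis-stage issues mentioned above – participants taking the treatment for different lengths of time and in different amounts – can also be sketched in a few lines. In this hypothetical setup, each treated participant takes only a fraction of the prescribed dose, so the trial's simple treated-versus-control comparison is attenuated relative to the assumed full-dose effect. The effect size and the adherence range are invented for illustration.

```python
import random

random.seed(1)

n = 50_000
true_effect = 1.0  # assumed effect of taking the full prescribed course

# Control group: outcomes with no treatment effect.
control = [random.gauss(0.0, 1.0) for _ in range(n)]

# Treated group: each participant takes a varying fraction of the
# prescribed dose, so each receives only part of the full effect.
treated = []
for _ in range(n):
    adherence = random.uniform(0.3, 1.0)
    treated.append(random.gauss(true_effect * adherence, 1.0))

# The naive group-difference estimate reflects the average dose
# actually taken, not the effect of the treatment as prescribed.
estimate = sum(treated) / n - sum(control) / n
print(f"estimated effect: {estimate:.2f} (full-dose effect: {true_effect})")
```

The estimate lands near the full-dose effect scaled by average adherence rather than the full-dose effect itself – a simple instance of the measurement bias that varying dosages and treatment durations introduce into a trial's reported result.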
Difficulties in reproducing results thus often arise because we are dealing, in complex ways, with many actors (study designers, participants, data collectors, implementing practitioners, study statisticians, etc.) who must make hundreds of unique decisions at many levels over the course of designing, implementing, and analysing any given study – and some degree of bias unavoidably arises during this process.
That all trials face some degree of bias is simply the trade-off for studies actually being conducted in the real world. Given the multiple complex processes involved in carrying out a trial over time, a number of things inevitably do not go as planned or designed. Once a study is completed, some biases will have arisen, and nothing can be done about a number of them.
Are biased results in trials still good enough to inform our decisions?
In many cases they are. But that judgement usually depends on how useful the results are in practice, and on how robust they are relative to other studies using the same method or, at times, other methods. No single study, however, should be the sole and authoritative source used to inform policy and our decisions.
Overall, researchers, practitioners, and policymakers need to become more aware of the broader set of biases facing trials. A critical step in this direction, and towards improving trial quality, is, as outlined in the study*, for journals to begin requiring researchers to describe in detail the assumptions, biases, and limitations of their studies. Each trial should therefore include a separate table with the information listed in the CONSORT guidelines, which need to be significantly expanded to also require information not yet commonly reported: the share, traits, and reasons of participants who refuse to participate before randomisation, who do not take full dosages, or who have missing data; the blinding status of all key trial persons; alternative (background) factors that can affect the main outcome; and the wider range of issues discussed throughout the study. Each trial also needs to include a table with endline data (not just baseline data) on participants’ background traits and clinic characteristics, together with more detailed information on the ‘applicability of results’: the broader range of background influences on participants, step-by-step information on how the initial sample is actually generated (not just eligibility criteria and trial location), and whom the trial results may explicitly apply to. As long as researchers do not report this essential information in their studies, practitioners and citizens will simply have to rely on the information and warning labels provided by policymakers, biopharmaceutical companies, and others implementing the tested policies and selling the tested treatments.
* Krauss, Alexander (2018) “Why all randomised controlled trials produce biased results”, Annals of Medicine.