In the movie I, Robot, Detective Del Spooner, played by Will Smith, dislikes the ever-present robots because a robot rescues him from a car accident instead of saving an 11-year-old girl. The robot chose the detective because he had the greater chance of survival. But, as Del Spooner says, the girl was someone’s daughter, and a human being would have known that.
If you had to swerve to avoid hitting an old lady or a cat, which way would you go?
As my friend of many years, and now AI superstar, Toby Walsh told the Campbell Board at our session on ‘AI and the future of evidence synthesis’ last week (here's a paper about this), robots just do what they are told. In the news last week I learned that manufacturers of autonomous cars are asking people which way they would go if they had to swerve to avoid hitting an old lady or a cat. Would they try to avoid driving into a teenage girl or a dog? These preferences are being built into the algorithms of self-driving cars. It turns out that many drivers would spare the dog and hit the girl, while cats become roadkill.
Toby also tells us that it won’t be long before it is illegal for humans to drive because robots are so much better at it. They don’t get tired, or check their WhatsApp whilst driving, or make ‘human errors’, since they aren’t human.
Are we also heading to a future in which machines, rather than humans, do evidence synthesis? Should it be illegal for human researchers to conduct systematic reviews, given all the biases we have? And, if so, should we care? Screening and coding are laborious work, so why not leave it to the machines?
This future is already with us. Programmes such as Covidence – produced by Julian Elliott, who also addressed our Board – use machine learning to assist screening. There is now software to machine-read and code data tables. Content analysis has been done for years with software such as Atlas.ti. And it is simple enough to write code in Stata or R to run and update meta-analyses for different study designs – as Eva Vivalt does for AidGrade. (Notice that three of the four people I have mentioned by name in this blog live and work in Australia. Maybe a lesson to be learned there.)
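The pooling step really is mechanical. As a rough illustration – in Python rather than Stata or R, and with made-up effect sizes and standard errors – a fixed-effect, inverse-variance meta-analysis is only a few lines:

```python
import math

def fixed_effect_meta(effects, std_errors):
    """Inverse-variance weighted (fixed-effect) meta-analysis.

    effects: per-study effect estimates (e.g. standardised mean differences)
    std_errors: the matching standard errors
    Returns (pooled effect, pooled standard error).
    """
    # Each study is weighted by the inverse of its variance,
    # so more precise studies count for more.
    weights = [1.0 / se**2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Hypothetical results from three studies (illustrative numbers only)
effects = [0.30, 0.10, 0.25]
std_errors = [0.10, 0.15, 0.12]

pooled, se = fixed_effect_meta(effects, std_errors)
print(f"pooled effect = {pooled:.3f}, "
      f"95% CI = ({pooled - 1.96*se:.3f}, {pooled + 1.96*se:.3f})")
```

Updating the review when a new study appears is then just a matter of appending its numbers and rerunning – which is exactly the sort of task a machine never tires of.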
But there are limitations. Machine learning is only as good as the humans it learns from, so our conscious and sub-conscious biases will be reproduced. Microsoft had to quickly pull its chatbot Tay off Twitter, since as a machine it could only echo what it heard, which included some rather distasteful opinions. A couple of years back, Julian Elliott was lead author of a paper on big data in the journal Nature, in which we argued that correlation is not causation, no matter how big the data. That remains true, and always will. So it is a problem if the people programming, for example, IBM's Watson don’t understand selection bias, identification strategies and why vote counting kills babies (come to one of my presentations to hear that one, or read this). I don’t doubt for a moment that machines can be taught most of this stuff. Most, but perhaps not all of it.
Toby showed us that machines do struggle with nuances of language. Machines can translate words but not meaning. Are authors talking about an intervention/outcome/study design because they include it, or because they don’t? Are they doing random assignment of the intervention, or is it just a random sample of programme participants and non-participants? Many researchers struggle with these things! So it will take some doing for machines to get it right.
As a researcher you often have to hunt through the small print and the footnotes, or write to authors, to find out sample size, attrition rates or that pesky standard deviation of the outcome measure. Again, a human element is probably required. At least for now.
So the future is here and we need to embrace it rather than fight it. As evidence champions, we also need to seek out and engage those driving these machine-based approaches, because an observational study isn’t a randomised controlled trial – and a human would have known that.