Back in October, the headteacher at my son’s school began each assembly by displaying the Premier League table, with Tottenham Hotspur at the top. (My son, a fan of Tottenham’s local rivals Arsenal, was outraged.) Those familiar with English football will know that Tottenham were top of the league for much of October, but only those with long memories will recall the last time Spurs finished the season in that position. It was 1961.
Yet it doesn’t take much to produce an alternate universe in which Spurs are a winning machine. All you need to do is what the headteacher did: when Tottenham are winning, display the league table; when they are not, keep quiet. Recently, the headteacher has been quiet. This behaviour has a name: publication bias. Nobody is likely to be fooled by a humorous school assembly into thinking that Tottenham will win the Premier League, but, in other contexts, publication bias is a serious business.
When we are trying to make sense of the world, it matters that there is a systematic difference between the information that is put in front of us and the information that is obscured. We are surrounded by images and ideas that have been sieved through the deceptive filter of publication bias and, unlike the young football fans who know that Spurs don’t win many trophies, we typically lack the background knowledge to draw the right conclusions.
Publication bias is traditionally a concern in academic journals: surprising, exciting, novel and, in particular, statistically significant results tend to be published, while “null” findings, where the statistics demonstrate no clear effect, tend to languish in file drawers. This may sound like a minor annoyance, but, in reality, it leaves a perniciously misleading picture of the evidence that should be available.
To see why, replace “Tottenham lead the Premier League” with “new antidepressant is highly effective in clinical trials”. If trials that show no effect are unpublished, while those that find an effect are trumpeted, then the published evidence base is systematically biased and will lead to bad clinical decisions.
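The distortion is easy to demonstrate. Here is a toy simulation (my own illustration, not anything from a real trial): we run many small trials of a drug that has no true effect, then "publish" only those whose observed effect happens to look impressive. The sample sizes, the effect threshold and the number of trials are all arbitrary choices for the sketch.

```python
import random
import statistics

random.seed(42)

def run_trial(n=50, true_effect=0.0):
    """Simulate one two-arm trial; return the observed mean difference."""
    control = [random.gauss(0.0, 1.0) for _ in range(n)]
    treated = [random.gauss(true_effect, 1.0) for _ in range(n)]
    return statistics.mean(treated) - statistics.mean(control)

# Run many trials of a drug with NO real effect.
effects = [run_trial() for _ in range(2000)]

# "Publish" only the trials whose observed effect looks impressive --
# a crude stand-in for a statistical-significance filter.
published = [e for e in effects if e > 0.3]

print(f"mean effect, all trials:     {statistics.mean(effects):+.3f}")
print(f"mean effect, published only: {statistics.mean(published):+.3f}")
```

Averaged over all 2,000 trials, the drug correctly looks useless; averaged over the "published" subset alone, it looks like a reliable treatment. Nothing was faked in any single trial — the bias comes entirely from which results are allowed to surface.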
While publication bias is starkest and best studied in formal research, the same tendency applies much more broadly. Think about who we see when we turn on the television. People who appear on TV tend to be better looking and richer than the rest of us and, almost by definition, they are more famous. We are a social species and we often deal in social comparisons. If we compare ourselves not to our friends but to the celebrities we spend so much time watching, we may feel we don’t match up.
Or consider crime. In any country with a population of millions, there will be a steady stream of dreadful crimes. Such crimes are just common enough to appear every time you look at the news, while being just rare enough to be newsworthy. According to the Crime Survey for England and Wales, the UK’s most respected data series on crime, violent crime is down by more than 75 per cent since a peak in 1995; it is down by about half since 2010.
Yet surveys of public opinion frequently suggest that crime is a pressing concern, and the majority of people believe crime is rising. The likely explanation for this misperception is simply that we are surrounded by cop show dramas and by reports of ghastly crimes, rather than reports of banks unrobbed, houses unburgled and women who walked safely home at night. Our perceptions of crime don’t reflect reality, but they accurately match the news and entertainment with which we are presented.
Arguably, our own brains inflict a kind of publication bias on us every day, in the form of “the focusing illusion”. Whenever we contemplate a decision, we summon some considerations to mind while neglecting others. For example, when pondering whether to buy new garden furniture, we imagine a sunny weekend. We do not think of all the days when it will be cold and rainy, or those when we will need to be in the office, not the garden. In the words of Nobel laureate Daniel Kahneman, “Nothing in life is as important as you think it is, while you are thinking about it.”
I know of no antidote to the fact that beautiful people dominate TV, but there is, at least, a well-understood treatment for publication bias in medicine: every trial should be publicly registered before it begins (lest it go missing), and every trial should have its results properly reported.
The All Trials campaign was launched in 2013 to put pressure on pharmaceutical companies and universities to preregister every clinical trial and publish every result, and the campaign received further impetus when one of its co-founders, Ben Goldacre, led a team to design an automated audit system, Trials Tracker. Trials Tracker automatically checks that clinical trials in the US, EU and UK are being promptly reported.
Goldacre recently told me that a watershed moment came in 2019, when the UK’s Parliamentary Science and Technology Committee wrote to the medical schools in leading British universities. The committee chair warned them that the committee had been studying the Trials Tracker data, and would soon be inviting the biggest laggards to give evidence in person.
“In some respects that was a bit unhelpful to me,” Goldacre deadpanned, “because, at the time, I didn’t have a permanent [academic] post and that sort of thing does slightly annoy deans of medical schools and makes people a bit cross and sad.”
But the message was received. Faced with the combination of clear metrics and the threat of public shaming, UK universities suddenly discovered a new zeal for reporting their clinical trials. According to EU Trials Tracker, they now boast an excellent record of publishing every result, as do pharmaceutical companies. If only the same were true of headteachers.
Tim Harford’s new book for children, ‘The Truth Detective’ (Wren & Rook), is now available