Last week the long-suffering public was subjected to press coverage of yet another smoking ban miracle. According to its supporters, the ban in England has reduced stillbirths by 8% and saved 1,400 babies in the process. Smoking ban advocate and public health activist Jasper Been claimed that there is “enough evidence to show definitively that the smoking ban was working, and that other countries should follow suit”.
How could they fail to do so faced with compelling scientific evidence and saved babies? I can think of several reasons, not least of which is that all of the “evidence” for health benefits to date has been produced by partisan activists such as Been and none of it stands up to more impartial scrutiny. A second important reason not to take Been and friends seriously is the appalling quality of their own evidence and the dubious methodology used to obtain it.
This latest baby-saving conjuring trick was published in Scientific Reports, a journal that claims to publish only good science but appears to have very low acceptance standards. Good scientific papers contain detailed methods, data and carefully explained results that allow other people to challenge, support, build on and repeat the work. Public health papers bear little relation to science. Their authors, like magicians, try to hide how they pull off their illusions by not fully explaining their methods and showing only carefully massaged data and results.
This paper is no exception. We know that Been et al used regression analysis and Office for National Statistics (ONS) data sets, but the only results relevant to the 8% claim appear in a table that contains nothing but odds ratios comparing the risk of stillbirth in the 4 years post ban with the 11 years pre ban. We are given no details about how the magic numbers were created and tested, but we are assured that various variables have been taken into account and that the ratio is 0.922, which represents a 7.8% risk reduction post ban. The 1,400 virtual babies are then calculated using an absurd counterfactual algorithm to shape-shift the data.
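For scale, here is a rough sketch of how an odds ratio of 0.922 turns into a four-figure "babies saved" headline. The birth and stillbirth figures below are my own approximations for England, not the paper's inputs, and the counterfactual step is a plausible reconstruction rather than their published method:

```python
# Rough sketch: how an odds ratio of 0.922 becomes a "babies saved"
# headline. All input figures are assumptions for scale (roughly
# ONS-like), NOT the paper's actual inputs or method.

odds_ratio = 0.922          # post-ban vs pre-ban, as reported
births_per_year = 700_000   # assumed: approximate annual births in England
stillbirth_rate = 5 / 1000  # assumed: approximate stillbirth rate
post_ban_years = 4

# For a rare outcome the odds ratio approximates the relative risk,
# so 0.922 is read as a (1 - 0.922) = 7.8% risk reduction.
risk_reduction = 1 - odds_ratio
print(f"claimed risk reduction: {risk_reduction:.1%}")  # 7.8%

# Counterfactual step: scale the observed stillbirths up by 1/OR to
# estimate what "would have happened" without the ban, then call the
# gap "babies saved".
observed = births_per_year * stillbirth_rate * post_ban_years
counterfactual = observed / odds_ratio
print(f"virtual babies saved: {counterfactual - observed:.0f}")
```

On these assumed inputs the gap comes out near 1,200 — the same order as the headline 1,400 — which shows how directly the "saved babies" total depends on whatever baseline figures are fed into the counterfactual.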
I have neither fancy statistical software nor grant money to pay the ONS for monthly data, but the ONS annual stillbirth data is freely available and I do have Excel. Using those rudimentary tools and a bit of background research, I believe that I have worked out how this trick was pulled off.
The ONS stillbirth data goes back to 1927 and shows a pretty consistent decline since the 1930s, with the odd blip along the way. If we look at relatively recent data, we see two such upward blips.
If we look at the pre-ban control period starting from 1993, the crude linear fit is not very good because there are obvious variations in a small number of data points, but it does serve to illustrate that the overall trend was fairly flat.
But if we use Been’s 11-year control period from 1995, the 2002–2005 upward blip now pushes the overall trend upwards...
This might be important because, with upward-trending control data, even flat post-intervention data can be interpreted as a fall in risk, especially by those keenly searching for one.
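The mechanism is easy to demonstrate with a toy example. The rates below are invented purely for illustration (they are not ONS figures): a control series that is flat apart from a late upward blip, followed by post-intervention data that does not change at all.

```python
# Toy demonstration: an upward blip late in the control period makes
# perfectly flat post-intervention data look like a fall in risk.
# The rates below (per 1,000) are invented for illustration only.

# Control period: flat at 5.3 except a blip in the final four years.
pre = [5.3, 5.3, 5.3, 5.3, 5.3, 5.3, 5.3, 5.5, 5.6, 5.7, 5.5]
# Post-intervention: completely flat, i.e. no effect at all.
post = [5.3, 5.3, 5.3, 5.3]

# Fit a least-squares line to the control period...
n = len(pre)
xbar = (n - 1) / 2
ybar = sum(pre) / n
slope = (sum((i - xbar) * (y - ybar) for i, y in enumerate(pre))
         / sum((i - xbar) ** 2 for i in range(n)))
intercept = ybar - slope * xbar

# ...then extrapolate that trend across the post-intervention years.
predicted = [intercept + slope * (n + i) for i in range(len(post))]
gap = sum(p - a for p, a in zip(predicted, post))
print(f"control trend: {slope:+.4f} per 1,000 per year")
print(f"shortfall vs trend: {gap:.2f} per 1,000 over 4 years")
```

Nothing changed post intervention, yet relative to the blip-inflated trend the flat data sits "below expectation" in every single year — exactly the sort of gap an eager analyst can book as a benefit.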
I thought it a bit strange that the authors had used uneven time periods pre and post ban. The graph below demonstrates why adopting the more robust strategy of using identical four-year time periods either side of the ban might not have served their purpose.
Before the ban, annual stillbirth incidence was declining rapidly. Post ban, the trend is flat.
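The symmetric comparison is simple to sketch. Again the rates are invented for illustration, shaped like the pattern just described: falling sharply in the four years before the ban, flat in the four years after.

```python
# Sketch of the symmetric comparison the paper avoided: fit a trend
# to identical four-year windows either side of the ban. The rates
# (per 1,000) are invented, shaped like the pattern described above.

def slope(ys):
    """Least-squares slope of evenly spaced annual values."""
    n = len(ys)
    xbar = (n - 1) / 2
    ybar = sum(ys) / n
    num = sum((i - xbar) * (y - ybar) for i, y in enumerate(ys))
    den = sum((i - xbar) ** 2 for i in range(n))
    return num / den

pre_4yr = [5.7, 5.5, 5.4, 5.2]   # assumed: falling before the ban
post_4yr = [5.2, 5.2, 5.2, 5.2]  # assumed: flat after the ban

print(f"pre-ban trend:  {slope(pre_4yr):+.3f} per 1,000 per year")
print(f"post-ban trend: {slope(post_4yr):+.3f} per 1,000 per year")
```

On numbers like these the entire decline happens before the ban and stops dead at it — the opposite of what a genuine intervention effect should look like.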
The sharp fall pre ban is a consequence of the all-important second blip in the data, when, for two consecutive years, rates increased. After 50 years of consistent falls, the rise in 2002 came as something of a shock. So much of a shock that the ONS launched an investigation in which it tested many possible variables, including some that Been et al claim airily to have adjusted for. The ONS could find no explanation for the rise. Been et al did not attempt to find one.
So the trick depends on comparing post-ban data with control data so out of line with an 85-year trend that it sparked a national investigation. Unexplained anomalies are the last thing that honest scientists want in a control because they have enormous potential to skew results and invalidate conclusions. Public health activists are often rather less discerning, especially when outliers conveniently skew the control data in a direction that suits their agenda.
Been et al might well be right about the risk of stillbirth being 7.8% lower post ban, but if that is the case, they are most probably measuring the magnitude of an anomalous upward blip in the pre-ban data rather than a downward effect created by the intervention. Being advocates, they have simply assumed the latter, found an unfussy journal that is happy to publish statistical conjuring disguised as science, and then run to the media shrieking for more bans. It is clear from the ONS data that the decline in stillbirths happened before the smoking ban was introduced and was no more than regression to the mean.
Scientific Reports is not the first journal to fall for the convenient-anomalous-peak-in-the-control-data trick. It was recently used to claim a childhood respiratory admissions miracle in the European Respiratory Journal and a childhood asthma miracle in Pediatrics. Been and his pals have moved on from merely helping children to actually saving babies, but their dubious MO remains the same and journal editors keep on falling for it.