Thursday, 2 November 2017

Up is down in 'public health'

You may fondly recall Jill Pell, the anti-smoking campaigner who was responsible for the false claim that the rate of heart attacks fell by 17 per cent in Scotland after the smoking ban was introduced. Hospital admission statistics disproved this, but that didn't stop the factoid spreading across the world. It has been cited as a fact in parliament several times.

Pell returned in 2010 with a risible attempt to prove that hospital admissions for childhood asthma fell by 18 per cent after the ban. Again, routine hospital statistics showed this to be complete nonsense.

Both studies somehow got published by the prestigious New England Journal of Medicine. Last week she returned again, this time in the rather less prestigious Tobacco Control, looking at hospital admissions for childhood respiratory tract infections (RTIs).

Awkwardly, it turns out that the number of admissions rose after the smoking ban, as this graph (from Pell's study) shows...

[Graph from the study: acute RTI admissions over time, rising after the ban]

But this was only a minor inconvenience. In the past, Pell has managed to make it look as if admission rates were falling when they were essentially static, so it only required a bit more statistical jiggery-pokery to turn a rise into a decline. She probably enjoyed the challenge.

Here's what the data told her...

In our primary analysis, introduction of smoke-free legislation was associated with an immediate rise in acute RTI events (incidence rate ratio (IRR) 1.24, 95% CI 1.20 to 1.28) and an additional gradual increase over time (IRR 1.06 per year, 95% CI 1.05 to 1.06; table 2). This finding was consistent when upper and lower RTI events were considered separately.
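
(For anyone curious about the mechanics, the 'immediate rise plus gradual increase' comes from an interrupted time series model with a 'step' term and a 'slope' term; the exponentiated coefficients are the IRRs quoted above. Here is a minimal sketch in Python with simulated data. It is not the authors' code, the population figure is invented, and the seasonal adjustment they describe is omitted for brevity.)

```python
# Minimal sketch of a negative binomial interrupted time series model:
# monthly admission counts regressed on an underlying trend, a post-ban
# indicator (the immediate 'step') and time since the ban (the gradual
# 'slope'), with a log-population offset. All data are simulated.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
months = pd.date_range("1996-01", "2012-12", freq="MS")
t = np.arange(len(months)) / 12.0                 # years since start of series
ban = (months >= "2006-03-26").astype(float)      # Scottish ban, 26 March 2006
t_since_ban = np.clip(t - t[ban == 1][0], 0.0, None)
population = np.full(len(months), 700_000)        # invented under-13 population

# simulate counts that genuinely rise after the ban, roughly matching the quoted IRRs
mu = np.exp(np.log(population) - 8.0 + 0.02 * t
            + np.log(1.24) * ban + np.log(1.06) * t_since_ban)
counts = rng.poisson(mu)

X = sm.add_constant(pd.DataFrame({"trend": t, "step": ban, "slope": t_since_ban}))
model = sm.GLM(counts, X, family=sm.families.NegativeBinomial(alpha=0.01),
               offset=np.log(population)).fit()
print(np.exp(model.params[["step", "slope"]]))    # ~1.24 step, ~1.06 per year
```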

Awks. At this point, Pell and her team could have decided not to publish (which, I strongly suspect, is what most 'public health' academics do when faced with such findings). Instead, they ploughed on.

We used advanced methods and followed a prespecified analytical approach—including a detailed statistical analysis plan—in an attempt to promote transparency. Despite this, our study yielded findings that were implausible and highly likely to be spurious.

Why would they be considered 'implausible'? Because...

Studies in other countries, including in the UK, previously identified consistent associations between comprehensive smoke-free legislation and subsequent reductions in paediatric RTI hospitalisations.

The problem here is that those studies are policy-based junk that directly contradict the evidence. Even if they weren't, respectable scientists don't change their conclusions to match those of other studies.

You know what is actually 'implausible'? The belief that a ban on smoking in workplaces, which mainly affected pubs, would have any effect on hospital admissions of children under the age of 13 for a condition that is usually caused by a virus.

Building on the existing evidence base on the topic, we feel it is highly unlikely that smoke-free legislation was indeed responsible for a rise in paediatric RTI events, as our primary analyses seemed to suggest.

Nobody is arguing that the smoking ban caused RTI admissions to rise. The point is that, regardless of how Pell and her team 'feel', they did rise.

The amusing thing is that Pell et al. were clearly pleased with the rigour with which they approached this job...

Our study has a number of strengths. It was conducted according to a predefined protocol, including a detailed statistical analysis plan, which was developed a priori in an attempt to promote scientific transparency and reproducibility. We used over 10 million patient-years of high-quality data routinely collected over a 17-year period. Virtually universal availability of the CHI number minimises risks of incorrect data linkage across the datasets. We accounted for underlying temporal trends in RTI events as well as changes in population size and demographic structure. We applied a look-back period to reduce bias from RTI events occurring prior to the study period. Our modelling approach is widely applied in the evaluation of national public health interventions, including national smoke-free laws.

Then comes the punchline...

Given these strengths, the implausible findings are of considerable concern.

At which point, a sudden rethink was in order. But a further analysis still didn't come up with the goods...

In further post hoc analyses, the strength of association between timing of smoke-free legislation and acute RTI events was very similar when evaluated using a reg(S)ARIMA model of order (autoregressive term + multiplicative seasonal autoregressive term): IRR 1.15, 95% CI 1.02 to 1.28.
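
(The reg(S)ARIMA check is easy enough to sketch too: the same post-ban indicator is fed into a seasonal ARIMA model as an exogenous regressor. Again this is a minimal illustration with simulated data, and the order terms are a guess rather than the paper's exact specification.)

```python
# Sketch of a seasonal ARIMA ('SARIMA') model with the post-ban indicator as
# an exogenous regressor, fitted to simulated log admission counts.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(1)
months = pd.date_range("1996-01", "2012-12", freq="MS")
ban = pd.Series((months >= "2006-03-26").astype(float), index=months, name="ban")

# simulated log counts with a seasonal cycle and a ~15% step at the ban
log_counts = pd.Series(5.5
                       + 0.1 * np.sin(2 * np.pi * np.arange(len(months)) / 12)
                       + np.log(1.15) * ban.values
                       + rng.normal(0, 0.05, len(months)),
                       index=months)

# one non-seasonal and one seasonal autoregressive term (an assumption, not
# necessarily the order used in the paper)
sarima = SARIMAX(log_counts, exog=ban, order=(1, 0, 0),
                 seasonal_order=(1, 0, 0, 12)).fit(disp=False)
print(np.exp(sarima.params["ban"]))   # multiplicative step, roughly the quoted 1.15
```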

Ultimately, they resort to arguing that the smoking ban didn't cause the number of admissions to rise (which is something that nobody would seriously claim anyway)...

However, automatic break point detection suggested that the increase in acute RTI events started well before introduction of smoke-free legislation, that is, in November 2004. Using this break point rather than timing of smoke-free legislation in the primary negative binomial regression analysis indeed improved model performance as compared with the primary model. 

Phew! 

When timing of smoke-free legislation was then added to the model that included the November 2004 break point, smoke free legislation was associated with a gradual decrease in acute RTI events (IRR 0.91 per year, 95% CI 0.87 to 0.96), with no evidence of a ‘step’ change at that time.
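
(For completeness, here is a rough sketch of how that rescue operation works: hunt for the break point that best fits the series, then refit with both the break point and the smoking-ban terms. The data below are simulated with an upward turn in late 2004 and no ban effect at all, so it illustrates the method rather than reproducing the paper's numbers, and the AIC search is only a crude stand-in for whatever 'automatic break point detection' the authors used.)

```python
# Sketch of the break point analysis: a crude AIC search over candidate break
# months, then a refit that includes both the chosen break point and the
# smoking-ban step and slope. Everything here is simulated and illustrative.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
months = pd.date_range("1996-01", "2012-12", freq="MS")
t = np.arange(len(months)) / 12.0
population = np.full(len(months), 700_000)        # invented denominator
ban = (months >= "2006-03-26").astype(float)

# simulate a series whose upward turn begins in November 2004, before the ban
t_since_turn = np.clip(t - t[months >= "2004-11-01"][0], 0.0, None)
counts = rng.poisson(np.exp(np.log(population) - 8.0 + np.log(1.08) * t_since_turn))

def its_fit(break_month, extra=None):
    """Negative binomial 'step and slope' model for a given break month."""
    step = (months >= break_month).astype(float)
    slope = np.clip(t - t[months >= break_month][0], 0.0, None)
    cols = {"trend": t, "step": step, "slope": slope}
    if extra:
        cols.update(extra)
    X = sm.add_constant(pd.DataFrame(cols))
    return sm.GLM(counts, X, family=sm.families.NegativeBinomial(alpha=0.01),
                  offset=np.log(population)).fit()

# crude stand-in for 'automatic break point detection': pick the break by AIC
candidates = pd.date_range("2003-01", "2006-12", freq="MS")
best = min(candidates, key=lambda m: its_fit(m).aic)
print(best)                                       # lands near November 2004

# refit with the chosen break point plus the smoking-ban terms
ban_slope = np.clip(t - t[ban == 1][0], 0.0, None)
final = its_fit(best, extra={"ban_step": ban, "ban_slope": ban_slope})
print(np.exp(final.params[["ban_step", "ban_slope"]]))
# with no real ban effect simulated these land near 1; the paper reports 0.91/year
```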

God knows how they reached that last conclusion, but anyone who reads the abstract will see that the study shows that 'the legislation may in fact be protective', i.e. that the smoking ban led to a reduction in the number of admissions for childhood respiratory infections.

I'm tempted to say that you couldn't make it up, but they did.

This is not the first time that 'public health' quackademics have turned a rise in hospital admissions into a decline (see the Brazilian miracle, for example), but it is the first time they have done it in such plain sight. Jill and her chums outline their methodology in detail, report their findings and then redo the whole thing because the findings go against their a priori assumptions. After redoing it, they arrive at exactly the opposite conclusion.

It is breathtaking. Is there any other field of science where researchers could do this so openly?
