As expected, the English smoking ban/heart attack miracle received blanket and largely uncritical coverage last week. The study was discussed in my last post but, to briefly recap, the heart attack rate fell by 4.26% in the year after the ban came in. This was no more and no less than might be expected considering the existing downward trend (3.21% and 5.19% in the two years before). Since there had manifestly been no dramatic decline in the number of heart attacks, Anna Gilmore and her team attributed a large chunk of that 4.26% to the ban (more than half of it, in fact, hence the 2.4% figure that appeared in every news report).
This is so speculative that it might as well be gossip. It could be true but, if it is, no statistical evidence is provided to demonstrate it. The only figures presented in the study are the crude hospital admissions data that show a continuation of the existing trend. The 2.4% figure comes out of nowhere. No workings. No calculations. No data that can be verified, checked or examined. Just an assertion to be taken on trust.
We need a little more than that if we are to attribute a long-term phenomenon to a one-off event. It is as if Gilmore is doing a rain dance in the middle of a thunderstorm and demanding credit for the rain. The onus is on her to convince us that the rain would have stopped if she hadn't shown up, not the other way round. Without that, she is just another loon dancing in a downpour.
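To put that in concrete terms, here is the whole 'miracle' reduced to a few lines of arithmetic, using only the percentage falls quoted above. Averaging the two pre-ban years is my own crude stand-in for the trend, not anything from the study:

```python
# The 'miracle' in four lines of arithmetic, using only the percentage
# falls quoted above. Averaging the two pre-ban years is my own crude
# stand-in for the trend, not anything from the study.

pre_ban_falls = [3.21, 5.19]   # % year-on-year falls, two years before the ban
post_ban_fall = 4.26           # % fall in the year after the ban

trend_estimate = sum(pre_ban_falls) / len(pre_ban_falls)   # 4.20%
excess = post_ban_fall - trend_estimate                    # 0.06 points

print(f"Expected fall on trend: {trend_estimate:.2f}%")    # 4.20%
print(f"Actual fall:            {post_ban_fall:.2f}%")     # 4.26%
print(f"Left over for the ban:  {excess:.2f} percentage points")
```

On that crude reckoning, the post-ban year beat the trend by six hundredths of a percentage point.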
Not that any of this affected the media coverage. Few journalists seemed aware that the 2.4% figure was an estimate, or that the heart attack rate was in the midst of a long-term decline. They certainly didn't bother to ask whether the post-ban drop was any higher or lower than usual.
The Times, for example, reported:
The number of people admitted to hospital for heart attacks has dropped by an average of 100 a month since the introduction of the smoking ban in England, research shows.
And the Daily Mail—which once predicted that the smoking ban would "cut heart attacks by 32,000 a year"—reported:
The Bath University research found hospital admissions for heart attacks fell 2.4 per cent in England in the year after it became the last UK nation to ban smoking in indoor public places.
The "nanny state" mostly gets a pasting from critics who dismiss government efforts to make us fitter or slimmer or healthier as unwarranted intrusion into individuals' lives.
Today, the critics get their comeuppance with research showing that nannying works. In the first year after the smoking ban was introduced in July 2007, the air in bars, restaurants and offices suddenly became sweeter – and more than 1,000 heart attacks were prevented.
The next targets?
* A ban on smoking in cars to protect children. Millions of children are exposed to second-hand smoke, which is worse in cars because of the confined space, says the Royal College of Physicians. Smoking is banned in cars carrying children in some states in the US, Australia and Canada.
* A minimum price for alcohol. The National Institute for Health and Clinical Excellence said it would discourage supermarkets from discounting cheap alcohol. No price was specified, but a 50 pence per unit of alcohol minimum would mean a bottle of wine would cost at least £4.50, and a pint of lager £1.14.
* A fat tax, on fast foods and chocolate, to curb the obesity explosion. As smoking falls and obesity increases, experts predict the latter will come to be seen as more damaging than the former. Some want us to follow Romania's example which pledged earlier this year to introduce a tax on junk-food.
The Times, which had run an extraordinarily premature and totally inaccurate story about the study nine months earlier, won the award for the most outrageous extrapolation of the day:
Ok Pannenborg, the former chief health adviser of the World Bank, said that the British study offered compelling evidence for nations trying to tackle smoking throughout the world.
Dr Pannenborg, who is a speaker at The Times Cheltenham Science Festival, starting today, said that a basic extrapolation of the findings suggested that more than a million deaths could be averted in China if it took similar action over the next decade.
The BBC gets a fair bit of stick in the blogosphere, sometimes with good reason, but its report was actually (slightly) better than most. It did, at least, mention that...
The 2.4% drop was much more modest than that reported in some areas where similar bans have been introduced
...last year a cross-party group of MPs argued the laws needed amending to stop pubs losing valuable trade from smokers.
And, uniquely in the media last week, it allowed a critical voice to be heard:
"The number of emergency heart attack admissions had been falling for several years, even before the smoke-free legislation, so what we are seeing is part of a trend that has nothing to do with the smoking ban," said Simon Clark, director of Forest.
"This study is designed to show the benefits of prohibition. What it doesn't show is the misery that has been heaped on hundreds of thousands of people by an unnecessarily harsh and divisive piece of legislation."
It is indicative of how far reporting of these issues has sunk that I feel the need to highlight these crumbs of reason at all. Two points were rammed home in every report. (1) There had been a 2.4% fall in heart attack admissions after the smoking ban, and (2) 1,200 fewer people were admitted to hospital after the smoking ban. Both of these statements are simply untrue—the rate fell by 4.26%, or 2,300 people. This tells us something about the standard of science reporting, but it is not the real issue. The question is whether this was indicative of a smoking ban effect.
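Incidentally, the two sets of figures are easy to reconcile with a little arithmetic; the annual baseline below is inferred from the numbers quoted above, not taken from the paper:

```python
# Rough reconciliation of the two sets of numbers. The annual baseline
# is inferred from the figures quoted above, not taken from the paper.

observed_fall_pct = 4.26    # the actual year-on-year fall in admissions
observed_fall_n = 2300      # the same fall expressed in people

baseline = observed_fall_n / (observed_fall_pct / 100)   # ~54,000 admissions/year
modelled_fall_n = baseline * 0.024                       # the study's 2.4% estimate

print(f"Implied annual admissions: {baseline:,.0f}")        # ~54,000
print(f"2.4% of that baseline:     {modelled_fall_n:,.0f}") # ~1,300
# i.e. the '1,200 fewer people' figure is (roughly) the model's 2.4%
# estimate dressed up as an observed count, not the actual fall of ~2,300.
```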
Elsewhere, Alex Massie at The Spectator and Ed West at The Telegraph wrote more sceptical articles, both linking to this blog. Dr Michael Siegel also criticised the study, with particular reference to the lack of a control group. Siegel (who is, lest we forget, an epidemiologist himself) wrote:
Readers should always be skeptical about conclusions that are not consistent with the actual data presented in a paper. When you have a graph which clearly shows no demonstrable effect of a smoking ban on heart attacks (a.k.a., a straight line), then be wary of a complex statistical analysis that shoots out a specific number. If you can't see the effect in the graph, then it is going to be difficult to argue that any number which comes out of a computer is more believable than your own eyes. Statistical analysis is important as an adjunct to visual inspection of data, especially to help confirm visual impressions, but it is not a substitute for it.
On Massie's blog, however, the Guardian journalist Dr Ben Goldacre defended the study and complained about its critics focusing on the crude hospital admissions data.
if alex massie believes there is a problem with the variables used in the regression model, then it would be interesting and informative if he could let us know...or maybe alex massie thinks there is something inherently flawed about the very notion of regression, or of poisson regressions in particular. if so i'd be pleased to see those views have an airing. but saying that the crude rates don't show a big change, and making a big graph of them for yourself, strikes me as being fatuous but moreover oddly uninformative.
I agree with Goldacre on many things, but I think he is missing the point here. On the subject of crude rates, he may be unaware that the BMJ study is just the latest in a long line of 'heart miracle' studies, all of which—with one exception—have relied on the crude admissions data (i.e. how many people were actually admitted to hospital). This method was not considered "fatuous" or "uninformative" when smoking bans supposedly "slashed" the heart attack rate in Helena, Scotland, Bowling Green and other locations. In the context of the existing scientific literature on 'heart miracles' it is therefore highly relevant to show the crude rates for England. The 'Helena hypothesis' is that heart attacks drop dramatically in absolute terms (by 40% in that instance).
Secondly, virtually every newspaper focused on the fall (the "dramatic" fall if you're the Daily Mail) in the absolute numbers. This is not surprising since the press release went well beyond "creative epidemiology" and into the realms of plain dishonesty by pitching the 2.4% estimate as if it was the absolute fall based on the crude data.
A 2.4 percent drop in the number of emergency admissions to hospital for a heart attack has been observed following the implementation of smokefree legislation in England.
What journalist could read this press release and not assume that total heart attack admissions fell by 2.4% after the smoking ban, and that there was something unusual about this? Nowhere does the press release mention the long-term trend. Nowhere does it mention that the actual "drop in the number of emergency admissions to hospital for a heart attack" was 4.26% (2,300 people), or that the 2.4% was a theoretical figure from a computer model. If the study's press release doesn't mention these facts, what chance have journalists got?
For all these reasons, it is relevant to show the crude rates. As for the use of a regression model, no one is arguing that this is not a valid statistical tool, but its practical usefulness in this instance—where you have nothing to work with except aggregate data from an entire nation and no information about any of the patients—is highly questionable.
If you are studying a disease which has only a handful of possible causes, and you have solid information about the patients and the risk factors, you can estimate how many cases are caused by a single factor with a fair degree of accuracy. But heart attacks and heart disease have several hundred risk factors—not all of them are well understood and the magnitude of each of them is open to debate. Furthermore, these factors interact with one another in complex and unpredictable ways. The Gilmore study adjusted for just three of them: temperature, Christmas holidays and "week of the year". (They looked at flu seasons but seem to have dropped them from their final model.) These are all perfectly reasonable variables to adjust for, but doing so does not give you a better estimate of the total cases prevented by the smoking ban; it just gives a better estimate of the total cases prevented by all the other factors combined.
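To see why this matters, here is a deliberately naive sketch of the kind of Poisson regression the study describes, run on made-up data: a weekly admissions series with a steady background decline and no ban effect built in, regressed on a post-ban dummy plus temperature, Christmas and a simple week-of-year seasonal term. The specification, variable names and data are all my own assumptions (the study publishes neither its model nor its dataset), but the sketch shows how a post-ban dummy can soak up an ongoing decline and report it as a 'ban effect' when the trend itself is not modelled:

```python
# A deliberately naive sketch of the kind of model described: weekly AMI
# admissions regressed on a post-ban dummy plus temperature, Christmas and
# a week-of-year seasonal term. Everything here (specification, variable
# names, data) is my own assumption; the synthetic series has a steady
# ~4% annual decline and NO ban effect built in.

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
weeks = pd.date_range("2002-07-07", "2008-06-29", freq="W")
n = len(weeks)

df = pd.DataFrame({
    "week_of_year": weeks.isocalendar().week.astype(int).to_numpy(),
    "temperature": 10 + 8 * np.sin(2 * np.pi * np.arange(n) / 52.18)
                      + rng.normal(0, 2, n),              # seasonal + noise
    "christmas": np.isin(weeks.isocalendar().week, [52, 1]).astype(int),
    "post_ban": (weeks >= "2007-07-01").astype(int),
})

# Generate admissions with a secular decline of ~4.3% a year and no ban effect.
years_elapsed = np.arange(n) / 52.18
expected = np.exp(7.0 - 0.043 * years_elapsed
                  - 0.005 * df["temperature"] + 0.05 * df["christmas"])
df["admissions"] = rng.poisson(expected)

# Poisson regression with no trend term; week-of-year is entered as a
# single seasonal harmonic to keep the sketch small.
fit = smf.glm(
    "admissions ~ post_ban + temperature + christmas"
    " + np.sin(2 * np.pi * week_of_year / 52.18)"
    " + np.cos(2 * np.pi * week_of_year / 52.18)",
    data=df, family=sm.families.Poisson(),
).fit()

# The post-ban dummy absorbs the ongoing decline and reports it as a
# spurious 'ban effect' of roughly 12%.
print(f"Apparent ban effect: {(1 - np.exp(fit.params['post_ban'])) * 100:.1f}%")
```

The study's actual model may well handle the underlying trend properly; the point is that nothing in the paper lets a reader check whether, or how, it does.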
On top of that, no one knows exactly why heart attack admissions have been going down at the steady rate witnessed in the last decade. With a multifactorial condition, the answer is surely vastly complicated and there are many theories. Without understanding why heart attacks are declining in the first place, pinpointing and quantifying one possible risk factor for one year's decline is a fool's errand, no matter how many statistical methods you use. Computer models are only as good as the data being fed into them.
None of the more convincing explanations for the long-term decline—statins, lifestyle changes, diet or even smoking prevalence—is adjusted for in this study. Correct me if I'm wrong, but I would have thought that any one of these variables has a more substantial impact on annual AMI admissions than what day of the week Christmas happens to fall on. Note also that none of the factors adjusted for in Gilmore's study can explain the long-term decline; they fluctuate, but do not rise or fall consistently over the period (with the possible exception of temperature).
More specific criticisms are impossible as none of the workings are shown. And this, really, is the crucial point. If Gilmore and company have devised a formula that successfully predicts the number of heart attacks to the nearest thousand, based on adjusting for a few minor variables, it is indeed a remarkable scientific breakthrough and we should be told more about it. But we never are. It is asserted that the smoking ban accounted for more than half of the 4.26% drop in 2007/08 but we are never shown how. This is a number that can be neither verified nor debunked. It is to be taken entirely on trust. Glossing over the details might be fair enough in a news report, but we would hope to see some actual evidence in the study itself, even if it's just the weighting of the adjustments.
Effectively, we are being told: "We know it doesn't look like the smoking ban had any effect on AMI admissions but we've run it through a computer model and it has. Trust us." Why should we? Every other smoking ban/heart attack study—whether peer-reviewed or not—has turned out to be seriously flawed. This study's findings were (erroneously) leaked to the media months in advance, the press release failed to get the most basic facts straight and no verifiable evidence is offered to support the all-important 2.4% figure. Instead we have unseen adjustments based on unspoken assumptions, all carried out by the UK Centre for Tobacco Control Studies and published just in time for the government's review of the smoking ban. If this doesn't warrant a little scepticism, what does?
3 comments:
It's not 2.4%. It's 0.66% to 4.06% on a 95% confidence interval. 0.66% is 300 people.
And 4.06% is implausible.
Bad Science love debunking junk studies ... except when they quite like the conclusions. ;-)
The major flaw in this study is that there is no control population. This is not so crucial when an effect is very large: for example, a 20x risk for heavy smoking and lung cancer; but this is a tiny claimed effect. A control population is statistically indistinguishable from the population to which the treatment (the ban) is applied. Having a control population means that there is no need to speculate as to what (flu, temperature, statins, transfats or any number of unknown factors) may or may not be significant factors. Given the small claimed effect, it would not eliminate the possibility that the ban brought on other changes in behaviour which somehow cause a decrease in heart attacks, but would at least give the study a little credibility.
One plausible change in behaviour is that active smokers may have quit in disproportionately larger numbers immediately following the ban. If the instant heart attack hypothesis is true for passive smokers, who are, although not explicitly stated, the implied subject of the study, it must surely be so for active smokers. The study was not able to determine what proportion of heart attack victims were active smokers.
The authors acknowledge both of these potential criticisms, yet the paper was still published.
The press release is extremely misleading - particularly in claiming that the 2.4% fall was an observed fall and not the difference between two estimates.