
Tuesday, 6 March 2012

Scottish smoking ban miracle touches the unborn

The miracles keep on coming in Scotland, soon there will be pilgrimages.

Drop in pregnancy complications after smoking ban

Complications in pregnancy have fallen as a result of the ban on smoking in public places, according to a new study.

Researchers found the ban, introduced almost six years ago, has led to a drop in the number of babies being born before they reach full term.

It has also reduced the number of infants being born underweight.

My word, this post hoc ergo propter hoc junk science sounds like the kind of rubbish Jill Pell keeps coming out with.

The research team, led by Professor Jill Pell...

Ah, Professor Pell, we meet again and under such similar circumstances. You may recall Jill "Pinocchio" Pell from her signature piece claiming that the heart rate plummeted after the Scottish smoking ban, but for sheer effrontery in the face of rock solid evidence, her subsequent article claiming that the asthma rate fell after the smoking ban takes the cake. Rarely has science met fiction so brazenly.

...looked at more than 700,000 single-baby births before and after the introduction of the ban.

The number of mothers who smoked fell from 25.4% to 18.8% after the new law was brought in, researchers discovered.

This, as you might expect from Pell, is a distortion of the truth. The 25.4% figure relates to 2001, some five years before the ban was introduced. Any honest researcher would surely use the figure for 2005 (22.5%) as the pre-ban measure. We already know from a previous Pell study that the ban had no effect on the smoking rate in the general population. Looking at the ISD figures, it is difficult to see any effect on expectant mothers either. There is a general downward trend which continued after 2006.


The graph above might actually exaggerate the decline. I was interested to see, upon studying the ISD data, that the smoking ban coincided with (I shall not say caused, as I am not a charlatan) a large increase in the number of expectant mothers for whom no information on smoking status was available. In other words, more pregnant women are refusing to tell the NHS whether or not they smoke.


The likelihood is that many of these women are smokers but do not wish to be chastised by the denormalisers of Scotland's health service. This suspicion is supported by the fact that the proportion of pregnant women who are lifelong non-smokers has barely moved for a decade.


Back to the news story...

Experts further found there was a drop of more than 10% in the overall number of babies born "pre-term", which is defined as delivery before 37 weeks' gestation.

There was also a 5% drop in the number of infants born under the expected weight, and a fall of 8% in babies born "very small for gestational size".

This is the meat of the research. As so often, the study has been press released before publication so we cannot see which statistical tricks Pell has employed, but we can use the official NHS records to see how her claims stand up. The data are available here.

The graph below shows preterm births (ie. less than 37 weeks gestation) as a percentage of all live births recorded in Scottish hospitals between 1996 and 2010 (the period that Pell claims to have studied).



The proportion of babies born prematurely in this period remained essentially constant (between 6.8% and 7.9%). The post-smoking ban years were unremarkable, with percentages of 7.3, 7.4, 7.6 and 7.2 (2007, 2008, 2009 and 2010 respectively). The lowest rate was in 1996. There appears to be no relationship with the general smoking rate, the maternal smoking rate or the smoking ban.

I suspect what Pell has done here is taken the highest pre-ban figure (7.9%) and compared it with the lowest post-ban figure (7.2%). The difference between these figures in percentage terms is a little under 9% which, with a bit of statistical massaging, could become "a drop of more than 10% in the overall number of babies born 'pre-term'".
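If you want to check that arithmetic, here is a minimal sketch in Python. The 7.9% and 7.2% figures come from the ISD data above; the comparison itself is my conjecture about her method, not anything the press release discloses.

```python
# Relative change between the highest pre-ban and lowest post-ban
# preterm rates in the ISD data. The choice of comparison is conjecture
# about Pell's method, not something her press release discloses.
pre_ban_peak = 7.9   # % of live births preterm, highest pre-ban year
post_ban_low = 7.2   # % of live births preterm, lowest post-ban year

relative_drop = (pre_ban_peak - post_ban_low) / pre_ban_peak * 100
print(f"Relative drop: {relative_drop:.1f}%")  # prints 8.9%
```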

Since preterm births are the major driver behind low birth weights, it should be no surprise that there has been no major change in the number of babies with a low birth weight. Between 97.0% and 97.4% of all full-term pregnancies in Scotland in this period resulted in a baby of normal weight (2,500 g or more). I can see no evidence of any 'smoking ban effect' in any of the ISD data. There are moderate random variations and nothing more.

Dr Pell said the research highlighted the positive health benefits which can stem from tobacco control legislation.

To paraphrase Mandy Rice-Davies, she would, wouldn't she?

She said: "These findings add to the growing evidence of the wide-ranging health benefits of smoke-free legislation and support the adoption of such legislation in other countries which have yet to implement smoking bans.

"These reductions occurred both in mothers who smoked and those who had never smoked."

Sorry, what?

"These reductions occurred both in mothers who smoked and those who had never smoked."

Doesn't that tell you something then, Pell? If you are claiming that the smoking ban reduced preterm births because it made people give up smoking, the fact that you found the same result with nonsmokers rather gives the game away, does it not? If, on the other hand, you're suggesting that reducing secondhand smoke miraculously reduces preterm births (I haven't read the study yet, but I wouldn't put it past you to indulge in such superstition), the findings for smokers strongly suggest that this is nonsense as well. Or perhaps you are going to claim that smokers somehow feel the benefit of secondhand smoke reductions as well. Nothing would surprise me at this stage.

"The potential for tobacco control legislation to have a positive effect on health is becoming increasingly clear."

Yes, yes. We understand why you keep producing this garbage. Why don't you go find yourself a street corner to shout from?

Researchers looked at data for babies born between January 1996 and December 2009, taken from the Scottish Morbidity Record, which collected information on all women discharged from Scottish maternity hospitals.

Which is exactly what I have shown above. Feel free to check the data yourself.


UPDATE: Michelle Roberts—easily the worst of the BBC's appalling health reporting team—has been suckered by this story. Her entry on Journalisted is an A-Z of pointless epidemiology. I notice the national press have ignored the story, presumably on the basis of 'once bitten'.

Pell's study has appeared on PLoS here. It's short on data but this is her killer graph...


This barely resembles the actual data from Scottish hospitals, but even so it takes a massive leap of faith to attribute any part of it to the smoking ban. The solid line represents the smoking ban, but Pell prefers to use the dotted line because "the Akaike information criterion statistics suggested that using 1 January 2006 as the breakpoint produced a marginally superior model fit than using 26 March 2006." Hey, whatever fits your a priori conclusion the best, Jill.

Even having moved that goalpost, it's plain to see that the fall in preterm births began around ten months before the smoking ban came in. In fact, it came well over a year before, because the timeline Pell is using is the date of conception, not birth. It must have been pretty galling for her to see that the largest drop in her graph preceded the ban and came to an end as soon as the ban came in. Moving the date back to January does not help her much in that respect. Furthermore, even if it had happened after the ban, it would hardly have been proof of anything. There are two little peaks in the graph (as there are in my graph above) followed by two drops. Peaks do tend to be followed by drops, y'know. Maybe Jill Pell should look up 'regression to the mean'.
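For anyone unfamiliar with the term, a toy simulation makes the point. Take a flat series containing nothing but random noise, with no trend and no intervention, and above-average values are still usually followed by lower ones. All the numbers below are invented for illustration.

```python
import random

random.seed(1)
# A flat 'preterm rate' of 7.4% plus random noise: no trend, no ban.
series = [7.4 + random.gauss(0, 0.3) for _ in range(60)]

# How often is an above-average value followed by a lower one?
mean = sum(series) / len(series)
pairs = [(a, b) for a, b in zip(series, series[1:]) if a > mean]
drops = sum(1 for a, b in pairs if b < a)
print(f"{drops} of {len(pairs)} above-average points were followed by a drop")
```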

Friday, 23 December 2011

The magic 25%

A handful of anti-smoking extremists have long hoped that smoking is linked to breast cancer. The pink ribbon breast cancer campaign is arguably the best-publicised and best-funded initiative in public health. Because breast cancer is the most common form of cancer amongst women, even a small association with smoking would allow tobacco control advocates to claim that millions of cases could be prevented by stamping out tobacco.

The problem is that there really isn't any reason to think the two are related. Sixty years of epidemiological research has failed to find a link and, unlike with diseases of the lung and airways, there is no obvious causal mechanism. As recounted in Velvet Glove, Iron Fist (pp. 236-38), neither the International Agency for Research on Cancer (IARC) nor the American Cancer Society believes there is a link and even the otherwise outré Surgeon General's report of 2006 didn't claim smoking to be a cause of breast cancer.

Outside of California, it is generally accepted that breast cancer is not a smoking-related disease. Inside California, things are always a little different. From his pulpit at UCSF, Stanton Glantz has been insisting on a connection for years, and the California Environmental Protection Agency (Cal-EPA) conducted a meta-analysis in 2004 which found an association between breast cancer and passive smoking. When the American Cancer Society expressed reservations about this meta-analysis (amongst other flaws, it excluded a notable cohort study which would have wiped out the association), Glantz went berserk and referred to doubters as "religious fanatics", thus displaying an extraordinary lack of self-awareness.

Glantz has been at it again this month following a review of breast cancer risks conducted by the Institute of Medicine. Getting rather excited at the prospect of linking arms with the pink-ribbon campaign, he overstated the conclusions of the IoM report and announced:

It's time for the large breast cancer advocacy groups to join the tobacco control community.

Glantz seems to think that the IoM report implicated smoking (and passive smoking) as a cause of breast cancer. That is not how I read it, nor is it how the New York Times read it. What the IoM actually found was this:

The evidence also indicates a possible, though currently less clear, link to increased risk for breast cancer from exposure to benzene, 1,3-butadiene, and ethylene oxide, which are chemicals found in some workplace settings and in gasoline fumes, vehicle exhaust, and tobacco smoke.

This was the only reference to tobacco in a 700-word press release. In the report itself, the IoM say that they cannot rule out a link, but that the evidence is equivocal. Tobacco remains a "possible" cause in the same way that mobile phones were found to be a possible cause of brain cancer in a recent IARC report. In other words, the collated evidence does not suggest a causal link, but some studies have found an association.

There are two interesting aspects of the breast cancer/smoking hypothesis. The first is that there was barely a hint of a link for the first 40 years of epidemiological research, as the IoM acknowledge:

Before 1993, more than 50 epidemiologic studies examined the relationship between breast cancer and exposure to tobacco smoke. Although the quality of studies was highly variable, the better conducted studies did not suggest a causal relationship (Palmer and Rosenberg, 1993). An IARC review published in 2004 included studies conducted before 2002, and it relied heavily on a pooled analysis of 53 case–control and cohort studies by the Collaborative Group on Hormonal Factors in Breast Cancer Study (2002) that contended that apparent associations with smoking were confounded by alcohol consumption. The IARC (2004) conclusions were that neither active nor passive smoking was associated with increased risk of breast cancer.

In any other field of research this would be enough to put the matter to bed, but tobacco control was flooded with money in the 1990s and so it continued. This coincided with the rise of ultra-low risk epidemiology and cherry-picked meta-analyses which, in turn, was accompanied by the burden of proof being relaxed to the point where statistically insignificant findings were taken seriously.

Breast cancer is a very common disease and smoking is a very common behaviour. Given these facts, any association between the two should have been evident very early on (by the 1950s, if not even earlier). That no one found an association despite smoking being the most studied risk factor of the twentieth century strongly suggests that none exists. "If smoking was a major cause of breast cancer, we would have found it by now," says Dale Sandler, chief of the NIEHS Epidemiology Branch.

Those who say that smoking (active or passive) causes breast cancer are making an extraordinary claim and, despite efforts being redoubled in the last fifteen years, there is no extraordinary evidence and very little ordinary evidence.

From the IoM report:

Active smoking 

The summary risk ratio was 1.10 (95% CI, 1.07–1.14), indicating a weak association with increased risk for early initiation of smoking. For women who smoked only after a first pregnancy, the summary risk ratio was 1.07, but it was not a statistically significant increase in risk (95% CI, 0.99–1.15). A subsequent report from the NHS found a statistically significant increase in risk associated with greater smoking intensity (i.e., pack-years of smoking) from menarche to a first birth (p for trend <0.001) (Xue et al., 2011). At 1–5 pack-years of smoking before a first birth the hazard ratio (HR) is 1.11 (95% CI, 1.04–1.20); for 16 or more pack-years, the HR is 1.25 (95% CI, 1.11–1.40).

No increase in risk was evident for pack-years smoked from after a first pregnancy to menopause. For 31 or more pack-years, the HR was 1.05 (95% CI, 0.92–1.19). However, pack-years of smoking after menopause may be associated with a slight reduction in risk (p for trend = .02) (Xue et al., 2011). For 16 or more pack-years of postmenopausal smoking, the HR was 0.88 (95% CI, 0.79–0.99).

... For women who started smoking between ages 15 and 19, the HR was 1.21 (95% CI, 1.01–1.44); whereas those who initiated smoking after age 30, the HR was 1.00 (95% CI, 0.76–1.32).

Brown et al. (2010) concluded that their data did not show a consistent association between smoking and significant increases in breast cancer risk among U.S.- or foreign-born Asian women. For example, the results for current smokers showed an OR of 0.9 (95% CI, 0.6–1.3) while ex-smokers had an OR of 1.6 (95% CI, 1.1–2.2).

A study that examined risk for triple-negative breast cancer found no statistically significant increase in risk over nonsmokers based on smoking status, age at initiation, or duration of smoking (Kabat et al., 2011). By comparison, women with estrogen-receptor-positive cancers (ER+) were at significantly increased risk with earlier initiation (< age 20: HR = 1.16, 95% CI, 1.05–1.28) and longer duration of smoking (≥30 years: HR = 1.14, 95% CI, 1.01–1.28).

These relative risks are low or non-existent and even the positive findings are often not statistically significant. The most interesting thing about these associations is that they are actually lower than the associations claimed for passive smoking.

Passive Smoking

A 2005 review by the California Environmental Protection Agency of various health hazards associated with exposure to secondhand smoke included a meta-analysis of 19 epidemiologic studies of breast cancer ... The meta-analysis produced an overall estimate for exposed women of RR = 1.25 (95% CI, 1.08–1.44) (CalEPA, 2005; also reported in Miller et al., 2007). When the analysis was restricted to five studies with more comprehensive exposure assessment, the overall estimate was RR = 1.91 (95% CI, 1.53–2.39).

In 2006, the U.S. Surgeon General’s report The Health Consequences of Involuntary Exposure to Tobacco Smoke, which included consideration of many of the same studies as the California review, concluded, “The evidence is suggestive but not sufficient to infer a causal relationship between secondhand smoke and breast cancer” (HHS, 2006, p. 13). The conclusion was based on a review of the findings from seven prospective cohort studies, 14 case–control studies, and a meta-analysis of all of these studies. The meta-analysis found that women who had ever been exposed to secondhand smoke (10 studies) were at increased risk of breast cancer (RR = 1.40, 95% CI, 1.12–1.76).

The idea that passive smoking is more dangerous than active smoking is patently absurd, but that didn't stop ASH (USA) hyping Cal-EPA's meta-analysis with this headline in 2005:

Secondhand Tobacco Smoke More Dangerous Than Smoking Itself

It is fitting that an organisation that endorses so much flim-flam should wind up embracing the principles of homeopathy, but any reasonable person understands that the dose makes the poison. In its understated way, the IoM acknowledges that it is a tad unlikely that people who inhale less than 1% of the dose inhaled by smokers would be at greater risk.

For most other smoking-related diseases, the relative risks are much stronger for active smoking than passive smoking. Thus findings of equivalent or stronger relative risks for breast cancer with passive smoking than with active smoking are difficult to explain mechanistically.

And yet these perverse findings exist and they require explanation. At first glance, it seems that the epidemiological studies of breast cancer and tobacco don't tell us very much at all. Certainly, they don't tell us very much about the environmental causes of breast cancer, but I think they tell us quite a bit about the state of epidemiology. They show how easy it is to find a relative risk of around 1.25 (ie. a 25% increase) in an observational study. It takes only moderate recall bias or deficiencies in a study's design to come up with such associations. In the case of secondhand smoke and breast cancer we can surmise that the associations are false because there is no link with active smoking, but it is curious that the claimed associations with other diseases also fall in the same ultra-low bracket, regardless of the magnitude of the risk from active smoking.
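To see quite how little it takes, consider a toy case-control study in which the true odds ratio is exactly 1.0 but cases recall their exposure a little more diligently than controls do. The recall rates below are assumptions picked purely for illustration.

```python
# Toy case-control study. True exposure prevalence is 40% in both groups,
# so the true odds ratio is 1.0. Cases, hunting for an explanation of
# their illness, report their exposure more completely than controls.
# (All figures are illustrative assumptions.)
cases = controls = 1000
true_prevalence = 0.40
recall_cases, recall_controls = 0.95, 0.82

exposed_cases = cases * true_prevalence * recall_cases
exposed_controls = controls * true_prevalence * recall_controls

odds_cases = exposed_cases / (cases - exposed_cases)
odds_controls = exposed_controls / (controls - exposed_controls)
print(f"Spurious odds ratio: {odds_cases / odds_controls:.2f}")  # ~1.26
```

The magic 25%, conjured from nothing but differential recall.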

Smokers are around 1,000 to 2,000% more likely to develop lung cancer. The passive smoker's excess risk is said to be around 25%.

Smokers are around 70% to 100% more likely to develop coronary heart disease. The passive smoker's excess risk is, again, around 25%.

Smokers are not any more likely to develop breast cancer, but the passive smoker's excess risk is said to be—you guessed it—25%.

Despite huge variations in the effects of smoking, the effects of secondhand smoke—if we are to take the epidemiological studies at face value—are remarkably consistent. Consistent with each other, that is. Not consistent with the rest of science.

Tuesday, 15 November 2011

Heart miracles are impossible

King-size cigarette,
pint-size intellect
It's good to see Dr. Carl V. Phillips back and blogging over at Ep-ology. In his last two posts he has been discussing the North Carolina heart miracle 'study', which is as bad a piece of advocacy-driven junk science as you will ever see.

In particular, he makes a point which I have tried to make before, which is absolutely fundamental to all the heart miracle studies. The results they report—of heart attacks falling by 17%, 21%, 40% or whatever—are simply impossible.

Let's go along with the "consensus" view that long-term secondhand smoke exposure increases the lifetime risk of heart disease by around 20-30%. Never mind whether that is a realistic estimate. For good or ill, it is the figure used by the Surgeon General and other authorities, and it is accepted by those who conduct the heart miracle studies.

That being the case, is it plausible that the elimination of secondhand smoke from restaurants, offices and bars could reduce the heart attack rate by 21% (as reported in North Carolina) or 40% (as reported in Helena, Montana)?

It is not.

For one thing, most restaurants, some bars and nearly all offices were non-smoking before the ban. In addition, many non-smokers avoided the few remaining smoky venues before the ban. The vast majority of heart attack cases are elderly and not the kind of people to be out partying in bars, nor indeed working in pubs or waiting tables in restaurants. Furthermore, the amount of secondhand smoke inhaled by this subsection of non-smokers before the ban is minimal compared to the long-term exposure that the 20-30% figure is based on.

As Carl explains...

How many people go from being exposed to restaurant/bar smoke to unexposed as a result of the ban? It is a bit fuzzy to define this since there will be a lot of people whose exposure is reduced, and a spectrum of how much it is reduced. But we can start with the observation that roughly half of everyone had approximately zero such exposure before the ban, never or almost never going out to eat and drink, or avoiding smoking-allowed venues when they did...

Thus, even if you believed that exposure at the level of visiting restaurants and bars causes somewhat more than 20% increase in risk, which is an absurd belief in itself, there is no possible way the effect of the smoking ban could be more than about half of the claimed 21%.

Even if we assume that secondhand smoke does cause heart attacks, smoking bans have so little effect on so few non-smokers (and have no effect at all on the smokers, unless it compels them to quit), that the kind of reductions in the heart attack rate reported by these studies defy both science and common sense. If there is an effect, it is too small to measure and would never show up in population-level statistics. Once that is understood, it is obvious that any studies which claim a dramatic effect on the heart attack rate must be flawed, cherry-picked or distorted. Sure enough, when such studies are examined, they prove to be flawed, cherry-picked and distorted.

We can figure that half of the population was not exposed in the first place, that easily a third of those exposed were smokers, that many of those exposed had very minor and occasional exposure, and that many others that were exposed had only a minor reduction in exposure since most of their exposure was elsewhere. So it seems unlikely that even one-fifth of the population experienced a substantial reduction in exposure, getting the effect down below 1% of the total.
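Carl's back-of-the-envelope can be written out explicitly. Every input below is a rough assumption in the spirit of his reasoning (the dose fraction in particular is my own illustrative guess); the point is the order of magnitude, not the decimals.

```python
# Upper bound on a smoking ban's population-level effect, in the spirit
# of Carl's reasoning. All inputs are rough, illustrative assumptions.
excess_risk = 0.21    # grant the claimed 21% excess risk, for argument's sake
share_reduced = 0.20  # at most a fifth of people saw a substantial fall in
                      # exposure (half had none, a third of the exposed smoke)
dose_fraction = 0.25  # assumed: bar/restaurant smoke is only a modest slice
                      # of the chronic exposure the 21% figure rests on

max_drop = share_reduced * excess_risk * dose_fraction
print(f"Maximum population-level drop: ~{max_drop:.1%}")  # about 1%
```

Even granting every assumption to the other side, you cannot get anywhere near 21%.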

If, to take North Carolina as an example, the smoking ban caused the heart attack rate to drop by 21%—which it unequivocally did not—it follows that smoking in bars, restaurants and offices must have been responsible for a fifth of all heart attacks before the ban.

It is quite possible that thirty years of induced panic about passive smoking has persuaded many people that such diluted tobacco smoke is capable of wreaking such havoc, but the empirical evidence shows that it cannot be so. If it were, the relative risk from secondhand smoke exposure would be far higher than 20-30%. Indeed, secondhand smoke would be responsible for more heart attacks than smoking. It would mean that passive smoking (at work and at home) was the single biggest risk factor for heart attacks. Even the most tobaccophobic hypochondriac surely cannot believe such a thing.

In the case of Stanton Glantz's bar-lowering Helena study (2004), the smoking ban effect was even greater—an astonishing 40%. Again, this implies that smoking in a subsection of private venues was responsible for two-fifths of all heart attacks before the ban—a manifestly risible idea.

Interestingly, Glantz must have known that his findings were inherently implausible because he addressed them in the text of the study itself. His comments tell you much about the man's mathematical illiteracy and, sadly, about the decline of the peer-review process (the study was published in the prestigious British Medical Journal). He wrote:

The effect associated with the smoke-free law may seem large but is consistent with the observed effects of secondhand smoke on cardiac disease. Secondhand smoke increases the risk of a myocardial infarction by about 30%; if all this effect were to occur immediately, we would expect a fall of −0.30 × 40.5 = −12.2 in admissions during the six months the law was in effect, which is within the 95% confidence interval for the estimate of the effect (a drop of −32.2 to −0.8 admissions).

His argument here is that secondhand smoke exposure increases the risk of heart disease by 30% and so, "if all this effect were to occur immediately", a smoking ban should reduce the heart attack rate by around 30%. 40% is, he concedes, a little higher than might be expected but it is within the margin of error.

This piece of reasoning is so patently flawed that I still cannot believe it was allowed to be published. Let's leave aside the fanciful idea that the effect of a lifetime's exposure would suddenly be nullified by a smoking ban in non-domestic settings. The key point is that Glantz ignores the fact that secondhand smoke is one of dozens, if not hundreds, of risk factors for heart attacks (or heart disease—he treats them as if they were the same thing). He seems not to comprehend the difference between relative risk and absolute risk. He does not acknowledge that a relative risk which affects a subsample of the nonsmoking population is not going to have a commensurate effect on the entire population. And he implicitly treats secondhand smoke as if it were the sole cause of heart disease. These are staggering schoolboy errors for a man with pretensions of being an epidemiologist (which just goes to show that a degree in mechanical engineering is not always the best grounding for a career in cardiology).
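For anyone who wants the textbook version, the quantity Glantz needed is the population attributable fraction (Levin's formula), which weights the relative risk by how many people are actually exposed. A quick sketch, with illustrative exposure prevalences:

```python
def population_attributable_fraction(prevalence, relative_risk):
    """Levin's formula: the share of all cases attributable to an exposure."""
    excess = prevalence * (relative_risk - 1.0)
    return excess / (1.0 + excess)

# Glantz's implicit logic: a relative risk of 1.3 for exposed individuals
# means a ~30% drop in heart attacks for everyone. The standard formula
# says otherwise, even in the absurd case where everyone is exposed.
for prevalence in (1.0, 0.5, 0.25):
    paf = population_attributable_fraction(prevalence, 1.3)
    print(f"exposure prevalence {prevalence:.0%}: PAF = {paf:.1%}")
# 100%: 23.1%; 50%: 13.0%; 25%: 7.0%
```

And even that assumes the exposure vanishes overnight and that the 30% figure applies to casual bar-room doses, neither of which is remotely true.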

Look at it this way. If using a mobile phone while driving increases your risk of having an accident by 90%, what will be the effect on the number of car crashes in a country that bans the practice?

The answer is that we do not know. There are countless other risk factors for car crashes and so, even if using a mobile phone has a substantial effect on individual risk, the effect at the population level will be too small to measure.

By Glantz's logic, however, the effect of a mobile phone ban will be to reduce the number of car crashes by 90%—because he doesn't understand the basic difference between individual relative risk and absolute risk to the population. How can he be so ignorant? There are, as Carl says, only two possibilities.

Interestingly, it is not entirely clear whether he spouts junk because he has not acquired a modicum of understanding about the science in the field where he has worked for decades, or because he is a sociopath-level liar; I am not entirely sure which is the more charitable interpretation.

Do go read both of Carl's pieces about the North Carolina nonsense:

Unhealthful News 189 - Absurd claims about the effects of smoking place restrictions, North Carolina edition (Part 1)


Unhealthful News 190 - Absurd claims about the effects of smoking place restrictions, North Carolina edition (Part 2)


Wednesday, 9 November 2011

The North Carolina smoking ban/heart attack hoax

Stop me if you think you've heard this one before.

From the University of California, San Francisco (note the byline)...

Heart attacks down 21 percent in the first year after the North Carolina smokefree restaurant and bar law took effect

Submitted by sglantz on Wed, 2011-11-09 11:54

The evidence that strong smokefree laws provide large and immediate health benefits just keeps piling up.

The latest study, released today, found a 21 percent drop in emergency room admission for heart attacks during the first year of the law, saving an estimated $3.4 to $4.3 million in health care costs. This is serious money, particularly as both government and the private sector struggle to keep health costs down.

These real documented and rapid benefits not just in terms of health, but the economy, show that the economic argument on smokefree policies has clearly shifted away from the tobacco industry and its allies to the health side.

Real and documented, you say? So we can assume, at the very least, that there were 21% fewer heart attacks after the smoking ban?

Not even that, I'm afraid. Not even close. As the study shows, there were 9,066 heart attacks in 2008. This fell by 10.5% to 8,113 in 2009. The smoking ban came in at the start of 2010. In that year, there were 7,669 heart attacks—a decline of 5.5%.
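You can verify those percentages from the study's own admissions counts; a two-minute check in Python, using the figures quoted above:

```python
# Heart attack admissions from the study's own data.
# The smoking ban came in at the start of 2010.
admissions = {2008: 9066, 2009: 8113, 2010: 7669}

for year in (2009, 2010):
    previous = admissions[year - 1]
    drop = (previous - admissions[year]) / previous * 100
    print(f"{year}: down {drop:.1f}% on {year - 1}")
# 2009: down 10.5% (no ban); 2010: down 5.5% (ban in force)
```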

The researchers have even helpfully included a graph in which you can clearly see the heart attack rate falling before the ban and then leveling off somewhat after the ban.




As if to rub our noses in it, the researchers spell out exactly what the trend was.

Interestingly, the rates appear to have consistently declined between the year 2008 and 2009; after that period the rates leveled off at a consistently lower level in the year 2010.

Er, yeah. So where on earth does this claim that there was "a 21 percent drop in emergency room admission for heart attacks during the first year of the law" come from?

The answer is that they did a Gilmore. They made a computer model. You may recall Anna Gilmore and her band of merry women reinterpreting the no-change-there-then English heart attack data and declaring that 2.4% of the 4.2% drop was attributable to the smoking ban. Unprovable (she made no attempt to prove it) but also unfalsifiable.

This new study takes that approach to absurd new depths. Whereas Gilmore claimed that a portion of the drop in heart attacks was due to the smoking ban, this model says that the smoking ban reduced the heart attack rate by 21%, despite the actual heart attack rate only falling by 5.5%.

You almost have to admire the sheer audacity of these people. Every time I think there is no way they can keep flogging this dead horse, they come up with another ruse.

Here is a study which unequivocally shows that the smoking ban had absolutely no effect on the heart attack rate. If anything, the year after the smoking ban saw rather more heart attacks than would be predicted based on the preceding years. The study provides all the data you need to see that the heart attack rate fell by 5.5% after the smoking ban and yet it concludes—based on a demonstrably ludicrous computer model—that the smoking ban reduced the heart attack rate by 21%. When your computer gives you information like that it's time to turn it off and turn it on again.

And yet you can be sure that when this study is inevitably reported, the facts will not be allowed to stand in the way. The number of people who actually went to hospital with a heart attack will become irrelevant (although it's fitting that bans based on imaginary deaths are saving imaginary lives). The fiction has become the reality. The model has spoken. "There were 21% fewer heart attacks after the smoking ban. Here's Tom with the weather..."

Wednesday, 2 November 2011

A good news story

This piece of junk research came out a couple of months ago...

Meat eaters are selfish and less social

"Meat brings out the worst in people. This is what psychologists of the Radboud University Nijmegen and Tilburg University concluded from various studies on the psychological significance of meat.

Thinking of meat makes people less socially [sic] and in many respects more "loutish". It also appears that people are more likely to choose meat when they feel insecure, perhaps because it is a feeling of superiority or status displays, the researchers suggest.

Marcel Zeelenberg Tilburg professors (Economic psychology) and Diederik Stapel (consumer sciences and dean of Tilburg School of Social and Behavioral Sciences) and the Nijmegen Professor Roos Vonk (social psychology) examined the psychological significance of meat.

The conclusion was that eating meat is symptomatic of some sort of psychological disorder. This, of course, was just what militant vegetarians wanted to hear and it was eye-catching enough to make it into the newspapers.

Roos Vonk, known for her columns and books about how our ego gets in our way, doesn’t feel shocked. "Previous research had already shown that meat eaters think more in terms of dominance and hierarchy (who is the boss?) than vegetarians. Eating meat is also traditionally associated with status, meat used to be much more expensive and scarcer than now. Eating meat is a way to elevate yourself above others. But by uplifting yourself, you lose connection with others. That explains why there are more insecure people in need. It also makes people loutish when they think about meat and also feel lonely. "

Diederik Stapel adds to it: "It seems that vegetarians and flexitarians are happier and feel better, and they are also more sociable and less lonely."

Diederik Stapel is a social psychologist with a string of peer-reviewed studies to his name, including this one - just another junk scientist forcing his beliefs onto others with the veneer of social science. Nothing special about that, except that this story has a happy ending.


Dutch 'Lord of the Data' Forged Dozens of Studies

One of the Netherlands' leading social psychologists made up or manipulated data in dozens of papers over nearly a decade, an investigating committee has concluded.

Diederik Stapel was suspended from his position at Tilburg University in the Netherlands in September after three junior researchers reported that they suspected scientific misconduct in a study that claimed eating meat made people more aggressive.

Stapel's work encompassed a broad range of attention-catching topics, including the influence of power on moral thinking and the reaction of psychologists to a plagiarism scandal. The committee, which interviewed dozens of Stapel's former students, postdoctoral researchers, co-authors, and colleagues, found that Stapel alone was responsible for the fraud. The panel reported that he would discuss in detail experimental designs, including drafting questionnaires, and would then claim to conduct the experiments at high schools and universities with which he had special arrangements.

The experiments, however, never took place, the universities concluded. Stapel made up the data sets, which he then gave the student or collaborator for analysis, investigators allege. In other instances, the report says, he told colleagues that he had an old data set lying around that he hadn't yet had a chance to analyze. When Stapel did conduct actual experiments, the committee found evidence that he manipulated the results.

This is the kind of thing that the public expects peer-review to be able to weed out. In practice, alas, peer-reviewers do not verify raw data nor do they obtain proof that experiments have been carried out. Most of the time, they wouldn't be able to perform these checks even if they wanted to.

Not that I'm suggesting that peer-review is massively over-rated - sometimes reviewers will correct spelling mistakes.

The data were also suspicious, the report says: effects were large; missing data and outliers were rare; and hypotheses were rarely refuted. Journals publishing Stapel's papers did not question the omission of details about where the data came from. "We see that the scientific checks and balances process has failed at several levels," Levelt says.

The case of Mr Stapel is highly unusual. He got caught.

One down, hundreds to go.

Friday, 14 October 2011

Who do you believe?

Further to Monday's post about the childhood asthma rate in Scotland, it is worth comparing the data presented by Jill Pell in her NEJM paper—which claimed there were 18% fewer hospital admissions after the smoking ban—with the actual hospital admissions data recorded by the Scottish NHS.

This is the "smoothed" graph presented by Pell in her study, which, as noted in a previous post, does not even fit her own data. (The last 'year' shown is also not a full year.)



Pell's study produced the intended flood of media coverage, of which this Reuters report was typical.

Scottish smoking ban cuts childhood asthma attacks

A 2006 public smoking ban in Scotland reduced the number of serious childhood asthma attacks by 18 percent per year, researchers reported on Wednesday.

Before the ban imposed in March 2006, the number of hospital admissions for asthma was rising by 5 percent a year among children under 15. The after-ban benefits were seen in both pre-school and school-age children.

Critics had said the ban could force smokers who could not light up in the workplace or in enclosed public spaces to smoke more at home, increasing the risk to children.

Dr. Jill Pell of the University of Glasgow, who worked on the new study, said the findings in the New England Journal of Medicine show that did not happen.

"The evidence we have from Scotland is that it had the opposite effect. People are generally more accepting of the need to protect nonsmokers and vulnerable groups such as children," Pell said in a telephone interview.

"Children were being exposed to less secondhand smoke. We went into the study hoping we would see some health benefit coming out of that."

However...

NHS Scotland has since published the statistics showing how many children were admitted to hospital with asthma between 2005 and 2009. These figures can be viewed here. They do not support Pell's hypothesis in any way, shape or form.

The graph below shows the rate of hospital admissions for asthma for children aged 0-14 years in all Scottish hospitals (per 100,000). The years shown are financial years (April to March - the first year shown is 2005/06), which is useful since the smoking ban was introduced in Scotland at the end of March 2006. Each of the last four bars therefore represents a full post-ban year.




The next graph shows the total number of episodes of the same (ie. the absolute number of admissions). It naturally shows a very similar picture.




Although not discussed by Pell, it is interesting to note that the rate of asthma admissions amongst people of all ages has been higher in every year since the smoking ban was introduced.



And, for good measure, let's have a look at hospital admissions for all diseases of the respiratory system combined.


The data available online do not go back further than 2005/06 so we cannot see the long-term trend earlier in the decade. However, they are sufficient to show that there was no decline in hospital admissions for any of these diseases amongst any age group. If anything, there was an increase.

So, once again, you have a choice. You can choose to believe Jill Pell, a researcher who has, shall we say, "form" when it comes to producing studies like this.

Or you can believe the statistics produced by NHS Scotland which are based on the number of people who actually got admitted to hospital. These statistics, incidentally, support the claim made by Asthma UK that the rate of childhood asthma has remained essentially static for a decade.

It's your call.

Monday, 10 October 2011

Somebody's lying

They say you're entitled to your own opinions, but not your own facts.

Wise words, but that's not how it works in tobacco control. Spot the difference between these two BBC news stories taken three months apart.

25 March 2011

Scotland's smoking ban hailed as anniversary approaches

Sally Haw, senior scientific adviser for the Scottish Collaboration for Public Health and Policy, said: "The ban really has been one of Scotland's big public health success stories.

"This bold step has really paid off."

Ms Haw cited a study by Glasgow University which showed a 15% reduction in the number of children with asthma being admitted to hospital in the three years after the ban came into force.


27 June 2011

Scottish health boards 'complacent' over asthma care

Research into the care of young people with asthma has exposed "shocking" complacency by some Scottish health boards, according to charity Asthma UK.

Asthma UK said the number of emergency admissions had remained unchanged for a decade - suggesting the asthma of many young people was still being badly managed.

Asthma UK Scotland's national director Gordon Brown said: "This report makes shocking reading - especially when you consider Scotland has one of the highest rates of childhood asthma in the world.

"Some health boards are doing some things very well - and this is down to the excellent staff within managed clinical networks.

"However, it seems that at a strategic level some complacency has crept in - that asthma has somehow been 'fixed' and priorities have now changed.

"This is borne out by the fact there has been no noticeable change in the unacceptably high emergency hospital admissions for children and young people with asthma in the last decade."

It is impossible for both these statements to be true. Either emergency hospital admissions for children with asthma fell by 15% after the smoking ban or they have remained unchanged for a decade. Someone's not telling the truth. Is it the "study by Glasgow University" or Asthma UK?

You can probably guess the answer. If I told you that the Glasgow study was penned by the infamous Jill Pell, you would be in no doubt at all.

Readers with a long memory will recall that Pell's study was the sheerest junk science. There was no effect from the smoking ban on asthma admissions. In fact, the first year of the Scottish smoking ban saw the largest number of childhood asthma admissions of the decade. Asthma UK is correct. Pell is wrong. Again.

Here we have two 'facts' which are totally at odds with each other appearing on the same news website in the same year. One fact is the number of children who actually went to hospital with asthma. The other is a piece of statistical jiggery-pokery created for political ends. And yet only one of them is true. The other is a fraud which has taken the place of the truth thanks to repetition and the appeal to authority (it was published in the prestigious, peer-reviewed New England Journal of Medicine). The real truth, meanwhile, appears almost by accident in a different context and no one at the BBC makes the connection.

This is the parallel universe created by the charlatans of the anti-smoking industry. They are entitled to their own facts. Whether or not they are true is of no consequence. They want them to be true and that is all that matters.

It is ridiculously easy to see through this garbage. The real hospital admissions data for asthma are available online, just as the heart attack data are. It takes a matter of minutes to distinguish fact from fiction and yet there is only silence and tumbleweed. If the mainstream media do not feel inclined to expose blatant policy-based evidence when it is in its crudest form, what hope is there of more subtle scientific abuses coming to light?

[Thanks to Ivan for spotting the two stories above.]

Friday, 9 September 2011

Another Alcohol Con

If you haven't already bookmarked the excellent Straight Statistics, you really should. Their latest article is a routine debunking of some routine junk science from Alcohol Con(cern) who came up with the amazing finding that alcohol sales correlate with alcohol consumption. Or, to be precise, that alcohol-related hospital admissions are correlated with the number of off-licences in an area.

But not in London. So they left that out.

This is such a blatant conflation of correlation and causation that even Ben Goldacre—who never criticises 'public health' bad science and sometimes defends it—emerged to poke fun at it.


A red-faced Don Shenker knew exactly what he was talking about and replied...


To which Goldacre rightly responded...


As Straight Statistics points out, Alcohol Concern quite explicitly did claim causality:

Under the heading Methodological Qualifications, the new report states: “This study did not set out to establish cause and effect.” Yet the previous page asserts that nearly 10 per cent of all alcohol specific hospital admissions in England, excluding London, are directly attributable to off-licence density, “meaning availability rather than any other external factor is the cause of one in ten of such harms”.

So either Don Shenker doesn't understand that if you say something is "the cause" you are claiming causality or he is a liar. I make no judgement on that but urge you to go read the rest.

Thursday, 1 September 2011

Anti-tobacco and anti-alcohol swap notes again

The day might come when I get tired of reminding drinkers of how foolish they were to doubt the slippery slope, but that day is not today, so let's have a look at the next ASH (Wales)/Alcohol Concern conference, shall we?



That seems fairly unambiguous and it's a nice sequel to the recent 'alcohol and tobacco summit' in Scotland. Being an ASH event, it is of course sponsored by the pharmaceutical companies Novartis and Pfizer, and some of the country's top anti-smoking fantasists will be sharing their tips with the temperance lobby, including Gerard Hastings, a man who thinks the Ferrari logo looks like a Marlboro packet.

Temperance campaigners will be particularly excited to hear that Linda "the smoking ban didn't hurt pubs" Bauld and Anna "but it did reduce the number of heart attacks" Gilmore will be attending. Alcohol Concern are not slouches when it comes to bending the truth themselves, but these two have the know-how to go nuclear with the junk science. Let's remind ourselves of some of their greatest hits.

According to Linda Bauld, the smoking ban had "no clear adverse impact on the hospitality industry". And here, using pub closure figures from the British Beer and Pub Association, we can see what "no clear adverse impact" looks like:




And in her study of heart attacks in England, Anna Gilmore said: "We conclude that the implementation of smoke-free public places is associated with significant reductions in hospital admissions for myocardial infarction." Hmm, quite. And here's that significant reduction in full (the figures come from her own study):




Considering that the world and his wife has swallowed the idea that the smoking ban didn't damage the pub trade but did reduce the heart attack rate, you can see why any lobbyist would want to kneel at the feet of these two conjurors. Well done Alcohol Concern, you wanted the best. You got the best.

Sunday, 7 August 2011

The heart miracle scam revisited

One of the more blatant scientific scams of recent years has been the heart miracle phenomenon, in which tobacco control campaigners create studies showing a dramatic fall in heart attacks after smoking bans. It began when Stanton Glantz of Americans for Nonsmokers' Rights announced the 'Helena miracle' in 2003, claiming a 60% drop in heart attacks after the smoking ban in this small town in Montana (later revised to 40%). It continued through places like Bowling Green, Ohio (47%) and Pueblo, Colorado (41%).

These places are not exactly major conurbations, you may have noticed, and the numbers of heart attacks are so low—often single or double figures—that large fluctuations are common. When whole countries have been studied, advocates have been forced to resort to methodological jiggery-pokery (Scotland, 17%) and bald, unverifiable assertions (England, 2.4%).

Stanton Glantz has produced two meta-analyses in an attempt to shore up his hypothesis, claiming that heart attacks fall by 27%, and then 17%, after smoking bans are enacted.

But when the actual hospital admissions data are made available, they invariably fail to provide any evidence of an effect of smoking bans on the heart attack rate. The statistics from Denmark, Wales, Australia, England, Scotland, New Zealand and the United States have all failed to support the smoking ban/heart miracle hypothesis.

What should we trust? The evidence from places like Bowling Green, Ohio or the evidence from the entire USA? It has been obvious from the very beginning that anti-smoking campaigners have been mining the data to find big drops in heart attacks that roughly coincide with smoking bans. A study recently presented to an American Heart Association conference illustrates how easy this cherry-picking can be.

The study's findings are very interesting. The researchers looked at 74 US cities and found that the heart miracle effect has now fallen to just 3% (RR 0.97; 95% CI, 0.95-0.99). When the sample was limited to cities where the smokefree law was "meaningful"—ie. where there was a full smoking ban, rather than just restrictions—the effect disappeared entirely (RR 0.99; 95% CI, 0.96-1.02). (Meaning that cities with lax smoking bans saw a bigger fall in heart attacks than those with "meaningful" smoking bans. Once again, the evidence fails to fit the theory.)

A drawback of the study is that there is no indication of how much the heart attack rate was falling before the ban, nor do the researchers compare rates to those in the cities which didn't have a smoking ban. When the study is published—if it is—hopefully the researchers will use a control group and look at the long-term trend.

What we do know is that rates of heart disease are falling in the USA, as they are in the UK and Europe. Even if one picks the more generous 3% figure, a modest decline of this order is likely to be in line with the secular trend. In other words: the smoking bans didn't make any difference.

If you look at the cities which brought in full smoking bans, you can see there is a great deal of variation in the heart attack rate with some going up, some going down and some going nowhere.



The variation is so great that no honest statistician would claim that there is any trend to be found amongst the data. But notice that four of the cities showed a statistically significant reduction in heart attacks and that the reduction was quite strong (at around 25%-30%). Now put yourself in the shoes of the tobacco control advocate who wants to show that smoking bans have a major effect on the heart attack rate. Which city would you choose to write your paper about? Evanston or Flagstaff?

The question, I think, answers itself. This is how they have been doing it. This is how we end up with 'news stories' like this from the BBC in 2003:

Town slashes heart attacks

Banning smoking in public places could prevent hundreds of deaths from heart disease, according to a study in a small US town.

Heart attacks in Helena, Montana, fell dramatically when smoking in public places was banned for six months last year.

The number of admissions dropped to fewer than four a month - a fall of nearly 60%.

From "nearly 60%" to nearly zero in just 8 years. What a ridiculous fraud this whole thing has been and how pathetic that so many have fallen for it.

Thanks to Michael J. McFadden for the tip.

Friday, 4 February 2011

Applied philosophy

Stanton Glantz:

"I'm 62 years old, and I tell people I didn't have a midlife crisis. I know a lot of people who reach 50 who sit around saying, 'What have I done?' I don't have that problem."

Bertrand Russell:

"The whole problem with the world is that fools and fanatics are always so certain of themselves, but wiser people so full of doubts".


Thursday, 13 January 2011

Junk 365

I've been a little lethargic with the blogging of late. First came the (swine?) 'flu which put me out of commission for a good while. Then came the need to do some off-line writing. I would say there's not been much to blog about but Carl V. Phillips has found plenty of interest and is in prolific form.

As he will need to be, since he intends to write 365 posts about "unhealthful news" in 2011. By the looks of it, we can expect a lot of insight and a good scattering of fascinating facts. So why not pop over there and see what a real epidemiologist has to say about junk science? And if you haven't bookmarked his blog yet, now's the time.

On media reporting of science:

The topic that the mainstream media is far-and-away best at reporting on is sport. Perhaps because of that, they try to make every other topic – public policy, science, etc. – as much like a sporting match as possible, emphasizing the battling partisans and score-keeping over substantive analysis of the topic. Among pursuits of the mind, doctrinal battles are already quite similar to sporting matches, so portraying scientific inquiry as if it were such a battle is probably just too great a temptation for reporters.

On the recent controversy about publishing a study purporting to demonstrate ESP (this one in fact):

C’mon, how exactly do these people think science works? We all get together and decide what is true and then produce evidence to support it, burying anything that contradicts it? Well, I guess that is what passed for science in the dark ages, and is what passes for science in anti-tobacco journals and a few similarly-politicized areas, and apparently for some areas of psychology research. Real science, however, relies on an interplay of theorizing and analyzing and reporting of field/experimental research. All of these are needed, including reporting research results that might not end up supporting an accepted theory.

...Only a non-scientist would think that we have to defend the science literature against results that support a hypothesis that might come to be accepted as wrong, something that is obviously impossible. But the reporter and those he talked to seem to think that the “extraordinary evidence” rule means do not publish even a single result that contradicts the conventional wisdom until we have extraordinary evidence. I trust everyone sees a little problem with that.

On peer-review:

I am especially amused by the bit about this being a fundamental flaw in peer review. I guess there were a couple of generations during which the peer review process was considered to add great value, in between Einstein (peer review started to become popular late in his career and he was appalled by it) and now (when anyone who has participated in peer review in a high-volume science, and who has half a clue, knows that it just barely adds value). Those of us familiar with peer review are aware that it serves to screen out some research that uses particularly bad methodology (it sounds like the Bem studies use methods as good as any in the field – pretty cute ones at that, which you can read about at the link above). Beyond that, peer review does nothing more that any editor could do, get rid of material that is incoherent or off-topic for the journal. Of course, it is often used to censor those who do not support the opinions of those who control the field, so I guess that is what Hyman was referring to.

...the news story makes several references to other researchers re-analyzing Bem's study data. This must mean that Bem made the data available. If this be junk science in parapsychology research, play on. In epidemiology we can only dream of getting access to data to do an honest reanalysis, even after obviously biased and misleading analyses are published (and peer reviewed, I might add).

On the reliability of psychological experiments:

I had a professor who told the story of how he picked up spending money as an undergraduate by doing as many psych studies as he could. He said that he came to quickly realize that when the experimenters told a room full of students “you are all participating in a study of X” they were always lying once, and often twice: X was never the true purpose of the study, so it was interesting to try to guess what was. Moreover, chances were that not all were participating, but one or half of the students were actually part of the experiment, acting some role but pretending to be subjects. This was about 1970, but it appears that nothing has changed. So not only are the experiments extremely artificial, but many of the participants have figured out most of the subterfuge, and are probably acting on that knowledge to some extent or just having a little fun, out of boredom if nothing else.

On that "more doctors smoke Camel" claim from way back when:

They passed out Camels outside a medical convention hall and then conducted a survey half a block down the street asking what brand the many then-smoking physicians were using.






And on no less dishonest research from modern day so-called health campaigners:

Myers claimed that there was a 39% increase in smokeless tobacco use among children since 2006. He made up (sorry, calculated) that number using the Monitoring the Future Survey, choosing 2006 as the starting year because there was a downward blip in the annual statistics that year, making it unusually low, and thus making any comparison to a future year look like an increase. In reality, as Brad points out, the results of that survey have fluctuated up and down. A comparison to 1999 would show no increase in 2009. An additional point that Brad did not add is that using this one survey, a rather odd one, rather than looking across the many datasets available that measure the same time series is equally cherry-picking.

What Myers and his ilk do is not science. It is not honest error. It is lying, which is to say, it is intentionally trying to cause someone to believe something that is not true (e.g., that there is some huge upward trend in underage use of smokeless tobacco). It may seem impolite to phrase it this way, but it is far more impolite to try to manipulate people into believing something that is false. Such statistical games are just as dishonest as simply making up a number. Indeed, in several ways it is worse: not only is he making up the claim (which could at least have happened to be correct had he invented it without looking at the numbers), but we know he has looked at the numbers, and so knows his claim is misleading.

And finally, on the perils of assuming existing trends will continue indefinitely:

It seems that a financial analyst at an investment bank in Great Britain looked at historical smoking rates and predicted that smoking in that country would drop to approximately zero in 30-50 years. The prediction was apparently based on a linear extrapolation of smoking prevalence from the 1960s to today, extended into the future. The story attributes a drop in the share prices of two British-based tobacco companies to the report.
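
As a minimal sketch of what that method amounts to (the prevalence figures below are made up, merely shaped like the real decline):

```python
import numpy as np

# Invented prevalence figures (%) of roughly the right shape: a steady
# decline from the 1960s to the present day.
years = np.array([1960, 1970, 1980, 1990, 2000, 2010])
prevalence = np.array([55, 45, 39, 30, 27, 21])

# The analyst's method, as far as one can tell: fit a straight line
# through the history and read off where it crosses zero.
slope, intercept = np.polyfit(years, prevalence, 1)
print(f"straight line hits 0% in {-intercept / slope:.0f}")  # around 2040

# The objection writes itself: prevalence cannot fall below zero, and
# declines like this flatten out, so a straight line is the one shape
# the future is more or less guaranteed not to take.
```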

Oh where to start?

Start here.


Monday, 20 December 2010

Vast study finds no heart miracle but lots of publication bias

In light of my recent posts about post-smoking ban 'heart miracles', it is timely that a new study of heart attack rates in the United States has just been published. This study—by far the biggest ever conducted—confirms that smoking bans have no significant effect on either the incidence of, or mortality from, acute myocardial infarction.

Published in the Journal of Policy Analysis and Management, the study looked at more than two million heart attack deaths over the course of 16 years. The researchers found a great deal of fluctuation in heart attack rates but concluded that:

...large short-term increases in myocardial infarction incidence following a smoking ban are as common as the large decreases reported in the published literature.

The crucial four little words here are 'in the published literature'. The large increases get ignored while the large decreases get studied, written up, published and press released. The widely-reported studies that have found drops in heart attacks after smoking bans are—as regular readers already know—the result of straightforward cherry-picking and publication bias. We know that in most Western countries there is a long-term trend of declining heart attack rates. We also know that there is substantial variation in heart attack rates and that smaller communities (like the Isle of Man or Helena) are more likely to see bigger fluctuations because the average number of cases is already very small (single digits per month, in those instances).
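
The point is easy to demonstrate with a simulation. The sketch below draws monthly admissions from a Poisson distribution with a constant underlying rate, so any 'ban effect' it shows is pure chance; the only thing that differs is the size of the population:

```python
import numpy as np

rng = np.random.default_rng(42)

# Same constant underlying rate before and after a 'ban'; monthly
# admissions are Poisson-distributed around that rate.
for label, per_month in [("small town", 5), ("whole country", 5000)]:
    before = rng.poisson(per_month, 12).sum()   # a year of 'pre-ban' data
    after = rng.poisson(per_month, 12).sum()    # a year of 'post-ban' data
    change = 100 * (after - before) / before
    print(f"{label}: {change:+.1f}% change, by pure chance")

# Run it over many seeds and the small town regularly shows 'drops' (or
# rises) of 15-25%, while the national figure barely moves. Publish only
# the drops and you have a heart miracle literature.
```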

As such, it is child's play to mine the hospital data and find places which have seen large drops in heart attack admissions following a smoking ban. It's not a coincidence that such studies usually rely on obscure towns in Montana or Ohio, and not the huge populations of Wales, Australia or New Zealand, where we know smoking bans have had zero effect on the number of heart attacks. And on the odd occasion when researchers get carried away and agree to do a heart miracle study for an entire nation before they've had a chance to look at the data, they can always ignore the actual hospital records and cook the books to create the illusion of a large drop in heart attacks, even though the real data show nothing of the sort.

What this latest study shows is that if you look at vast populations, there is far less chance of a fluke result and, if the findings are honestly reported, there can only be one conclusion:

"In contrast with smaller regional studies, we find that smoking bans are not associated with statistically significant short-term declines in mortality or hospital admissions for myocardial infarction or other diseases."

For more comment on this, see Michael Siegel, Jacob Sullum and Mr Puddlecote. The latter also has the news about regular commenter Junican winning a year's subscription to Tobacco Control after entering a competition to come up with new terminology for the anti-smoking movement to employ. His spoof suggestion turned out to be less risible than the real submissions. Junican is currently buying pornographic magazines to wrap around his issues of the world's foremost anti-smoking journal so he can read them in public without embarrassment.

Friday, 17 December 2010

How's that Scottish heart miracle going?

Being in the mood to look back on the effectiveness of tobacco control efforts (see Ireland's Abject Failure below), let's see how that Scottish heart attack miracle has been coming along. You'll recall the professorship-winning study by Jill Pell which claimed that hospital admissions for acute coronary syndrome fell by 17% in the first year of the ban.

Pell didn't go down the traditional route of finding out how many cases were admitted to Scottish hospitals and comparing rates before and after the ban (the data are readily available). That would be far too obvious and accurate. Instead, she went to the elaborate effort of limiting her sample to a selection of hospitals and then extrapolating the results across the whole of Scotland. After all, why use the actual data when you can create your own?

The answer, of course, is that there wasn't a 17% drop, or anything like it. And now, with three years of post-ban data in the can, let's see how that heart miracle looks using the real NHS admissions data.

[Graph: acute coronary syndrome admissions to Scottish hospitals, before and after the smoking ban]

And, just to be sure, let's look at the rates of acute myocardial infarction (heart attacks).

[Graph: acute myocardial infarction admissions to Scottish hospitals, before and after the smoking ban]

Is any further comment really required?

Thursday, 16 December 2010

Is this the love child of Glantz and Gilmore?

The other day I was thinking of running a worst-junk-science-of-the-year poll. Thank God I bided my time, otherwise I would have missed the chance to nominate this beauty.

Isle of Man smoking ban 'cuts heart attacks'

A ban on smoking in public places has reduced heart attack admissions, according to research commissioned by the Isle of Man's Department of Health.

The department has compared admissions in the two years prior to introduction of the ban on 30 March 2008 and the two years since.

It discovered that the number of men over 55 admitted for heart attacks had dropped since the ban.

But if we take a look at the 'study' (unpublished and not peer-reviewed, not that that makes a lot of difference these days), a very different picture emerges:

[Graph: monthly heart attack admissions among men over 55 on the Isle of Man, with pre- and post-ban regression lines]

Do my eyes deceive me or does this graph show that there were significantly more heart attacks after the smoking ban?

They don't, and there were. In the 23 months before the smoking ban, there were 109 heart attack admissions, or 4.7 per month. In the 23 months after the smoking ban, there were 153 heart attack admissions, or 6.65 per month: an increase of 40%. In what universe does this count as a drop in heart attacks?

In the crazy world of tobacco control, that's where. Note the regression lines, designed to take your eye off what is actually happening. Note how the second half of the graph has a line that is driven down by the lowish figure for the last month shown (since the next month, needed to make it a full two years, has mysteriously gone missing).

This is a method taken straight out of Anna Gilmore's box of tricks, with a dash of Glantz's Helena magic thrown in for good measure (small community, inaccessible hospital records, data mining etc.). If there isn't a drop in heart attacks, you simply 'predict' how many would have occurred if the smoking ban hadn't come in and make sure your prediction is higher than the real number. And before you know it the BBC will be falling over itself to report that "a ban on smoking in public places has reduced heart attack admissions" and the New England Journal of Medicine will be beating a path to your door.

And the feeble effort shown above is the best this researcher—a maths student at Rutherford Polytechnic, er, the University of Northumbria—could conjure up. The graph that shows all heart attack admissions (ie. the relevant, non-cherry-picked data set) is even less compelling.

[Graph: all monthly heart attack admissions on the Isle of Man, with pre- and post-ban regression lines]

Notice that before the ban, there were usually fewer than ten heart attack admissions a month. Notice, too, that after the ban the monthly figure was usually well above ten. And, of course, there were more heart attacks in total after the ban than before it. And, as the flat black line shows, the monthly rate of admissions did not go down one bit in the nigh-on two years after the ban.

But you're not supposed to look at any of that. Instead, you are invited to look at the upward-sloping line in the pre-ban period and assume that the rate would have continued rising, even though that line only goes up because of a big jump (by Isle of Man standards) to 14 cases shortly before the ban. Nor are you supposed to notice that any responsible statistician would identify that unusual leap as a statistical artifact. The fact that more than two-thirds of the data points sit below the regression line shows that it is being contorted by an outlier.
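
The effect is easy to reproduce. The sketch below fits a regression line to invented monthly counts of roughly this shape (flat at around five a month, with one late spike to 14):

```python
import numpy as np

# Invented monthly counts: flat at around five admissions a month, with
# one Isle of Man-style spike to 14 in the final pre-ban month.
counts = np.array([5, 4, 6, 5, 3, 5, 4, 6, 5, 4, 5, 14])
months = np.arange(len(counts))

slope, intercept = np.polyfit(months, counts, 1)
fitted = slope * months + intercept
print(f"slope with the outlier:    {slope:+.2f} admissions/month")
print(f"points below the fitted line: {(counts < fitted).sum()} of {len(counts)}")

# Drop the single outlier and the 'rising trend' vanishes entirely.
slope2, _ = np.polyfit(months[:-1], counts[:-1], 1)
print(f"slope without the outlier: {slope2:+.2f} admissions/month")
```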

It's truly unbelievable that this sort of stuff gets taken seriously. Or it would be if it didn't happen every few months. This is a world where a flat line equals a decline, and a 40% increase in heart attacks equals a reduction in heart attacks.

In a year that has seen fierce competition for the title, Ms Howda Jwad of Northumbria University—for it is she—may just have clinched the inaugural World's Worst Junk Science Award in the dying days of the year. Glantz, Pell, Gilmore, Winickoff—it's time to up your game.


Thanks to Brian Bond for the tip

Monday, 13 December 2010

Who pays for these studies?

I'm not in the habit of fisking studies based on the abstract alone, but I'll make an exception in this instance.

Overestimation of Peer Smoking Prevalence Predicts Smoking Initiation among Primary School Students in Hong Kong

Purpose:

To investigate the relationship between perceived prevalence of smoking and smoking initiation among Hong Kong primary second- to fourth-grade students.

Methods:

A cohort of 2,171 students was surveyed in 2006 and again in 2008. Students who perceived ever-smoking prevalence in peers as “none” or “some” were considered as correct (reference group), whereas those who perceived it as “half” (overestimation) or “most/all” (gross overestimation) were considered as incorrect.

Hmm. So if they perceived that none of their peers smoked, they were assumed to be correct. That may be true in the sheltered world of tobacco control, but for the rest of us that should be classified as an 'underestimate'. Except there isn't an 'underestimate' option available in this study, which leads me to think that it isn't very well designed.


Results:

At baseline, overestimation was found to be cross-sectionally associated with ever-smoking. At follow-up, 7.2% of never-smoking students with incorrect estimation at baseline had started smoking, which was 79% (95% confidence interval: 3%–213%) greater than the 3.7% for those with correct estimation. Among the never-smoking students with incorrect estimation, subsequent correct estimation was associated with 70% (95% confidence interval: 47%–83%) lower risk of smoking initiation compared with persistent incorrect estimation.

Regardless of whether these kids' estimates are right, it's fair to assume that those who said 'most' had more friends who smoked than the ones who said 'some' or 'none'. And since having friends who smoke is a major predictor of smoking initiation, that—not the overestimating—is the reason they start smoking. The ones who said 'all' would, of course, be liars having a laugh at the researchers' expense. Given that the subjects are schoolchildren, I believe, and hope, that there were many of them.
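
For the statistically minded, here is a toy simulation of that confounding. All the numbers below are invented and there is no causal effect of 'overestimation' in it at all, yet the paper's headline pattern falls straight out:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# The confounder: how many of a child's friends smoke (most have none).
smoking_friends = rng.poisson(0.5, n)
exposure = np.minimum(smoking_friends, 3)

# Children with smoking friends are more likely to answer 'half' or
# 'most' when asked how many of their peers smoke...
says_half_or_most = rng.random(n) < 0.05 + 0.25 * exposure

# ...and, entirely independently of that answer, more likely to start.
starts_smoking = rng.random(n) < 0.02 + 0.04 * exposure

# The crude comparison conjures up the paper's pattern with zero causal
# effect of 'overestimation' built in.
print(f"initiation among 'overestimators': {starts_smoking[says_half_or_most].mean():.1%}")
print(f"initiation among the 'correct':    {starts_smoking[~says_half_or_most].mean():.1%}")
```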

Conclusion:

Overestimation of the prevalence of peer smoking predicted smoking initiation among children. Interventions should be carried out to evaluate whether correcting children's overestimation of peer smoking could reduce smoking initiation.

Rubbish. The conclusion is that if your friends smoke, you're more likely to smoke yourself. But I think we knew that already, didn't we?