As he will need to be, since he intends to write 365 posts about "unhealthful news" in 2011. By the looks of it, we can expect a lot of insight and a good scattering of fascinating facts. So why not pop over there and see what a real epidemiologist has to say about junk science? And if you haven't bookmarked his blog yet, now's the time.
On media reporting of science:
The topic that the mainstream media is far-and-away best at reporting on is sport. Perhaps because of that, they try to make every other topic – public policy, science, etc. – as much like a sporting match as possible, emphasizing the battling partisans and score-keeping over substantive analysis of the topic. Among pursuits of the mind, doctrinal battles are already quite similar to sporting matches, so portraying scientific inquiry as if it were such a battle is probably just too great a temptation for reporters.
On the recent controversy about publishing a study purporting to demonstrate ESP (this one in fact):
C’mon, how exactly do these people think science works? We all get together and decide what is true and then produce evidence to support it, burying anything that contradicts it? Well, I guess that is what passed for science in the dark ages, and is what passes for science in anti-tobacco journals and a few similarly-politicized areas, and apparently for some areas of psychology research. Real science, however, relies on an interplay of theorizing and analyzing and reporting of field/experimental research. All of these are needed, including reporting research results that might not end up supporting an accepted theory.
...Only a non-scientist would think that we have to defend the science literature against results that support a hypothesis that might come to be accepted as wrong, something that is obviously impossible. But the reporter and those he talked to seem to think that the “extraordinary evidence” rule means do not publish even a single result that contradicts the conventional wisdom until we have extraordinary evidence. I trust everyone sees a little problem with that.
On peer review:
I am especially amused by the bit about this being a fundamental flaw in peer review. I guess there were a couple of generations during which the peer review process was considered to add great value, in between Einstein (peer review started to become popular late in his career and he was appalled by it) and now (when anyone who has participated in peer review in a high-volume science, and who has half a clue, knows that it just barely adds value). Those of us familiar with peer review are aware that it serves to screen out some research that uses particularly bad methodology (it sounds like the Bem studies use methods as good as any in the field – pretty cute ones at that, which you can read about at the link above). Beyond that, peer review does nothing more than any editor could do: get rid of material that is incoherent or off-topic for the journal. Of course, it is often used to censor those who do not support the opinions of those who control the field, so I guess that is what Hyman was referring to.
...the news story makes several references to other researchers re-analyzing Bem's study data. This must mean that Bem made the data available. If this be junk science in parapsychology research, play on. In epidemiology we can only dream of getting access to data to do an honest reanalysis, even after obviously biased and misleading analyses are published (and peer reviewed, I might add).
On the reliability of psychological experiments:
I had a professor who told the story of how he picked up spending money as an undergraduate by doing as many psych studies as he could. He said that he quickly came to realize that when the experimenters told a room full of students “you are all participating in a study of X” they were always lying once, and often twice: X was never the true purpose of the study, so it was interesting to try to guess what was. Moreover, chances were that not everyone was really participating; anywhere from one student to half the room were actually part of the experiment, acting some role but pretending to be ordinary subjects. This was about 1970, but it appears that nothing has changed. So not only are the experiments extremely artificial, but many of the participants have figured out most of the subterfuge, and are probably acting on that knowledge to some extent or just having a little fun, out of boredom if nothing else.
On that "more doctors smoke Camel" claim from way back when:
They passed out Camels outside a medical convention hall and then conducted a survey half a block down the street asking what brand the many then-smoking physicians were using.
And on no less dishonest research from modern-day so-called health campaigners:
Myers claimed that there was a 39% increase in smokeless tobacco use among children since 2006. He "calculated" that number using the Monitoring the Future Survey, choosing 2006 as the starting year because there was a downward blip in the annual statistics that year, making it unusually low, and thus making any comparison to a future year look like an increase. In reality, as Brad points out, the results of that survey have fluctuated up and down. A comparison to 1999 would show no increase in 2009. An additional point that Brad did not make is that using this one survey, a rather odd one, rather than looking across the many available datasets that measure the same time series, is itself a form of cherry-picking.
What Myers and his ilk do is not science. It is not honest error. It is lying, which is to say, it is intentionally trying to cause someone to believe something that is not true (e.g., that there is some huge upward trend in underage use of smokeless tobacco). It may seem impolite to phrase it this way, but it is far more impolite to try to manipulate people into believing something that is false. Such statistical games are just as dishonest as simply making up a number. Indeed, in several ways it is worse: not only is he making up the claim, which could at least have happened to be correct if he had invented it without looking at the numbers, but we know he has looked at the numbers, and so knows his claim is misleading.
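To make the baseline-shopping concrete, here is a minimal Python sketch using invented prevalence figures (not the Monitoring the Future results): picking a single low blip year as the starting point turns an essentially flat, fluctuating series into an apparent surge, while an earlier baseline shows none.

```python
# Illustrative only: invented prevalence figures (percent of students reporting
# smokeless tobacco use), NOT the Monitoring the Future results.
prevalence = {
    1999: 6.5, 2000: 6.3, 2001: 6.4, 2002: 6.1, 2003: 6.3,
    2004: 6.4, 2005: 6.2, 2006: 4.6,  # 2006 is the unusually low "blip" year
    2007: 5.8, 2008: 6.0, 2009: 6.4,
}

def percent_change(start_year, end_year):
    """Percent change in prevalence between two survey years."""
    start, end = prevalence[start_year], prevalence[end_year]
    return 100.0 * (end - start) / start

# Starting from the blip year makes the same 2009 endpoint look like a surge...
print(f"2006 -> 2009: {percent_change(2006, 2009):+.0f}%")  # about +39% with these numbers
# ...while starting a few years earlier shows essentially no change at all.
print(f"1999 -> 2009: {percent_change(1999, 2009):+.0f}%")  # about -2% with these numbers
```

Same endpoint, two opposite stories; the only thing that changed is the denominator year.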
And finally, on the perils of assuming existing trends will continue indefinitely:
It seems that an investment bank financial analyst in Great Britain looked at historical smoking rates and predicted that smoking in that country would drop to approximately zero in 30-50 years. The prediction was apparently based on a linear extrapolation of smoking prevalence from the 1960s to today, extended into the future. The story attributes a drop in the share prices of two British-based tobacco companies to the report.
Oh where to start?
Start here.
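For anyone curious what that extrapolation actually amounts to, here is a minimal sketch with invented UK prevalence figures (not the analyst's data): a least-squares line fitted to a declining series will always cross zero at some date, and treating that crossing as a forecast assumes the historical rate of decline continues unchanged forever.

```python
# Illustrative only: invented UK adult smoking prevalence (percent), NOT the
# analyst's data. A naive least-squares line extended until it crosses zero.
years = [1960, 1970, 1980, 1990, 2000, 2010]
prevalence = [55.0, 45.0, 39.0, 30.0, 27.0, 21.0]

n = len(years)
mean_x = sum(years) / n
mean_y = sum(prevalence) / n
slope = (
    sum((x - mean_x) * (y - mean_y) for x, y in zip(years, prevalence))
    / sum((x - mean_x) ** 2 for x in years)
)
intercept = mean_y - slope * mean_x

# The fitted line hits zero at -intercept / slope, whether or not the real
# trend flattens out among the remaining smokers.
year_of_zero = -intercept / slope
print(f"slope: {slope:.2f} points/year")           # about -0.67 with these numbers
print(f"projected zero year: {year_of_zero:.0f}")  # about 2040 with these numbers
```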
Talking of junk science...
http://www.sciscoop.com/thirdhand-tobacco-smoke.html/comment-page-1#comment-8588
Thanks for the review, Chris.
I had swine flu in 2009 before they managed to get the vaccine distributed in Canada. No question that it is a vaccine worth getting -- that was a nasty disease. Of course, after I had the disease I was still told to get the vaccine. I am not sure public health people understand the old-fashioned path to immunity.
The U.S. Food and Drug Administration has yet to be called to task for intentionally trying to cause someone to believe something that is not true. In July 2009, the Agency announced the results of testing of two brands of electronic cigarettes. What the science showed was that there are no chemicals in the e-cigarette liquid or vapor that present a danger to human health. The findings were misrepresented. “They contain carcinogens and toxic chemicals such as diethylene glycol, an ingredient used in antifreeze,” stated the press release. The world was led to the false conclusion that e-cigarettes are very likely to cause cancer and/or poison the user. The FDA "forgot" to mention that the 8 nanograms of nitrosamines in e-cigarettes are no more likely to cause cancer than the 8 nanograms of the very same nitrosamines in an FDA-approved patch. The Agency also forgot to mention that the quantity of diethylene glycol (a tobacco humectant) it measured in one cartridge is so minuscule that 6,804 cartridges would have to be ingested in a single day for the dose to be fatal.