Ben Goldacre, The Guardian, Saturday 15 January 2011
Sometimes something will go wrong with an academic paper, and it will need to be retracted: that’s entirely expected. What matters is how academic journals deal with problems when they arise.
In 2004 the Annals of Thoracic Surgery published a study comparing two heart drugs. This week it was retracted. Ivan Oransky and Adam Marcus are two geeks who set up a website called RetractionWatch because it was clear that retractions are often handled badly: they contacted the editor of ATS, Dr L Henry Edmunds Jr, MD, to find out why the paper was retracted. “It’s none of your damn business,” replied Dr Edmunds, before railing against “journalists and bloggists”. The retraction notice, he explained, is merely there “to inform our readers that the article is retracted”. As he put it: “If you get divorced from your wife, the public doesn’t need to know the details.”
ATS’s retraction notice on this paper is uninformative and opaque. The paper was retracted “following an investigation by the University of Florida, which uncovered instances of repetitious, tabulated data from previously published studies.” Does that mean duplicate publication, two bites of the cherry? Or plagiarism? And if so, of what, and by whom? And can we trust the authors’ other papers?
What’s odd is that this is not uncommon. Academic journals hold authors to high standards, demanding explicit descriptions of every step in an experiment, clear references, peer review, and so on, for a good reason: journals exist to inform academics about the results of experiments, and to discuss their interpretation. Retractions form an important part of that record.
Here’s one example of why. In October 2010 the Journal of the American Chemical Society retracted a 2009 paper about a new technique for measuring DNA, explaining that this was because of “inaccurate DNA hybridization detection results caused by application of an incorrect data processing method”. This tells you nothing. When RetractionWatch got in touch with the author, he explained that they had forgotten to correct for something in their analysis, which made the technique they were testing appear more powerful than it really was: in fact, it was no better than the process it was proposed to replace.
That’s useful information, far more informative than the paper simply disappearing one morning, and it clearly belongs in the academic journal where the original paper appeared, not in an email to two people on the internet running an ad hoc blog that tracks down the stories behind retractions.
This all becomes especially important when you think through how academic papers are used: that JACS paper has now been cited 14 times, by people who believed it to be true. And we know that news of even the simple fact of a retraction fails to reach the people who use these papers.
Stephen Breuning was found guilty of scientific misconduct in 1988 by a federal judge, which is unusual and extreme in itself, so most of his papers were retracted. A study last year chased up all the references to Breuning’s work from 1989 to 2007 and found over a dozen academic papers still citing it: some discussed it as a case of fraud, but around half, including some in more prominent journals, still cited the work as if it were valid, two decades after its retraction.
The role of journals in policing academic misconduct is still unclear, but explaining the disappearance of a paper you published is a bare minimum. Like publication bias, where negative findings are less likely to be published, this is a systemic failure across all fields, so it has far greater ramifications than any single, eye-catching academic cockup or fraud. Unfortunately it is also a boring corner of the technical world of academia, so nobody has been shamed into fixing it. Eyeballs are an excellent disinfectant: you should read RetractionWatch.