
Good practices

“I see too many cases where nothing ever happens”

Journalist Ivan Oransky, creator of the Retraction Watch website, discusses how the fight against scientific misconduct has evolved in recent years

Oransky created a database that has become a reference for research on scientific integrity (photo: personal collection)

American journalist Ivan Oransky had a successful career as an editor when he and his colleague Adam Marcus, who also specialized in health, decided to start a blog about a relatively obscure subject: retractions of scientific articles. Publishing research results is a fundamental part of modern science: millions of papers are published worldwide every year, and sometimes they contain errors. There are a number of mechanisms used to rectify these errors, such as publishing errata, and in extreme cases, an article can be retracted, with an announcement by the publishing journal that the described results were scientifically void. Some retractions are made due to scientific misconduct, such as plagiarism, fabrication, or falsification of data.

Thus was born the Retraction Watch blog, today on the radar of all academic publications, scientists, and anyone interested in good scientific practices, with 150,000 unique visitors per month. In their first post in August 2010, Oransky and Marcus explained why they were starting the blog. Although science has some important self-correcting mechanisms, they said, this process can be very time-consuming. As an example, they cited the 12-year gap between publication of the notorious paper by Andrew Wakefield linking vaccines to autism and its retraction by The Lancet, six years after major questions about the study were first made public by a journalist. They also noted that most cases are not as famous as Wakefield’s, with retractions remaining an obscure subject, which gave them the idea of creating an informal repository that has since evolved into a database of 19,500 retractions. Another of their motivations was to provide a source for journalists reporting on fraud and misuse of funds, and to help those tasked with correcting misconduct in science. Finally, the pair wanted to know how journals deal with this issue: how long they wait before publishing a retraction, and whether journals with a low number of retractions have better peer review processes or are simply hiding errors.

Oransky, 46, earned a degree in biology from Harvard University in 1994, where he also began his journalism career as editor of The Harvard Crimson. He then earned a medical degree from New York University in 1998, where he now teaches journalism. He subsequently held editor positions at Reuters Health, Scientific American, and The Scientist, and is now vice president and editorial director at Medscape, an organization that provides news and online training for doctors and healthcare professionals. Oransky often travels to talk about his work—he has been to Brazil four times to attend BRISPE meetings (Brazilian Meeting on Research Integrity, Science Publication Ethics). While visiting Lausanne, Switzerland, to participate in a debate on how to report scientific fraud at the 11th World Conference of Science Journalists, he spoke with Pesquisa FAPESP about his work.

How was the reaction when you first created Retraction Watch? Was there any resistance?
There were people who were mistrustful, but we have always contacted the authors of retracted articles, from day one. They don’t always want to talk to us, but we try to talk to them, as well as the institution, when appropriate. The problem was not our method, but more the focus on bad news. I don’t know if people have changed or if we have changed. If we look at the big cases like Stapel [Diederik Stapel, a professor of social psychology at Tilburg University in the Netherlands who had 58 articles retracted for data manipulation], or, going back a bit, Hwang Woo-suk [a South Korean veterinarian who published two fraudulent articles about human embryo cloning in the journal Science in 2004 and 2005], we see that it happens in every country. People realized that these cases, while rare, are not incredibly rare, so we got more accustomed to talking about it. After a while, even Elsevier stated that Retraction Watch was good for science—and publishers are not generally treated favorably in our reports. There is now a perception that not talking about it makes people trust science even less. If there is a willingness to say that these things happen and here’s what we are doing about it—that should increase trust in science.

That implies that institutions actually do something about misconduct. Do you believe that this has improved in recent years?
Yes, some retraction notices have improved, which is good. Some universities are being more transparent and releasing investigation reports—although many still aren’t. They are under more pressure from channels like PubPeer [a website that allows users to discuss and review scientific articles] and Retraction Watch. Today you have major newspapers, magazines, television, and radio addressing these issues. It’s harder for universities to do nothing because someone will notice. One of the interesting things to me is that in some of these cases, the evidence of misconduct is really clear-cut, and no one is debating it. Everyone ignores it and treats it as something minor, until some external, nonscientific event happens that exposes the problem. For example, Duke University’s Anil Potti was a major case [the Indian physician had 11 articles retracted and six corrected for misconduct in cancer treatment research; the university was accused of suppressing evidence of data manipulation in a 2010 investigation]. Everyone ignored it until a reporter from the magazine Cancer Letter was shown a research grant application in which Potti lied about having a Rhodes Scholarship [an international postgraduate scholarship to study at the University of Oxford, UK] and began to wonder if he was lying about other things as well. Then it all unraveled. I wish all that wasn’t necessary—I wish that it was enough to just deal with problems in the research, but it often isn’t.

What do you think is the real extent of the problem?
I just see too many cases where nothing ever happens. I see hundreds upon hundreds of comments on PubPeer about issues found in articles. I have seen correspondence between institutions and journals where institutions have asked for retractions and journals either do nothing or take two or three years to act. We can’t cover even a tiny fraction of all retractions—nobody can. Without publicity, without public pressure, often nothing happens.

Retraction is mostly about researchers and journals, but what about the role of institutions that should be curbing misconduct?
We began working with C. K. Gunsalus, a professor at the University of Illinois, to review institutional investigation reports. A lot of them are terrible: they don’t ask the right questions, they don’t answer the questions well, they don’t have the right people on the committees. We published a checklist in JAMA [The Journal of the American Medical Association] last year about how to investigate allegations of misconduct. A lot of people are starting to use that to evaluate reports, as well as using our database, which is very gratifying.

How did Retraction Watch evolve into a database as well as a journalism website?
We were well known in the science and health journalism community, so right away we started reporting on stories of interest to other outlets, and we started doing interviews with them within a week of launching the site. We became a source for other journalists—it was a very effective but unintentional strategy. This was around the same time as the Stapel case, and there was a lot of interest in that. We got lucky. The database is our legacy. We only had the idea for that about four years after we started. We were keeping huge lists of retractions to cover that became impossible to manage. In the beginning, there were five or six dozen retractions a year, now it’s 1,400. When we accepted the idea that we couldn’t cover everything, it made the job much easier because we began to focus on what we considered important.

In some cases, the evidence of misconduct is really clear-cut, and no one is debating it. Everyone treats it as something minor

How do you feed the database? Do you use artificial intelligence?
No, it’s all manual. We have a researcher, who currently works only part-time, who did her PhD thesis on retraction. She knows this stuff better than I do. It has to be manual because publishers are so bad at this.

Do you think that’s deliberate?
I have to give them the benefit of the doubt. I ask them about the problems I see: sometimes they fix things, sometimes they make them worse, sometimes they just ignore them. Many retractions are not identified properly. Publishers could do that quite easily, but apparently they don’t want to or it’s not a priority. So it has to be done by hand. If a simple algorithm could capture most cases, then we wouldn’t need the database. Two years ago, a programmer said he could capture all the cases for us. A while later he came back and said he discovered something fascinating: the journal Annals of Surgery has more retractions than any other journal in the world. That didn’t match with our experience, so I asked to see his data. In surgery, a common operation is tissue retraction, so there were all these papers in this surgery journal with the word retraction in the title. This isn’t a simple task, so I told him he may need to refine the search a bit. We would love our database to cover everything without needing to be done by hand, because we have other things to do.

You organize and maintain the retraction database, report on relevant topics, and curate content published by other outlets. Do you try hard to find a balance?
There has to be a balance. That’s why we have the database, we made the checklist, we created an award [DiRT – Doing the Right Thing Award], which we present to honest scientists who have retracted articles due to unintentional errors or fraud by other people involved in the research. We want to encourage people to act honestly.

In the last nine years, what has changed in the world of misconduct?
In 2010, one retraction was enough for a story. By 2014, there had to be a large number of retractions, or a trend, like the fake peer reviews we wrote about. Nowadays, it’s US$112-million settlements. It’s easy to see the increase in retractions and decide that misconduct must also be increasing. But it’s also that people are looking at this more than before. Autism diagnoses have skyrocketed in the US and around the world: it may be that autism rates are increasing, but clearly it is also because people are paying more attention to the issue, so more cases are being diagnosed. The definition itself changes, too.

How do you organize your work?
In the past we received funding that allowed us to have a team and to write more reports. Today it is just me, Adam, and Alison Abritis, our part-time researcher. Adam writes a lot of the reports—we split the work between us. I do the newsletter every day. I love it—to me it doesn’t feel like work—but it takes a lot of time and there is a fair amount of stress involved. The constant search for funding is tiring. Today we have no time. I have a new job at Medscape. But somehow, Retraction Watch has become my identity.

Where does Retraction Watch go from here?
The goal now is sustainability. Adam and I can continue to do this for free. That’s how we started—although for a while we received a salary, we’re back to volunteering now. There are other costs too. It’s not that expensive, but the website and the database, even though it looks like it’s from 1988, still require maintenance and sometimes new functions. Mostly, I’d like to be able to pay our researcher a salary. I want to get to a position where we don’t have to be constantly out looking for funding. That’s the goal of every nonprofit organization. I think we are doing something very valuable, so sticking with that would be very good. I would like to expand what we do, have more reporters, and get more into the legal issues, because there is huge growth potential there. Lawyers are getting increasingly involved in this area. We could make a newsletter or a website for them. That would be powerful. The important thing is to be sustainable. If someone else one day wants to take over, I’d be fine with that. I just want the work to continue.
