Dutch epidemiologist Lex Bouter is one of the world’s leading experts on research integrity. Since 2013, he has been developing studies on the subject, seeking to assess researchers’ involvement in serious cases of misconduct and questionable practices. The scholar is also dedicated to teaching responsible conduct aligned with the precepts of open science, which is marked by scientific collaboration, unrestricted access to knowledge, and freely sharing data.
Bouter was rector of Vrije Universiteit Amsterdam (VU) from 2006 to 2013. In 2017 he became president of the foundation that organizes the World Conferences on Research Integrity (WCRI), which will hold its seventh world conference at the end of May in Cape Town, South Africa. The theme for this year’s congress is “Fostering Research Integrity in an Unequal World.” This marks the first time that the event will be hosted on the African continent.
Bouter spoke with Pesquisa FAPESP in early March, via Zoom. In the following interview, he addresses recent debates and initiatives regarding research integrity, how they have been affected by the pandemic, his expectations for the next WCRI meeting, and more.
You began your career in the fields of medical biology and epidemiology. When did you decide to work in the area of research integrity?
I was the rector of my university for seven years and for 12 years I was a member of the Dutch Central Committee for Research on Human Subjects. I dealt with various cases of misconduct during this period and noticed that many questionable research practices stem from methodological flaws, so when my term as rector ended in 2013, I decided to dedicate myself more fully to this area.
How would you assess the significance the World Conferences on Research Integrity organization has achieved?
The WCRI has been able to bring the various academic actors together around discussions on research integrity. This was essential in enabling us to produce guidelines on how to combat misconduct and promote good practices. An example of this is the Singapore Statement, produced in 2010 during the second WCRI event. It has become a reference document on codes of conduct for research integrity around the world—not that it was created for that purpose, but I’m glad it helped guide other local initiatives.
The discussions on integrity have evolved in recent years, moving from the individual responsibility of researchers, to the role of institutions in preventing new cases and, recently, to a debate around the reward systems in academic careers. Where should this discussion be headed now?
The upcoming WCRI will offer a new perspective on research integrity. We intend to debate how to incorporate concepts of equality, diversity, and inclusion into the discussions and initiatives in this field. It will also be an opportunity for us to assess the lessons of the pandemic. Scientific research has gone through a period of intense pressure—with so many articles on Covid-19 being published in preprint format—and important changes in the peer review process. We need to discuss what we can learn from these experiences.
Has the prevalence of scientific misconduct been affected by the pandemic?
We hope to get some answers to this in South Africa. The pandemic generated a lot of pressure on researchers and sparked a great public interest in science. I was amazed at the quantity of preprint articles that were publicly reviewed in discussions on social media. This level of scrutiny is welcome as long as the comments are reasonable and constructive. However, many scientists received insults and threats on these platforms because of the studies they were working on. This was a new phenomenon for us.
How would you evaluate the dissemination of preprints?
This format is a great idea, but it involves risks that still need further study. The initial evidence suggests that preprints on Covid-19 research underwent few changes in their final, peer-reviewed versions [see Pesquisa FAPESP nº. 313]. Also, retraction rates remained stable during the pandemic, which is a good sign.
Discussions on research integrity play an important role today in the scientific relationship between developed and developing countries. How can they be encouraged?
We always try to include scholars from low- and middle-income countries in the discussions on research integrity by holding conferences outside Europe and North America. The WCRI event held in 2015 in Rio de Janeiro is one example. I’ve learned from my Brazilian colleagues that it helped give their discussions and initiatives on research integrity some traction. We want to do the same in Africa. We know that their local governments are interested in promoting this issue. Two of our main sponsors for the upcoming conference are the South African Department of Science and Innovation and the National Research Foundation of South Africa. We’re using these resources to fund the participation of researchers from low- and middle-income countries. We want them to present their work at the event. We’re also encouraging the scientists from these countries to participate in the advisory and planning committee for the conferences. Our goal is to be an international organization that’s committed to diversity.
The pressure to increasingly publish in high-impact journals is a major driver of misconduct and questionable research practices
How is Brazil positioned in regard to discussions on research integrity?
Brazil is a developed country in terms of its science, although it’s going through a difficult period. From what I could see, Brazilian researchers have a substantial and growing awareness regarding research integrity. Some academics are developing studies on the subject and several educational initiatives have been launched.
The debate on research integrity has focused in the recent past on how the reward system in academia encourages misconduct. How do you see this relationship?
Evidence suggests that the current reward system, based mainly on publishing lots of articles and receiving numerous citations, tends to encourage dubious and inappropriate conduct. It’s thought that many researchers engage in questionable or fraudulent practices in order to get their studies published in high-impact journals, obtain funding for their research, or be appointed to a particular position at their university—and unfortunately, it often works.
Some experts criticize the use of bibliometric indicators in the evaluation of researchers, arguing that they should be replaced by more comprehensive metrics. Do you agree?
It doesn’t seem very smart to me to base an entire reward system on merely the quantity of articles published and their citations, especially because these indicators aren’t good at determining the quality of academic production. But that doesn’t mean we should abandon these metrics. It would be more reasonable to incorporate them into a system that also takes into account other indicators associated with responsible research practices.
What kind of practices?
If a researcher preregisters their research, if their data is made available in open-access repositories, if they’re a good reviewer, advisor, mentor, teacher, etc. Of course, we shouldn’t overestimate the value of these—or other—indicators, since they all have their limitations. Therefore, they should be used to complement traditional quantitative indicators.
The University of Utrecht recently announced that it would abandon the use of bibliometric indicators in the process of hiring and promoting researchers.
It’s an interesting initiative, but it hasn’t yet proven effective.
You don’t look very convinced.
It was a pretty radical change. Much has been discussed in the Netherlands about so-called “narrative CVs,” in which researchers, instead of highlighting their bibliometric indicators, such as the h-index, present a qualitative description of their contributions, discussing their academic achievements, the impact of their research, their academic advising, etc. These stories can be moving, but it’s very difficult to use these CVs to compare candidates fairly and objectively. We still need quantitative indicators.
What is the role of research-funding agencies in promoting research integrity?
They are important for driving change. After all, researchers need money for their projects. A good example was when funding agencies began to require scientists to submit a data-management and sharing plan along with their funding applications. Everyone complied. However, these changes need to be made cautiously, based on solid evidence and planning. Some agencies in the Netherlands stopped using impact indicators without thoroughly evaluating the consequences of this decision. Now we have to deal with these narrative CVs that I mentioned earlier, which are interesting, but difficult to use in the selection process.
What are the factors that most often lead to misconduct and questionable research practices?
The literature suggests that one important driver of research integrity problems is the pressure that universities and funding agencies put on researchers to publish more and more in high-impact journals. However, we know there are other factors. We’ve seen in recent studies that research advisors and supervisors play an important role in this issue.
In what sense?
We identified two types of mentoring. One is the “survival” type, in which an advisor or supervisor teaches researchers, in the early stages of their careers, all the tricks they need to succeed under the current reward system—in other words, how to publish lots of papers, get lots of citations, acquire funding, etc. Scientists who received this type of training seem to be more involved in questionable research practices, in contrast to their colleagues who received the second, more responsible type of mentoring, which is based on good research practices.
The omission of negative results tends to mask reality and produce biases, with implications for the reproducibility of studies
Which group did better in terms of scientific output and impact?
We didn’t assess the productivity of these two groups.
Do open science practices increase the chances of detecting misconduct?
Yes, because they increase the transparency and trustworthiness of studies. Identifying questionable or fraudulent practices is often only possible by comparing the published study with its preregistered version, in which the authors committed to following a determined protocol before starting data collection. This practice is encouraged with open science.
You’ve highlighted the importance of valuing the disclosure of null or negative results. Why is this relevant?
Omitting negative results ends up overestimating, or overrepresenting, positive findings. What happens is these findings will later be summarized in review papers, implying that they represent the entire body of evidence we have on a given phenomenon, and not just one part of the data. This tends to mask reality and produce biases, with important implications for the reproducibility of studies.
Has the growing concern about reproducibility in science improved discussions of research integrity?
They’re two sides of the same coin. Questionable research practices are among the principal factors responsible for the reproducibility crisis, and this undermines trust in research. Combating these practices means demanding that research be ethically sound and of rigorous methodological quality. This greatly increases the chances of studies being replicated successfully.
You have argued that instead of emphasizing punishment for misconduct, institutions should support a continuing debate on errors and behaviors capable of compromising research integrity. Are there any successful examples of this?
There’s an interesting initiative coordinated by a European consortium that has been compiling the knowledge and best practices in research integrity emerging from universities and laboratories. Part of their collected data is on the project’s website, www.sops4ri.eu, in a section called “toolbox.” However, it’s important to highlight that the effectiveness of these actions hasn’t yet been properly evaluated. There are many education initiatives aimed at research integrity, but we don’t know if they work.
How long will it take to be able to measure the effectiveness of these initiatives?
It depends on what we want to know. It’s simple to estimate the participants’ levels of satisfaction in courses on research integrity, but it’s difficult to measure the effects the initiatives have on problematic conduct. Studies based on the answers given by researchers themselves have their limitations. It is possible to investigate scientists’ attitudes and knowledge regarding these practices via questionnaires, but they’re not good predictors of problem behaviors. It’s like smoking. People know it’s bad for their health, but they still smoke.
In 2018, you participated in a committee responsible for drafting a new code of conduct on research integrity for research institutions in the Netherlands. Why did you decide to replace the previous code?
We wanted to update it, expand the list of recommendations on good research practices, and clarify the distinction between misconduct and less important faults, showing how institutions should proceed for each type of alleged misconduct. We’ve established some criteria to help institutions determine the severity of sanctions, such as intentionality, personal gain, whether the researcher is a repeat offender, whether they’re at the beginning of their career, and so on. The document also presents a list of responsibilities that institutions should follow in terms of training and supervision, ethical standards and procedures, and the promotion of open science, particularly with regard to data management, publication, and disseminating study results.
You had trouble studying research integrity in the Netherlands. Of the more than 40,000 researchers invited to complete an online questionnaire last year, only 21% participated. What was the difficulty?
That’s true, few researchers responded to our questionnaire, although this response rate is right in line with those obtained in other studies. We publicized the survey and highlighted its importance on social networks and in newsletters, but we were unable to increase the number of participants. We hope that more researchers will participate in the future as the community becomes more engaged in the conversation around research integrity.
Why do researchers resist participating in these surveys?
Some don’t want to, or don’t have the time; others don’t believe we’ll actually protect their identity and are afraid their answers would jeopardize their careers. It’s also possible that for some of them, our questionnaire ended up in the spam box. But by and large, these questionnaires force scientists to confront their own attitudes and behaviors. They’re faced with questions like “have you ever fabricated or falsified research results?” It’s not pleasant to admit to yourself that you’re a fraud.