
Communication

Warnings left by users on social media posts containing misinformation have little effect

A study conducted in Brazil, India, and the United Kingdom evaluated data from 3,000 people


When people come across a post containing misinformation on social media, is it effective to leave a comment warning others that it’s fake news? A paper published in July in the Harvard Kennedy School Misinformation Review found that fact-checking corrections provided by regular users have only a limited effect on people who already believe false claims about COVID-19. When those corrections include a link to verified information from a news outlet, the impact increases slightly, depending on the country. The authors emphasize the need for institutional strategies and functional improvements to the platforms themselves to make countermeasures against misinformation more effective.

The study surveyed 3,000 people in Brazil, India, and the United Kingdom, asking them to evaluate Facebook posts about COVID-19. One thousand participants in each country answered a questionnaire after being shown real-world misinformation posts that were making the rounds when the data were collected, in March 2021. Some of the posts promoted chloroquine—even though large clinical trials had already ruled out its effectiveness—or downplayed the severity of the health emergency.

Participants in each country were divided into three subgroups of roughly 330 people each. The first, a control group, saw three true posts and six false ones—with no comments attached to any of them. The second group saw the same set of posts, except that in some cases, researchers added a mock user comment flagging the post as false. In the third group, those warning comments also included a link to a news organization’s verified information. Although the links weren’t clickable, they showed a preview with the article’s headline and the name of the outlet.

Participants then rated how accurate each post seemed—choosing among “not accurate,” “somewhat accurate,” “reasonably accurate,” or “very accurate.” Each response was scored on a scale from 0 to 3. Using those scores, the researchers calculated an average “misinformation belief level” for each country, starting with the control group. In the UK, the average score was 0.83—indicating the lowest belief in misinformation. In Brazil, it was 1.03, and in India, 1.68, the highest of the three.

The researchers then repeated the calculation for the groups that saw fact-checking comments. In Brazil, the effects were mild. Compared with the control group, belief in misinformation dropped by 6.9% among participants who saw fact-checking comments with links and by 5.7% among those who saw comments without links. In India, though, linked comments had a stronger effect: they reduced belief in misinformation by roughly 10% compared with the control group. Comments without links, by contrast, showed no meaningful impact. In the UK, differences among the three groups were minimal, since the control group was already highly skeptical of misinformation.

Another phase of the experiment tested whether fact-checking comments could reduce participants’ willingness to share false COVID-19 posts. The setup mirrored the first experiment: participants rated how likely they were to share the same posts. Responses were scored on a 0–3 scale, where 0 meant “not at all likely,” 1 “unlikely,” 2 “somewhat likely,” and 3 “very likely.” In the control group, the average likelihood to share misinformation posts was lowest in the UK (0.47), followed by Brazil (0.71) and India (1.61).


In Brazil, exposure to fact-checking comments reduced people’s willingness to share false posts by 7.4% to 11.2% compared with the control group. In India, unlinked comments had only a minor effect (a 5.7% drop), while those that included a link to a trusted information source reduced people’s willingness to share false COVID-19 content by 11.6%.

“These prompts work to some degree—but their impact depends heavily on each country’s context,” says Camila Mont’Alverne, a professor at the University of Strathclyde in Scotland and one of the study’s authors. “That’s why social and economic differences need to be factored in when designing strategies against misinformation.” Mont’Alverne suggests that social media platforms should give users more tools to help preserve information integrity. “Labels are one example—short text tags or visual cues that help users spot whether a post is credible or false,” she says. “Platforms used them for a while during the pandemic, but later pulled back.”

Raquel Recuero, a researcher at the Federal University of Pelotas (UFPEL) who was not involved in the study, argues that these measures are unlikely to solve the problem, since misinformation is a systemic issue. “People are bombarded with falsehoods across multiple channels,” she explains. “Someone who already doubts vaccine efficacy might see a video of a foreign health official claiming vaccines aren’t safe, then get a message from a neighbor repeating the same false claim. All that reinforcement strengthens their belief,” says Recuero, who authored the book A rede da desinformação: Sistemas, estruturas e dinâmicas nas plataformas de mídias sociais (The misinformation network: Systems, structures, and dynamics on social media platforms; Editora Sulina, 2024). “That’s why a warning comment or a fact-checking link, while useful, can only go so far amid the sheer volume of content moving through social media,” she adds.

“This study reminds us that efforts to combat misinformation must be multilayered,” says Dayane Machado, a doctoral researcher at the Department of Science and Technology Policy at the University of Campinas (UNICAMP), who studies health misinformation and was not involved in the study. “We can’t rely on just one approach.” Machado coauthored a 2020 study in Frontiers in Communication that examined Brazilian YouTube creators spreading false information about vaccines. The research found that YouTube was helping fuel the spread of misinformation by allowing those videos to be monetized—bringing in ad revenue for both the creators and the platform itself. She argues that the burden of correcting misinformation shouldn’t fall on individual users.

Recuero agrees, saying the problem demands a coordinated set of actions. One approach, she says, is offline engagement—working directly with trusted local actors like community health agents, who can share accurate information credibly—as long as they are not themselves misinformed. “In a study we conducted in Maranhão, we found that even health agents had doubts about vaccine effectiveness—because they, too, were being bombarded with misinformation,” she recalls. She also highlights the value of coalitions of universities, government agencies, and civil society groups to tackle misinformation in specific fields through coordinated action.

In such a complex landscape, is it still worth commenting to fact-check falsehoods on social media? “Absolutely,” Recuero says. “At the very least, you might discourage someone from sharing that lie.”

The story above was published under the title “A warning lost in the noise” in issue 356 (October 2025).
