
How Science Fuels a Culture of Misinformation

We tend to blame the glut of disinformation in science on social media and the news, but the problem often starts with the scientific enterprise itself.

By Joelle Renstrom

What's hot and what's not: This map visualizes the relationships between science topics and journals according to user clickstreams. Source: PLOS ONE, made available through Creative Commons.

On November 8, 2021, the American Heart Association journal Circulation published a 300-word abstract of a research paper warning that mRNA Covid vaccines caused heart inflammation in study subjects. An abstract typically summarizes and accompanies the full paper, but this one was published by itself. According to Altmetric, the abstract was picked up by 23 news outlets and shared by more than 69,000 Twitter users. On the basis of that abstract, a video on BrandNewTube, a social media outlet that circumvents YouTube’s anti-misinformation policies, pronounced Covid vaccinations “murder.” Sixteen days later, the American Heart Association added an “expression of concern,” noting that the abstract might not be reliable, and on December 21 it issued a correction that changed the title to indicate that the study did not establish cause and effect, noting that there was neither a control group nor a statistical analysis of the results.

This incident underscores a flaw at the center of the scientific enterprise. It’s all too easy to make outsize claims that sidestep the process of peer review. No publication should carry a standalone abstract, particularly one making such a bold claim, and particularly during a pandemic. But the problem goes much deeper than that: Even scientific papers that have passed through the intended safeguards of peer review can become vectors for confusion and unsubstantiated claims.

As we’ve seen again and again over the past two years, Covid-19 hasn’t been just a viral pandemic, but also a pandemic of disinformation—what the World Health Organization calls an “infodemic.” Many scientists blame social media for the proliferation of Covid-related falsehoods, from the suggestion that Covid could be treated by drinking disinfectants to the insistence that masks don’t help prevent transmission. Facebook, Twitter, TikTok, and other platforms have indeed propagated dangerous misinformation. However, social media is a symptom of the problem more than the cause. Misinformation and disinformation often start with scientists themselves.

Institutions incentivize scientists going for tenure to focus on quantity rather than quality of publications and to exaggerate study results beyond the bounds of rigorous analysis. Scientific journals themselves can boost their revenue when they are more widely read. Thus, some journals may pounce on submissions with juicy titles that will attract readers. At the same time, many scientific articles contain more jargon than ever, which encourages misinterpretation, political spin, and a declining public trust in the scientific process. Addressing scientific misinformation requires top-down changes to promote accuracy and accessibility, starting with scientists and the scientific publishing process itself.

Universities want their scientists to win prestigious grants and funding, and to do that, the research has to be flashy and boundary-pushing.

The history of the scientific journal goes back hundreds of years. In 1731 the Royal Society of Edinburgh launched the first fully peer-reviewed publication, Medical Essays and Observations, initiating what has become the gold standard of credibility: vetting by experts. In the traditional model, scientists conduct original research and write up their findings and methodology, including data, tables, images, and any other relevant information. They submit their article to a journal, whose editors send it to other experts in the field for review. Those peer reviewers evaluate the scientific soundness of the study and advise the journal editors whether to accept it. Editors may also ask authors to revise and resubmit, a process that takes anywhere from weeks to months.

By 2010, most traditional scientific journals also had digital counterparts. The “open access” movement makes roughly one-third of those available to the public for free. Meanwhile, the number of scientific journals and of published papers increased dramatically, and most academic institutions established a presence on social media to help promote the work of their researchers.

In this new world, scientific journals and scientists compete for clicks just like mainstream publications. Articles that are downloaded, read, and shared the most earn high Altmetric Attention Scores, and heavily cited articles drive up a journal’s “impact factor.” Studies show that people are more likely to read and share articles with short, positively worded, or emotion-invoking titles.

The rating system can’t help but affect scientists’ publications and their careers. “Many [scientists] are required to achieve certain metrics in order to progress their career, obtain funding, or even keep their jobs,” according to Ph.D. candidate and researcher Benjamin Freeling of the University of Adelaide, who was lead author of a study on the topic, published in the Proceedings of the National Academy of Sciences in 2019. “There’s less room for a scientist to work on a scientific question of immense importance to humanity if that question won’t lead to a particular quantity of publications and citations,” he wrote in an email to OpenMind. Valuing exposure above the scientific process incentivizes sloppy and unethical practices and illustrates Goodhart’s Law, named for the British economist Charles Goodhart: “When a measure becomes a target, it ceases to be a good measure.”

University of Washington data scientist Jevin West, who studies the spread of misinformation, says that university public relations offices responsible for press releases and other media interactions “also play a role in the hype machine.” Universities want their scientists to win prestigious grants and funding, and to do that, “the research has to be flashy and boundary-pushing.” PR offices may add to that flash by exaggerating the certainty or implications of findings in press releases, which are routinely published almost verbatim in media outlets.

Many reporters don’t distinguish between unvetted preprints and formally published papers; to casual web sleuths, the two can appear nearly the same.

The demand for headline-worthy publications has led to a surge in research studies that can’t be replicated. A reproducibility project run by the Center for Open Science found that only 26 percent of the top cancer studies published between 2010 and 2021 could be replicated, often because the original papers lacked details about data and methodology. Compounding the problem, researchers cite these nonreplicable studies more often than those that can be replicated, perhaps because they tend to be more sensational and therefore get more clicks.

Most readers, including journalists, can’t discern the quality of the science. Yet it’s “taken forever for the publishing community to provide banners on the original papers” to signal they “might not reach the conclusion readers think,” West says. Tentative or unsubstantiated claims can have profound social impacts. West references a one-paragraph letter written by two physicians and published in the New England Journal of Medicine in 1980, which he regards as largely responsible for the current opioid crisis. The authors asserted that “addiction is rare in patients treated with narcotics,” but they provided no supporting evidence.

It took 37 years before the New England Journal of Medicine added an editorial note warning that the letter had been “heavily and uncritically cited,” but neither a warning nor a retraction can put the misinformation genie back in the bottle, especially given the letter’s decades-long influence on narcotics prescriptions. What’s more, readers can still access misleading studies, and researchers continue to cite them even after they’ve been retracted, either because they don’t know about the retraction or because they don’t care.

The rise of preprints, scientific papers that have yet to be peer-reviewed, has generated further debate about the proper way to communicate scientific research. Some people celebrate preprints as a way to invite advance feedback and disseminate findings faster. Others argue that so much unvetted material adds to the misinformation glut.

Preprints accounted for roughly 25 percent of Covid-19–related studies published in 2020. Of those preprints, 29 percent were cited at least once in mainstream news articles. Take the infamous example of ivermectin, a drug developed for treating parasitic infections. A preprint touting its efficacy in treating Covid-19 patients appeared on the Social Science Research Network (SSRN) server in April 2021, prompting widespread interest in and approval of the drug, including by governments in Bolivia, Brazil, and Peru. As people began taking ivermectin to treat or prevent Covid-19, scientists expressed concern about the data used in the preprint—data supplied by Surgisphere, a health-care analytics company whose unreliable data had previously led to retractions of papers in The Lancet and The New England Journal of Medicine. The paper was removed from SSRN, and shortly thereafter Surgisphere shut down its website and disappeared.

The now-removed preprint paper inserted ivermectin directly into the political spin machine. What’s more, the hype and drama surrounding the drug obscured the critical uncertainty about whether it could actually treat or prevent Covid-19. (A subsequent study suggests that if taken soon after diagnosis, ivermectin can help prevent serious illness.) It also distracted from the important fact that the efficacy of any drug depends on timing, dosage, and other health and safety factors that people shouldn’t try to determine on their own.

Approximately 70 percent of preprint literature is eventually peer-reviewed and published, but what about all the rest, which never become anything more than preprints? Many reporters don’t distinguish between unvetted preprints and formally published papers; to casual web sleuths, the two can appear nearly the same. When unsubstantiated findings guide personal behaviors and policies, even a small number of faulty studies can have significant impact. A team of international researchers found that when first-draft results are shared widely, “it can be very difficult to ‘unlearn’ what we thought was true”—even when the drafts are amended later on.

Unlearning falsehoods is especially challenging given today’s oversaturated news cycle. Online news aggregators syndicate local and national publications and present readers with an endless barrage of information via notifications and emails. In this context, it’s hardly surprising that readers tend to click on splashy headlines and articles that confirm their preexisting beliefs. “Science is embedded in an information ecosystem that encourages clickbait and facilitates confirmation bias,” West says.

And when people try to explore the research behind the headlines, they run into barriers: Scientific articles are becoming increasingly hard to understand as researchers pack them with more jargon than ever. A group of Swedish researchers who evaluated scientific abstracts written between 1881 and 2015 found a steady decrease in readability over time. By 2015, more than 20 percent of scientific abstracts required a post-college reading level. A big issue is the heavy use of acronyms; as of 2019, 73 percent of scientific abstracts contained them. Scientists themselves sometimes avoid citing papers rife with jargon because not even they can confidently parse it. We’ve all heard of “legalese,” but “science-ese” can be similarly inscrutable and alienating to readers.

10 years ago, the debate was around whether scientists should spend their time engaging with the public. Now the question is how to do it.

Addressing the science-driven misinformation problem will require a “profound restructuring of how the science ‘industry’ works,” Benjamin Freeling says. One recommendation is for journals to help readers see a preprint as a work in progress, not an end result. Critical care physician Michael Mullins, editor-in-chief of Toxicology Communications, referred to a 2020 paper about the effects of hydroxychloroquine on Covid patients that appeared on the preprint server medRxiv and was published in the International Journal of Antimicrobial Agents that same day, without undergoing peer review. Many people (including the president of the United States) regarded the study as complete, underscoring the danger of scientists using preprints to circumvent peer review.

One change that statistician Daniel Lakens of Eindhoven University of Technology advocates is a system of “registered reports,” in which peer reviewers evaluate and accept a study’s design, methodology, and statistical plan before any data are collected. The paper is then published once the final data and analysis are complete, regardless of what the data ultimately show. Registered reports would combat the tendency to publish papers with greater potential for publicity and clicks because publication would revolve not around the outcome but around the process. In 2020 the journal Royal Society Open Science initiated a rapid Registered Report system that allows for ongoing documented revisions. Many other journals have followed suit, attempting to balance the need for a faster review process with the need for accuracy. If publication hinged on process rather than outcome or the potential for clicks, scientists could focus on producing better science.

As for the academic public relations machine, Jevin West believes scientists should be held accountable for the text in university press releases. Carl Bergstrom, a biologist at the University of Washington who is active in public outreach, suggests that scientists sign off on press releases before they’re sent out, putting those releases through their own form of scientific review.

Scientists aren’t responsible for the critical thinking skills of the average reader or the revenue models of journals, but they should recognize how they contribute to the spread of misinformation. To address the jargon problem, scientists could use fewer acronyms and include “lay summaries,” also known as plain-language summaries. Some publications now require these, but they could go further by requiring glossaries of technical terms and acronyms, jargon cheat sheets, or other types of decoders necessary for understanding a study, especially for open-access and preprint articles. Freeling’s advice is more blunt: “Try better writing.”

Scientists can also communicate more effectively with the public by harnessing social media. Freshwater ecologist Lauren Kuehne, who devotes much of her work to science communication, advocates informative blog posts, Twitter threads, TikTok videos, and public talks as ways to build relationships with the public. But open communication comes with its own issues, especially balancing a desire for influence with trustworthiness. Organizations such as the American Association for the Advancement of Science (AAAS) offer workshops and communication tool kits on effective public science communication, but scientists have to pursue that information on their own. The good news, says Kuehne, is that “10 years ago, the debate was around whether scientists should spend their time engaging with the public,” whereas now the question isn’t “whether it’s important, but how to do it.”

Direct public engagement is the best way to help people understand that even the most canonized scientific facts once were subject to debate. Making the scientific process more transparent will expose flaws and may even beget controversy, but ultimately it will allow scientists to strengthen error-correcting mechanisms as well as build public trust.

That science works despite the problems noted here is, as Bergstrom puts it, “amazing.” But the ability of science to transcend flaws in the system shouldn’t be amazing—it should be standard. Let’s save our amazement for the discoveries that emerge because of the scientific enterprise, not in spite of it.

June 2, 2022

Joelle Renstrom teaches writing and research at Boston University. Her work has appeared in Slate, The Guardian, Aeon, and Undark. She is the author of the essay collection Closing the Book: Travels in Life, Loss, and Literature (2015).

Editor’s Note

Fake science news has alarmed those of us committed to accurate information on the climate crisis, Covid, women's health, and a host of other issues critical to the survival of humans and the living world. In this reported essay, Joelle Renstrom, a lecturer in rhetoric at Boston University, points to a surprising culprit: science itself. From journals seeking clicks to scientists seeking tenure to institutes seeking the spotlight, from too much jargon to lack of replicability to unreviewed preprints, the scientific enterprise has been yet another source of misinformation fueling confusion on the ground. Renstrom's analysis of the problem, and her suggested solutions, are eye-opening and important to anyone concerned by the damaging impact of science misunderstood.

Pamela Weintraub, co-editor, OpenMind
