Groups representing academic publishers are taking steps to combat paper mills that write the papers and then sell authorship spots
Clinical laboratory professionals rely on peer-reviewed research to keep up with the latest findings in pathology, laboratory medicine, and other medical fields. They should thus be interested in new efforts to combat the presence of “research paper mills,” defined as “profit oriented, unofficial, and potentially illegal organizations that produce and sell fraudulent manuscripts that seem to resemble genuine research,” according to the Committee on Publication Ethics (COPE), a non-profit organization representing stakeholders in academic publishing.
“They may also handle the administration of submitting the article to journals for review and sell authorship to researchers once the article is accepted for publication,” the COPE website states.
In a recent example of how paper mills impact scholarly research, multinational publishing company John Wiley & Sons (Wiley) announced in The Scholarly Kitchen last year that it had retracted more than 1,700 papers published in journals from the company’s Hindawi subsidiary, which specializes in open-access academic publishing.
“In Hindawi’s case, this is a direct result of sophisticated paper mill activity,” wrote Jay Flynn, Wiley’s Executive Vice President and General Manager, Research, in a Scholarly Kitchen guest post. “The extent to which our processes and systems were breached required an end-to-end review of every step in the peer review and publishing process.”
In addition, journal indexer Clarivate removed 19 Hindawi journals from its Web of Science list in March 2023, due to problems with their editorial quality, Retraction Watch reported.
Hindawi later shut down four of the journals, which had been “heavily compromised by paper mills,” according to a blog post from the publisher.
Wiley also announced at that time that it would temporarily pause Hindawi’s special issues publishing program due to compromised articles, according to a press release.
“We urgently need a collaborative, forward-looking and thoughtful approach to journal security to stop bad actors from further abusing the industry’s systems, journals, and the communities we serve,” Flynn wrote in his Scholarly Kitchen article. “We’re committed to addressing the challenge presented by paper mills and academic fraud head on, and we invite our publishing peers, and the many organizations that work alongside us, to join us in this endeavor.”
Using AI to Detect Paper Mill Submissions
Wiley acquired Hindawi in 2021 in a deal valued at $298 million, according to a press release, but the subsidiary has since become a financial drain for the company.
The journals earn their revenue by charging fees to authors. But in fiscal year 2024, which began last fall, “Wiley expects $35-40 million in lost revenue from Hindawi as it works to turn around journals with issues and retract articles,” Retraction Watch reported, citing an earnings call.
Wiley also revealed that it would stop using the Hindawi brand name and bring the subsidiary’s remaining journals under its own umbrella by the middle of 2024.
Wiley subsequently announced a new Papermill Detection service. The service will incorporate tools to detect signs that submissions originated from paper mills, including similarities with “known papermill hallmarks” and use of “tortured phrases” indicating that passages were translated or paraphrased by AI-based language models, according to a press release.
These tools include:
Papermill Similarity Detection: Checks for known papermill hallmarks and compares content against existing paper mill papers.
Problematic Phrase Recognition: Flags unusual alternatives to established terms.
Unusual Publication Behavior Detection: Identifies irregular publishing patterns by paper authors.
Researcher Identity Verification: Helps detect potential bad actors.
Gen-AI Generated Content Detection: Identifies potential misuse of generative AI.
Journal Scope Checker: Analyzes the article’s relevance to the journal.
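To illustrate the “tortured phrases” idea behind tools like Problematic Phrase Recognition, the toy Python sketch below matches text against a small lookup of machine-paraphrase substitutes documented in research-integrity reports (for example, “counterfeit consciousness” in place of “artificial intelligence”). The function name and phrase list are illustrative assumptions, not Wiley’s actual implementation; production systems rely on far more sophisticated statistical and language-model techniques.

```python
# Illustrative sketch only -- not Wiley's Papermill Detection service.
# A "tortured phrase" is an unusual substitute for an established
# technical term, often a sign of machine-paraphrased text.

# A few tortured phrases reported in the research-integrity literature,
# mapped to the standard terms they replace.
TORTURED_PHRASES = {
    "counterfeit consciousness": "artificial intelligence",
    "profound learning": "deep learning",
    "irregular backwoods": "random forest",
    "bosom malignancy": "breast cancer",
}

def flag_tortured_phrases(text: str) -> list[tuple[str, str]]:
    """Return (tortured phrase, standard term) pairs found in the text."""
    lowered = text.lower()
    return [(phrase, standard)
            for phrase, standard in TORTURED_PHRASES.items()
            if phrase in lowered]

hits = flag_tortured_phrases(
    "We apply counterfeit consciousness and profound learning to imaging."
)
print(hits)
```

A real screening pipeline would pair lookups like this with statistical anomaly detection, since paraphrasing tools produce endless variants that no fixed list can capture.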
The company said that the new service will be available through Research Exchange, Wiley’s manuscript submission platform, as early as next year.
Other Efforts to Spot Paper Mill Submissions
Previously, STM (the International Association of Scientific, Technical, and Medical Publishers) announced the launch of the STM Integrity Hub, with a mission “to equip the scholarly communication community with data, intelligence, and technology to protect research integrity,” Program Director Joris van Rossum, PhD, told The Scholarly Kitchen.
In 2023, the group announced that the hub would integrate Papermill Alarm from Clear Skies, a paper mill detection tool launched in 2022 with a focus on cancer research. It uses a “traffic-light rating system for research papers,” according to a press release.
In an announcement about the launch of Wiley’s Papermill Detection service, Retraction Watch suggested that one key to addressing the problem would be to reduce incentives for authors to use paper mills. Those incentives boil down to the pressure placed on many scientists, clinicians, and students to publish manuscripts, according to the research report from STM and COPE.
In one common scenario, the report noted, a paper mill will submit a staff-written paper to multiple journals. If the paper is accepted, the company will list it on a website and offer authorship spaces for sale.
“If a published paper is challenged, the ‘author’ may sometimes back down and ask for the paper to be retracted because of data problems, or they may try to provide additional supporting information including a supporting letter from their institution which is also a fake,” the report noted.
All of this serves as a warning to pathologists and clinical laboratory professionals to carefully evaluate the journals that publish the healthcare and laboratory medicine studies they rely on.
About half of nearly 7,000 respondents admitted to sloppy practices, which suggests that pathologists and clinical lab professionals may want to be skeptical about the findings of many papers published in medical journals
It may surprise pathologists and medical laboratory professionals to learn that more than 10% of surveyed medical and life-science researchers admitted to falsifying or fabricating data at least once! This was one finding of the Dutch National Survey on Research Integrity (NSRI), a study conducted to gauge the quality and accuracy of scientific papers that are published in journals.
In its coverage of the NSRI’s findings, Nature wrote, “Between October and December 2020, study authors contacted nearly 64,000 researchers at 22 universities in the Netherlands, 6,813 of whom completed the survey.”
According to Nature, “An estimated 8% of scientists who participated in an anonymous survey of research practices at Dutch universities confessed to falsifying and/or fabricating data at least once between 2017 and 2020. More than 10% of medical and life-science researchers admitted to committing this type of fraud, the survey found.”
Gowri Gopalakrishna, PhD, an epidemiologist and public health policy scientist with the Amsterdam University Medical Center (AUMC) who helped lead the NSRI study, “thinks that the percentage of researchers who confessed to falsifying or fabricating data could be an underestimate,” Nature reported.
Thousands of Researchers Admit to ‘Questionable Research Practices’
The NSRI, conducted online, received responses from nearly 7,000 academics and researchers across a wide range of disciplines. About half admitted to engaging in “questionable research practices” (QRPs), 4.3% admitted to fabrication of data, and 4.2% admitted to falsification of data.
The NSRI presented its survey results in two preprints.
The NSRI study authors wrote that QRPs included “subtle trespasses such as not submitting valid negative results for publication, not reporting flaws in study design or execution, selective citation to enhance one’s own findings and so forth.”
Survey organizers originally sought responses from more than 60,000 researchers, but “many institutions refused to cooperate for fear of negative publicity,” Science reported.
The authors cited “publication pressure,” otherwise known as the “publish or perish” reward system, as the top factor driving questionable research practices. Respondents were “less likely” to engage in questionable research practices, data falsification, or fabrication if they subscribed to scientific norms and perceived a high likelihood of being detected.
According to the NSRI findings, within academic ranks PhD candidates and junior researchers were “most likely” to engage in QRPs, as were male researchers and those involved in empirical research.
Tracking Retractions
Retraction Watch, a blog founded in 2010 by medical journalists Ivan Oransky, MD, and Adam Marcus, offers a day-to-day barometer on research integrity. As the name indicates, the blog tracks research studies that have been retracted due to scientific misconduct or other reasons. In 2018, the bloggers launched a searchable database with more than 18,000 papers or conference abstracts that had been retracted.
An analysis by Science, titled, “What a Massive Database of Retracted Papers Reveals about Science Publishing’s ‘Death Penalty’,” looked at about 10,500 retracted journal articles in the database. It found that about half of those retractions involved scientific misconduct, including fabrication, falsification, and plagiarism. Nearly 40% were withdrawn “because of errors, problems with reproducibility, and other issues,” the analysis noted.
The data also indicates that a relatively small number of authors—about 500—accounted for about 25% of the retractions in journals.
In addition to the blog, Oransky and Marcus penned a column for STAT, titled “The Watchdogs,” in which they called attention to scientific misconduct and suggested solutions, such as the use of statistical analysis to identify data fabrication in advance of publication.
Tips From a Media Watchdog
Gary Schwitzer, founder and Publisher of HealthNewsReview.org, a media watchdog website, offers additional insights. Schwitzer is a longtime medical journalist who also taught health journalism and media ethics at the University of Minnesota.
“Not all studies are the same and no study should necessarily be equated with the truth,” Schwitzer said in a video embedded on the website. People “often lose sight of the fact that journals were meant to be forums for discussions among scientists, not a source of daily news.”
The website also includes a tip sheet for evaluating claims about medical tests.
The NSRI’s research is the latest in a long line of studies into the integrity of published science, some of which found “cooked” data and outright fraud. This suggests that pathologists and clinical laboratory professionals should follow the saying caveat emptor (“let the buyer beware”) when absorbing research published in scientific journals or presented at meetings.