Tens of thousands of publications from 2025 might include invalid references generated by AI, a Nature analysis suggests.
Earlier this year, computer scientist Guillaume Cabanac received a notification from Google Scholar that one of his publications had been cited in a paper published in the International Dental Journal. That was unexpected, because his research on spotting fabricated papers doesn't typically intersect with dentistry. "I was very surprised to see that I couldn't recognize my own reference," says Cabanac, who is based at the University of Toulouse in France.
The title in the citation resembled that of a preprint he had posted in 2021 and never published formally, but the journal was listed as Nature and the DOI -- the unique identifier assigned by publishers and preprint repositories -- did not lead to the original preprint. "I got very concerned," adds Cabanac, who immediately suspected that the citation had been hallucinated by artificial intelligence.
This is just one example of a rapidly growing problem. Surveys and related studies have shown that researchers are increasingly using large language models (LLMs) to help to conduct literature searches, write manuscripts and format bibliographies. And sometimes, these models generate nonexistent academic references.