Despair-Inducing Analysis Shows AI Eroding the Reliability of Science Publishing

News Room
Last updated: January 26, 2026 10:26 am

It’s almost impossible to overstate the importance and impact of arXiv, the science repository that, for a time, almost single-handedly justified the existence of the internet. ArXiv (pronounced “archive” or “Arr-ex-eye-vee” depending on who you ask) is a preprint repository, where, since 1991, scientists and researchers have announced “hey I just wrote this” to the rest of the science world. Peer review moves glacially, but is necessary. ArXiv just requires a quick once-over from a moderator instead of a painstaking review, so it adds an easy middle step between discovery and peer review, where all the latest discoveries and innovations can—cautiously—be treated with the urgency they deserve more or less instantly.

But the use of AI has wounded ArXiv and it’s bleeding. And it’s not clear the bleeding can ever be stopped.

As a recent story in The Atlantic notes, ArXiv creator and Cornell information science professor Paul Ginsparg has been fretting since the rise of ChatGPT that AI can be used to breach the slight but necessary barriers preventing the publication of junk on ArXiv. Last year, Ginsparg collaborated on an analysis that looked into probable AI use in arXiv submissions. Rather horrifyingly, scientists evidently using LLMs to generate plausible-looking papers were more prolific than those who didn’t use AI: the number of papers from posters of AI-written or AI-augmented work was 33 percent higher.

AI can be used legitimately, the analysis says, for things like surmounting the language barrier. It continues:

“However, traditional signals of scientific quality such as language complexity are becoming unreliable indicators of merit, just as we are experiencing an upswing in the quantity of scientific work. As AI systems advance, they will challenge our fundamental assumptions about research quality, scholarly communication, and the nature of intellectual labor.”

It’s not just ArXiv. It’s a rough time for the reliability of scholarship in general. An astonishing self-own published last week in Nature described the AI misadventure of a bumbling scientist working in Germany named Marcel Bucher, who had been using ChatGPT to generate emails, course information, lectures, and tests. As if that weren’t bad enough, ChatGPT was also helping him analyze responses from students and was being incorporated into interactive parts of his teaching. Then one day, Bucher tried to “temporarily” disable what he called the “data consent” option, and when ChatGPT suddenly deleted all the information he was storing exclusively in the app—that is, on OpenAI’s servers—he whined in the pages of Nature that “two years of carefully structured academic work disappeared.”

Widespread, AI-induced laziness on display in the exact area where rigor and attention to detail are expected and assumed is despair-inducing. It was safe to assume there was a problem when the number of publications spiked just months after ChatGPT was first released, but now, as The Atlantic points out, we’re starting to get the details on the actual substance and scale of that problem—not so much the Bucher-like, AI-pilled individuals experiencing publish-or-perish anxiety and hurrying out a quickie fake paper, but industrial-scale fraud.

For instance, in cancer research, bad actors can prompt for boring papers that claim to document “the interactions between a tumor cell and just one protein of the many thousands that exist,” The Atlantic notes. If the paper claims to be groundbreaking, it’ll raise eyebrows, meaning the trick is more likely to be noticed, but if the fake conclusion of the fake cancer experiment is ho-hum, that slop will be much more likely to see publication—even in a credible journal. All the better if it comes with AI-generated images of gel electrophoresis blobs that are also boring but lend additional plausibility at first glance.

In short, a flood of slop has arrived in science, and everyone has to get less lazy, from busy academics planning their lessons, to peer reviewers and ArXiv moderators. Otherwise, the repositories of knowledge that used to be among the few remaining trustworthy sources of information are about to be overwhelmed by the disease that has already—possibly irrevocably—infected them. And does 2026 feel like a time when anyone, anywhere, is getting less lazy?

