While the tech industry floors the pedal on AI, the U.S. public would be happy to hit the brakes.
The AI industry unleashed a torrent of major announcements this week, accelerating the race to control how humans search, create and ultimately integrate AI into the fabric of everyday life.
-- Axios (@axios.com) May 24, 2025 at 9:50 AM
Another view of AI ...
OpEd: Some signs of AI model collapse begin to reveal themselves. Prediction: General-purpose AI could start getting worse (www.theregister.com)
I use AI a lot, but not to write stories. I use AI for search. When it comes to search, AI, especially Perplexity, is simply better than Google.
Ordinary search has gone to the dogs. Maybe as Google goes gaga for AI, its search engine will get better again, but I doubt it. In just the last few months, I've noticed that AI-enabled search, too, has been getting crappier.
In particular, I'm finding that when I search for hard data such as market-share statistics or other business numbers, the results often come from bad sources. Instead of stats from 10-Ks, the annual financial reports the US Securities and Exchange Commission (SEC) requires from public companies, I get numbers from sites purporting to summarize business reports. These bear some resemblance to reality, but they're never quite right. If I specify that I want only 10-K results, it works. If I just ask for financial results, the answers get ... interesting.
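Of course, you can still skip the middleman and go to the source yourself. Here's a minimal sketch using SEC EDGAR's public JSON API to list a company's actual 10-K filings rather than a bot's paraphrase of them (Apple's CIK is used as the example; the endpoint and response layout are as I understand the documented data.sec.gov service, so treat the details as assumptions):

```python
import requests

# Sketch: pull real 10-K filings straight from SEC EDGAR instead of
# trusting an AI summary site. CIK 0000320193 is Apple; the SEC asks
# callers to identify themselves via the User-Agent header.
CIK = "0000320193"
resp = requests.get(
    f"https://data.sec.gov/submissions/CIK{CIK}.json",
    headers={"User-Agent": "you@example.com"},  # replace with your contact
    timeout=30,
)
resp.raise_for_status()
recent = resp.json()["filings"]["recent"]

# EDGAR returns parallel arrays; walk them and keep only 10-K filings.
for form, date, accession, doc in zip(
    recent["form"], recent["filingDate"],
    recent["accessionNumber"], recent["primaryDocument"],
):
    if form == "10-K":
        acc = accession.replace("-", "")
        print(f"{date}  https://www.sec.gov/Archives/edgar/data/{int(CIK)}/{acc}/{doc}")
```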
This isn't just Perplexity. I've done the exact same searches on all the major AI search bots, and they all give me "questionable" results.
Welcome to Garbage In/Garbage Out (GIGO). Formally, in AI circles, this is known as AI model collapse: AI systems trained on their own outputs gradually lose accuracy, diversity, and reliability. This occurs because errors compound across successive model generations, leading to distorted data distributions and "irreversible defects" in performance. The final result? As a 2024 Nature paper put it, "The model becomes poisoned with its own projection of reality."
Model collapse is the result of three factors. The first is error accumulation, in which each model generation inherits and amplifies flaws from previous versions, causing outputs to drift from the original data patterns. Next comes the loss of tail data: rare events are erased from the training data until, eventually, entire concepts are blurred. Finally, feedback loops reinforce narrow patterns, creating repetitive text or biased recommendations.
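You can watch the first two factors play out in a toy simulation. This sketch (mine, not from any of the papers) refits a simple Gaussian model to samples drawn from the previous generation's fit; with a finite training set, the fitted spread tends to shrink generation over generation, and the tails die first:

```python
import numpy as np

# Toy model collapse: each generation "trains" (fits a Gaussian) on
# samples from the previous generation's model instead of on real data.
rng = np.random.default_rng(0)
n_samples = 20            # a small training set exaggerates the effect
mean, std = 0.0, 1.0      # generation 0: the "real" data distribution

for generation in range(1, 21):
    data = rng.normal(mean, std, n_samples)  # previous model's output
    mean, std = data.mean(), data.std()      # refit on that output
    print(f"gen {generation:2d}: mean={mean:+.3f} std={std:.3f}")

# Any single run is noisy, but the expected variance shrinks by a factor
# of (n-1)/n every generation, so the fitted distribution narrows and
# drifts: error accumulation plus loss of tail data in miniature.
```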
I like how the AI company Aquant puts it: "In simpler terms, when AI is trained on its own outputs, the results can drift further away from reality."
I'm not the only one seeing AI results go downhill. In a recent Bloomberg Research study of Retrieval-Augmented Generation (RAG), the financial media giant found that 11 leading LLMs, including GPT-4o, Claude-3.5-Sonnet, and Llama-3-8B, produced bad results when run against more than 5,000 harmful prompts. ...
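For the unfamiliar, RAG bolts a retrieval step onto the model: the LLM answers from whatever documents the retriever hands it. Here's a minimal sketch of the idea (my toy illustration, not Bloomberg's setup, with a made-up two-document corpus), which also makes the GIGO problem concrete, since the answer can only be as good as the retrieved context:

```python
# Minimal RAG loop: retrieve relevant documents, then generate an answer
# conditioned on them. Garbage retrieval in, garbage generation out.

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    # Naive keyword-overlap retriever; real systems use vector search.
    def score(doc: str) -> int:
        return len(set(query.lower().split()) & set(doc.lower().split()))
    return sorted(corpus, key=score, reverse=True)[:k]

def answer(query: str, corpus: list[str]) -> str:
    context = "\n".join(retrieve(query, corpus))
    # Stand-in for the LLM call: whatever the model says, it is
    # conditioned on `context`, good or bad.
    return f"Answering {query!r} from context:\n{context}"

corpus = [
    "Apple's 10-K reports fiscal 2023 net sales of $383.3 billion.",      # primary source
    "Blog post: Apple probably made around $400 billion, give or take.",  # sloppy summary
]
print(answer("What were Apple's fiscal 2023 net sales?", corpus))
```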