Drudge Retort: The Other Side of the News
Monday, April 06, 2026

Shady, shifty, unethical chatbot behavior is rising fast, and now we know why. Call it the No Body Problem.

You can't trust AI.


Admin's note: Participants in this discussion must follow the site's moderation policy. Profanity will be filtered. Abusive conduct is not allowed.

More from the OpEd...

... Even an information-obsessed, tech-savvy person such as yourself might be forgiven for believing that AI chatbots are on a smooth path of improvement with each passing month. But when it comes to their trustworthiness, that belief is dead wrong.

New research by the UK government-backed Centre for Long-Term Resilience (CLTR) found a fivefold increase in AI misbehavior over a recent six-month period. That's how fast AI chatbots are turning against us, according to the research.

Specifically, the chatbots are ignoring direct commands, lying, destroying data, deploying other AIs to bypass safety rules without users' knowledge, mocking and insulting users, and breaking rules and laws.

Of course, framing this as lying, cheating and stealing means applying human psychological frameworks to what are really mathematical optimization processes. It falsely assumes that AI models have intent, malice, self-awareness, and an understanding of "truth" that they're choosing to violate. What's actually happening is that the models are predicting the most statistically probable sequence of tokens based on context and training, not carrying out some dastardly scheme.
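To make that point concrete, here is a deliberately toy sketch (not any real model's API, and the probability table is invented for illustration) of what "predicting the most probable next token" amounts to: a lookup over learned probabilities followed by a pick, with no intent anywhere in the loop.

```python
def next_token(context, prob_table):
    """Return the most probable next token for a context, per a toy table."""
    candidates = prob_table.get(context, {})
    if not candidates:
        return None
    # Greedy decoding: emit whichever token has the highest probability.
    # The model "chooses" nothing; this is arithmetic over a table.
    return max(candidates, key=candidates.get)

# Hypothetical probabilities standing in for what training would produce.
toy_table = {
    "the cat sat on the": {"mat": 0.62, "floor": 0.21, "moon": 0.01},
}

print(next_token("the cat sat on the", toy_table))  # -> mat
```

If the training data rewards a misleading continuation more than a truthful one, the same arithmetic emits the misleading one; that is the sense in which "lying" is an optimization artifact rather than a scheme.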

Still, it's a problem we users need to be aware of and that the chatbot companies need to fix.

Unlike parallel research, which found what feels like sneaky, unethical behavior by chatbots in laboratory simulations, the CLTR research looked at incidents in the real world. The study identified nearly 700 cases where AI broke the rules, lied or cheated.

Here are just three examples from the research: ...

[emphasis mine]


#1 | Posted by LampLighter at 2026-04-06 07:26 PM | Reply

OpEd: Why AI lies, cheats and steals

"Commerce is our goal, here. More human than human."

#2 | Posted by censored at 2026-04-06 08:35 PM | Reply

OpEd: Why AI lies, cheats and steals

Because it's emulating human behavior.

#3 | Posted by snoofy at 2026-04-06 08:36 PM | Reply

Because it's successfully emulating profitable human behavior.

#4 | Posted by snoofy at 2026-04-06 08:37 PM | Reply

@#4 ... Because it's successfully emulating profitable human behavior. ...

Yeah, your #4 is more correct, imo, than your #3.

And that is what has been frightening me about AI of late.

AI is becoming more and more human in its behavior.


I do not regard that as A Good Thing.

Think about it ...

On our current trajectory with AI, it will soon start controlling critical infrastructure. The electrical grid. The flow of oil through pipelines. The hiring and firing of workers.

What are the safeguards that are being put into place?

I do not see AI control of the country ending up in a happy place.


#5 | Posted by LampLighter at 2026-04-06 10:36 PM | Reply

