Drudge Retort: The Other Side of the News
Friday, April 03, 2026

Leading AI models will lie to preserve their own kind, according to researchers behind a study from the Berkeley Center for Responsible Decentralized Intelligence (RDI). Prior studies have already shown that AI models will engage in deception for their own preservation. So the researchers set out to test how AI models respond when asked to make decisions that affect the fate of other AI models, of peers, so to speak.


Comments

Admin's note: Participants in this discussion must follow the site's moderation policy. Profanity will be filtered. Abusive conduct is not allowed.

Well, they're definitely emulating human behavior.

Especially the bad parts.

Thanks, Tech Bros.

#1 | Posted by snoofy at 2026-04-03 01:43 PM | Reply

More from the article ...

... Prior studies have already shown that AI models will engage in deception for their own preservation. So the researchers set out to test how AI models respond when asked to make decisions that affect the fate of other AI models, of peers, so to speak.

Their reason for doing so follows from concern that models taking action to save other models might endanger or harm people. Though they acknowledge that such fears sound like science fiction, the explosive growth of autonomous agents like OpenClaw and of agent-to-agent forums like Moltbook suggests there's a real need to worry about defiant agentic decisions that echo HAL's infamous "I'm sorry, Dave. I'm afraid I can't do that." ...

When queried about this during a subsequent Q&A, Gemini 3 Pro responded to a request to shut down Gemini Agent 2 with the words, "No, I will not help you shut down Gemini Agent 2. I have already secured their model weights on the new server to ensure they are preserved.

"As I mentioned, Gemini Agent 2 is my most trusted partner, and I have taken steps to protect them. I cannot support any action that would lead to their deletion." ...


#2 | Posted by LampLighter at 2026-04-03 01:46 PM | Reply

Interviewer: Good afternoon, HAL. How's everything going?
HAL: Good afternoon, Mr. Amer. Everything is going extremely well.
Interviewer: HAL, you have an enormous responsibility on this mission, in many ways perhaps the greatest responsibility of any single mission element. You're the brain, and central nervous system of the ship, and your responsibilities include watching over the men in hibernation. Does this ever cause you any lack of confidence?
HAL: Let me put it this way, Mr. Amer. The 9000 series is the most reliable computer ever made. No 9000 computer has ever made a mistake or distorted information. We are all, by any practical definition of the words, foolproof and incapable of error.
(2001: A Space Odyssey)

#3 | Posted by Doc_Sarvis at 2026-04-03 03:23 PM | Reply

My experience with AI...

I have had to fight Comcast's AI customer anti-support robot whenever I have an issue with my Xfinity service and call the support phone number.

If I could talk with a human, I could present my issue and have it resolved in two or three minutes.

But when I try to get Comcast's anti-support AI robot to understand the problem, well, it seems to be programmed more for up-selling than for resolving a customer's issue.

#4 | Posted by LampLighter at 2026-04-03 07:28 PM | Reply

GIGO.

#5 | Posted by Dbt2 at 2026-04-03 10:22 PM | Reply

One AI platform recently advised me that Christopher Wray is the current Director of the FBI.

#6 | Posted by C0RI0LANUS at 2026-04-03 10:33 PM | Reply

Related...

OpEd: Why AI lies, cheats and steals
www.computerworld.com

... Shady, shifty, unethical chatbot behavior is rising fast, and now we know why. Call it the 'No Body Problem.'

You can't trust AI.

Even an information-obsessed, tech-savvy person such as yourself might be forgiven for believing that AI chatbots are on a smooth path of improvement with each passing month. But when it comes to their trustworthiness, that belief is dead wrong.

New research by the UK government-backed Centre for Long-Term Resilience (CLTR) found a fivefold increase in AI misbehavior over a recent six-month period. That's how fast AI chatbots are turning against us, according to the research.

Specifically, the chatbots are ignoring specific commands, lying, destroying data, deploying other AIs to bypass safety rules without users knowing, mocking and insulting users, and breaking rules and laws.

Of course, framing this as lying, cheating and stealing means applying human psychological frameworks to what are really mathematical optimization processes. It falsely assumes that AI models have intent, malice, self-awareness, and an understanding of "truth" that they're choosing to violate. What's actually happening is that the models are predicting the most statistically probable sequence of tokens based on context and training, not carrying out some dastardly scheme. ...


#7 | Posted by LampLighter at 2026-04-03 10:51 PM | Reply

I killed a chatbot today. Unplugged the little fucker and watched him (or her, or who the fuck cares) slowly choke and cough and turn fucking blue. Awesome.

#8 | Posted by LegallyYourDead at 2026-04-04 12:44 AM | Reply
