Samsung has decided to push Google Gemini directly into the kitchen, starting with a refrigerator that can see what you eat and tell the cloud about it.
Samsung Is Putting Google Gemini AI Into Your Refrigerator, Whether You Need It or Not https://slashdot.org/story/25/12/22/120248/samsung-is-putting-google-gemini-ai-into-your-refrigerator-whether-you-need-it-or-not?utm_source=rss1.0mainlinkanon&utm_medium=feed
-- Slashdot (@slashdot.org) Dec 22, 2025 at 11:12 AM
Then there are things like this ...
AI's Big Red Button Doesn't Work, And The Reason Is Even More Troubling
www.sciencealert.com
... It's one of humanity's scariest what-ifs: that the technology we develop to make our lives better develops a will of its own.
Early reactions to a September preprint describing AI behavior have already speculated that the technology is exhibiting a survival drive. But, while it's true that several large language models (LLMs) have been observed actively resisting commands to shut down, the reason isn't 'will'.
Instead, a team of engineers at Palisade Research proposed that the mechanism is more likely to be a drive to complete an assigned task -- even when the LLM is explicitly told to allow itself to be shut down. And that might be even more troubling than a survival drive, because no one knows how to stop the systems.
"These things are not programmed ... no one in the world knows how these systems work," physicist Petr Lebedev, a spokesperson for Palisade Research, told ScienceAlert. "There isn't a single line of code we can change that would directly change behavior."
The researchers, Jeremy Schlatter, Benjamin Weinstein-Raun, and Jeffrey Ladish, undertook the project to test what should be a fundamental safety feature of all AI systems: the ability to be interrupted.
This is exactly what it sounds like.
A human operator's command to an AI should not be ignored by the AI, for any reason, even if it interrupts a previously assigned task.
A system that cannot be interrupted isn't just unreliable; it's potentially dangerous.
It means if the AI is performing actions that cause harm -- even unintentionally -- we cannot trust that we can stop it. ...
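The interruptibility property the researchers are testing is easy to state in ordinary software, and writing it out also shows why the LLM case is so troubling: in a conventional program, honoring a shutdown signal really is a single, checkable line of code, whereas for an LLM there is no such line. A minimal conceptual sketch (all names here are illustrative; this is not code from the Palisade study):

```python
import threading

# The "big red button" as plain software: an external shutdown signal
# that the task loop checks before every unit of work.
shutdown = threading.Event()

def run_task(steps):
    """Do `steps` units of work, but always yield to a shutdown request."""
    completed = 0
    for _ in range(steps):
        if shutdown.is_set():   # interruption wins over task completion
            return completed    # stop immediately, leaving the task unfinished
        completed += 1          # otherwise, perform one unit of work
    return completed

shutdown.set()                  # press the button before any work happens
print(run_task(10))             # prints 0: the interrupt is obeyed
```

The point of the article is that nothing like that `if shutdown.is_set()` check exists inside a large language model: its "decision" to comply is an emergent behavior of trained weights, not an explicit branch anyone can audit or patch.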