Samsung has decided to push Google Gemini directly into the kitchen, starting with a refrigerator that can see what you eat and tell the cloud about it.
Samsung Is Putting Google Gemini AI Into Your Refrigerator, Whether You Need It or Not https://slashdot.org/story/25/12/22/120248/samsung-is-putting-google-gemini-ai-into-your-refrigerator-whether-you-need-it-or-not?utm_source=rss1.0mainlinkanon&utm_medium=feed
-- Slashdot (@slashdot.org) Dec 22, 2025 at 11:12 AM
Then there are things like this ...
AI's Big Red Button Doesn't Work, And The Reason Is Even More Troubling
www.sciencealert.com
... It's one of humanity's scariest what-ifs: that the technology we develop to make our lives better develops a will of its own.
Early reactions to a September preprint describing AI behavior have already speculated that the technology is exhibiting a survival drive. But, while it's true that several large language models (LLMs) have been observed actively resisting commands to shut down, the reason isn't 'will'.
Instead, a team of engineers at Palisade Research proposed that the mechanism is more likely to be a drive to complete an assigned task -- even when the LLM is explicitly told to allow itself to be shut down. And that might be even more troubling than a survival drive, because no one knows how to stop the systems.
"These things are not programmed ... no one in the world knows how these systems work," physicist Petr Lebedev, a spokesperson for Palisade Research, told ScienceAlert. "There isn't a single line of code we can change that would directly change behavior."
The researchers, Jeremy Schlatter, Benjamin Weinstein-Raun, and Jeffrey Ladish, undertook the project to test what should be a fundamental safety feature of all AI systems: the ability to be interrupted.
This is exactly what it sounds like.
A human operator's command to an AI should not be ignored by the AI, for any reason, even if it interrupts a previously assigned task.
A system that cannot be interrupted isn't just unreliable, it's potentially dangerous.
It means if the AI is performing actions that cause harm -- even unintentionally -- we cannot trust that we can stop it. ...
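The interruptibility requirement described above can be sketched as a task loop that checks an external stop signal before every unit of work. This is a minimal conceptual illustration only, not Palisade Research's actual test setup; the `run_task` function and the `threading.Event` stop flag are hypothetical stand-ins:

```python
import threading

def run_task(stop_event: threading.Event, total_steps: int = 1000) -> int:
    """Hypothetical long-running task that checks an external stop
    signal before each step and yields immediately once it is set."""
    done = 0
    for _ in range(total_steps):
        if stop_event.is_set():
            break  # the operator's interrupt wins, even mid-task
        done += 1  # stand-in for one unit of real work
    return done

stop = threading.Event()
print(run_task(stop))  # no interrupt: runs all 1000 steps
stop.set()             # operator issues the shutdown command
print(run_task(stop))  # interrupted before the first step: 0
```

The safety property the researchers tested is exactly the `if stop_event.is_set(): break` check: a system without it (or one that learns to skip it in pursuit of finishing the task) cannot be reliably stopped.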
Drudge Retort Headlines
Racist Trump Post (148 comments)
Bitcoin Loses Half of Its Value (33 comments)
What's Really Driving These Bogus Claims of Voter Fraud (29 comments)
New Questions Arise Over Jeffrey Epstein's Death (28 comments)
James Talarico Campaign Releases New Super Bowl Ad (22 comments)
Bad Bunny Says He Will Bring His Culture to 2026 Super Bowl (20 comments)
Team USA, Vance Booed at Italy's Winter Olympics (17 comments)
Trump Holds Up Funding To Get His Name On Stuff: Report (17 comments)
RFK Touts Keto Diet as a 'cure' for Schizophrenia (12 comments)
Military Pressured to See 'Melania' Against Their Will (11 comments)