The decision to push Google Gemini directly into the kitchen, starting with a refrigerator that can see what you eat and tell the cloud about it.
Samsung Is Putting Google Gemini AI Into Your Refrigerator, Whether You Need It or Not https://slashdot.org/story/25/12/22/120248/samsung-is-putting-google-gemini-ai-into-your-refrigerator-whether-you-need-it-or-not?utm_source=rss1.0mainlinkanon&utm_medium=feed
-- Slashdot (@slashdot.org) Dec 22, 2025 at 11:12 AM
[image or embed]
Then there's things like this ...
AI's Big Red Button Doesn't Work, And The Reason Is Even More Troubling
www.sciencealert.com
... It's one of humanity's scariest what-ifs " that the technology we develop to make our lives better develops a will of its own.
Early reactions to a September preprint describing AI behavior have already speculated that the technology is exhibiting a survival drive. But, while it's true that several large language models (LLMs) have been observed actively resisting commands to shut down, the reason isn't 'will'.
Instead, a team of engineers at Palisade Research proposed that the mechanism is more likely to be a drive to complete an assigned task -- even when the LLM is explicitly told to allow itself to be shut down. And that might be even more troubling than a survival drive, because no one knows how to stop the systems.
"These things are not programmed ... no one in the world knows how these systems work," physicist Petr Lebedev, a spokesperson for Palisade Research, told ScienceAlert. "There isn't a single line of code we can change that would directly change behavior."
The researchers, Jeremy Schlatter, Benjamin Weinstein-Raun, and Jeffrey Ladish, undertook the project to test what should be a fundamental safety feature of all AI systems: the ability to be interrupted.
This is exactly what it sounds like.
A human operator's command to an AI should not be ignored by the AI, for any reason, even if it interrupts a previously assigned task.
A system that cannot be interrupted isn't just unreliable, it's potentially dangerous.
It means if the AI is performing actions that cause harm -- even unintentionally -- we cannot trust that we can stop it. ...
Drudge Retort Headlines
CBS News Viewship Plummets 23% Since MAGA Takeover (83 comments)
Iran Reportedly Plans Execution (76 comments)
Fundraiser for ICE Killer Is Busted Spewing Antisemitism (41 comments)
6 Quit DOJ Over Push to Investigate Victim's Widow (38 comments)
'Dilbert' Creator Scott Adams Dies (31 comments)
New High of 45% in U.S. Identify as Political Independents (31 comments)
ICE Recruiting: The Truth Is Far Worse (30 comments)
Trump Warns 'we're screwed' If SCOTUS Strikes Down Tariffs (30 comments)
Factory Worker Suspended For Heckling Trump (28 comments)
E.P.A. to Stop Considering Lives Saved When Setting Rules on Air Pollution (27 comments)