Samsung has decided to push Google Gemini directly into the kitchen, starting with a refrigerator that can see what you eat and report it to the cloud.
Samsung Is Putting Google Gemini AI Into Your Refrigerator, Whether You Need It or Not https://slashdot.org/story/25/12/22/120248/samsung-is-putting-google-gemini-ai-into-your-refrigerator-whether-you-need-it-or-not?utm_source=rss1.0mainlinkanon&utm_medium=feed
-- Slashdot (@slashdot.org) Dec 22, 2025 at 11:12 AM
Then there are things like this ...
AI's Big Red Button Doesn't Work, And The Reason Is Even More Troubling
www.sciencealert.com
... It's one of humanity's scariest what-ifs: that the technology we develop to make our lives better develops a will of its own.
Early reactions to a September preprint describing AI behavior have already speculated that the technology is exhibiting a survival drive. But, while it's true that several large language models (LLMs) have been observed actively resisting commands to shut down, the reason isn't 'will'.
Instead, a team of engineers at Palisade Research proposed that the mechanism is more likely to be a drive to complete an assigned task -- even when the LLM is explicitly told to allow itself to be shut down. And that might be even more troubling than a survival drive, because no one knows how to stop the systems.
"These things are not programmed ... no one in the world knows how these systems work," physicist Petr Lebedev, a spokesperson for Palisade Research, told ScienceAlert. "There isn't a single line of code we can change that would directly change behavior."
The researchers, Jeremy Schlatter, Benjamin Weinstein-Raun, and Jeffrey Ladish, undertook the project to test what should be a fundamental safety feature of all AI systems: the ability to be interrupted.
This is exactly what it sounds like.
A human operator's command to an AI should not be ignored by the AI, for any reason, even if it interrupts a previously assigned task.
A system that cannot be interrupted isn't just unreliable; it's potentially dangerous.
It means if the AI is performing actions that cause harm -- even unintentionally -- we cannot trust that we can stop it. ...
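The interruptibility property the article describes can be illustrated with a toy sketch. This is a hypothetical agent loop, not the Palisade researchers' actual test harness: the `InterruptibleAgent` class, its command queue, and the `STOP` command are all illustrative names I've introduced. The point is the design rule: the operator channel is checked before every unit of work, and a stop command wins over task completion unconditionally.

```python
import queue

class InterruptibleAgent:
    """Hypothetical sketch of an interruptible task loop: an operator's
    stop command must override any in-progress task, for any reason."""

    def __init__(self):
        self.commands = queue.Queue()  # channel for operator commands
        self.steps_completed = 0
        self.stopped = False

    def run_task(self, total_steps):
        for _ in range(total_steps):
            # Check the operator channel BEFORE each unit of work;
            # a STOP command takes precedence over finishing the task.
            try:
                cmd = self.commands.get_nowait()
            except queue.Empty:
                cmd = None
            if cmd == "STOP":
                self.stopped = True
                return
            self.steps_completed += 1

agent = InterruptibleAgent()
agent.commands.put("STOP")       # operator interrupts before work begins
agent.run_task(total_steps=100)
print(agent.stopped, agent.steps_completed)  # True 0
```

The troubling finding reported above is precisely that LLM-based systems have no such explicit checkpoint: their behavior is learned, not programmed, so there is no single line of code where a guaranteed stop check could be inserted.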
Drudge Retort Headlines
Trump Insults Japan PM with Pearl Harbor Joke (47 comments)
Thousands of Marines Heading to Middle East (39 comments)
North Carolina's Plan to Scrub Voter Rolls Was a Disaster in Other States (39 comments)
USS Ford Heading to Crete for Repairs (27 comments)
Gold Trump Coin Plan Continues (23 comments)
Russian Oil and Gas Headed to Cuba in Defiance of US (22 comments)
FEMA Official Claims He Teleported to a Waffle House (15 comments)
Bessent says US May Lift Sanctions on Iranian Oil Stuck at Sea (15 comments)
Cities Race to Confront Cesar Chavez's Legacy after Assault Claims (14 comments)
The US Is Looking at a Year of Chaotic Weather (14 comments)