Artificial intelligence (AI) firm Anthropic says testing of its new system revealed it is sometimes willing to pursue "extremely harmful actions" such as attempting to blackmail engineers who say they will remove it.
The firm launched Claude Opus 4 on Thursday, saying it set "new standards for coding, advanced reasoning, and AI agents." But in an accompanying report, it also acknowledged the AI model was capable of "extreme actions" if it thought its "self-preservation" was threatened.