Drudge Retort: The Other Side of the News
Thursday, February 26, 2026

US Defense Secretary Pete Hegseth has reportedly given AI company Anthropic until Friday to agree to allow its technology to be used for military applications.


Alternate links: Google News | Twitter

"Defense Secretary Pete Hegseth met with Anthropic CEO Dario Amodei and threatened to invoke the Defense Production Act (DPA) if Anthropic doesn't agree to the Pentagon's terms by Friday." @alanrozenshtein.com on what the DPA can and can't do to Anthropic: www.lawfaremedia.org/article/what ...


-- Anna Bower (@annabower.bsky.social) Feb 25, 2026 at 7:02 PM

Comments

Admin's note: Participants in this discussion must follow the site's moderation policy. Profanity will be filtered. Abusive conduct is not allowed.

More from the article ...

... The AP news agency cited a source familiar with Tuesday's meeting, who said that the talks between Hegseth and Amodei were cordial.

However, the source added that Anthropic dug in its heels in the face of pressure from the Pentagon in regard to two areas -- the use of its tools for fully autonomous military targeting operations and mass domestic surveillance of US citizens. ...


#1 | Posted by LampLighter at 2026-02-25 01:42 PM | Reply

@#1 ... mass domestic surveillance of US citizens. ...

Why does the Pentagon need to conduct mass domestic surveillance of US citizens?

#2 | Posted by LampLighter at 2026-02-25 01:43 PM | Reply

#2 | Posted by LampLighter

A better question is why do you believe #1?

#3 | Posted by oneironaut at 2026-02-25 01:51 PM | Reply

#2 | Posted by LampLighter
A better question is why do you believe #1?

#3 | Posted by oneironaut

Because Anthropic confirmed it and will remove the safeguards?

A better question is why are you such a stupid Pedo MAGA liar?

#4 | Posted by Sycophant at 2026-02-25 02:12 PM | Reply | Newsworthy 3

@#1 ... mass domestic surveillance of US citizens. ...

I'll ask again...

Why does the Pentagon need to conduct mass domestic surveillance of US citizens?


#5 | Posted by LampLighter at 2026-02-25 06:29 PM | Reply

A better question is why do you believe #1?

#3 | Posted by oneironaut

A better question is why is your stupid a&^ still posting your inane garbage here?

#6 | Posted by jpw at 2026-02-26 01:46 AM | Reply

A better question is why do you believe #1?

#3 | Posted by oneironaut

A better question is - why do republicans want autonomous drones deciding who lives or dies?

The only answers I can think of are - because you're nazis, or because you're super stupid.

#7 | Posted by SpeakSoftly at 2026-02-26 01:49 PM | Reply

@#7

AIs can't stop recommending nuclear strikes in war game simulations
www.newscientist.com

... Leading AIs from OpenAI, Anthropic and Google opted to use nuclear weapons in simulated war games in 95 per cent of cases ...

#8 | Posted by LampLighter at 2026-02-26 02:11 PM | Reply

Why does the Pentagon need to conduct mass domestic surveillance of US citizens?
#5 | Posted by LampLighter

Because our government currently believes it is at war with the American people.

#9 | Posted by johnny_hotsauce at 2026-02-26 02:38 PM | Reply | Newsworthy 2

Because Anthropic confirmed it and will remove the safeguards?

Because Anthropic said it? Sorry, I'm going to need more than the word of a corporation that's trying to dictate government policy.

They are determining how your democracy should work; the only thing they should insist on is that they remain inside the law.

You're just upset that Trump is President and that anything he does is fascist; what a stupid way to go through life.


This gets to the core of the issue more than any debate about specific terms.

Do you believe in democracy? Should our military be regulated by our elected leaders, or by corporate executives? Seemingly innocuous terms from the latter like "You cannot target innocent civilians" are actually moral minefields that lever differences of cultural tradition into massive control.

Who is a civilian and who is not? What makes them innocent or not? What does it mean for them to be a "target" versus collateral damage? Existing policy and law have very clear answers to these questions, but unelected corporations managing profits and PR will often have very different answers.

Imagine if a missile company tried to enforce the above policy: that its product cannot be used to target innocent civilians, and that it can shut off access if elected leaders decide to break those terms. Sounds good, right? Not really - in addition to the value-judgement problems I list above, you also have to account for questions like:

-What level of information, classified and otherwise, does the corporation receive that would allow them to make these determinations? How much leverage would they have to demand more?
-What if an elected President merely threatens a dictator with using our weapons in a certain way, a la Madman Theory/MAD? Is the threat seen as empty because the dictator knows the corporate executives will cut off the military? Is the threat enough to trigger the cutoff? How might either of those determinations vary if the current corporate executive happens to like the dictator or dislike the President?
-At what level of confidence does the cutoff trigger, both in writing and in reality?

The fact that this is a debate over AI does not change the underlying calculus. The same problems apply to the definitions and use of ethically fraught but important capabilities like surveillance systems or autonomous weapons. It is easy to say "But they will have carve-outs to operate with autonomous systems for defensive use!", but you immediately run into the same issues and more - what is autonomous? What is defensive? What about defending an asset during an offensive action, or parking a carrier group off the coast of a nation that considers us to be offensive?

At the end of the day, you have to believe that the American experiment is still ongoing, that people have the right to elect and unelect the authorities making these decisions, that our imperfect constitutional republic is still good enough to run a country without outsourcing the real levers of power to billionaires and corpos and their shadow advisors. I still believe.

And that is why "bro just agree the AI won't be involved in autonomous weapons or mass surveillance why can't you agree it is so simple please bro" is an untenable position that the United States cannot possibly accept.


x.com

#10 | Posted by oneironaut at 2026-02-27 06:57 PM | Reply | Funny: 1


Because our government currently believes it is at war with the American people.
#9 | POSTED BY JOHNNY_HOTSAUCE

Yet somehow it all goes away when your preferred party is elected President.

#11 | Posted by oneironaut at 2026-02-27 06:58 PM | Reply | Funny: 1


@#9 ... Because our government currently believes it is at war with the American people. ...

But only if they are Democrats.

An update...

Trump tells government to stop using Anthropic's AI systems
www.nbcnews.com

... On X, Defense Secretary Pete Hegseth announced that he would direct the Defense Department to label Anthropic a "Supply-Chain Risk to National Security" and cancel further Defense business with the company.

The move, usually reserved for foreign adversaries, would bar any military contractor or supplier from doing business with Anthropic. Both Hegseth and Trump announced agencies would have six months to phase out any existing federal business with Anthropic.

Anthropic did not immediately respond to a request for comment.

The company, led by CEO Dario Amodei, has made clear in months of contract negotiations with the Pentagon that it would not allow its AI systems to be harnessed for domestic surveillance or direct use in lethal autonomous weapons. ...
#12 | Posted by LampLighter at 2026-02-27 07:24 PM | Reply

Anthropic sees support from other tech workers in feud with Pentagon
www.mercurynews.com

... Anthropic PBC got a vote of support from Silicon Valley workers for its increasingly contentious public-relations battle with the Pentagon over how the military can use artificial intelligence.

Two coalitions of workers -- including employees of Amazon.com Inc., Google, Microsoft Corp. and OpenAI -- are asking their companies to join Anthropic in refusing to comply with Defense Department demands for unrestricted use of AI products.

"We are writing to urge our own companies to also refuse to comply should they or the frontier labs they invest in enter into further contracts with the Pentagon," a coalition of labor unions and other groups representing workers at Alphabet Inc., Amazon and Microsoft said in a letter posted early Friday.

The letters, and similar support for Anthropic from tech executives on social media, show how a tussle between one AI company and the Pentagon could mushroom into an industry-wide battle over how best to deploy the powerful technology safely.

Anthropic and the US military have been in talks over what exactly the armed forces can do with its tools. The richly valued startup, which has pitched itself as a cautious and responsible AI developer, insists that its products, including the Claude chatbot, not be used for surveillance of US citizens or to carry out lethal strikes without human involvement. ...

In the open letter posted Friday, workers with groups including Amazon Employees for Climate Justice, the Alphabet Workers Union, No Tech for Apartheid and No Azure for Apartheid sought to connect Anthropic's stand to employee efforts to get their companies to disclose more about the services they sell to state agencies taking part in President Donald Trump's deportation push.

"Executive leadership at Google, Microsoft and Amazon must reject the Pentagon's advances and provide workers with transparency about contracts with other repressive state agencies including DHS, CBP and ICE," they said, referring to the Department of Homeland Security, Customs and Border Protection and Immigration and Customs Enforcement.

Another letter, published earlier this week and signed by Google and OpenAI employees, urged executives to put aside their differences "and stand together to continue to refuse the Department of War's current demands for permission to use our models for domestic mass surveillance and autonomously killing people without human oversight." ...



#13 | Posted by LampLighter at 2026-02-27 07:30 PM | Reply

Tangentially related ...

OpEd: Microsoft undercuts its kinder, gentler image with big ICE contract
www.computerworld.com

... But an investigation published a week ago by the UK's The Guardian and its partners +972 Magazine and Local Call reported that ICE is storing vast amounts of data on Microsoft's Azure cloud storage and using Microsoft AI tools to search and analyze that data. It found ICE is also using many of Microsoft's productivity tools and may be running its own tools and systems on Microsoft servers.

The investigation discovered that the amount of data ICE stores on Microsoft's cloud more than tripled in just a few months, from 400 terabytes to 1,400 terabytes between July 2025 and January 2026. Why the massive increase? Because Congress increased ICE's budget in July by $75 billion, making it the country's highest-funded US law enforcement body. ICE promptly went on a tech spending spree, in part to increase its surveillance capabilities.

The Guardian reports on the vast reach of that surveillance: "ICE, which has been likened to a domestic surveillance agency, enjoys access to vast troves of data on people living in the US. It has a growing arsenal of surveillance technology, including facial recognition apps, phone location databases, drones and invasive spyware." ...

Last September, Microsoft revoked the Israeli army's access to the company's Azure cloud storage because the army was using it for mass surveillance in Palestine. So the company does have a history of ending deals with government agencies for moral reasons. ...


#14 | Posted by LampLighter at 2026-02-27 07:36 PM | Reply

