Drudge Retort: The Other Side of the News
Tuesday, February 10, 2026

Medical device makers have been rushing to add AI to their products. While proponents say the new technology will revolutionize medicine, regulators are receiving a rising number of reports of patient injuries.

When AI was added to a tool for sinus surgery: "Cerebrospinal fluid leaked from one patient's nose. In another ... a surgeon mistakenly punctured the base of a patient's skull. In two other cases, patients suffered strokes after a major artery was accidentally injured" www.reuters.com/investigatio ...

-- jennifer uncoolidge (@histoftech.bsky.social) Feb 9, 2026 at 6:42 PM

Comments

Admin's note: Participants in this discussion must follow the site's moderation policy. Profanity will be filtered. Abusive conduct is not allowed.

More from the article ...

... In 2021, a unit of healthcare giant Johnson & Johnson announced "a leap forward": It had added artificial intelligence to a medical device used to treat chronic sinusitis, an inflammation of the sinuses. Acclarent said the software for its TruDi Navigation System would now use a machine-learning algorithm to assist ear, nose and throat specialists in surgeries.

The device had already been on the market for about three years. Until then, the U.S. Food and Drug Administration had received unconfirmed reports of seven instances in which the device malfunctioned and another report of a patient injury. Since AI was added to the device, the FDA has received unconfirmed reports of at least 100 malfunctions and adverse events. ...

Reuters could not independently verify the lawsuits' allegations. ...



#1 | Posted by LampLighter at 2026-02-09 02:16 PM | Reply

I think that the injection of AI into American life is way premature and fueled by someone's greed.

#2 | Posted by Zed at 2026-02-10 08:18 AM | Reply

What ML models have you trained, Zed?

#3 | Posted by sitzkrieg at 2026-02-10 03:41 PM | Reply

I think that the injection of AI into American life is way premature and fueled by someone's greed.
#2 | Posted by Zed

Agreed.
Did any practicing physicians ask for this?
I doubt it.

#4 | Posted by snoofy at 2026-02-10 03:42 PM | Reply

Stanford will pay you $200k a year as a glorified research assistant if you can help their physicians develop ML models for their various topics.

#5 | Posted by sitzkrieg at 2026-02-10 04:41 PM | Reply

Honestly, that should not be too hard. The domains are highly segregated and specific. We ran speech-to-text on op reports nearly thirty years ago because the vocabulary is so tight compared to normal spoken English.

#6 | Posted by snoofy at 2026-02-10 04:44 PM | Reply

Training a model for cancer detection using annotated MRI data is a Data Science 1 topic, college junior level.

The hard part is teaching the systems to older researchers, building a stack for their domain, and keeping someone on staff to maintain it. Students come and go from teams very quickly.
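
A minimal sketch of what that student exercise looks like (PyTorch, with random stand-in data where the annotated MRI slices would go, so purely illustrative):

    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader, TensorDataset

    # Stand-in for an annotated dataset: 256 grayscale 64x64 slices, 0/1 labels.
    images = torch.randn(256, 1, 64, 64)
    labels = torch.randint(0, 2, (256,))
    loader = DataLoader(TensorDataset(images, labels), batch_size=32, shuffle=True)

    # Small convolutional classifier: tumor vs. no tumor.
    model = nn.Sequential(
        nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(), nn.Linear(16 * 16 * 16, 2),
    )
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    for epoch in range(5):
        for x, y in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()   # gradients for every weight via backpropagation
            optimizer.step()

The code really is the easy part; real DICOM data, validation, and a maintainer are where it gets expensive.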

#7 | Posted by sitzkrieg at 2026-02-10 04:47 PM | Reply

"Training a model for cancer detection using annotated MRI data is a data science 1 topic, college Jr. level."

A radiologist told me his front office girls could spot cancer in an X-ray, after seeing so many of them.

This was back when R2 and SecondLook were becoming a thing, thirty-ish years ago. So maybe the front office girls are dumber now.

#8 | Posted by snoofy at 2026-02-10 04:51 PM | Reply

Stanford will pay you $200k a year as a glorified research assistant if you can help their physicians develop ML models for their various topics.

#5 | POSTED BY SITZKRIEG

Honest work only, please.

#9 | Posted by Zed at 2026-02-10 05:23 PM | Reply

#5 | POSTED BY SITZKRIEG

AI tells me things I know are outrageously wrong and takes no responsibility for it.

That's no advantage over a human being who I can at least fire.

#10 | Posted by Zed at 2026-02-10 05:27 PM | Reply

The biggest unresolved legal issue around AI in medicine has always been:
If AI gets it wrong, who do you put on the stand in the malpractice trial?

With a doctor, you can ask what they were thinking, how they came up with whatever procedure they tried, and whether it was supported by the literature or their training.

AI can't explain how it got its answer. The hidden layers are just that -- hidden.

This is not to deny "hidden layers" in the human brain -- implicit bias, all kinds of fun things.

But that's a separate problem.

#11 | Posted by snoofy at 2026-02-10 05:34 PM | Reply

AI tells me things I know are outrageously wrong and takes no responsibility for it.

#10 | Posted by Zed at 2026-02-10 05:27 PM | Reply

That's an LLM. This device isn't using an LLM.

The hidden layers are just that -- hidden.

#11 | Posted by snoofy at 2026-02-10 05:34 PM | Reply

They're not hidden; that's a euphemism meaning they're not part of the training data or the prediction target.
It's the intermediate layer: h = sigma(Wx + b), where x is your input vector, W is your weight matrix, b is your bias, and sigma is your activation function.
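
Spelled out with toy numbers (NumPy), it's just arithmetic you can print and inspect:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    x = np.array([0.2, -1.3, 0.7])      # input vector, 3 made-up features
    W = np.random.randn(4, 3) * 0.1     # weight matrix: 4 hidden units x 3 inputs
    b = np.zeros(4)                     # bias vector
    h = sigmoid(W @ x + b)              # the "hidden" layer: an intermediate vector
    print(h)                            # four numbers, nothing concealed about them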

#12 | Posted by sitzkrieg at 2026-02-10 07:39 PM | Reply

"They're not hidden, that's a euphemism that means it's not part of training data or prediction target."

Dude. They can't be interrogated with any sense of what their respective values mean.

#13 | Posted by snoofy at 2026-02-10 07:41 PM | Reply

If the values had no meaning, downstream layers couldn't use them to make consistent decisions. They're not meaningless; it's just an encoded representation that the model is optimized to interpret. The math framework and the task context are how you interpret them.

If you were correct, then backpropagation, debugging, and pruning would not work. Model updates would be blind luck. The entire field of interpretability research wouldn't exist.
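
A toy version of the probing idea from that field (made-up activations, scikit-learn): if hidden values carried no meaning, a linear classifier fit on them couldn't recover the label.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    labels = rng.integers(0, 2, 500)
    # Pretend these are hidden-layer activations from a trained model:
    # units 0 and 1 carry label information, the other 14 are noise.
    hidden = rng.normal(size=(500, 16))
    hidden[:, 0] += labels * 1.5
    hidden[:, 1] -= labels * 1.0

    probe = LogisticRegression().fit(hidden, labels)
    print(probe.score(hidden, labels))    # well above 0.5: the encoding is readable
    print(np.abs(probe.coef_).round(2))   # largest weights land on units 0 and 1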

#14 | Posted by sitzkrieg at 2026-02-10 07:53 PM | Reply

Anthropic's base salary for a PhD in interpretability research is something like $500k/year + bennies.

#15 | Posted by sitzkrieg at 2026-02-10 07:56 PM | Reply

it's just an encoded representation.

I'm aware.

Without the ability to decode it, we can't establish "what it was thinking." Or how. Or why.

It's a liability nightmare.

Not that the law seems to matter anymore, since the first step in building an LLM is "Download the Internet," which clearly ignores centuries of copyright law dictating that royalties are owed when copyrighted material is used in a commercial application.

#16 | Posted by snoofy at 2026-02-10 08:03 PM | Reply

"If you were correct then backpropagation, debugging, and pruning would not work."

If I were incorrect, we wouldn't need those tools, because we could know explicitly the contribution of every node in every hidden layer.

#17 | Posted by snoofy at 2026-02-10 08:06 PM | Reply

I'm willing to be wrong about this, but it's going to take more than high-paying jobs to do it. Maybe a paper written by one of those guys in those fancy jobs could convince me.

us-east-1 went down last fall because of a clanker. If Amazon knew what was broken, it would have been fixed in minutes, not hours.

#18 | Posted by snoofy at 2026-02-10 08:10 PM | Reply

Backpropagation explicitly computes the contribution of every hidden unit via gradients. If those contributions were unknowable, training wouldn't work.
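
In miniature (PyTorch autograd, toy numbers), that per-unit contribution is sitting right there after one backward pass:

    import torch

    x = torch.tensor([0.2, -1.3, 0.7])
    W1 = torch.randn(4, 3, requires_grad=True)   # input -> hidden weights
    W2 = torch.randn(1, 4, requires_grad=True)   # hidden -> output weights

    h = torch.sigmoid(W1 @ x)        # hidden layer
    h.retain_grad()                  # keep its gradient so we can inspect it
    y = W2 @ h                       # output
    loss = ((y - 1.0) ** 2).sum()    # made-up target of 1.0
    loss.backward()                  # backpropagation

    print(h.grad)    # d(loss)/d(each hidden unit): its contribution to the error
    print(W1.grad)   # d(loss)/d(each weight feeding the hidden layer)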

Rumelhart et al., Nature, 1986. A foundational paper for ML.

Data science was my capstone, dude.

#19 | Posted by sitzkrieg at 2026-02-10 08:37 PM | Reply | Funny: 1

BWAHAHA. Ok grampa, tell us all about Fortran and machine code. BWAHAHA

#20 | Posted by LegallyYourDead at 2026-02-10 09:09 PM | Reply

Octal? Rectal? What's the diff?

#21 | Posted by LegallyYourDead at 2026-02-10 09:10 PM | Reply

"Rumelhart etc, Nature, 1986. A foundational paper for ML."

Imagine you're a hot-shot defense lawyer for a doctor who took a course of action on the advice of AI, the patient was harmed, and experts in the field are testifying that the AI made a bad decision and the doctor should have known better.

Now imagine you're a hot-shot defense lawyer for an AI company and a doctor who took a course of action on the advice of AI, the patient was harmed, and experts in the field are testifying that the AI made a bad decision and the doctor should have known better.

Which lawyer wins?

#22 | Posted by snoofy at 2026-02-10 11:47 PM | Reply
