
Drudge Retort: The Other Side of the News
Tuesday, February 10, 2026

Medical device makers have been rushing to add AI to their products. While proponents say the new technology will revolutionize medicine, regulators are receiving a rising number of claims of patient injuries.


When AI was added to a tool for sinus surgery: "Cerebrospinal fluid leaked from one patient's nose. In another ... a surgeon mistakenly punctured the base of a patient's skull. In two other cases, patients suffered strokes after a major artery was accidentally injured" www.reuters.com/investigatio ...

-- jennifer uncoolidge (@histoftech.bsky.social) Feb 9, 2026 at 6:42 PM

Comments

Admin's note: Participants in this discussion must follow the site's moderation policy. Profanity will be filtered. Abusive conduct is not allowed.

More from the article ...

... In 2021, a unit of healthcare giant Johnson & Johnson announced "a leap forward": It had added artificial intelligence to a medical device used to treat chronic sinusitis, an inflammation of the sinuses. Acclarent said the software for its TruDi Navigation System would now use a machine-learning algorithm to assist ear, nose and throat specialists in surgeries.

The device had already been on the market for about three years. Until then, the U.S. Food and Drug Administration had received unconfirmed reports of seven instances in which the device malfunctioned and another report of a patient injury. Since AI was added to the device, the FDA has received unconfirmed reports of at least 100 malfunctions and adverse events. ...

Reuters could not independently verify the lawsuits' allegations. ...



#1 | Posted by LampLighter at 2026-02-09 02:16 PM | Reply

I think that the injection of AI into American life is way premature and fueled by someone's greed.

#2 | Posted by Zed at 2026-02-10 08:18 AM | Reply

What ML models have you trained, Zed?

#3 | Posted by sitzkrieg at 2026-02-10 03:41 PM | Reply

I think that the injection of AI into American life is way premature and fueled by someone's greed.
#2 | Posted by Zed

Agreed.
Did any practicing physicians ask for this?
I doubt it.

#4 | Posted by snoofy at 2026-02-10 03:42 PM | Reply

Stanford will pay you $200K a year as a glorified research assistant if you can help their physicians develop ML models for their various topics.

#5 | Posted by sitzkrieg at 2026-02-10 04:41 PM | Reply

Honestly, that should not be too hard. The domains are highly segregated and specific. We ran speech-to-text on operative reports nearly thirty years ago because the vocabulary is so tight compared to normal spoken English.

#6 | Posted by snoofy at 2026-02-10 04:44 PM | Reply

Training a model for cancer detection using annotated MRI data is a data science 1 topic, college junior level.

The hard part is teaching systems to older researchers and building a stack for their domain, and keeping someone on staff to maintain it. Students come and go from teams very quickly.
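A minimal sketch of the exercise described above, with synthetic arrays standing in for annotated MRI slices. Everything here is illustrative: a real pipeline would load DICOM images and expert annotations and validate on held-out data, none of which is attempted here.

```python
# Toy "tumor / no tumor" classifier: logistic regression trained by
# gradient descent on synthetic data standing in for annotated MRI slices.
import numpy as np

rng = np.random.default_rng(0)

# 200 flattened 8x8 "slices"; positives get a bright-blob signal.
n, d = 200, 64
X = rng.normal(size=(n, d))
y = rng.integers(0, 2, size=n)
X[y == 1, :8] += 2.0                    # crude stand-in for a lesion

w, b = np.zeros(d), 0.0
for _ in range(500):
    z = np.clip(X @ w + b, -30, 30)     # clip for numerical stability
    p = 1.0 / (1.0 + np.exp(-z))        # sigmoid predictions
    w -= 0.5 * (X.T @ (p - y) / n)      # gradient of mean log loss
    b -= 0.5 * float(np.mean(p - y))

preds = (X @ w + b > 0).astype(int)
accuracy = float(np.mean(preds == y))
print(f"training accuracy: {accuracy:.2f}")
```

The point is only that the fitting step itself is routine; the hard parts named above (infrastructure, maintenance, staffing) all sit outside this loop.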

#7 | Posted by sitzkrieg at 2026-02-10 04:47 PM | Reply

"Training a model for cancer detection using annotated MRI data is a data science 1 topic, college junior level."

A radiologist told me his front office girls could spot cancer in an X-ray after seeing so many of them.

This was back when R2 and SecondLook were becoming a thing, thirty-ish years ago. So maybe the front office girls are dumber now.

#8 | Posted by snoofy at 2026-02-10 04:51 PM | Reply

Stanford will pay you $200K a year as a glorified research assistant if you can help their physicians develop ML models for their various topics.

#5 | POSTED BY SITZKRIEG

Honest work only, please.

#9 | Posted by Zed at 2026-02-10 05:23 PM | Reply

#5 | POSTED BY SITZKRIEG

AI tells me things I know are outrageously wrong and takes no responsibility for it.

That's no advantage over a human being who I can at least fire.

#10 | Posted by Zed at 2026-02-10 05:27 PM | Reply

The biggest unresolved legal issue around AI in medicine has always been:
If AI gets it wrong, who do you put on the stand in the malpractice trial?

With a doctor, you can ask what they were thinking, how they came up with whatever procedure they tried, and whether it was supported by the literature or their training.

AI can't explain how it got its answer. The hidden layers are just that -- hidden.

This is not to deny "hidden layers" in the human brain -- implicit bias, all kinds of fun things.

But that's a separate problem.
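The opacity point above can be made concrete with a toy network. The weights here are random and purely illustrative: after a forward pass, the only record of "why" the model scored a case the way it did is a row of hidden activations, with no rationale attached that anyone could cross-examine.

```python
# Tiny two-layer network: the decision leaves behind only numbers.
import numpy as np

rng = np.random.default_rng(1)

W1 = rng.normal(size=(4, 16))        # input -> hidden weights
W2 = rng.normal(size=(16, 1))        # hidden -> output weights

x = rng.normal(size=(1, 4))          # one "case" with 4 features

hidden = np.maximum(0.0, x @ W1)     # ReLU hidden layer
score = (hidden @ W2).item()         # the model's "answer"

# Everything available to explain the decision:
print("score:", round(score, 3))
print("hidden activations:", np.round(hidden, 2))
```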

#11 | Posted by snoofy at 2026-02-10 05:34 PM | Reply


