OpEd: When AI fails, who is to blame?
I don't even know why people are asking this question. Of course the user is to blame. Here's why.
Author Info
LampLighter
Joined 2013/04/13 | Visited 2025/06/28
Status: user
More from the OpEd ...
... To state the obvious: Our species has fully entered the Age of AI. And AI is here to stay.
The fact that AI chatbots appear to speak human language has become a major source of confusion. Companies are making and selling AI friends, lovers, pets, and therapists. Some AI researchers falsely claim their AI and robots can "feel" and "think." Even Apple falsely says it's building a lamp that can feel emotion.
Another source of confusion is whether AI is to blame when it fails, hallucinates, or outputs errors that impact people in the real world. ...
Look, I'll give you the punchline in advance: The user is responsible.
AI is a tool like any other. If a truck driver falls asleep at the wheel, it's not the truck's fault. If a surgeon leaves a sponge inside a patient, it's not the sponge's fault. If a prospective college student gets a horrible score on the SAT, it's not the fault of their No. 2 pencil. ...
#1 | Posted by LampLighter at 2025-06-04 09:18 PM | Reply
Spent more time with AI in the last month than I cared to. We are talking about the things we want out of it as a company, but the executives can't grasp that we need data governance first. If you feed it tons of flawed data, you are going to get flawed results.
It's a tool, but it has a long way to go in general. Just look at Google's search results when you start to investigate something. At least half the time the AI-generated content is way off.
#2 | Posted by GalaxiePete at 2025-06-05 03:20 PM | Reply
Comments are closed for this entry.