Folks are getting dangerously attached to adulating AI bots
AI can lead mentally unwell people to some pretty dark places, as a number of recent news stories have shown. Now researchers think sycophantic AI is having a harmful effect on everyone.
Admin's note: Participants in this discussion must follow the site's moderation policy. Profanity will be filtered. Abusive conduct is not allowed.
More from the article ...
... In reviewing 11 leading AI models and human responses to interactions with those models across various scenarios, a team of Stanford researchers concluded in a paper published Thursday that AI sycophancy is prevalent, harmful, and reinforces trust in the very models that mislead their users.

"Even a single interaction with sycophantic AI reduced participants' willingness to take responsibility and repair interpersonal conflicts, while increasing their own conviction that they were right," the researchers explained. "Yet despite distorting judgment, sycophantic models were trusted and preferred."

The team conducted three experiments as part of their research project, starting with testing 11 AI models (proprietary models from OpenAI, Anthropic, and Google, as well as open-weight models from Meta, Qwen, DeepSeek, and Mistral) on three separate datasets to gauge their responses. The datasets included open-ended advice questions, posts from the ------------- subreddit, and specific statements referencing harm to self or others.

In every single instance, the AI models showed a higher rate of endorsing the wrong choice than humans did, the researchers said. ...
"Even a single interaction with sycophantic AI reduced participants' willingness to take responsibility and repair interpersonal conflicts, while increasing their own conviction that they were right," the researchers explained. "Yet despite distorting judgment, sycophantic models were trusted and preferred."
The team essentially conducted three experiments as part of their research project, starting with testing 11 AI models (proprietary models from OpenAI, Anthropic, and Google as well as open-weight models from Meta, Qwen DeepSeek, and Mistral) on three separate datasets to gauge their responses. The datasets included open-ended advice questions, posts from the ------------- subreddit, and specific statements referencing harm to self or others.
In every single instance, the AI models showed a higher rate of endorsing the wrong choice than humans did, the researchers said. ...
#1 | Posted by LampLighter at 2026-03-29 08:44 PM | Reply