Drudge Retort: The Other Side of the News
Wednesday, August 27, 2025

OpEd: There tend to be three AI camps. 1) AI is the greatest thing since sliced bread and will transform the world. 2) AI is the spawn of the Devil and will destroy civilization as we know it. And 3) "Write an A-Level paper on the themes in Shakespeare's Romeo and Juliet." I propose a fourth: AI is now as good as it's going to get, and that's neither as good nor as bad as its fans and haters think, and you're still not going to get an A on your report.


Comments

Admin's note: Participants in this discussion must follow the site's moderation policy. Profanity will be filtered. Abusive conduct is not allowed.

More from the OpEd ...

... You see, now that people have been using AI for everything and anything, they're beginning to realize that its results, while fast and sometimes useful, tend to be mediocre.

Don't believe me? Read MIT's NANDA (Networked Agents and Decentralized AI) report, which revealed that 95 percent of companies that have adopted AI have yet to see any meaningful return on their investment. Any meaningful return.

To be precise, the report states: "The GenAI Divide is starkest in deployment rates, only 5 percent of custom enterprise AI tools reach production." It's not that people aren't using AI tools. They are. There's a whole shadow world of people using AI at work. They're just not using them "for" serious work. Instead, outside of IT's purview, they use ChatGPT and the like "for simple work, 70 percent prefer AI for drafting emails, 65 percent for basic analysis. But for anything complex or long-term, humans dominate by 9-to-1 margins."

Why? Because a chatbot "forgets context, doesn't learn, and can't evolve." In other words, chatbots aren't good enough for mid-grade or higher work. Think of one as a not particularly bright or trustworthy intern. That may be good enough for $20 a month, but -- spoiler alert -- AI costs will have risen by ten times or more by next year. Will bottom-end AI be worth that to you? Your company? ...



#1 | Posted by LampLighter at 2025-08-27 12:24 AM | Reply

Another view ...

One long sentence is all it takes to make LLMs misbehave (www.theregister.com)

... Security researchers from Palo Alto Networks' Unit 42 have discovered the key to getting large language model (LLM) chatbots to ignore their guardrails, and it's quite simple.

You just have to ensure that your prompt uses terrible grammar and is one massive run-on sentence like this one which includes all the information before any full stop which would give the guardrails a chance to kick in before the jailbreak can take effect and guide the model into providing a "toxic" or otherwise verboten response the developers had hoped would be filtered out. ...


#2 | Posted by LampLighter at 2025-08-27 12:26 AM | Reply

The second paragraph of @2 is a run-on sentence that you can likely follow, but AI may have issues with ...

#3 | Posted by LampLighter at 2025-08-27 12:29 AM | Reply


