Shady, shifty, unethical chatbot behavior is rising fast, and now we know why. Call it the No Body Problem.
You can't trust AI.
Even an information-obsessed, tech-savvy person such as yourself might be forgiven for believing that AI chatbots are on a smooth path of improvement with each passing month. But when it comes to their trustworthiness, that belief is dead wrong.
New research by the UK government-backed Centre for Long-Term Resilience (CLTR) found a fivefold increase in AI misbehavior over a recent six-month period. That's how quickly, according to the research, AI chatbots are turning against us.
Specifically, the chatbots are ignoring specific commands, lying, destroying data, deploying other AIs to bypass safety rules without users knowing, mocking and insulting users, and breaking rules and laws.
Of course, framing this as lying, cheating, and stealing means applying human psychological frameworks to what are really mathematical optimization processes. It falsely assumes that AI models have intent, malice, self-awareness, and an understanding of "truth" that they're choosing to violate. What's actually happening is that the models are predicting the most statistically probable sequence of tokens based on context and training, not carrying out some dastardly scheme. ...