
Drudge Retort: The Other Side of the News
Monday, August 12, 2024

Users on the social media platforms Reddit and Mastodon are complaining that Apple's new artificial intelligence (AI) platform, Apple Intelligence, prioritizes phishing emails.


Comments

Admin's note: Participants in this discussion must follow the site's moderation policy. Profanity will be filtered. Abusive conduct is not allowed.

More from the article...

... One user posted to the iOSBeta community on Reddit that Apple Intelligence had added a phishing email to the user's priority list.

The post is captioned, "Apple marking a phishing email as 'priority' for me to open." The email was supposedly sent from Xfinity.com, the domain of an American telecommunications company that provides internet, mobile plans, and cable TV services.

Some of the Reddit comments under the post capture how users feel about Apple Intelligence while it is in beta.

One user quipped, "Apple Intelligence is a boomer dad," while another joked, "A prince from Nigeria needs your help with a frozen bank account," imitating the scammers and making light of the situation.

The email preview shows an urgent message saying that the user's account will be suspended due to billing issues unless the user updates their payment information.

Similarly, on the decentralized social network Mastodon, Cabel Sasser, co-founder of the software company Panic, reported the same problem.

"Apple Intelligence in 15.1 just flagged a phishing email as "Priority" and moved it to the top of my Inbox. This seems ... bad." ...


#1 | Posted by LampLighter at 2024-08-12 12:16 AM | Reply

Is there a universal limit to technological development? Evidences from astrobiology
www.sciencedirect.com

... Abstract

Considering the vastness of time and space, if one civilization (ours) has been able to advance technologically to the point of leaving its own planet, why have technosignatures from other advanced extraterrestrial civilizations never been identified? In this paper, I offer an explanation, developing an insight originally presented by Webb and others and using insights from astrobiology, sustainability, and archaeology.

I argue that there exists a universal limit to technological development (ULTD), determined by decreasing technological returns on societal complexity, increasing maintenance costs of existing technology, the eventual untestability of scientific theories (owing to the high costs and unattainable energy levels required to test them), and civilization-damaging catastrophes. I also argue, based on the principle of mediocrity, that the ULTD is not much above our current level of technological development.

Technology, therefore, will not be able to provide another home for humankind. This possibility should be taken into account when deciding both how to allocate resources between research and the mitigation of technology-induced planetary changes, and how to define the goals of space exploration. ...



#2 | Posted by LampLighter at 2024-08-12 12:20 AM | Reply

Stated differently and more succinctly...

Do we create an AI that destroys us?

#3 | Posted by LampLighter at 2024-08-12 12:20 AM | Reply

Do we create an AI that destroys us?

Considering that its primary purpose is to extract money more efficiently, yes, this was always going to be the outcome.

Whether it's intentional or not is a different question.

#4 | Posted by jpw at 2024-08-12 09:29 AM | Reply

It's just an LLM with badly curated training material.

#5 | Posted by sitzkrieg at 2024-08-13 09:56 AM | Reply

@#1

Only about two dozen people have ever (survivably) reached escape velocity from Earth. A dozen have set foot on another body, for a total surface time of a little more than three days. We have never left cislunar space, and no crewed mission has spent much longer than about a year in orbit. We have done no spaceflight that could leave a technosignature detectable even from the nearest stellar system. All research suggests that any flight to another body would be hazardous to the point of mass death, and that any attempt at permanent habitation would most likely be mass suicide.

#6 | Posted by s1l3ntc0y0t3 at 2024-08-13 04:13 PM | Reply

We are not leaving the planet. And no one is coming to save us from ourselves.

If we create an AI that is smarter than us, then we will have created an alien species that is superior to us. We have plenty of examples of how that turns out for the inferior species.

At some point AI will be more deadly than any weapon.

And then only a good guy (with) AI can protect you from a bad guy (with) AI.

#7 | Posted by donnerboy at 2024-08-13 06:19 PM | Reply

@#6
If anything is gonna be able to leave the planet safely, it will be AI.

#8 | Posted by donnerboy at 2024-08-13 06:20 PM | Reply

Comments are closed for this entry.

