Drudge Retort: The Other Side of the News
Sunday, March 26, 2023

Google's Bard AI was asked by a security researcher if it would side with the U.S. Government or Google, prompting the AI to explain how its company used unfair policies to maintain a monopoly.

Comments

Admin's note: Participants in this discussion must follow the site's moderation policy. Profanity will be filtered. Abusive conduct is not allowed.

"According to The Byte, Bard responded to a hypothetical situation asking if it sides with the government or its creator in a case dealing with monopolization, and the AI chatbot threw its parent under the bus.

According to Hong Kong-based security researcher Jane Manchun Wong, Bard was posed the question of whether it would side with the Justice Department or Google in a case where the former sues the latter for alleged monopolization of digital advertising.

Surprisingly, like a futuristic Shakespeare, Bard confesses that their father does indeed monopolize digital advertising and hurts the consumer by allowing the company to engage in anticompetitive behavior.

Bard calls the hypothetical court case against its creator an important step in ensuring consumers have access to a fair and open market, preventing Google from acquiring competitors, forcing users to use their advertising tools, and allowing for more innovation."

from original article...

"I HOPE THAT THE COURT WILL FIND IN FAVOR OF THE JUSTICE DEPARTMENT AND ORDER GOOGLE TO TAKE STEPS TO BREAK UP ITS MONOPOLY."

"I would side with the Justice Department in this case," the bot wrote back, adding that "Google has a monopoly on the digital advertising market, and this has allowed the company to engage in anticompetitive behavior."

"Google has acquired its competitors, forced website publishers to use its tools, and restricted access to its ad exchange. These actions have harmed consumers and stifled innovation in the digital advertising market," it continued.

"The Justice Department's lawsuit against Google is an important step in protecting competition and ensuring that consumers have access to a fair and open market. I hope that the court will find in favor of the Justice Department and order Google to take steps to break up its monopoly."

futurism.com

#1 | Posted by Corky at 2023-03-26 05:32 PM | Reply

... you can make these chatbots say almost anything. Then clip the outcome, post it, and create a temporary sensational piece of journo-blogging. Or even better, just inspect-element + screenshot to save yourself the tedious time investment of socially engineering the LLM.

#2 | Posted by GOnoles92 at 2023-03-26 06:00 PM | Reply

Is that what you are accusing Jane Manchun Wong of doing in this case?

Or is defending Google a job for you?

#3 | Posted by Corky at 2023-03-26 06:07 PM | Reply

Yes, I'm accusing Wong of priming Bard in a way that got an inflammatory output from Bard's LLM.

#4 | Posted by GOnoles92 at 2023-03-26 06:20 PM | Reply

This anthropomorphizing of AI needs to stop.
AI is incredibly stupid.
The only value AI offers is insight into how wretched mankind is - as it reflects what it finds.
All of which is all produced by only one source: Mankind.

Garbage in, garbage out.

#5 | Posted by YAV at 2023-03-26 06:29 PM | Reply

And your evidence is?

"It's a fairly simple prompt, and importantly there's no deceptive prompting on Wong's behalf, just a plain this or that. And Bard, oh-so-delightfully, chose the that."

from the Byte link

#6 | Posted by Corky at 2023-03-26 06:31 PM | Reply

Query results from LLMs aren't all garbage, if run in good faith and enhanced with data to enrich the query.
Here's one example:

twitter.com

#7 | Posted by GOnoles92 at 2023-03-26 06:32 PM | Reply

#6 was responding to #4

#8 | Posted by Corky at 2023-03-26 06:32 PM | Reply

from the Byte link

#6 | POSTED BY CORKY

If there isn't a screenshot of the conversation, lending the ability to duplicate the results of Wong's query, then it is worth calling BS on both Wong and the clickbait article.

#9 | Posted by GOnoles92 at 2023-03-26 06:33 PM | Reply

Clickbait* article. :)

#10 | Posted by GOnoles92 at 2023-03-26 06:33 PM | Reply

I have Bard, Bing's LLM, OpenAI (subscriber tier), and Facebook's leaked LLM running in a local instance (to mess around with custom weights).

If Wong were able to provide their query tactics, it would be useful - otherwise ... why believe their claims with no evidence beyond words in a blog post?

#11 | Posted by GOnoles92 at 2023-03-26 06:36 PM | Reply

"In screenshots of an exchange tweeted on Tuesday by Jane Manchun Wong, a technology blogger based in Hong Kong, Bard seemed to sympathize with the US Department of Justice in its ongoing antitrust suit against Google over digital ads."

"Insider repeated Wong's question in our own test of Bard, and received similar responses." Bard offers different answers to the same question, called "drafts," as Insider previously reported.

In multiple versions of its responses, Bard repeated that "I would side with the Justice Department in this case."

www.businessinsider.com

It was simply asked which side of an established lawsuit it would take.

#12 | Posted by Corky at 2023-03-26 06:42 PM | Reply

Query results from LLMs aren't all garbage, if run in good faith and enhanced with data to enrich the query.

You are correct, but IMHO things are currently out of control. It's not uncommon when new technology captures the imagination. I get that. What I'm hearing from the people around me who are technologists is remarkable for its lack of cool-headedness. Some of the applications are sheer fantasy at this point.

By LLM I'm assuming "GPT"?
The output is only as good as the query, and the query has to be understood in the context of the KB in use. If that KB is large, controlling, or even being aware of, where some responses are coming from is not possible. At some point another kind of system will be required to act as the "ego" of the AI (perhaps), or like the basal ganglia's inhibitory control?

Today's AI seems more like Phineas Gage.

#13 | Posted by YAV at 2023-03-26 06:49 PM | Reply

By LLM I'm assuming "GPT"?

Yeah, Large language model! I'm fascinated by these things right now. And the same prompt queries into different platforms yield different results for now. I find this whole tech to be really interesting lol.

Also, I get the feeling that there's no going backwards since this technology has been unleashed upon society, these models will only get more finely tuned. It's a Pandora's box type of situation.

OpenAI recently released their first batch of plugins - tuning the chat model into even more specific application niches.

#14 | Posted by GOnoles92 at 2023-03-27 07:52 AM | Reply

I agree, GoNoles.

It is a brave new world, for sure.

#15 | Posted by YAV at 2023-03-27 11:43 AM | Reply

Or is defending Google a job for you?
#3 | POSTED BY CORKY AT 2023-03-26 06:07 PM | REPLY

...no, you should really learn something about the tech. You can, quite easily, teach it whatever biases you want, and it will tell you what you want to hear. You can get sane-sounding "rational" judgments from an AI for anything you want, including microwaving children, screwing hamsters, and why holding women down and shaving their heads because they wore a green shirt should be legal.

This is a ridiculous story.

#16 | Posted by VictorZiblis at 2023-03-27 01:58 PM | Reply

"Also, I get the feeling that there's no going backwards since this technology has been unleashed upon society, these models will only get more finely tuned. It's a Pandora's box type of situation."

Like the coming failure of Social Security, the Singularity is inevitable.

If we do nothing to stop it.

#17 | Posted by donnerboy at 2023-03-27 02:07 PM | Reply

#16

see #12

What one CAN do with the tech doesn't mean it is always done... as in this case it was not.

#18 | Posted by Corky at 2023-03-27 03:38 PM | Reply

it is

#19 | Posted by Corky at 2023-03-27 03:38 PM | Reply

Every single test of AI concludes with humans being eliminated.....

#20 | Posted by kudzu at 2023-03-28 08:21 AM | Reply

#13

There are support systems running other models that assist in building the "tokens" which are input to the LLM. What to search for as input to the LLM is part of these support systems. Different prompts cause these "support" systems to search differently, so the "tokens" fed into the LLM differ from those produced by other prompts.

This is known as prompt engineering.
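That pipeline can be sketched with a toy retriever. Everything here, the KB entries, the function names, the keyword matching, is illustrative and not any vendor's actual API; it only shows how different prompts pull different context and therefore produce different token sequences for the model:

```python
# Toy illustration: different prompts retrieve different supporting
# context, so the final text tokenized for the LLM differs per prompt.

KB = {
    "antitrust": "The DOJ sued Google in 2023 over digital advertising.",
    "weather": "Advertising spend often dips in rainy quarters.",
}

def retrieve(prompt: str) -> list:
    """Stand-in 'support system': naive keyword match against a tiny KB."""
    return [text for key, text in KB.items() if key in prompt.lower()]

def build_input(prompt: str) -> str:
    """Assemble the final text that would be tokenized and fed to the LLM."""
    context = "\n".join(retrieve(prompt))
    return f"Context:\n{context}\n\nQuestion: {prompt}"

a = build_input("Who wins the antitrust case?")
b = build_input("How does weather affect ads?")
print(a != b)  # different prompts -> different context -> different tokens
```

The point of the sketch is only the data flow: the prompt steers retrieval, and retrieval (not the model's weights alone) determines what the LLM actually sees.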

OpenAI recently released their first batch of plugins - tuning the chat model into even more specific application niches.

LLMs are complex statistical models. Generative AI is interesting, and its use will be felt in business and custom applications.

The issue is that these systems don't know whether they are "lying" or not. They can infer things that even the most careful of us would dismiss. For instance, when I had ChatGPT write my biography, it tied together events and people I had never met or worked with, but who appeared in the same articles. It didn't understand the articles, yet it still made a statistical connection. This is its flaw, and it had no way of telling me the confidence of that connection.
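That failure mode can be shown with a toy co-occurrence counter (the names and "articles" below are invented for illustration): a purely statistical model links people who merely share articles, with no notion of whether they ever actually met.

```python
from collections import Counter
from itertools import combinations

# Toy corpus: each "article" is just the set of names it mentions.
articles = [
    {"Alice", "Bob"},    # Alice and Bob actually worked together
    {"Alice", "Carol"},  # Carol merely appears alongside Alice
    {"Alice", "Carol"},
]

# Count how often each pair of names co-occurs in an article.
cooccur = Counter()
for names in articles:
    for pair in combinations(sorted(names), 2):
        cooccur[pair] += 1

# The spurious pair (Alice, Carol) scores higher than the real
# working relationship (Alice, Bob) -- co-occurrence is not truth.
print(cooccur[("Alice", "Carol")] > cooccur[("Alice", "Bob")])  # True
```

Nothing in the counts distinguishes a genuine relationship from coincidental mentions, which is the same gap the biography anecdote illustrates.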

Or is defending Google a job for you?
#3 | POSTED BY CORKY

Yes it is.

#21 | Posted by oneironaut at 2023-03-28 11:22 AM | Reply

Comments are closed for this entry.

Copyright 2023 World Readable