
Drudge Retort: The Other Side of the News
Tuesday, August 03, 2021

Paul Wallis: Recognition of the AI entity DABUS as an inventor is a gigantic, uncertain step into the legal status of artificial intelligence, and it's a major future issue for the law. It's also the very first nano-step toward recognizing artificial intelligence as a person.


Comments

Admin's note: Participants in this discussion must follow the site's moderation policy. Profanity will be filtered. Abusive conduct is not allowed.

Isn't it just an extension of the programmer of the AI at this point?

#1 | Posted by kwrx25 at 2021-08-02 06:00 PM | Reply

So, when do we get to sue an AI?

#2 | Posted by sixam at 2021-08-03 10:57 AM | Reply

@#1 ... Isn't it just an extension of the programmer of the AI at this point? ...

It's a subtle distinction. Maybe, maybe not.

A computer program is written to do a specific task. It definitely is an extension of the programmer.

An AI program is programmed to learn how to do tasks. What it learns may not have anything to do with the programmer.

It is that learning ability that makes AI programs different from regular programs.

Of course, there is a lot of debate about that difference and about the point you raise, which is why it is both significant and questionable whether DABUS is the actual owner of the patent in question.
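
A minimal sketch of that distinction, in Python with numpy (the temperature-conversion example is illustrative, not from the article): a conventional program states its rule outright, while a learned model recovers the rule from example data.

    import numpy as np

    # Conventional program: the programmer states the rule directly.
    def c_to_f(celsius):
        return celsius * 9 / 5 + 32

    # "Learning" version: only (input, output) examples are supplied,
    # and least squares recovers the slope and intercept from the data.
    celsius = np.array([0.0, 10.0, 20.0, 30.0, 40.0])
    fahrenheit = np.array([32.0, 50.0, 68.0, 86.0, 104.0])
    X = np.column_stack([celsius, np.ones_like(celsius)])
    slope, intercept = np.linalg.lstsq(X, fahrenheit, rcond=None)[0]

    print(c_to_f(25.0))              # 77.0, by the stated rule
    print(slope * 25.0 + intercept)  # ~77.0, by the learned rule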


#3 | Posted by LampLighter at 2021-08-03 10:58 AM | Reply

#3 I would also argue that the AI programmer(s) often can't explain why the AI made a particular decision or produced a given answer. The programmer(s) can explain the fundamentals, which are just calculus, but the actual processes are "hidden". The AI operates independently once initialized.

As a traditional coder, I can tell you exactly what my script is doing at any point in the process. At least I'd hope so! The computer is doing exactly what I tell it to do. I could also reproduce the output myself, albeit at a much slower pace. Again, at least I'd hope so!
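
A toy illustration of the "hidden process" point, assuming Python with numpy (the weights here are random stand-ins for trained ones): every line below is traceable, yet no single line explains why the network's output has the value it does.

    import numpy as np

    rng = np.random.default_rng(0)
    W1 = rng.normal(size=(4, 16))  # stand-ins for trained weight matrices
    W2 = rng.normal(size=(16, 1))

    def network(x):
        hidden = np.maximum(0, x @ W1)  # ReLU hidden layer
        return hidden @ W2              # linear output layer

    x = np.array([0.2, -1.0, 0.5, 0.3])
    # The output is perfectly reproducible, but the "reason" for it is
    # smeared across all the numbers in W1 and W2, not any one statement.
    print(network(x))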

#4 | Posted by horstngraben at 2021-08-03 12:11 PM | Reply

#4 You can't put an AI on the witness stand and ask it why it did something.

They work, but that's a huge hurdle to overcome before they can be integrated into society. Or maybe we don't want AIs in charge of things, even if it's plain to see they usually do a better job than humans.

#5 | Posted by snoofy at 2021-08-03 12:12 PM | Reply | Newsworthy 1

So who is responsible for the actions of an AI?

#6 | Posted by WhoDaMan at 2021-08-03 12:19 PM | Reply | Newsworthy 1

@#6

IBM has been playing in the AI field for a while, longer than many. Here is their view...

Accountability
www.ibm.com

...AI designers and developers are responsible for considering AI design, development, decision processes, and outcomes....

Human judgment plays a role throughout a seemingly objective system of logical decisions. It is humans who write algorithms, who define success or failure, who make decisions about the uses of systems and who may be affected by a system's outcomes.

Every person involved in the creation of AI at any step is accountable for considering the system's impact in the world, as are the companies invested in its development....


#7 | Posted by LampLighter at 2021-08-03 12:37 PM | Reply

#3 | Posted by LampLighter

I would say an AI program, better described in my book as a learning algorithm, is programmed to do specific tasks and to improve at those tasks. So much of what is sold as "AI" is not even that. It is algorithms enhanced by an AI and repackaged into sellable bundles. These bundles receive updates from the AI owners as they see fit.

I am 100% against patenting anything generated by an AI, and certainly against the AI itself holding the patent.

#8 | Posted by GalaxiePete at 2021-08-03 12:56 PM | Reply | Newsworthy 2

It seems the article is conflating "AI" with both simple neural networks (which only receive controlled training data fed to them by a programmer) and mature autonomous agents capable of constantly learning "in the wild" at the same level humans do (which are still a long way off). All "AI" have hard-coded parameters that limit the agent's domain of capacities.
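
A hypothetical sketch of what those hard-coded limits look like in practice (the label set, input size, and toy_model stand-in are invented for illustration): no amount of training lets the agent answer outside the closed world its developers fixed up front.

    LABELS = ("cat", "dog", "bird")  # closed label set, chosen by humans
    INPUT_SIZE = 64 * 64             # fixed input dimensionality

    def classify(pixels, scores_fn):
        assert len(pixels) == INPUT_SIZE  # anything else is rejected
        scores = scores_fn(pixels)        # the learned part plugs in here
        # The answer can only ever be one of LABELS; "airplane" is
        # outside this agent's domain of capacities by construction.
        return max(zip(scores, LABELS))[1]

    toy_model = lambda px: [0.1, 0.7, 0.2]  # stand-in for a trained model
    print(classify([0.0] * INPUT_SIZE, toy_model))  # -> "dog"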

#9 | Posted by sentinel at 2021-08-03 01:00 PM | Reply

"Every person involved in the creation of AI at any step is accountable for considering the system's impact in the world, as are the companies invested in its development...."

It's funny how corporations like that say every person involved in a creation is responsible (liable) for the creation's impact in the world, but the corporation itself usually takes a 100% stake in the patent.

#10 | Posted by sentinel at 2021-08-03 01:03 PM | Reply

@#8 ... So much of what is sold as "AI" is not even that. ...

Agreed.

But there is also AI that is quite scary in what it can learn to do....

#11 | Posted by LampLighter at 2021-08-03 01:16 PM | Reply

@#9 ... All "AI" have hard coded parameters that limit the agent's domain of capacities. ...

I do not necessarily agree with that statement.

Such hard coding would restrict the use of AI.


#12 | Posted by LampLighter at 2021-08-03 01:17 PM | Reply

An AI program is programmed to learn how to do tasks. What it learns may not have anything to do with the programmer.

#3 | POSTED BY LAMPLIGHTER AT 2021-08-03 10:58 AM | FLAG:

This is a collection of neural nets. There's no self-recognition to constitute sentience. The court botched this one badly, but it's an understandable botch on their part.

#13 | Posted by sitzkrieg at 2021-08-03 01:35 PM | Reply

"Such hard coding would restrict the use of AI."

As it should be. AI is just a tool applied to specific uses. It's not something that can evolve on its own into something at the same level or scope as a human unless it's both programmed to have that capacity and given the environment to do so. Not harming human beings is something that should be hard-coded into the subsumption architecture of all AI.
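
A minimal sketch of that subsumption idea (the layer names and sensor fields are hypothetical): behaviors are stacked in priority order, and a hard-coded safety layer at the top can always override the task layers below it.

    def avoid_harming_humans(sensors):
        if sensors.get("human_in_path"):
            return "stop"  # safety layer: always wins when it fires
        return None        # otherwise defer to lower layers

    def do_task(sensors):
        return "move_forward"  # ordinary task behavior

    LAYERS = [avoid_harming_humans, do_task]  # highest priority first

    def control(sensors):
        for layer in LAYERS:
            action = layer(sensors)
            if action is not None:  # first layer with an opinion wins
                return action

    print(control({"human_in_path": True}))  # -> "stop"
    print(control({}))                       # -> "move_forward"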

#14 | Posted by sentinel at 2021-08-03 01:38 PM | Reply

#14 Asimov's rules may be impossible to implement, and they depend on the value system that determines what counts as 'harm' to humans.

My simple understanding of ML is that the learning algo takes in training data to build a model that produces outputs from inputs.
I fully expect ML to be able to ingest popular music or accepted patents and then predict ones that don't exist yet.
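
As a toy version of that idea, in plain Python (the note sequences are invented): a Markov chain "ingests" a couple of melodies and then emits one that never appeared verbatim in the training data.

    import random
    from collections import defaultdict

    songs = [["C", "E", "G", "E", "C"], ["C", "D", "E", "D", "C"]]

    transitions = defaultdict(list)
    for song in songs:
        for a, b in zip(song, song[1:]):
            transitions[a].append(b)  # model: which note tends to follow which

    note, melody = "C", ["C"]
    for _ in range(6):
        note = random.choice(transitions[note])
        melody.append(note)

    print(melody)  # a "new" melody the model predicted rather than memorized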

I don't consider that close to AI.

#15 | Posted by bored at 2021-08-03 01:51 PM | Reply

@#14 ... It's not something that can evolve on its own into the something the same level or scope as a human unless it's both programmed to have that capacity and given the environment to do so. ...

So we agree, then.

... Not harming human beings is something that should be hardcoded into the subsumption architecture of all AI. ...

I note your use of "should be." Again, we agree.

But I'll also say that "should be" does not mean "will be."



#16 | Posted by LampLighter at 2021-08-03 01:54 PM | Reply

My simple understanding of ML is that the learning algo takes in training data to build a model that produces outputs from inputs.
I fully expect ML to be able to ingest popular music or accepted patents and then predict ones that don't exist yet.
I don't consider that close to AI.

#15 | POSTED BY BORED AT 2021-08-03 01:51 PM | FLAG:

That's accurate for the most part. Predictions can be shaky. I fed a video of LARPers to Azure Video Analysis and, as best it could tell, based on the many terabytes of video it has analyzed since its inception, what was going on was definitely an LGBTQ+ Reality Television Show.

ML is a type of AI, though. "AI" doesn't mean smart; it just means artificial.

#17 | Posted by sitzkrieg at 2021-08-03 03:33 PM | Reply

Well, that's a very fancy if-then-else loop.
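
For what it's worth, a small tree model really is reducible to nested if/else. A sketch assuming scikit-learn is installed:

    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier, export_text

    iris = load_iris()
    clf = DecisionTreeClassifier(max_depth=2, random_state=0)
    clf.fit(iris.data, iris.target)

    # Prints the learned model as literal threshold comparisons,
    # i.e., the "very fancy if-then-else".
    print(export_text(clf, feature_names=list(iris.feature_names)))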

#18 | Posted by GOnoles92 at 2021-08-03 04:55 PM | Reply

