
Drudge Retort: The Other Side of the News
Sunday, May 15, 2022

There are competing notions of fairness -- and sometimes they're totally incompatible with each other.


Comments

Admin's note: Participants in this discussion must follow the site's moderation policy. Profanity will be filtered. Abusive conduct is not allowed.

More from the article...

...Let's play a little game. Imagine that you're a computer scientist. Your company wants you to design a search engine that will show users a bunch of pictures corresponding to their keywords -- something akin to Google Images.

On a technical level, that's a piece of cake. You're a great computer scientist, and this is basic stuff! But say you live in a world where 90 percent of CEOs are male. (Sort of like our world.) Should you design your search engine so that it accurately mirrors that reality, yielding images of man after man after man when a user types in "CEO"? Or, since that risks reinforcing gender stereotypes that help keep women out of the C-suite, should you create a search engine that deliberately shows a more balanced mix, even if it's not a mix that reflects reality as it is today?
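
A rough Python sketch of the two policies being contrasted -- rank by relevance alone, or re-rank toward a target mix. The ImageResult type and the group labels here are illustrative assumptions, not any real search API:

    from dataclasses import dataclass

    @dataclass
    class ImageResult:
        url: str
        score: float   # relevance score from the ranking model
        group: str     # e.g. "man" or "woman", as labeled upstream

    def mirror_reality(results):
        # Policy 1: pure relevance ordering reproduces whatever skew
        # exists in the underlying data (90 percent men, in the example).
        return sorted(results, key=lambda r: r.score, reverse=True)

    def balanced_mix(results, target_share=0.5):
        # Policy 2: greedily interleave groups so that every prefix of
        # the list stays close to target_share, at some cost in raw
        # relevance.
        ranked = mirror_reality(results)
        women = [r for r in ranked if r.group == "woman"]
        men = [r for r in ranked if r.group != "woman"]
        out, women_shown = [], 0
        while women or men:
            want_woman = women_shown < target_share * (len(out) + 1)
            if (want_woman and women) or not men:
                out.append(women.pop(0))
                women_shown += 1
            else:
                out.append(men.pop(0))
        return out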

This is the type of quandary that bedevils the artificial intelligence community, and increasingly the rest of us -- and tackling it will be a lot tougher than just designing a better search engine.

Computer scientists are used to thinking about "bias" in terms of its statistical meaning: A program for making predictions is biased if it's consistently wrong in one direction or another. (For example, if a weather app always overestimates the probability of rain, its predictions are statistically biased.) That's very clear, but it's also very different from the way most people colloquially use the word "bias" -- which is more like "prejudiced against a certain group or characteristic." ...
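
In code, the statistical sense is just a signed average error. A toy sketch with invented numbers:

    # Five days of forecasts from the hypothetical weather app.
    forecasts = [0.70, 0.60, 0.80, 0.50, 0.90]  # predicted probability of rain
    outcomes  = [1, 0, 1, 0, 1]                 # 1 = it rained, 0 = it didn't

    mean_error = sum(f - o for f, o in zip(forecasts, outcomes)) / len(outcomes)
    print(f"mean error: {mean_error:+.2f}")
    # +0.10 -- the errors skew one way (rain is systematically
    # overestimated), so the app is statistically biased even though
    # no group of people is involved at all.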


#1 | Posted by LampLighter at 2022-05-15 02:08 PM | Reply

Another view...

U.S. warns of discrimination in using artificial intelligence to screen job candidates
www.npr.org

...The federal government said Thursday that artificial intelligence technology to screen new job candidates or monitor worker productivity can unfairly discriminate against people with disabilities, sending a warning to employers that the commonly used hiring tools could violate civil rights laws.

The U.S. Justice Department and the Equal Employment Opportunity Commission jointly issued guidance to employers to take care before using popular algorithmic tools meant to streamline the work of evaluating employees and job prospects -- but which could also potentially run afoul of the Americans with Disabilities Act.

"We are sounding an alarm regarding the dangers tied to blind reliance on AI and other technologies that we are seeing increasingly used by employers," Assistant Attorney General Kristen Clarke of the department's Civil Rights Division told reporters Thursday. "The use of AI is compounding the longstanding discrimination that jobseekers with disabilities face."...


#2 | Posted by LampLighter at 2022-05-15 02:10 PM | Reply

Yet another view...

Google Introduces Monk Skin Tone Scale to Improve AI Color Equity
www.extremetech.com

...When it comes to trailblazers in the field of color equity, Google doesn't grace the top of many lists. But there's a contingent within the company trying to change that. At its I/O 2022 conference, Google introduced a tool it intends to use to improve color equity through representation. It's a set of ten color swatches that correspond to human skin tones, running the whole gamut from very light to very dark. And Google open sourced it on the spot.
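
As a rough sketch of how a ten-swatch scale could be applied, snap an observed pixel color to its nearest swatch. The hex values below are placeholders standing in for a light-to-dark ramp, not Google's published Monk Skin Tone swatches:

    # PLACEHOLDER swatches, light to dark -- not the real MST values.
    SWATCHES = ["#f7ede4", "#f3e7db", "#f0d5be", "#eabd9f", "#d7a380",
                "#a57257", "#825c43", "#604134", "#3a2e27", "#27211e"]

    def hex_to_rgb(h):
        h = h.lstrip("#")
        return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

    def nearest_swatch(pixel_hex):
        # Return the 1-10 index of the closest swatch. Squared RGB
        # distance keeps the sketch short; a real system would compare
        # colors in a perceptual space such as CIELAB instead.
        px = hex_to_rgb(pixel_hex)
        def dist(s):
            return sum((a - b) ** 2 for a, b in zip(px, hex_to_rgb(s)))
        return min(range(len(SWATCHES)), key=lambda i: dist(SWATCHES[i])) + 1

    print(nearest_swatch("#b98a6d"))  # lands mid-scale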

Fairness is a major problem in machine learning. It's already difficult enough to reduce human values to an algorithm. But there are different kinds of fairness -- twenty-one or more, according to one researcher. Statistical fairness is not the same as procedural fairness, which is not the same as allocational fairness. What do we do when different definitions of fairness are mutually exclusive? Instead of trying to write one formula to rule them all, Google has taken a different approach: "Start where you are."
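
A toy illustration of that mutual exclusivity, with invented numbers: once two groups have different base rates, a model can equalize true positive rates or equalize selection rates, but not both.

    # Group A: 50 of 100 qualified; model selects 45 of them plus 5 others.
    # Group B: 20 of 100 qualified; model selects 18 of them plus 2 others.
    def rates(sel_pos, sel_neg, total_pos, total_neg):
        selection_rate = (sel_pos + sel_neg) / (total_pos + total_neg)
        true_positive_rate = sel_pos / total_pos
        return selection_rate, true_positive_rate

    for name, args in [("A", (45, 5, 50, 50)), ("B", (18, 2, 20, 80))]:
        sel, tpr = rates(*args)
        print(f"group {name}: selected {sel:.0%}, TPR {tpr:.0%}")

    # group A: selected 50%, TPR 90%
    # group B: selected 20%, TPR 90%
    # Equal opportunity (equal TPR) holds while demographic parity
    # (equal selection rates) fails; forcing the selection rates to
    # match would push the TPRs apart.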

Where we are is in a state of desperately unequal digital representation.

Google is the largest search purveyor on the planet, by a long shot. Run an incognito search on Google Images for "CEO," and what you get is a sea of white male faces, two of whom are Elon Musk.

Search for "woman," and it's absolutely true that the results skew young, slender, white, able-bodied.

But one of the faces the search returned was a deepfake of a pale young woman, generated by NVidia's StyleGAN. I've written about this specific deepfake before in a different article, so it surprised me to see her face again. I had to double check that I was in incognito mode -- but I was....



#3 | Posted by LampLighter at 2022-05-15 02:13 PM | Reply

Next up: Why is it so damn hard to get people to post on my threads?

#4 | Posted by oneironaut at 2022-05-15 03:05 PM | Reply

@#4

Perhaps you care about the number of replies as a reason for posting threads (your current alias has said as much), but that is not the reason why I post threads.

Geesh, if it were, I'd post about a whole different set of topics.


#5 | Posted by LampLighter at 2022-05-15 03:31 PM | Reply

Simply put -- GIGO!

#6 | Posted by MSgt at 2022-05-15 05:57 PM | Reply

@#6 ... GIGO! ...

In some respects, that is quite true.

The AI developers seem to have taken the easy path.

In other words, they optimized for easy development rather than for accurate processing.


#7 | Posted by LampLighter at 2022-05-15 06:02 PM | Reply

Next up: Why is it so damn hard to get people to post on my threads? -- #4 | Posted by IronTurd

Does IRA pay a premium for those? Those Pootey troll-rubles just aren't worth what they used to be. Maybe insist on payment in kopiykas?

#8 | Posted by censored at 2022-05-15 06:58 PM | Reply

#7 Depends on what you mean by AI.
1. I don't know if NLP suffers from training bias.
2. I know health-issue detection from medical images was found to be biased because the model recognized the facility name printed on the image, which correlated with disease due to poor training-set design (see the sketch after this list).
3. AI judges are very good at predicting a conviction, as long as they have access to banking information...
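
A toy reconstruction of the shortcut in point 2, with invented data and scikit-learn: when one facility mostly images sick patients, the facility marker in the image becomes a falsely "diagnostic" feature that swamps the genuine signal.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 2000
    facility_a = rng.integers(0, 2, n)      # 1 if scanned at facility A
    # Facility A sees mostly diseased patients; facility B mostly healthy.
    disease = np.where(facility_a == 1,
                       rng.random(n) < 0.9,
                       rng.random(n) < 0.1).astype(int)
    pathology = rng.normal(disease, 2.0)    # weak genuine signal, heavy noise

    X = np.column_stack([facility_a, pathology])
    model = LogisticRegression().fit(X, disease)
    print(model.coef_)  # the facility coefficient dwarfs the pathology one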

#9 | Posted by bored at 2022-05-15 07:28 PM | Reply

I for one hate the term AI, as there is nothing intelligent about these systems. They are essentially probability algorithms built off large data sets. I even hate the term "learning" -- are they learning? No. They are adjusting probabilities as more data is fed to them. Not that these algorithms can't do great things, but they can also be just plain old "dumb" at times. That said...
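
A minimal sketch of that point -- the "learning" is just frequency counting, with stored probabilities shifting as data arrives:

    from collections import Counter

    counts = Counter()
    total = 0

    def observe(label):
        # Feed in one more data point; every stored "belief" shifts.
        global total
        counts[label] += 1
        total += 1

    def probability(label):
        return counts[label] / total if total else 0.0

    for label in ["spam", "ham", "spam", "spam"]:
        observe(label)

    print(probability("spam"))  # 0.75 -- adjusted, not "understood"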

The simplest explanation is that bias is everywhere, in every assumption a human makes. There have been some good TED talks, articles, and interviews that I have watched/read/listened to over the past few years. It's amazing what causes bias in a system -- often very small things. I have to run right now, but if I have time I will post a couple later.

#10 | Posted by GalaxiePete at 2022-05-16 08:56 AM | Reply | Newsworthy 1

Some YouTu.be videos of TED Talks...

Joy Buolamwini -- she's good, IMHO. I have heard interviews with her and read articles, but I honestly haven't watched her TED talk.
Kriti Sharma -- "How to keep human bias out of AI." I have seen this one, and it is educational; it shouldn't need to be, but it is.
"6 TED talks to watch on AI ethics" -- I am going to have to go through this list myself.

When something passes the Turing Test, then we are talking "AI," IMHO...

#11 | Posted by GalaxiePete at 2022-05-16 06:21 PM | Reply

Comments are closed for this entry.
