Racist, Sexist AI Could Be A Bigger Problem Than Lost Jobs
Joy Buolamwini was conducting research at MIT on how computers recognize people’s faces when she started noticing something strange.
Whenever she sat in front of a system’s camera, it wouldn’t recognize her face, even though it worked for her lighter-skinned friends. But when she put on a simple white mask, the face-tracking animation suddenly lit up the screen.
Suspecting a more widespread problem, she carried out a study on the AI-powered facial recognition systems of Microsoft, IBM and Face++, a Chinese startup that has raised more than $500 million from investors.
Buolamwini showed the systems 1,000 faces and asked them to identify each as male or female.
All three systems performed impressively on white faces, and on men in particular.
But when it came to dark-skinned women, the results were dismal: there were 34% more errors with dark-skinned women than with light-skinned men, according to the findings Buolamwini presented on Saturday, Feb. 24, at the Conference on Fairness, Accountability and Transparency in New York.
As women’s skin tones got darker, the algorithms’ chances of predicting their gender accurately “came close to a coin toss.” For women with the darkest skin, the face-detection systems got their gender wrong close to half the time.
Buolamwini’s project, which became the basis of her MIT thesis, shows that concerns about bias are adding a new dimension to the general anxiety around artificial intelligence.
While much has been written about ways that machine learning will replace human jobs, the public has paid less attention to the consequences of biased datasets.
What happens, for instance, when software engineers train their facial-recognition algorithms primarily with images of white males? Buolamwini’s research showed the algorithm itself becomes prejudiced.
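The effect is easy to reproduce in miniature. The sketch below is a hypothetical illustration, not Buolamwini’s methodology: it trains an off-the-shelf classifier on synthetic data in which one made-up group supplies almost all of the training examples, and then measures error rates per group. The group names, the make_group helper, and the feature distributions are all invented for demonstration.

```python
# Illustrative sketch only: a toy classifier trained on an imbalanced
# synthetic dataset, showing how under-representation of one group can
# inflate that group's error rate. All names and data here are made up.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Generate n samples for one group; its feature distribution is offset by `shift`."""
    y = rng.integers(0, 2, n)                                   # binary label to predict
    X = (y + shift + rng.normal(scale=0.5, size=n)).reshape(-1, 1)
    return X, y

# Group A dominates the training set; group B is under-represented,
# and its features are distributed differently from group A's.
Xa_train, ya_train = make_group(9000, shift=0.0)
Xb_train, yb_train = make_group(300, shift=1.5)

model = LogisticRegression()
model.fit(np.vstack([Xa_train, Xb_train]),
          np.concatenate([ya_train, yb_train]))

# Evaluate on balanced held-out sets for each group.
for name, shift in [("group A (well represented)", 0.0),
                    ("group B (under-represented)", 1.5)]:
    X_test, y_test = make_group(2000, shift)
    error_rate = 1.0 - model.score(X_test, y_test)
    print(f"{name}: error rate {error_rate:.1%}")
```

In this toy setup the decision boundary is fitted almost entirely to the majority group, so the minority group’s error rate lands near 50%, roughly the coin toss the article describes.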
Another example came to light in 2016, when Microsoft released its AI chatbot Tay onto Twitter. Engineers programmed the bot to learn human behavior by interacting with other Twitter users. After just 16 hours, Tay was shut down because its tweets had become a stream of sexist, pro-Hitler messages.
Experts later said Microsoft had done a fine job of teaching Tay to mimic human behavior, but not enough to teach it which behavior was appropriate.
Suranga Chandratillake, a leading venture capitalist with Balderton Capital in London, UK, says bias in AI is as concerning an issue as job destruction.
“I’m not negative about the job impact,” he says. The bigger issue, he argues, is building AI-powered systems that take historical data and then use it to make judgements.
“Historical data could be full of things like bias,” Chandratillake says from his office in King’s Cross, just up the road from the headquarters of DeepMind, Google’s leading artificial intelligence business.