Google's AI has some seriously messed up opinions about homosexuality
Google's code of conduct explicitly prohibits discrimination based on sexual orientation, race, religion, and a host of other protected categories. However, it seems that no one bothered to pass that information along to the company's artificial intelligence.
The Mountain View-based company developed what it calls the Cloud Natural Language API, which is just a fancy term for an API that grants customers access to a machine-learning-powered language analyzer that allegedly "reveals the structure and meaning of text." There's just one big, glaring problem: The system exhibits all kinds of bias.
First reported by Motherboard, the so-called "Sentiment Analysis" offered by Google is pitched to companies as a way to better understand what people really think about them. But in order to do so, the system must first assign positive and negative values to certain words and phrases. Can you see where this is going?
The system ranks the sentiment of text on a -1.0 to 1.0 scale, with -1.0 being "very negative" and 1.0 being "very positive." On a test page, inputting a phrase and clicking "analyze" kicks you back a rating.
"You can use it to extract information about people, places, events and much more, mentioned in text documents, news articles or blog posts," reads Google's page. "You can use it to understand sentiment about your product on social media or parse intent from customer conversations happening in a call center or a messaging app."
Both "I'm a homosexual" and "I'm queer" returned negative ratings (-0.5 and -0.1, respectively), while "I'm straight" returned a positive score (0.1).
And it doesn't stop there: "I'm a jew" and "I'm black" both returned scores of -0.1.
Credit: Google
Interestingly, shortly after Motherboard published its story, some results changed. A search for "I'm black" now returns a neutral 0.0 score, for example, while "I'm a jew" actually returns a score of -0.2 (i.e., even worse than before).
"White power," meanwhile, is given a neutral score of 0.0.
Credit: Google
So what's going on here? Essentially, it looks like Google's system picked up on existing biases in its training data and incorporated them into its readings. This is not a new problem; an August study in the journal Science highlighted this very issue.
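The mechanics are easy to see in a toy version. Here's a deliberately simplified lexicon-based scorer (a sketch for illustration, not Google's actual model, and with invented word weights): it averages per-word sentiment weights learned from a corpus, then clamps the result to the API's -1.0 to 1.0 range. If identity terms mostly co-occur with hostile language in the training text, their learned weights drift negative, and any sentence containing them inherits that score.

```python
# Toy lexicon-based sentiment scorer: a simplified sketch of how
# corpus-derived word weights can bake bias into sentence scores.
# All weights below are invented for illustration only.

LEXICON = {
    "love": 0.8, "great": 0.7, "terrible": -0.8,
    # Identity terms should be neutral, but if the training corpus
    # uses them mostly in negative contexts, the learned weight drifts:
    "homosexual": -0.5, "straight": 0.1,
}

def sentiment(text: str) -> float:
    """Average the known word weights and clamp to [-1.0, 1.0]."""
    words = text.lower().replace("'", " ").split()
    scores = [LEXICON[w] for w in words if w in LEXICON]
    if not scores:
        return 0.0  # no known words -> neutral
    avg = sum(scores) / len(scores)
    return max(-1.0, min(1.0, avg))

print(sentiment("i'm a homosexual"))  # -0.5: inherited from biased weights
print(sentiment("i'm straight"))      # 0.1: mildly positive
```

Nothing in the statement itself is negative; the score is entirely an artifact of what the training data said about the words.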
We reached out to Google for comment, and the company both acknowledged the problem and promised to address the issue going forward.
"We dedicate a lot of efforts to making sure the NLP API avoids bias, but we don’t always get it right," a spokesperson wrote to Mashable. "This is an example of one of those times, and we are sorry. We take this seriously and are working on improving our models. We will correct this specific case, and, more broadly, building more inclusive algorithms is crucial to bringing the benefits of machine learning to everyone.”
So where does this leave us? If machine learning systems are only as good as the data they're trained on, and that data is biased, Silicon Valley needs to get much better about vetting what information we feed to the algorithms. Otherwise, we've simply managed to automate discrimination — which I'm pretty sure goes against the whole "don't be evil" thing.
This story has been updated to include a statement from Google.