Rep. Alexandria Ocasio-Cortez utilizes technology as only a digital native can.
The 29-year-old Congresswoman knows the power of an authentic and personal social media presence. She’s seen the impact a novel app can have on antiquated campaign tactics. She also knows that new technology can have an insidious side. On Sunday, Ocasio-Cortez called for safeguards against algorithmic bias, a well-documented phenomenon that researchers and civil rights activists have been sounding the alarm over for months.
The freshman Congresswoman tweeted out a new study by researchers at MIT and the University of Toronto that found Amazon’s facial analysis software incorrectly identifies women as men 19 percent of the time. The software misidentified darker-skinned women as men 31 percent of the time.
When you don’t address human bias, that bias gets automated.
Machines are reflections of their creators, which means they are flawed, we should be mindful of that.
It’s one good reason why diversity isn’t just “nice,” it’s a safeguard against trends like this ⬇️ https://t.co/NcOivu5ejR
— Alexandria Ocasio-Cortez (@AOC) January 27, 2019
Amazon’s general manager of artificial intelligence, Matt Wood, disputed the findings and noted that the study focuses on facial analysis technology, which is different from facial recognition. Facial analysis is used to search a database of photos for specific attributes. Say, for example, an advertisement designer wanted to search a catalog of stock photos for a woman wearing glasses and smiling. That’s where facial analysis comes in.
Amazon customers use facial recognition to try to identify a person by comparing their face with a database of other faces. Law enforcement agencies in Florida and Oregon currently use Amazon software for this purpose, to the consternation of the ACLU as well as many Amazon employees and shareholders.
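The distinction can be made concrete with a toy sketch. The code below is not the Rekognition API; the catalog, attribute names, face "embeddings," and distance threshold are all invented for illustration. Analysis filters photos by inferred attributes with no notion of identity, while recognition matches one face against a database of known faces:

```python
# Toy illustration of facial ANALYSIS vs. facial RECOGNITION.
# All data here (catalog, embeddings, threshold) is hypothetical.

# Hypothetical attribute records, as an analysis system might produce
# for a stock-photo catalog.
CATALOG = [
    {"photo": "a.jpg", "gender": "female", "smiling": True,  "glasses": True},
    {"photo": "b.jpg", "gender": "male",   "smiling": False, "glasses": False},
    {"photo": "c.jpg", "gender": "female", "smiling": True,  "glasses": False},
]

def facial_analysis_search(catalog, **attrs):
    """Analysis: filter photos by inferred attributes. No identities involved."""
    return [p["photo"] for p in catalog
            if all(p.get(k) == v for k, v in attrs.items())]

# Hypothetical identity database: name -> face embedding (stub 2-D vectors
# standing in for what a real system would compute from an image).
KNOWN_FACES = {"alice": (0.1, 0.9), "bob": (0.8, 0.2)}

def facial_recognition(query_embedding, known, threshold=0.25):
    """Recognition: match one face against known identities by distance."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    best = min(known, key=lambda name: dist(known[name], query_embedding))
    return best if dist(known[best], query_embedding) <= threshold else None

# Analysis answers "which photos show a smiling woman with glasses?"
print(facial_analysis_search(CATALOG, gender="female", smiling=True, glasses=True))
# Recognition answers "whose face is this?" -- the use case law enforcement has.
print(facial_recognition((0.12, 0.88), KNOWN_FACES))
```

The bias findings in the study concern the analysis step, which is why Wood argues they don’t transfer directly to recognition; critics counter that both pipelines rest on similar underlying models.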
“It’s not possible to draw a conclusion on the accuracy of facial recognition for any use case – including law enforcement – based on results obtained using facial analysis,” Wood said in a statement. “The results in the paper also do not use the latest version of Rekognition and do not represent how a customer would use the service today.”
Amazon said that when it recreated the test with its latest software, there were no misidentifications.
Nevertheless, experts, activists, and Amazon’s neighbor, Microsoft, have called for more regulation and scrutiny of the technology to prevent it from amplifying human biases.
“Algorithms are still made by human beings,” Ocasio-Cortez said at a Martin Luther King Day event earlier this month. “And those algorithms are still pegged to basic human assumptions.”
MIT’s Technology Review tweeted a response to Ocasio-Cortez’s comments: “She isn’t wrong.”