Why Amazon’s facial analysis technology has sparked yet more outcry


Amazon’s facial analysis technology might have a woman problem. A new study from MIT and University of Toronto researchers has found that the technology tends to mistake women, especially those with dark skin, for men.

Released Thursday, the study found that the facial analysis technology mistook darker-skinned women for men 31 percent of the time. By comparison, it misidentified lighter-skinned women just 7 percent of the time, and men of any skin tone were almost never misidentified.

Facial detection software, which dates back to the late 1980s, is commercially available technology designed to identify someone from an image or video. Companies market it to consumers who want to sort through collections of images for the same face, to retailers who want to know whether customers are having a positive store experience, and in other circumstances.

But when news broke last spring that retail giant Amazon was selling its facial recognition technology to law enforcement agencies, the product became the subject of controversy. Earlier this month, a coalition of more than 85 social justice, human rights, and religious groups sent letters to Microsoft, Amazon, and Google to request that the companies not market facial detection technology to government agencies.

Google has announced that it won’t sell a facial recognition product until the technology’s potential risks are addressed, and Microsoft has acknowledged that it has a duty to ensure the technology is used responsibly. Amazon, however, continues to market its technology to government entities. It has reportedly demonstrated its facial detection technology to Immigration and Customs Enforcement and piloted its product, Rekognition, with the FBI.

Civil rights groups and public officials have questioned whether this technology could be used to surveil activists and members of marginalized groups, such as undocumented immigrants and people of color. Reports that facial recognition technology has falsely matched people, including black lawmakers, with images in a mugshot database have led to outcry that police could use it to wrongfully arrest constituents.

The finding that Amazon’s facial analysis product misidentifies dark-skinned women nearly a third of the time raises new concerns. It also taps into the wider social debate about technology and bias. Recently, Rep. Alexandria Ocasio-Cortez (D-NY) pointed out that algorithms can be racist. And those in and out of photography circles have known for years that color film was created solely with white people in mind. On one hand, the fears about Amazon’s facial analysis products are very much related to civil rights. On the other, they’re about the troubling ways that racism and technology have intersected.

Why Amazon is downplaying the MIT/University of Toronto study

The findings of the MIT and University of Toronto researchers point to how the biases of scientists can seep into the artificial intelligence they create. But Amazon takes issue with how the study framed its product, noting that the researchers examined facial analysis technology rather than facial recognition technology. Matt Wood, general manager of AI for Amazon’s cloud-computing unit, explained the difference to ABC News.

Facial analysis “can spot faces in videos or images and assign generic attributes such as wearing glasses,” according to Wood. “Recognition is a different technique by which an individual face is matched to faces in videos and images.”
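The distinction Wood draws maps onto two different kinds of API calls. Below is a minimal sketch, not Amazon’s internal method, using the AWS SDK for Python (boto3); it assumes valid AWS credentials, and the file names “probe.jpg” and “mugshot.jpg” are hypothetical placeholders.

```python
import boto3

# Minimal sketch of the analysis-vs.-recognition distinction, assuming valid
# AWS credentials; "probe.jpg" and "mugshot.jpg" are hypothetical placeholders.
client = boto3.client("rekognition", region_name="us-east-1")

with open("probe.jpg", "rb") as f:
    probe_bytes = f.read()

# Facial ANALYSIS: spot faces and assign generic attributes
# (glasses, predicted gender, and so on).
analysis = client.detect_faces(Image={"Bytes": probe_bytes}, Attributes=["ALL"])
for face in analysis["FaceDetails"]:
    print("Glasses:", face["Eyeglasses"]["Value"],
          "| predicted gender:", face["Gender"]["Value"])

# Facial RECOGNITION: match an individual face against another image,
# reporting candidate matches above a similarity threshold.
with open("mugshot.jpg", "rb") as f:
    mugshot_bytes = f.read()

recognition = client.compare_faces(
    SourceImage={"Bytes": probe_bytes},
    TargetImage={"Bytes": mugshot_bytes},
    SimilarityThreshold=80,
)
for match in recognition["FaceMatches"]:
    print("Candidate match with similarity:", match["Similarity"])
```

The MIT and University of Toronto researchers evaluated the gender attribute returned by the first kind of call, which is the basis for Amazon’s objection that its recognition feature was not the thing being tested.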

While this study may have focused on facial analysis, the ACLU and other civil rights groups have criticized Amazon’s facial recognition technology. The ACLU found that the technology falsely matched six members of the Congressional Black Caucus with people in a mugshot database.


MIT Media Lab researcher Joy Buolamwini argued that any technology built to analyze human faces should be examined for bias.

“If you sell one system that has been shown to have bias on human faces, it is doubtful your other face-based products are also completely bias free,” she wrote.

The MIT and University of Toronto researchers began their study in August, but Wood told ABC News that Amazon’s technology has been updated since then, and the company’s internal analysis has found “zero false positive matches.”

Amazon’s efforts to address concerns about its facial surveillance products

As concerns about Amazon’s facial surveillance products grow, the company appears to be engaging in a PR campaign of sorts to allay the fears of its critics. An FAQ on its website includes questions such as “Is facial recognition safe?” and “How should I apply facial recognition responsibly?” It also includes a case study about how Amazon Rekognition can be used for social good, such as fighting human trafficking. Another case study on the site points to how the technology is helping a financial services company lower the number of unbanked people in West Africa.

That said, Amazon stresses that “facial recognition should never be used in a way that violates an individual’s rights.” But the company makes it clear that it expects the government to determine the best way for law enforcement agencies to use such technology. It warns that “technology like Amazon Rekognition should only be used to narrow the field of potential matches” rather than to positively identify a suspect or perpetrator.
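Amazon’s stated guidance, narrowing a field of potential matches rather than positively identifying a suspect, corresponds roughly to querying a face collection with a high match threshold and returning a ranked shortlist for human review. The following is a hedged sketch in Python with boto3; the collection ID “known_faces” and the probe image are hypothetical, and the 99 percent threshold reflects the setting Amazon has publicly recommended for public safety uses.

```python
import boto3

client = boto3.client("rekognition", region_name="us-east-1")

with open("probe.jpg", "rb") as f:
    probe_bytes = f.read()

# "known_faces" is a hypothetical, pre-indexed face collection. A high match
# threshold plus a multi-candidate result reflects Amazon's stated guidance:
# return a narrowed field of possible matches for human review, never a
# single automated identification.
response = client.search_faces_by_image(
    CollectionId="known_faces",
    Image={"Bytes": probe_bytes},
    FaceMatchThreshold=99,  # threshold Amazon has publicly recommended for public safety uses
    MaxFaces=5,             # a shortlist for a human investigator, not one definitive answer
)

for match in response["FaceMatches"]:
    print("Candidate:", match["Face"].get("ExternalImageId"),
          "| similarity:", match["Similarity"])
```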

“Machine learning is a very valuable tool to help law enforcement agencies, and while being concerned it’s applied correctly, we should not throw away the oven because the temperature could be set wrong and burn the pizza,” Wood states.

But civil liberties groups argue that the potential for harm is far greater than a burnt pizza. Human lives could be on the line if law enforcement agencies misuse this technology. People of color are already disproportionately stopped by police, a pattern that raises serious concerns about the potential misuse of facial detection technology.

From Tasers to police cameras, the technology that’s supposed to prevent police from lethally harming the public often doesn’t work that way. Tasers can and do kill, and police cameras may be conveniently turned off when police use brutal force. If civil rights groups are alarmed about facial detection products doing harm, it’s likely because technology has, more often than not, benefited those in power.

