George Floyd’s death, and the weeks of global protest against racial inequality that followed, have persuaded two of Silicon Valley’s biggest brands to rethink facial recognition technology.
On Tuesday IBM announced it would stop offering the tech for “mass surveillance or racial profiling,” adding it needed further testing “for bias.” Yesterday Amazon banned police from using its own, highly criticized Rekognition software for one year.
“We’ve advocated that governments should put in place stronger regulations to govern the ethical use of facial recognition technology, and in recent days, Congress appears ready to take on this challenge,” read an Amazon statement.
“We hope this one-year moratorium might give Congress enough time to implement appropriate rules, and we stand ready to help if requested.”
On the surface, IBM and Amazon’s decisions appear to signal a commitment to reducing racist policing. But, as many have pointed out, both firms have been instrumental in equipping security forces with facial recognition software known for years to be biased.
Perhaps, then, these moves are a cynical attempt to whitewash IBM and Amazon’s images and hijack a global movement against racism. Are the companies showing genuine concern, or practicing crisis management?
Last year Red Herring examined the biases inherent in current AI platforms. Datasets are riddled with the same biases that exist in society—racism, sexism, homophobia—and, as computing experts will tell you, you get out of a system what you put into it.
Let’s not pretend IBM and Amazon are the only companies offering facial recognition to law enforcement and other questionable agencies. Nor is George Floyd’s death the first prompt Big Tech has had to change its ways. After Baltimorean Freddie Gray was killed by cops in 2015, the city’s police department used facial recognition technology to profile Black Lives Matter protesters.
“The combination of over-reliance on technology, misuse and lack of transparency—we don’t know how widespread the use of this software is—is dangerous,” Timnit Gebru, a leader of Google’s ethical AI team, told the New York Times on Tuesday.
“My gut reaction is that a lot of people in technology have the urge to jump on a tech solution without listening to people who have been working with community leaders, the police and others proposing solutions to reform the police,” added Gebru.
“[Facial recognition tech] should be banned at the moment.”