AI is worse at identifying household items from poor countries

Delta Air Lines reveals its new biometric face-detection technology at Hartsfield-Jackson International Airport in Atlanta, Ga., on Wednesday, October 17, 2018. (Photo by Chris Rank, Rank Studios 2018)

Object recognition algorithms sold by tech companies, including Google, Microsoft, and Amazon, perform worse when asked to identify items from lower-income countries.

These are the findings of a new study conducted by Facebook’s AI lab, which shows that AI bias can reproduce inequalities not only within countries but also between them. In the study (which we spotted via Jack Clark’s Import AI newsletter), researchers tested five popular off-the-shelf object recognition algorithms — Microsoft Azure, Clarifai, Google Cloud Vision, Amazon Rekognition, and IBM Watson — to see how well each program identified household items photographed for a global dataset.
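To make the setup concrete, here is a minimal sketch of the kind of request such a test involves, using the Google Cloud Vision Python client as one example of the five services. It illustrates the general API pattern, not the study's actual evaluation code; the file path is a placeholder.

```python
# Minimal sketch (not the study's code): asking Google Cloud Vision, one of
# the five services tested, what it sees in a photo of a household item.
# Requires the google-cloud-vision package and API credentials;
# "household_item.jpg" is a placeholder path.
from google.cloud import vision

def label_image(path: str) -> list[str]:
    client = vision.ImageAnnotatorClient()
    with open(path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.label_detection(image=image)
    # Each annotation pairs a text label ("Soap", "Sink", ...) with a score.
    return [label.description for label in response.label_annotations]

print(label_image("household_item.jpg"))
```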

The researchers found that the object recognition algorithms made around 10 percent more errors when asked to identify items from a household with a $50 monthly income compared to those from a household making more than $3,500. The absolute difference in accuracy was even greater: the algorithms were 15 to 20 percent better at identifying items from the US compared to items from Somalia and Burkina Faso.
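As a rough illustration of how a gap like that is measured, per-photo results can be grouped by income bracket and scored separately. The sketch below does this with invented records; the data is made up for the example and is not the study's.

```python
# Illustrative only: computing per-bracket accuracy from (ground-truth item,
# predicted labels, monthly household income in USD) records. The records
# are invented for this example.
from collections import defaultdict

records = [
    ("soap", ["soap", "bathroom"], 4000),
    ("soap", ["food", "dish"], 50),
    ("stove", ["stove", "kitchen"], 4000),
    ("stove", ["fire", "outdoors"], 50),
]

hits = defaultdict(int)
totals = defaultdict(int)
for truth, predicted, income in records:
    bracket = "high" if income > 3500 else "low"  # thresholds echo the article
    totals[bracket] += 1
    hits[bracket] += truth in predicted  # a hit if the true item is among the labels

for bracket in ("low", "high"):
    print(bracket, hits[bracket] / totals[bracket])
```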

One of the best-known examples of AI bias involves facial recognition algorithms, which regularly perform worse at identifying women’s faces, particularly those of women of color.

What does all this mean? Well, for a start, it means that any system created using these algorithms is going to perform worse for people from lower-income and non-Western countries. Because US tech companies are world leaders in AI, that could affect everything from photo storage services and image search functionality to more important systems like automated security cameras and self-driving cars.
