Overheard at National Geographic Kicks Off Season 6 With a Look at Racial Bias in AI Facial Recognition Software

The June issue of National Geographic magazine features a cover story called “The Robot Revolution Has Arrived.” Kicking off the sixth season of Overheard at National Geographic, the podcast that takes listeners behind the scenes with real explorers, producer and host Brian Gutierrez digs into “The Battle for the Soul of Artificial Intelligence.” He is joined, tongue-in-cheek, by a co-host named Natalie, who is an AI-generated voice.

The episode shines a critical spotlight on Google, whose photo-categorization software was discovered in 2015 to be labeling Black people as gorillas. One of the special guests in this episode is Tiffany Deng, a program manager at Google working on algorithmic fairness and AI, who says that the engineers behind the software had no malicious intentions.

Because these algorithms reflect the choices and blind spots of the engineers who build them, part of the problem is a lack of diversity in these technological spaces. A Google spokesperson stated that hundreds of people are working on responsible AI and that the company is committed to fixing the issue. But as recently as December 2020, a Black Google AI ethics researcher was allegedly forced out. According to the company’s annual diversity report, only 1.6% of Google employees are Black women (the podcast doesn’t give a figure for men, but a Google search suggests that 3.6% of the company’s employees are Black, regardless of gender).

Since 2019, at least three Black Americans have been wrongfully arrested because of errors with AI facial recognition software. This episode focuses on the January 2020 arrest of Robert Williams in Detroit; Williams missed work, faced thousands of dollars in legal fees, and was humiliated in front of his family and neighbors. According to the developers of the technology, the arrest should never have happened in the first place, because an AI match is only meant to be an investigative lead, not a conviction.

Howard University computer science professor Gloria Washington joined Brian Gutierrez to share her experience working with the same kind of AI facial recognition software. She explains that the computer often starts from a fuzzy picture, sometimes a freeze frame from video surveillance, and then tries to match it against crisp photos on file, including passport photos and mugshots. Speaking from her own experience as a Black woman, she notes that cameras sometimes struggle to capture detailed facial features on darker skin, and that the software needs to be improved to better evaluate the faces of Black people.
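To make the matching process Washington describes a little more concrete, here is a minimal sketch in Python. It is not the software used in Detroit or anything Washington worked on; the `embed_face` function is a hypothetical stand-in for a trained face-embedding model, and the gallery names are invented. The sketch only illustrates the general idea of ranking a blurry probe image against crisp photos on file.

```python
# A minimal sketch of the matching process described above: a low-quality
# "probe" image (e.g. a surveillance freeze frame) is compared against a
# gallery of high-quality photos on file, and the closest matches are
# returned as leads, not as proof of identity.

import numpy as np

rng = np.random.default_rng(0)

def embed_face(image) -> np.ndarray:
    # Hypothetical placeholder: a real system would run a trained neural
    # network here and return a fixed-length feature vector for the face.
    return rng.normal(size=128)

def rank_candidates(probe_vec, gallery, top_k=5):
    """Return the top_k gallery identities most similar to the probe,
    scored by cosine similarity (1.0 = identical direction)."""
    scores = []
    for name, vec in gallery.items():
        sim = float(np.dot(probe_vec, vec) /
                    (np.linalg.norm(probe_vec) * np.linalg.norm(vec)))
        scores.append((name, sim))
    return sorted(scores, key=lambda pair: pair[1], reverse=True)[:top_k]

# Hypothetical usage: crisp photos on file vs. a blurry surveillance frame.
gallery = {name: embed_face(None)
           for name in ["passport_001", "mugshot_042", "passport_117"]}
probe = embed_face(None)
for name, score in rank_candidates(probe, gallery, top_k=3):
    print(f"{name}: similarity {score:.2f} (a lead to investigate, not a match)")
```

The point Washington and the developers both make is that when the probe image is blurry, that similarity score becomes unreliable, which is why the top-ranked result is only ever supposed to be a starting point for an investigation.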

Part of the issue is the data sets used to develop the software. Gloria Washington spent three years in Hong Kong evaluating millions of photos in a facial recognition data set to see how diverse the data really was. A lack of diversity in the data sets creates blind spots, and engineers unintentionally exclude populations that aren’t represented in their own workplaces. “When you look at the actual numbers of the number of skilled workers who work for Google or Facebook or these big tech companies who are Black, it's not even close to the percentage of Black people who are in the U.S. population,” Washington says in the episode. “It's less than 3%.”

National Geographic Emerging Explorer Joy Buolamwini, founder of the Algorithmic Justice League, is another guest in this episode. Her work is dedicated to fighting bias in machine learning, and her research found that facial recognition software from IBM, Microsoft, and Amazon had error rates of less than 1% for lighter-skinned men, rising to as high as 30% for darker-skinned women.

National Institute of Standards and Technology computer scientist Patrick Grother and his team conducted research inspired by Joy Buolamwini’s findings. They evaluated AI facial recognition software from most of the tech industry (99 companies) and found that the programs misidentified Black and Asian faces between 10 and 100 times more frequently than they did Caucasian faces. False positive rates varied primarily by race, with smaller differences attributable to age and sex.
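For readers wondering what a “10 to 100 times more frequently” disparity actually measures, here is a small illustration in Python. The trial data below is made up for the example and is not NIST’s data; it only shows how false positive rates are computed per group and then compared as a ratio.

```python
# Illustration only: how a false-positive disparity like the one NIST
# reports can be measured. The trial data is invented for the example.

from collections import defaultdict

# Each trial compares photos of two *different* people.
# matched=True means the software wrongly said "same person" (a false positive).
trials = [
    ("lighter-skinned", False), ("lighter-skinned", False), ("lighter-skinned", True),
    ("darker-skinned", True),  ("darker-skinned", True),  ("darker-skinned", False),
]

counts = defaultdict(lambda: {"false_positives": 0, "total": 0})
for group, matched in trials:
    counts[group]["total"] += 1
    if matched:
        counts[group]["false_positives"] += 1

rates = {g: c["false_positives"] / c["total"] for g, c in counts.items()}
for group, rate in rates.items():
    print(f"{group}: false positive rate = {rate:.2%}")

# The disparity NIST describes is the ratio between groups' rates,
# which reached 10x to 100x in the testing cited in the episode.
ratio = rates["darker-skinned"] / rates["lighter-skinned"]
print(f"disparity ratio: {ratio:.1f}x")
```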

Back in May of 2019, the House Committee on Oversight and Reform began examining the use of facial recognition technology in law enforcement, where roughly one in four agencies has access to the algorithms. Also in 2019, a few cities began banning the use of facial recognition technology by law enforcement, starting with San Francisco.

At the end of the episode, Brian Gutierrez mentions that The Walt Disney Company is the co-parent company of National Geographic Partners and that facial recognition software is being tested at Magic Kingdom park at Walt Disney World. However, it’s important to note that the technology being tested measures unique features of a ticket holder’s face, such as the distance between their eyes. It doesn’t take a picture of the face, it isn’t linked to any law enforcement database, and it’s even being tested with Guests wearing face masks. It is also an optional measure of verification that Guests don’t have to use.
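Disney hasn’t published how its test actually works, so the sketch below is purely illustrative: the landmark names, the single eye-to-chin ratio, and the hashing step are all assumptions. It only shows the general idea of keeping a numeric template derived from facial measurements, such as the distance between the eyes, instead of a photograph.

```python
# NOT Disney's implementation (which has not been published): a minimal,
# hypothetical sketch of measuring a few facial features and storing only
# an opaque numeric token, never the photo itself.

import hashlib
import math

def distance(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def face_template(landmarks):
    """Turn a handful of landmark distances into an opaque token.
    The source image can be discarded once the token is stored."""
    eye_gap = distance(landmarks["left_eye"], landmarks["right_eye"])
    nose_to_chin = distance(landmarks["nose_tip"], landmarks["chin"])
    # Normalize so the template doesn't depend on how close the camera was.
    ratio = nose_to_chin / eye_gap
    return hashlib.sha256(f"{ratio:.4f}".encode()).hexdigest()

# Hypothetical usage: compare the token stored with the ticket against the
# one computed at the park entrance; nothing is shared with law enforcement.
stored = face_template({"left_eye": (100, 120), "right_eye": (160, 121),
                        "nose_tip": (130, 160), "chin": (131, 220)})
at_gate = face_template({"left_eye": (100, 120), "right_eye": (160, 121),
                         "nose_tip": (130, 160), "chin": (131, 220)})
print("verified" if stored == at_gate else "not verified")
```

A real verification system would compare measurements with some tolerance rather than demanding an exact hash match; the sketch simply illustrates that what gets kept can be a number rather than a picture.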

You can listen to this full episode and others at the official Overheard at National Geographic website.

Alex Reif
Alex joined the Laughing Place team in 2014 and has been a lifelong Disney fan. His main beats for LP are Disney-branded movies, TV shows, books, music and toys. He recently became a member of the Television Critics Association (TCA).