How the bias in algorithms can help us spot our own

Apr 29, 2024

People recognize their own biases in algorithms’ decisions more than they do in their own—even when those decisions are the same.

From the shows we watch to the people we hire, algorithms are becoming increasingly integrated into our daily lives, quietly shaping and influencing the decisions we make. At their core, algorithms are sets of rules or instructions designed to process information and yield a specific outcome.

But because they learn from patterns in human behavior, algorithms can reflect or even amplify the biases that exist within us. According to a new study, however, this might not be an entirely bad thing.

Carey Morewedge, a professor at Boston University, believes this reflection can illuminate our bias blind spots and help us correct our behavior. Machine learning algorithms are successful because they dispassionately find patterns in datasets, but that same capacity means they also pick up the human biases embedded in their training data.

Fortunately, when these biases are identified in algorithms, they can help reveal long-standing biases in organizations. For example, Amazon hadn’t quantified the gender bias in its hiring practices until it tested an algorithm that evaluated new resumes based on the company’s own past hiring decisions.
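To see how a model trained on biased historical decisions ends up quantifying that bias, consider the minimal sketch below. It is an illustration only, not Amazon’s system: the data are synthetic, and the feature names are hypothetical.

```python
# Minimal sketch (not Amazon's system): fit a classifier to synthetic,
# historically biased hiring data and inspect what it learns.
# All feature names and numbers here are hypothetical illustrations.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

experience = rng.normal(5, 2, n)       # diagnostic: years of experience
is_female = rng.integers(0, 2, n)      # non-diagnostic: should not matter

# Simulate biased historical decisions: hiring depended on experience,
# but women were systematically penalized.
logit = 0.8 * (experience - 5) - 1.5 * is_female
hired = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([experience, is_female])
model = LogisticRegression().fit(X, hired)

# The learned weight on `is_female` quantifies a bias that was
# invisible in any single hiring decision.
print(f"experience weight: {model.coef_[0][0]:+.2f}")
print(f"is_female weight:  {model.coef_[0][1]:+.2f}  # large negative = encoded bias")
```

Reading the bias off a single learned coefficient like this is only possible because the model aggregates thousands of individual decisions into one summary, which is exactly what makes the bias hard to excuse.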

“Algorithms can codify and amplify human bias, but algorithms also reveal structural biases in our society,” said Morewedge in a press release.

The difficulty in detecting bias

In their study, Morewedge and his colleagues demonstrated that people are generally more inclined to detect and correct bias in an algorithm’s decision making than in their own. Knowing this, the researchers say, it may be possible to use algorithms to address bias in our decisions.

Humans find it harder to see their own biases than the biases of other people, a phenomenon called the bias blind spot. It occurs because we can internally rationalize or excuse bias in our own decision-making processes. To an observer with no insight into our thought process or how a decision came to be, biases are clearer and harder to excuse.

“I might better see my poor taste in movies reflected through Netflix’s recommendation algorithm than I would through the individual action movies that I watched that night,” explained Morewedge in an interview with Advanced Science News. “It’s easier for me to see that through the algorithm than in my own decisions.”

As Morewedge and his colleagues show in the study, this holds true even when the algorithms are trained on our own behaviors.

Bias in algorithms is easier to spot

In a series of experiments, the researchers asked participants to rate Airbnb rentals and Lyft drivers based on diagnostic criteria: elements relevant to the task, such as star ratings, reviews, and, for drivers, how long they had been driving.

However, the researchers manipulated non-diagnostic criteria — elements that have no bearing on the task, such as the picture or name. Participants rated the rentals or drivers twice and were then shown their ratings or the ratings of an algorithm trained using their data from the first run.
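The logic of training an algorithm on a participant’s own first-round ratings can be sketched in a few lines. The following is a hypothetical illustration, not the authors’ actual analysis code: the simulated data, feature names, and linear model are all assumptions made for clarity.

```python
# Hedged sketch of the study's logic (not the authors' code): fit a simple
# model to one participant's first-round ratings and read off the weight it
# assigns to a non-diagnostic attribute. All names and numbers are hypothetical.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 40  # listings rated in round one

stars = rng.uniform(3, 5, n)              # diagnostic
n_reviews = rng.integers(0, 300, n)       # diagnostic
host_photo = rng.integers(0, 2, n)        # non-diagnostic: should be ignored

# Simulated participant ratings with a small, unacknowledged bias
rating = (2 * stars + 0.005 * n_reviews + 0.6 * host_photo
          + rng.normal(0, 0.5, n))

X = np.column_stack([stars, n_reviews, host_photo])
algo = LinearRegression().fit(X, rating)

# The weight on the non-diagnostic feature summarizes the participant's
# own pattern, now externalized as "the algorithm's" behavior.
for name, w in zip(["stars", "n_reviews", "host_photo"], algo.coef_):
    print(f"{name:>10}: {w:+.3f}")
```

The weight the model learns for the non-diagnostic attribute is, in effect, the participant’s own bias made visible from the outside.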

But here’s the twist: participants were sometimes shown their own ratings but were told they came from an algorithm. In all scenarios, participants spotted less bias in their own ratings than in those of the algorithms.

“People tend to see more bias when they believe the ratings are made by an algorithm or when we actually train an algorithm on their data and show them the algorithm’s ratings,” explained Morewedge.

“It’s not that people see more kinds of attributes in algorithms, it’s that they see things that are more threatening to their sense of self,” he continued. “Most people don’t want to use race in their ratings, or they want to ignore race in those ratings, so the idea that race influenced those ratings is threatening.”

How can bias in algorithms help address the problem?

Consequently, it’s easier to see, or admit, that bias exists when it’s externalized in an algorithm rather than perceived as our own flawed decision making. According to Morewedge, this finding suggests two ways in which algorithms can help humans reduce bias.

“One is aggregating your own decisions and seeing patterns, [which] helps you recognize bias,” he said, “but there’s still a bit of a barrier between those summaries and our ability to recognize it because we have self protective motives.”

Following this logic, the group ran another experiment to see whether participants would be more likely to correct for bias in their own ratings or in the algorithm’s. After seeing the ratings, participants were given the chance to correct for bias, and they were more likely to make corrections to the algorithm’s ratings.

“Because people are more likely to see bias in the ratings of the algorithm than themselves, they’re also more likely to correct those ratings of the algorithm,” said Morewedge.

Morewedge acknowledged that this work is in its early stages, but he sees a tangible way in which these findings can be incorporated into real-world de-biasing training. “The first step of de-biasing is getting people to understand their biases and see them,” he said. “I think these algorithms are useful tools to give people a more realistic perspective on their degree of bias.”

Reference: Begum Celiktutan et al., People see more of their biases in algorithms, Proceedings of the National Academy of Sciences (2024). DOI: 10.1073/pnas.2317602121

Feature image credit: Google DeepMind on Unsplash
