AI detectors have a bias against non-native English speakers

Jul 12, 2023

GPT detectors wrongfully flagged a majority of submissions by non-native English speakers as AI-generated content, raising concerns about their use.
ChatGPT on a phone screen

For better or worse, generative AI models have sparked a revolution. Since their launch, their popularity has skyrocketed and their capabilities keep growing: they can generate code, images, music, text, simulations, videos, and more.

“Within a mere 2 months of its launch, ChatGPT amassed over 100 million monthly active users, marking its place as one of the fastest-growing consumer internet applications in history,” wrote a team of Stanford researchers in a recent paper published in the journal Patterns.

The pace of development has been astounding, and the technology's potential to enhance productivity and foster creativity continues to grow. But what happens when it is used in areas, particularly education, where assessments, grades, applications, and even degrees depend on written work?

“Nobody is ready for how AI will transform academia,” wrote Stephen Marche in an essay for The Atlantic. And it’s difficult to see where this will lead.

A concern, says James Zou, assistant professor at Stanford and senior author of the study published in Patterns, is that many schools, companies, and government agencies are using, or planning to use, detectors that claim to identify whether a text was generated by AI. These tools, however, are not accurate and show an alarming bias against non-native English speakers.

Bias in the system

To demonstrate the subpar accuracy of GPT detectors and how they inadvertently penalize individuals with limited linguistic proficiency, Zou and his colleagues had seven popular GPT detectors evaluate writing samples from both native and non-native English speakers.

They fed the detectors 91 essays written for the Test of English as a Foreign Language (TOEFL), a standard English proficiency exam, collected from a Chinese forum, as well as 88 essays written by US eighth graders, taken from the Hewlett Foundation’s ASAP dataset.

“A majority of essays written by non-native English speakers are falsely flagged by all the detectors as AI-generated,” said Zou. Over half of the non-native English writing samples were misclassified as AI-generated, with one detector flagging nearly 98% of the TOEFL essays, while accuracy on the native samples remained near perfect.

According to the team, this is based on the level of “perplexity” of a given work. “Perplexity basically measures how surprising the word choices are in the text,” explained Zou. “Text with common or simple word choices tends to have lower perplexity. These detectors are more likely to flag text with low perplexity as AI-generated.”
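
To make the idea concrete, here is a minimal sketch, in Python, of how perplexity is typically computed: the exponential of the average negative log-probability that a language model assigns to each token of a text. This is an illustration only, not the detectors' actual code, and the per-token probabilities below are invented:

```python
import math

def perplexity(token_probs):
    # Perplexity = exp of the average negative log-probability
    # the language model assigned to each token of the text.
    avg_neg_log_prob = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(avg_neg_log_prob)

# Invented per-token probabilities for illustration:
# common, predictable wording -> the model assigns high probabilities.
simple_text_probs = [0.30, 0.25, 0.40, 0.35, 0.28]
# rarer, more surprising wording -> lower probabilities.
complex_text_probs = [0.05, 0.02, 0.10, 0.04, 0.08]

print(f"simple wording:  perplexity = {perplexity(simple_text_probs):.1f}")   # ~3.2 (low)
print(f"complex wording: perplexity = {perplexity(complex_text_probs):.1f}")  # ~19.9 (high)
```

Non-native writers, who tend toward more common word choices, naturally produce lower-perplexity text, which is exactly what these detectors treat as machine-like.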

“Moreover, they are very easy to fool,” he added. By using better prompts and asking ChatGPT to write in more sophisticated language, the researchers could bypass the detectors, which classified these submissions as human-written because of their artificially elevated perplexity.
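
In practice, many perplexity-based detectors boil down to a threshold test. The toy sketch below, with an assumed cutoff value chosen purely for illustration (no real detector's code or threshold is used here), shows why such a test can simultaneously flag plain human prose and accept deliberately ornate machine prose:

```python
# Toy illustration: a perplexity-threshold classifier.
# THRESHOLD is an assumed value for demonstration, not a real detector's cutoff.
THRESHOLD = 60.0

def classify(text_perplexity: float) -> str:
    # Low perplexity (predictable wording) is treated as machine-like.
    return "AI-generated" if text_perplexity < THRESHOLD else "human-written"

print(classify(35.0))   # plain, simple prose (e.g., a TOEFL essay) -> "AI-generated"
print(classify(120.0))  # ChatGPT output prompted toward ornate wording -> "human-written"
```

Both failure modes in the study follow from this one decision rule: simple human writing falls below the cutoff, while suitably prompted AI writing climbs above it.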

“This raises a pivotal question,” wrote the authors in their paper. “If AI-generated content can easily evade detection while human text is frequently misclassified, how effective are these detectors truly?”

This could lead to significant problems, as non-native speakers will be more likely to be mistakenly accused of cheating by inaccurate detectors. But the issue doesn’t end there: search engines such as Google, which drive the majority of web traffic, state that AI-generated content goes against their guidelines and characterize it as spam. This could inadvertently lead to non-native English writers becoming invisible online.

A recommended hold on detectors

As with any new technology, there are benefits and pitfalls that need to be carefully navigated to minimize any detrimental effects. The benefits of language models, like ChatGPT, are only beginning to reveal themselves, and rather than banning this technology, perhaps current systems can evolve with it.

For example, having ChatGPT help spruce up a resume could level the playing field, putting more emphasis on interviews and demonstration of skill, and making recruitment more equitable. Or perhaps our educational systems could incorporate language models into their learning programs. “We could teach students and researchers how to creatively use [language models] to improve their education and work, and also how to critically evaluate their outputs,” said Zou.

The issue is, of course, more nuanced and a solution will require a careful approach. But the reality is this technology is likely not going anywhere, and so society must learn to adapt and work with it lest vulnerable people be left behind.

What the current study highlights are the dangers of applying inaccurate detectors to root out where the technology has been used. GPT detectors need to be trained and evaluated more rigorously on text from diverse types of users if they are to be used in the future, according to Zou.

“Our current recommendation is that we should be extremely careful about and try to avoid using these detectors as much as possible,” said Zou. “It can have significant consequences.”

Reference: Weixin Liang, et al., GPT detectors are biased against non-native English writers, Patterns (2023). DOI: 10.1016/j.patter.2023.100779

Feature image credit: Ralph van Root on Unsplash

This article was updated on July 13, 2023 to correct the spelling of the study author’s name from Zhou to Zou

