AI is advancing steadily into the world of talent acquisition, even as questions are being raised about its unintended consequences.

For instance, AI can automate the process of sifting through resumes, assessing a candidate’s qualifications and arranging – or even directing – interviews. That, proponents say, frees recruiters and other TA professionals to devote more time to strategic tasks.

But research from Stanford University reveals that at least some of the solutions used to uncover AI-generated text can discriminate against non-native English speakers.

According to the Guardian, tests of seven software products used to detect AI-generated text found that they can discriminate against people whose first language is not English. The researchers said the “99%” accuracy rate often touted by detection products is “misleading at best.”

Inflated Expectations

A team led by Stanford Assistant Professor James Zou ran 91 essays written by non-native English speakers for the TOEFL – the Test of English as a Foreign Language – through the seven detection programs to evaluate the results. More than half were flagged as AI-generated. By contrast, more than 90% of essays written by native English-speaking American eighth graders were identified as human-generated.

These detectors rely on “text perplexity,” a metric that measures how well a language model predicts the next word in a sequence. Texts with higher perplexity are more likely to be written by humans, while lower perplexity suggests a machine author. As the Guardian puts it, these programs look at “how ‘surprised’ or ‘confused’ a generative language model is when trying to predict the next word in a sentence. If the model can predict the next word easily, the text perplexity is ranked low, but if the next word proves hard to predict, the text perplexity is rated high.”
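To make the idea concrete, here is a minimal sketch of how a perplexity score might be computed, using the open-source GPT-2 model via Hugging Face’s transformers library. This is an illustrative assumption on our part: the commercial detectors in the study don’t publish their internals, and they may score text quite differently.

```python
# Illustrative perplexity calculation with GPT-2 (an assumption; the
# detectors tested in the Stanford study are proprietary).
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    # Ask the model to predict each token from the ones before it;
    # the cross-entropy loss measures how "surprised" it is on average.
    encodings = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        outputs = model(encodings.input_ids, labels=encodings.input_ids)
    # Exponentiating the average loss yields perplexity: low values mean
    # the model found the text easy to predict, which detectors read as
    # a sign of machine authorship.
    return torch.exp(outputs.loss).item()

# Plain, formulaic phrasing tends to score lower (more "machine-like")
# than idiosyncratic phrasing.
print(perplexity("The report was written and the results were good."))
print(perplexity("Kentucky-distilled bourbon pairs oddly with schnauzers."))
```

A writer who sticks to a small, common vocabulary – as non-native English speakers often do – will tend to produce low-perplexity text, which is exactly what such a detector penalizes.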

Mistaken Identity

Large language models like ChatGPT produce low-perplexity text, the Guardian said. That means writing that strings together common words in familiar patterns is more likely to be categorized as AI-generated – a scenario more likely to occur when the writing of non-native English speakers is reviewed, the researchers said in the journal Patterns. When the essays were rewritten – by ChatGPT – using language more sophisticated than the TOEFL requires, the detectors judged them to be written by humans.

“Paradoxically,” the researchers noted, “GPT detectors might compel non-native writers to use GPT more to evade detection.”

The implications of all this are “serious,” the researchers wrote. For one thing, search engines like Google downgrade content they believe was created by AI. And in a related article, Jahna Otterbacher, an associate professor at the Open University of Cyprus, pointed out that “ChatGPT is constantly collecting data from the public and learning to please its users; eventually, it will learn to outsmart any detector.”



By Mark Feffer

Mark Feffer is executive editor of RecruitingDaily and the HCM Technology Report. He’s written for TechTarget, HR Magazine, SHRM, Dice Insights, TLNT.com and TalentCulture, as well as Dow Jones, Bloomberg and Staffing Industry Analysts. He likes schnauzers, sailing and Kentucky-distilled beverages.

