Black students are more than twice as likely as their white or Hispanic peers to have their writing incorrectly flagged as the work of artificial intelligence tools, concludes a report released Sept. 18 by Common Sense Media, a nonprofit that examines the impact of technology on young people.

Overall, about 10 percent of teens across all backgrounds said their work had been inaccurately identified as generated by an AI tool, Common Sense found.

But 20 percent of Black teens were falsely accused of using AI to complete an assignment, compared with 7 percent of white and 10 percent of Latino teens.

This may be at least partially due to flaws in AI detection software. About 79 percent of teens whose assignments were incorrectly flagged by a teacher said their work had also been run through AI detection software, while 27 percent said it had not been.

AI detection software has already been shown to have problematic biases, yet secondary school teachers commonly use the technology.

More than two-thirds—68 percent—of teachers report using an AI detection tool regularly, according to a survey of 460 6th to 12th grade public school teachers conducted for the Center for Democracy & Technology, a nonprofit organization that aims to shape technology policy.

But the tools often reflect societal biases. Researchers ran essays written by Chinese students for the Test of English as a Foreign Language, or TOEFL, through seven widely used detectors. They did the same with a sample of essays written by U.S. 8th graders who were native English speakers.

The tools incorrectly labeled more than half of the TOEFL essays as AI-generated, while accurately classifying the 8th grade essays as human-crafted.

Common Sense Media’s findings on Black students could be due to either unfairness in AI detection tools or biases in educators themselves, according to experts.

“We know that AI is putting out incredibly biased content,” said Amanda Lenhart, the head of research at Common Sense. “Humans come in with biases and preconceived notions about students in their classroom. AI is just another place in which unfairness is being laid upon students of color.”

Put another way, even though AI tools aren’t human themselves, they reflect people’s prejudices, even unconscious ones. “AI is not going to walk us out of our pre-existing biases,” Lenhart said.

If a teacher does suspect a student used AI to cheat on an assignment, it’s best to have a conversation with the student before jumping to punitive measures, educators and experts say. Schools also need to craft clear policies on when and how it’s acceptable to use AI to complete schoolwork.

The Common Sense report is based on a nationally representative survey conducted from March to May of 1,045 adults in the United States who are the parents or guardians of one or more teens aged 13 to 18, and responses from one of their teenage children. All 18-year-old respondents were still in high school when surveyed.
