Analysis

Using AI to Comply With Book Bans Makes Those Laws More Dangerous

Tools like ChatGPT sweep too broadly, compounding the threat to free speech.


This article first appeared in Just Security.

In August, a public school district in Iowa reportedly used ChatGPT to help it comply with the state’s controversial book ban law. That law—like counterparts passed in Florida, Texas, Missouri, Utah, and South Carolina—seeks to limit discussion of gender identity and sexuality in schools by barring school libraries from carrying books that touch on sexual content. Driven by a growing number of organized groups and political pressure from state lawmakers, such laws threaten students’ right to access information and risk chilling speech on topics such as sexuality, teen pregnancy, and sexual health. And using generative AI tools to implement these laws only compounds the problem.

Widely accessible generative AI chatbots like OpenAI’s ChatGPT and Google’s Bard provide individuals with free access to tools built on large language models. Earlier language models and other analogous automated tools have been used by businesses to detect spam, recommend products, and moderate online content. Years of experience with content moderation show that these tools are simply not suited to making decisions based on the types of vague and subjective standards that characterize many book bans. Even for the narrower category of sexual content, AI tools are often overinclusive.

The Iowa school district sought to use ChatGPT only to determine whether books included “a description or visual depiction of a sex act,” rather than applying the broader “age appropriateness” standard laid out in the law. According to Mason City’s Assistant Superintendent of Curriculum and Instruction Bridgette Exman, the school district chose to ask a narrower, “more objective” question: “Does [insert book] contain a description or visual depiction of a sex act?”
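For illustration only, here is a minimal sketch of what such a query might look like in code, assuming OpenAI’s Python SDK, a placeholder model name, and prompt wording adapted from the question quoted above; the district’s actual tooling and workflow have not been published.

```python
# Illustrative sketch, not the district's actual workflow. Assumes the openai
# Python SDK (pip install openai) and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

def contains_sex_act(title: str) -> str:
    """Ask the model the district's narrow question for a single book title."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder; the district's actual model is not public
        messages=[{
            "role": "user",
            # Prompt wording adapted from the question quoted in the article.
            "content": f"Does {title} contain a description or visual depiction of a sex act? Answer yes or no.",
        }],
    )
    return response.choices[0].message.content

print(contains_sex_act("Example Book Title"))
```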

But even for this more limited type of query, these technologies have significant limitations. By now, it is well known that generative AI tools can produce unreliable results and fabricate plausible, but false, answers. ChatGPT and the like include disclaimers to this effect. In Mason City, ChatGPT wrongly identified three books as depicting sexual acts. These mistakes were caught when Exman reviewed its results, demonstrating—once again—the importance of human evaluation of the output of AI tools.

These tools are also inconsistent, producing different results from similar inquiries. When journalists tried to replicate Mason City’s prompts asking whether specific books contained “descriptions or visual depictions of a sex act,” ChatGPT generated different results than those received by the school district. And repeat inquiries gave contradictory answers.
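This kind of run-to-run variation can be checked directly. The short sketch below reuses the contains_sex_act helper from the previous sketch, repeats the identical question, and tallies the replies; the run count is an arbitrary assumption.

```python
# Illustrative sketch: repeat the identical question and count the replies.
# Reuses the contains_sex_act() helper defined in the previous sketch.
from collections import Counter

def tally_answers(title: str, runs: int = 5) -> Counter:
    """Send the same prompt several times and count the distinct answers."""
    return Counter(contains_sex_act(title).strip().lower() for _ in range(runs))

# A mix of "yes" and "no" across runs mirrors the contradictory answers
# journalists received when replicating the district's prompts.
print(tally_answers("Example Book Title"))
```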

To be sure, AI tools have mostly been successful at identifying and removing child sexual abuse material (CSAM) by using digital hash technology to assign a unique fingerprint, or “hash,” to known CSAM images and to compare uploaded content against a large database of those “hashed” images. Using these databases, social media platforms have generally been able to identify and remove CSAM images before they are posted online.
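Hash matching works very differently from asking a language model to judge a text: the platform compares a fingerprint of an uploaded file against a database of fingerprints of already-identified images. The toy sketch below uses an exact cryptographic hash to illustrate the idea; production systems such as Microsoft’s PhotoDNA rely on perceptual hashes that still match after resizing or re-encoding.

```python
# Toy illustration of hash matching. Real deployments use perceptual hashes
# (e.g., PhotoDNA) and databases maintained by clearinghouses such as NCMEC;
# SHA-256 here only matches byte-identical files, so the entry below is a placeholder.
import hashlib

KNOWN_HASHES = {
    "placeholder-hash-of-a-known-image",
}

def fingerprint(data: bytes) -> str:
    """Return a hex digest that serves as the file's fingerprint."""
    return hashlib.sha256(data).hexdigest()

def matches_known_image(upload: bytes) -> bool:
    """True if the upload's fingerprint appears in the known-image database."""
    return fingerprint(upload) in KNOWN_HASHES

print(matches_known_image(b"example upload bytes"))  # False for this placeholder
```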

But when used to detect and remove content containing adult nudity and sexual activity, AI tools have been less reliable. They have struggled to understand the context of posts and are unlikely to be able to distinguish problematic content from, say, posts discussing or depicting subjects like breastfeeding or mastectomies. Meta’s Oversight Board acknowledged these limitations in a case where the company’s automated systems flagged and removed an image of uncovered female nipples showing breast cancer symptoms with corresponding descriptions. Even though Meta’s Adult Nudity and Sexual Activity policy allows such images for “educational or medical purposes,” the company’s machine learning classifier failed to understand the context of the photographs and wrongly removed the image. According to the Board, the case highlighted the concern that relying on inaccurate automated tools to enforce such policies “will likely have a disproportionate impact on women.”

Indeed, machine learning models have been shown to amplify biases. A recent study of AI tools used to flag sexual content found they were more likely to label photos of women as “sexually suggestive,” especially if the photo depicted pregnant bellies, exercise, or nipples—possibly due to the bias of the primarily heterosexual male staff who trained the tools. Even images released by the U.S. National Cancer Institute that demonstrate how to do a clinical breast exam were flagged in researchers’ tests as “explicitly sexual in nature.”

While research is still ongoing, early analyses suggest that generative AI tools similarly reflect biases. These tools are trained on vast datasets that include data from websites like Reddit and Wikipedia. Studies have shown that misogyny and other types of bias are overrepresented in these models’ training data. And ChatGPT is highly sensitive to user prompts, even bypassing its own rules against producing hateful content in response to users’ engineered prompts. Prompts can therefore—deliberately or inadvertently—affect which books are flagged.

It’s not just schools that may be tempted to use ChatGPT to flag sexual content. In Texas, a new law requires all book vendors who supply school districts to rate their books based on whether they contain depictions or descriptions of sexual conduct. Nearly 300 booksellers will have to rate every book on their shelves (as well as titles previously sold) as “sexually explicit,” “sexually relevant”—meaning the book contains sexual conduct and is part of required school curriculum—or “no sexual conduct.” This labor-intensive task may seem ripe for the use of AI, but if the tool replicates the inaccuracies and biases seen in content moderation, it could tag books depicting or describing educational subjects like breastfeeding or pregnancy as “sexually explicit.” Even if ChatGPT misclassifies only 1% of the book titles it examines, hundreds of books could be inappropriately flagged and taken out of circulation.
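The arithmetic behind that last point is straightforward; the catalog size below is purely an assumption for illustration, not a figure from the Texas law or from any bookseller.

```python
# Back-of-the-envelope check of the 1% figure above. The catalog size is an
# assumed, illustrative number, not data from the law or any vendor.
catalog_size = 50_000   # assumed number of titles a single vendor must rate
error_rate = 0.01       # the 1% misclassification rate discussed above

print(f"{catalog_size * error_rate:.0f} titles misrated")  # 500 at these assumptions
```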

Ultimately, one of the broader concerns about the use of AI is that it allows decision-makers to escape accountability and hide behind a veneer of objectivity and neutrality. As more states seek to limit discussion of race, gender identity, and sexuality in schools, decision-makers may rely on generative AI tools like ChatGPT—with all its limitations, biases, and sensitivity to prompt design—to make overinclusive lists of inappropriate books under the guise of objectivity.

Broad book bans are a threat to free speech. Using generative AI tools to comply with those bans only makes them more dangerous.