
Could Massachusetts AI Cheating Case Push Schools to Refocus on Learning?

Lawsuit tackles key questions of academic integrity, college admissions and the purpose of school in an age of AI.


A Massachusetts family is awaiting a judge’s ruling in a federal lawsuit that could determine their son’s future. To some observers, it could also push educators to limit the use of generative artificial intelligence in school.

To others, it’s simply a case of helicopter parents gone wild.

The case, filed last month, tackles key questions of academic integrity, the college admissions arms race and even the purpose of school in an age when students can outsource onerous tasks like thinking to a chatbot.

While its immediate outcome will largely serve just one family — the student’s parents want a grade changed so their son can apply early admission to elite colleges — the case could ultimately prompt school districts nationwide to develop explicit policies on AI.

If the district, in a prosperous community on Boston’s South Shore, is forced to change the student’s grade, that could also prompt educators to focus more clearly on the knife’s edge of AI’s promises and threats, confronting a key question: Does AI invite students to focus on completing assignments rather than actual learning?

“When it comes right down to it, what do we want students to do?” asked John Warner, a well-known writing coach and author of several books. “What do we want them to take away from their education beyond a credential? Because this technology really does threaten the integrity of those credentials. And that’s why you see places trying to police it.”

‘Unprepared in a technology transition’

The facts of the case seem simple enough: The parents of a senior at Hingham High School have sued the school district, saying their son was wrongly penalized as a junior for relying on AI to research and write a history project that he and a partner were assigned in Advanced Placement U.S. History.

The teacher used the anti-plagiarism tool Turnitin, which flagged a draft of the essay about NBA Hall of Famer Kareem Abdul-Jabbar’s civil rights activism as possibly containing AI-generated material. So she used a “revision history” tool to uncover how many edits the students had made, as well as how long they spent writing. She discovered “many large cut and paste items” in the first draft, suggesting they’d relied on outside sources for much of the text. She ran the draft through two other digital tools that also indicated it had AI-generated content and gave the boys a D on the assignment.

From there, the narrative gets a bit murky. 

On the one hand, the complaint notes, when the student and his partner started the essay last fall, the district didn’t have a policy on using AI for such an assignment. Only later did it lay out prohibitions against AI.

The boy’s mother, Jennifer Harris, last month asked a local TV news reporter, “How do you know if you’re crossing a line if the line isn’t drawn?”

The pair tried to explain that using AI isn’t plagiarism, telling teachers that while there’s considerable debate over its use in academic assignments, they hadn’t tried to pass off others’ work as their own.

For its part, the district says Hingham students are trained to know plagiarism and academic dishonesty when they see it. 

District officials declined to be interviewed, but in an affidavit, Social Studies Director Andrew Hoey said English teachers at the school regularly review proper citation and research techniques — and they set expectations for AI use.

Social studies teachers, he said, can justifiably expect that skills taught in English class “will be applied to all Social Studies classes,” including AP U.S. History — even if they’re not laid out explicitly.

A spokesperson for National History Day, the group that sponsored the assignment, provided The 74 with a link to its guidelines, which say students may use AI to brainstorm topic ideas, look for resources, review their writing for grammar and punctuation, and simplify the language of a source to make it more understandable.

But they can’t use AI to “create elements of your project,” such as writing text or creating charts, graphs, images or video.

In March, the school’s National Honor Society faculty advisor, Karen Shaw, said the pair’s use of AI was “the most egregious” violation of academic honesty she and others had seen in 16 years, according to the lawsuit. The society rejected their applications.

Peter S. Farrell, the family’s attorney, said the district “used an elephant gun to slay a mouse,” overreacting to what’s basically a misunderstanding.

The boy’s failing grade on the assignment, as well as the accusation of cheating, kept him out of the Honor Society, the lawsuit alleges. Both penalties have limited his chances of getting into top colleges through early decision, as he’d planned this fall.

The student, who goes unnamed in the lawsuit, is “a very, very bright, capable, well-rounded student athlete” with a 4.3 GPA, a “perfect” ACT score and an “almost perfect” SAT score, said Farrell. “If there were a perfect plaintiff, he’s it.” 


While the boy earned a C+ in the course, he scored a perfect 5 on the AP exam last spring, according to the lawsuit. His exclusion from the Honor Society, Farrell said, “really shouldn’t sit right with anybody.”

For a public high school to take such a hard-nosed position “simply because they got caught unprepared in a technology transition” doesn’t serve anyone’s interests, Farrell said. “And it’s certainly not good for the students.”

Ultimately, the school’s own investigation found that over the past two years it had inducted into the Honor Society seven other students who had academic integrity infractions, Farrell said. The student at the center of the lawsuit was allowed to reapply and was inducted on Oct. 15.

“They knew that there was no leg to stand on in terms of the severity of that sanction,” Farrell said.

‘Districts are trying to take it seriously’

While Hingham didn’t adopt a districtwide AI policy until this school year, it’s actually ahead of the curve, said Bree Dusseault, the principal and managing director of the Center on Reinventing Public Education, a think tank at Arizona State University. Most districts have been cautious about putting out formal guidance on AI.

Dusseault contributed an affidavit on behalf of the plaintiffs, laying out the fragmented state of AI uptake and guidance. She surveyed more than 1,000 superintendents last year and found that just 5% of districts had policies on AI, with another 31% promising to develop them in the future. Even among CRPE’s group of 40 “early adopter” school districts that are exploring AI and encouraging teachers to experiment with it, just 26 had policies in place.

They’re hesitant for a reason, she said: They’re trying to figure out what the technology’s implications are before putting rules in writing. 

“Districts are trying to take it seriously,” she said. “They’re learning the capacity of the technology, and both the opportunities and the risks it presents for learning.” But they’re often surprised by new technological developments and capabilities they never imagined.

Even if they’re hesitant to commit to full-blown policies, Dusseault said, districts should consider more informal guidelines that clearly lay out for students what academic integrity, plagiarism and acceptable use are. Districts that are “totally silent” on AI run the risk of student confusion and misuse. And if a district is penalizing students for AI use, it needs to have clear policy language explaining why.

That said, a few observers believe the case boils down to little more than a cheating student and his helicopter parents.

Benjamin Riley, founder of Cognitive Resonance, an AI-focused education think tank, said the episode seems like an example of clear-cut academic dishonesty. Everyone involved in the civil case, he said, especially the boy’s parents and their lawyer, “should be embarrassed. This isn’t some groundbreaking lawsuit that will help define the contours of how we use AI in education; it’s helicopter parenting run completely amok that may serve as catnip to journalists (and their editors) but does nothing to illuminate anything.”


Alex Kotran, founder of The AI Education Project, a nonprofit that offers a free AI literacy curriculum, said the honor society director’s statement about the boys’ alleged academic dishonesty makes him think “there’s clearly plenty more than what we’re hearing from the student.” While schools genuinely do need to understand the challenge of getting AI policies right, he said, “I worry that this is just a student with overbearing parents and a big check to throw lawyers at a problem.”

Others see the case as surfacing larger-scale problems: Writing in Slate this week, Jane Rosenzweig, director of the Harvard College Writing Center and author of the Writing Hacks newsletter, said the Massachusetts case is “less about AI and more about a family’s belief that one low grade will exclude their child from the future they want for him, which begins with admission to an elite college.”

That problem long predated ChatGPT, Rosenzweig wrote. But AI is putting our education system on a collision course “with a technology that enables students to bypass learning in favor of grades.”

“I feel for this student,” said Warner, the writing coach. “The thought that they need to file a lawsuit because his future is going to be derailed by this should be such an indictment of the system.”

The case underscores the need for school districts to rethink how they interact with students in the Age of AI, he said. “This stuff is here. It’s embedded in the tools students use to do their work. If you open up Microsoft Word or Google Docs or any of this stuff, it’s right there.”


Perhaps as a result, Warner said, students have increasingly come to view school transactionally, treating assignments as a series of products rather than as opportunities to learn and develop important skills.

“I’ve taught those students,” he said. “For the most part, those are a byproduct of disengagement, not believing [school] has anything to offer — and that the transaction can be satisfied through ‘non-work’ rather than work.”

His observations align with recent research by Dusseault’s colleagues, who last year found that four graduating classes of high school students, or about 13.5 million students, had been affected by the pandemic, with many “struggling academically, socially, and emotionally” as they enter adulthood.

Ideally, Warner said, AI tools should offer an opportunity to refocus students on process over product. “This is a natural design for somebody who teaches writing,” he said, “because I’m obsessed with process.”

Warner recalled giving a recent series of talks at Harvey Mudd College, a small liberal arts college in California known for science and engineering, where he encountered students who said they had no use for AI chatbots. They preferred to think through difficult problems themselves. “They were just like, ‘Aw, man, I don’t want to use that stuff. Why do I want to use that stuff? I’ve got thoughts.’”

