High School Can Discipline Student for Undisclosed Use of Generative AI–Harris v. Adams
RNH was a junior last year at Hingham High School in Massachusetts. He got a perfect ACT score and hopes for early admission to Stanford.
The school repeatedly told students about limitations on the use of Generative AI for school assignments. With respect to the assignment at issue here, Generative AI was not categorically banned. Instead, students allegedly could “use AI to brainstorm topic ideas and key words to research a topic, as well as to look for resources.” (Cleaned up). Thus, to the school’s credit, this does not appear to be a situation where the school overreacted to Generative AI.
As discipline for his academic dishonesty, the school: (1) assigned him zero points on parts of the APUSH assignment, though he was allowed to redo the project (as a result, his course grade dropped from an expected B- to a C+); (2) required him to attend a Saturday detention (I wonder if RNH watched The Breakfast Club beforehand?); and (3) delayed his admission into the National Honor Society. To me, these consequences seemed pretty mild all things considered, but RNH’s parents, Dale and Jennifer Harris, disagreed and sued. The complaint got substantial national media coverage (see, e.g., the People magazine coverage). This turned into an expensive federal case that produced a 16k-word opinion.
The court discusses the pedagogical role of Generative AI-generated works:
Defendants could reasonably consider that RNH had been taught that all sources—including AI sources—must, at a minimum, be cited….Defendants could also have inferred that, if RNH had sincerely believed that he was permitted to use AI tools like Grammarly to generate text and include that text as his own, he would have cited the AI tool he used….
The purpose of the Assignment, plainly, was to give students practice in researching and writing, as well as to provide students an opportunity to demonstrate, and the teacher an opportunity to assess, the students’ skills. Considering the training provided to HHS students regarding the importance of citing sources generally, Defendants could conclude that RNH understood that it is dishonest to claim credit for work that is not your own. Although, as discussed below, the emergence of generative AI may present some nuanced challenges for educators, the issue here is not particularly nuanced, as there is no discernible pedagogical purpose in prompting Grammarly (or any other AI tool) to generate a script, regurgitating the output without citation, and claiming it as one’s own work.
To get around this, RNH argued that “text generated by AI is not attributable to any particular human author….AI is not an ‘author’ whose work can be stolen; it simply ‘generates and synthesizes new information.'” Seriously? These debates are being fought in the copyright arena, but this is not a credible argument in the academic honesty setting. The court tartly responds:
it strains credulity to suppose that RNH actually believed that copying and pasting, without attribution, text that had been generated by Grammarly was consistent with any standard of academic honesty.
Since long before the advent of AI, and even before the advent of the printing press, there have been plenty of works whose origins are sufficiently obscure as to raise serious doubts about whether they can be considered the work of any “author” at all, or whether they simply reflect a synthesis of multiple strands of text and information that have been merged, by processes only partially knowable, into individual “works.” The Bible, Beowulf, and the works of “Homer” come to mind. The Handbook definition of plagiarism seems adequate to alert students that they may not copy such works without attribution and pass them off as their own.
I love the idea of a student submitting passages from Beowulf or the Bible as their own work (preferably in their original language). I wonder how teachers would grade that. Then again, I could see how it might backfire. Recall when NPR tweeted out the Declaration of Independence line-by-line without identifying the source–and people went nuts. Who knew that a source’s identity was an important tool to contextualize it? 🤷‍♂️
* * *
The problem of hallucinated sources plagues the undiscerning use of Generative AI. Numerous lawyers have found this out the hard way by filing legal briefs with made-up citations. More recently, there’s been some chatter about Stanford professor Jeff Hancock’s declaration in a First Amendment challenge to a Minnesota law restricting synthetic election misinformation, which allegedly contains Generative AI-hallucinated citations. This would be an embarrassing fact if true…because it would seemingly confirm the legislature’s concerns about the capacity of Generative AI to produce misinformation….
Generative AI is also a perilous resource for expert witness work. I recently blogged about an expert witness who used Generative AI to calculate asset values. Not only was that an unnecessary application of the technology, but the court rejected it because the computations produced different numbers when rerun. If “expert” work can be outsourced to the machines, then the experts should expect their remuneration to evaporate accordingly.
At this point, I would like to think that most people know the risks of using Generative AI to prepare content that is then disseminated as the submitter’s own work, especially when citations are involved. Sadly, it seems like the message has not fully gotten out.
This case also illustrates the merits of turning any student misuse of Generative AI into a learning opportunity, rather than a purely punitive event. I think the pedagogical message should be that Generative AI helps prepare content, like many other content authoring tools students and professionals use every day, and that every tool has strengths and pitfalls that should be considered and respected.
Case Citation: Harris v. Adams, 2024 WL 4843837 (D. Mass. Nov. 20, 2024)