Reports of academic misconduct at UF that reference artificial intelligence violations have surged in recent semesters, according to university records obtained through a public records request.
From Fall 2021 to Fall 2023, UF reported no Honor Code violations that included terms like “artificial intelligence,” “AI” or “ChatGPT.”
In Spring 2024, 20 cases were identified; that figure rose to 42 in Fall 2024 and 66 in Spring 2025. The records only count misconduct reports explicitly mentioning AI in their descriptions.
The increase in AI-related Honor Code violations comes as the university expands its AI training and academic programming.
University officials say they are attempting to teach students and faculty how to use AI responsibly, rather than banning the technology from the classroom.
Hans van Oostrom, the director of UF’s Artificial Intelligence Academic Initiative Center, said AI is everywhere, across every discipline.
“We recommend that we don’t ban AI for everything but to carefully figure out where can we use AI with the students and where should they not,” he said.
To achieve this, the center created the AI Across the Curriculum program, which aims to integrate AI concepts into academic programs across all colleges. The effort relies on faculty training workshops and the AI Education Committee to evaluate how much and what type of AI content should be included in different courses.
“To do that, we need to train the faculty who aren’t necessarily experts in AI,” van Oostrom said.
UF, he said, now has over 200 courses that include AI content. This Spring alone, 177 courses were designated as including AI in their curricula.
The center also offers student certificates designed to build a foundational understanding of AI. Almost 800 students were enrolled in introductory AI certificate courses this Spring, which can be taken in addition to any major. This, van Oostrom said, is the highest enrollment since the program launched in 2022.
“We feel that our students need to be properly educated to use AI,” he said.
Accurate AI detection is not truly feasible, he said, and even the best AI detectors have a 4% false positive rate.
“This means that if you use them and rely on them, you’re going to send 4% of your students to the Dean of Students [Office for] Honor Code violation while they did not use AI,” he said.
Preventing this, he said, is part of the center’s training.
‘They are not learning anything’
Shu-Jen Huang, a UF mathematics professor, teaches a large class in which students complete assignments using the computing platform MATLAB. At the start of the course, the AI policy is made clear: Students may not use AI to complete homework.
However, she said, at least 80 of the course’s 600 students were found to have used AI on their first assignment.
“Some students just do the minimum work. They just put everything to AI, and they get the results,” she said. “They are not learning anything.”
Huang described multiple strategies she has tried to detect AI misuse. One method involved embedding words and prompts in white text, invisible on screen, that could trigger incorrect responses if a student copied and pasted the assignment into a chatbot.
Some students quickly found ways to work around that approach, she said.
So she began embedding prompts directly into the assignment template, designed to produce specific wrong answers if students pasted the assignment into a chatbot.
Huang said detecting AI use can be difficult when students edit the responses AI generates. Some of her students copy AI-generated responses directly into their submissions, while others refine the output, making it harder to identify without deeper comparison.
AI creates an unfair advantage, she said, because some students spend hours trying to complete an assignment, while others finish in a matter of minutes.
“If you don’t want to spend that time, then I don’t think you deserve to get an A,” she said.
Inside the conduct process
At UF, allegations of academic misconduct, including AI-related cases, are handled through the Dean of Students Office and the Honor Council framework.
When a faculty member suspects an Honor Code violation, the case typically begins with a referral to conduct officers. Students may receive an email outlining the alleged violation, followed by a meeting to review the charge. From there, a student may choose to accept responsibility for the alleged misconduct or proceed to a formal hearing.
In a hearing, students may present their account of events and respond to questions from the conduct officers and, in some cases, the reporting instructor.
Sanctions range from a zero on an assignment to grade penalties, academic probation or suspension, depending on the severity and context of the alleged violation.
Students can choose to proceed to an appeals process, which can further extend the timeline of a case.
UF’s official guidance on AI use states that any work generated by AI systems must be reported and appropriately attributed, and that students are responsible for adhering to both university policies on academic integrity and specific course policies. The university does not ban AI outright and instead provides a Responsible Use of AI Policy and a list of do’s and don’ts.
While UF has integrated AI throughout its policies, the university maintains a decentralized approach that allows individual instructors to determine the extent of permissible AI in specific courses.
UF spokesperson Cynthia Roldán wrote via email that no single method is used to identify the alleged use of AI in a student’s work. The Hearing Body reviews all relevant information submitted by the student and other individuals in the process to make a determination or recommendation regarding whether a policy was violated.
As for whether students can discuss concerns with faculty before a report is submitted or a formal hearing is held, Roldán wrote such conversations fall outside the process set forth in the Student Honor Code.
Students who request a review of the materials related to their case are provided a privacy agreement to sign. The agreement is designed to protect the privacy of the student and others involved in the process when materials are subject to the Family Educational Rights and Privacy Act, or FERPA, she wrote.
While many policies pertain to the misuse of AI, UF is also encouraging the adoption of the Transparency in Learning and Teaching framework among faculty. The TILT framework asks faculty to explicitly define the purpose, skills and criteria for success for every assignment. By providing clear guidelines, instructors may reduce the likelihood that students turn to AI out of confusion.
For students accused of AI-related misconduct, the process can stretch for months and carry steep academic consequences.
When a 20-year-old UF politics, philosophy, economics and law sophomore, who asked to remain anonymous for fear of academic repercussions, received an email about an academic integrity review involving one of her class essays last summer, she said she did not understand what it meant and had no prior notice of concern.
“I was not allowed to have a discussion with the professor,” she said. “I was shocked.”
That email set in motion months of meetings with university officials and formal hearings she said were difficult to navigate, often filled with confusing and unfamiliar terminology. The student maintains she did not use AI.
After a year of hearings and appeals, she said, the AI-related charges against her were dropped, but the plagiarism charge was upheld. Even so, the student said she believes in proper AI training for both faculty and students.
“I’m still an avid believer that people need to learn how to use AI and that it’s something not going away,” she said. “Professors need to learn how to handle AI.”
Another UF student, Jesus Abbey, a 20-year-old information systems sophomore, said he encountered an AI-related Honor Code violation after submitting a 1,000-word environmental science paper in his freshman year.
Abbey said one of his friends in the class told him he found hidden words in the essay prompt. Because they both started working on the assignment near the deadline, he said, they did not have time to ask the instructor about the words, and they decided to follow the prompt and include them in their essays.
A month after the due date, both students received emails stating their professor had reported them for using AI, listing charges Abbey said were related to AI and academic misconduct.
He said he and his friend initially faced the same charges, but he left the hearing with fewer charges than his friend, even though they had made the same mistake in their essays.
After the hearing, he received a zero on the assignment and a significant grade penalty that dropped his grade from an A to a C. The process spanned months, he said.
“Basically all of it was a horrible, like it was [a] traumatizing experience, and honestly it made me really so upset,” he said.
Abbey said, in hindsight, more formal support like legal representation might have helped.
After his case concluded, he said, a formal appeal felt daunting, and he chose not to pursue one because the process looked difficult and burdensome.
“They really don’t want to deal with you,” Abbey said. “That’s how I felt throughout the whole experience, and honestly it was just not fair at all how it happened.”
Contact Swasthi Maharaj at smaharaj@alligator.org. Follow her on X @s_maharaj1611.

Swasthi is the Fall 2025 university administration reporter. She previously worked as a general assignment reporter for The Alligator, and you can also find her work in Rowdy Magazine and The Florida Finibus. When she's not staring at her laptop screen or a textbook, she's probably taking a long walk or at a yoga class.




