For one UF computer science freshman, using ChatGPT is simple.
“Write me a summary of ‘Nicomachean Ethics,’” she typed into the blank chat box.
Immediately, the box produced its first word. Then a stream of ideas gathered behind the blinking black cursor.
Approximately 38 seconds later, she was presented with a seven-paragraph essay that was wholly original yet devoid of merit. “Nicomachean Ethics,” as the bot explained, is a work of Aristotle: ten books centering on the science of the good human life. Book Six covers intellectual virtue.
She understands the irony.
The student, interviewed on the condition of anonymity, uses ChatGPT so she can focus on subjects she feels are a better use of her time.
“You wouldn't be flagged for plagiarism,” she said. “Even if you wrote the same thing again, it could generate [an essay] completely different.”
She’s a small part of UF’s first steps into a new digital terrain, as faculty and administration grapple with the realities of evolving technology and how it’s used by students.
Marketed for its dialogue format, ChatGPT was launched in November by OpenAI, an American software company that specializes in artificial intelligence research. ChatGPT combines user feedback, improvisational and conversational skill and the internet’s endless supply of information to craft almost any desired text from any given prompt.
The student recognizes its flaws, though. For example, it doesn’t produce the quality of writing one might expect, she said.
“It's not too sophisticated,” she said. “But it is pretty good at identifying the main points and doing the bulk of the work.”
To the student, this is just another Wednesday.
The standards for machine ethics are constantly evolving, but this UF freshman believes there are more serious ethical breaches in modern higher education.
Applications such as Honorlock, which uses a combination of AI and human proctoring to regulate exams, are constitutional privacy violations, she said.
“Using AI to generate the first paragraph of an essay is nothing compared to that,” she said.
Similarly, university professors find themselves on both sides of the debate about the ethics of AI’s academic impact.
Sean Trainor, who teaches professional writing in UF’s Warrington College of Business, is more optimistic about the technology than others.
“Our general rule is not to tell students not to use [AI like ChatGPT] but how to use them as successfully as possible,” Trainor said.
The business college plans to implement AI within its established curriculum, Trainor said.
Though the process is still in its early stages, business faculty believe text-generative AI tools will help prepare students for the professional world’s rapidly changing landscape. The implementation of ChatGPT within the college is about maximizing productivity.
“[ChatGPT] hopefully allows folks in the workplace and other settings to draft messages more quickly,” Trainor said. “Where the human element comes in is in … [being] able to determine whether the text that the AI generates communicates your message in the most clear and concise way possible.”
But those in the humanities may have more reason to be wary, he said.
“I think ChatGPT potentially complicates some of the tools we’ve traditionally used to assess student learning,” he said. “Essays … and things like that are going to become more complicated.”
At Keene-Flint Hall, these complications have already arisen.
Steven Noll, a UF history professor, said concerns surrounding ChatGPT have found their way into faculty meetings.
“[Cheating is becoming] harder to detect and easier to do,” Noll said.
Noll fears students will resort to using the AI platform to complete work they feel is not essential to their chosen major. In his general education history class, United States Since 1877, he sees ChatGPT’s allure.
“It’s a class that everyone has to take to graduate, so there's going to be people in there who have no desire to be there at all,” Noll explained. “For those people, it will be much more enticing to do that.”
Many professors acknowledge cheating has always been present in academia. In the past, plagiarism has taken many forms. Websites such as Course Hero, Chegg, Quizlet and even Grammarly fall under UF’s definition of academic dishonesty.
UF’s student honor code states a student shouldn’t use or attempt to use unauthorized materials or resources in any academic activity for academic advantage or benefit. It doesn’t, however, explicitly condemn the use of AI.
Noll speculated this may be due to outgoing UF Provost Joe Glover’s commitment to AI’s positive applications within higher education.
Glover has said throughout his tenure that his goal is to turn UF into one of the nation’s leading AI universities. The installation of the HiPerGator 3.0 supercomputer in 2021 made UF the first university in the world to work with this technology, making Glover’s goal seem plausible.
Since the introduction of HiPerGator, Glover has strongly advocated for AI, saying higher education should mold itself around its presence.
“We know that, for example, right now there are AI systems that will confidently write essays for you, so what does it mean to ask a student to write an essay and then grade it?” Glover said as a guest on NVIDIA’s “The AI Podcast.” “In a period when a student can turn to his or her computer and say ‘Please write me an essay.’”
In an interview with The New York Times in mid-January, Glover emphasized ChatGPT isn’t yet explicitly worked into UF policy.
“We try to institute general policies that certainly back up the faculty member’s authority to run a class,” Glover said. “This isn’t going to be the last innovation we have to deal with.”
Cynthia Roldan, a UF spokesperson, referenced the UF Honor Code when asked about the platform.
“Using any materials or resources through any medium which the professor has not given express permission to use and that may confer an academic benefit for the student is a violation of the UF Student Honor Code and Student Conduct Code,” she wrote in an email. “A student accused of this could be subject to conduct action.”
In the meantime, professors have begun discussing ways to combat the use of AI for schoolwork. Programs such as GPTZero, which can detect the use of ChatGPT in a given text, are becoming crucial to guarding against academic misconduct.
While some professors like Trainor embrace AI’s promise, others like Noll fear the relationship between UF and AI has gone too far.
Noll sees AI as a metaphorical atomic bomb. “Used correctly, AI can be pretty amazing; used incorrectly, it can be really problematic … I don’t think [UF’s] thinking about that.”
Contact Sydney at email@example.com. Follow her on Twitter at @sydneyajohnson15