Wednesday, April 15, 2026

Local agencies, researchers keep up with quickly developing AI deepfakes

AI advancements fuel growing concerns over child exploitation


Deepfakes are digital content manipulated or generated using artificial intelligence.

Read other stories from the "These stories were not AI-generated" special edition here.

A North Florida task force on crimes against children received 27 reports from the artificial intelligence chatbot Grok and 11 reports from OpenAI in the past year.

The reports indicate someone prompting the platforms to create child exploitation material, said Sgt. Christopher King, a Gainesville Police Department detective and commander of the North Florida Internet Crimes Against Children Task Force. 

GPD is a host agency for ICAC, which covers 38 counties in northern Florida. The data indicates deepfakes, which have garnered nationwide alarm, are seeping into local communities. 

Deepfakes are photo, video or audio content generated with AI models, particularly generative AI, that depict people performing actions or saying phrases they never actually did.

As Florida lawmakers attempt to grapple with restrictions on the ever-evolving technology, local experts say deepfakes pose a threat to public safety.  

King supervises an investigative squad that follows up on CyberTips sent by the National Center for Missing & Exploited Children. The tips are submitted when online platforms, such as Meta, TikTok and Snapchat, believe child exploitation is happening on their platform, King said. 

In Florida, it is a third-degree felony to possess or control images modified to portray a minor engaged in sexual activity, a form of generated child pornography.

North Florida ICAC has an in-house programmer monitoring the dark web who has found AI-generated images and videos of child exploitation material, King added.

“It’s definitely a growing issue that parents and children need to be aware of,” he said. “This, in return, should caution anyone when they’re sharing any photographs online.”

On March 31, the NCMEC released child exploitation data from 2025. According to the report, its CyberTipline received more than 1.5 million tips linked to AI-generated child sexual exploitation, an increase of more than 2,000% from 2024.

Advanced AI models can create “photorealistic, high-resolution content of any images,” said Kevin Butler, a UF professor in computer and information science and engineering. 


Butler serves as the director of the Florida Institute for Cybersecurity Research. He’s researched AI and sexually explicit deepfakes with graduate students and researchers at UF and Georgetown University. The team analyzes the development of deepfakes, their harm and how humans can detect them. 

“This is an area that has seen rapidly increasing levels of quality compared to when they started,” Butler said.

Most commercial AI models, such as ChatGPT, have internal guardrails to prevent explicit content creation, Butler said. But perpetrators can train some open-source models, such as DeepFaceLab, to create harmful content. 

In a Summer 2025 characterization study, he said, his team found many of the chatbots didn’t verify age or consent. They’ve also found evidence of online discussion boards where users exchange tutorials on how to generate the images.

People in the community who have been victimized by deepfakes have reached out to his team for help, Butler added.

Legislative restrictions

A law approved by Gov. Ron DeSantis in May 2025 directly prohibits possessing, requesting and creating nonconsensual sexually explicit material. 

Violating the law is a third-degree felony, punishable by up to five years in prison and registration as a sex offender. 

It passed entirely unopposed on both the House and Senate floors.

Despite the action’s legislative success, questions remain for researchers like Butler. It’s unclear whether the legislation refers to the AI companies themselves, he said, and it doesn’t clarify their responsibility to monitor what’s happening on their platforms. 

The question arises, he said, of whether an AI platform is partially responsible for the creation of nonconsensual imagery, even if the platforms themselves are not distributing the images.

Preventing deepfakes relies on collaboration between legislators and technical communities, Butler said. 

“The ability to create these images is clearly very easy,” he said. “The solutions are really that we need to be addressing how to mitigate and stop these.”

Ryan Kennedy, the chief operating officer of Florida Citizens Alliance, lobbied for the law passed in 2025, but he said more legislation is necessary to protect Floridians. 

This legislative session, the Florida Citizens Alliance supported a new Florida bill, the “Artificial Intelligence Bill of Rights,” which would place stricter regulations on AI companies. The proposal would ban chatbots from communicating with minors without parental consent, and chatbots would be required to frequently remind users they are not speaking to humans. 

The bill passed the Florida Senate but failed in the Florida House of Representatives.

Kennedy believes there’s still hope — and need — for the bill to resurface. 

“Congress is very slow to act on federal legislation,” Kennedy said. “We believe that Florida needs to be a leader in this and protect the citizens here.”

It will always be a challenge for legislation to keep up with ever-evolving technology, said Zoey Scheinblum-Brewer, the policy and grassroots coordinator for the Rape, Abuse & Incest National Network. 

RAINN has received an increase in calls related to sexually explicit deepfakes on its national hotline, she said.

The nonprofit anti-sexual violence organization provides support to survivors and advocates for policy change. 

This growing form of sexual violence is just as negatively impactful as in-person offenses, Scheinblum-Brewer said. 

In many of the cases RAINN has worked on, deepfakes are created by people directly connected to the victim, including peers and classmates, Scheinblum-Brewer said. Still, anyone pictured in images online is susceptible to being victimized.

“Nonconsensual intimate images take away a person’s control over their body and identity entirely,” she said. “Now, you don’t even have to be in the same room as your perpetrator in order for them to harm you.” 

Contact Vanessa Norris at vnorris@alligator.org. Follow her on X @vanessajnorris.





All Content © 2026 The Independent Florida Alligator and Campus Communications, Inc.