Free speech holds a powerful place in American culture, yet it’s frequently misunderstood and occasionally used as a shield for behavior it was never meant to protect.
Recent controversy surrounding Grok — the artificial intelligence chatbot created by Elon Musk’s company xAI and integrated into the social media platform X — demonstrates how free speech can be stretched beyond recognition, invoked in some cases to justify behavior that causes real harm and exceeds its legal limits.
Grok has drawn international attention for its ability to generate text and realistic images and to answer questions in response to user prompts posted on X. While the technology offers creative and informational benefits, it has also been used to produce sexualized deepfake images from photos submitted by users.
Deepfakes are altered images or videos created with artificial intelligence that typically make the subject do, say or wear something they never did. Many reported cases involve women whose likenesses were used without consent, raising serious privacy and harassment concerns.
Ashley St. Clair, an influencer who shares a child with Musk, claims explicit images were generated using both recent photos of her and images from when she was a teenager. St. Clair has filed a lawsuit against X, while X has countersued.
Musk has defended Grok by placing responsibility on users rather than the technology itself.
In a Jan. 14 post on X, Musk wrote, “Obviously, Grok does not spontaneously generate images, it does so only according to user requests.” He also said Grok is designed to refuse illegal content and that unexpected outputs would be treated as technical flaws. Musk frames Grok as a neutral tool, suggesting users bear responsibility for its misuse rather than the developers who created it.
When a single platform can generate millions of sexualized images in days, the claim that responsibility belongs only to users begins to collapse. The Center for Countering Digital Hate estimates that Grok generated roughly 3 million photorealistic sexualized images within an 11-day period, averaging about 190 images per minute.
Technology capable of producing harmful content at this pace is not simply a passive tool; it’s an amplifier. When developers design systems that dramatically expand the reach of exploitative material, they share responsibility for its consequences.
Free speech has never been without limits. American law has long placed restrictions on speech involving fraud, threats, defamation and exploitation.
Deepfake pornography, especially when minors or nonconsenting adults are involved, strips people of control over their own identity and exposes them to harassment. Treating these offenses as a free speech issue risks confusing constitutional protections with technological abuse. The First Amendment protects expression, not exploitation.
Artificial intelligence is evolving faster than society’s ability to regulate it. As these tools become more powerful, developers must recognize that innovation without safeguards carries real consequences.
The legal system remains largely unprepared for this rapidly evolving technology. Clear AI laws will be necessary to protect developers, users and victims from harms that current laws struggle to address. Free speech is a cornerstone of American society, but it loses meaning when it’s used to justify exploitation rather than protect expression.
Contact Alannah Peters at apeters@alligator.org. Follow her on X @alannahjp777.
Alannah Peters is a junior majoring in journalism and minoring in public relations. In her spare time, she can be found trying new coffee shops with friends, traveling the U.S. or going on hot girl walks at Lake Alice.