The hyperreal: It’s a condition where simulations, media and images become more “real” to the viewer than what is actually happening in the real world, as described by French sociologist Jean Baudrillard.
If there’s anything that seems to break our reality today, it’s artificial intelligence.
The most apparent way AI is replacing our reality with fascist hyperreality is by editing and recreating the bodies of our leaders. When President Donald Trump posted AI-generated photos of himself as Master Chief from Halo, a Jedi from Star Wars and, yes, even the Pope, he received little positive feedback from anyone outside his base.
Yet even amid the rejection of Trump’s lunacy, his constant AI use became routine over the course of months. Simply put, it became normal to see Trump presenting himself as a goofy grandpa who just discovered Grok.
To return to Baudrillard, this representation is called a “signifier.” A “signifier” is a word or image that stands for something else, the “signified,” and the relationship between the two is usually very clear.
When someone says the words “Donald Trump,” we have very little doubt about what is being referred to. But that relationship between “signifier” and “signified” is harder to conceptualize outside of language. The accessibility and efficiency of AI image editing lead an entire audience to question what the signified really is.
At this point in time, debates over Trump’s ability to lead the country largely center on the way he portrays himself in the media rather than on his policies and actions as head of the executive branch.
By making a fool out of himself, Trump swaps the real (his leadership capacity) for the hyperreal (his self-depiction using AI and the media).
AI’s role in hyperreality isn’t confined to fascist leaders. Consuming primarily short-form media is becoming the norm for most young adults, meaning we (yes, I too am unfortunately cooked) interact with far more media than any past generation.
Watching short-form content decreases the time we spend prioritizing what we read, watch or listen to. Our content filters are loosening, and as a result, more “AI slop” reaches us. Yet the term AI slop, popularized by British programmer and blogger Simon Willison to describe low-quality, automated content, fails to capture how most media users actually interact with AI content.
As models such as Sora 2 or Veo 3.1 generate increasingly lifelike videos, our open filters constantly expose us to the hyperreal. After the U.S. Men’s Hockey Team won gold, an AI video of Brady Tkachuk, an American hockey player for the Ottawa Senators, surfaced on the White House X account. AI Tkachuk had been made to say: "They booed our national anthem, so I had to come out and teach those maple syrup-eating f---s a lesson."
Although many users were quick to point out how outlandish the fake video was, some accepted the fake as truth, and others began to venerate Tkachuk as a nationalist figure. For a swath of fans, the rapidity with which we consume media, combined with AI’s seemingly unbounded capability to produce human-like content, has replaced the real Tkachuk with his hyperreal counterpart, both on their phones and in their heads.
AI has suspended us in a state of vertigo as we wonder if the world we know is real or fake, or if the conceptions of reality we hold will collapse underneath our feet. The question has now become whether we ought to embrace this vertigo as our new reality, or if we should fight against the expansion of AI content.
The answer is simple: We have to push back. To accept AI as a new reality is to accept living not just in the hyperreal but in limbo, especially when moguls such as Sam Altman and Peter Thiel brand it as a futuristic advancement of humanity.
We’re constantly confronted by the schism between the real (what we see and hear away from our screens) and the hyperreal (what we see and hear in the media, our only true way of knowing what is happening outside of our immediate surroundings).
To reject the expansion of AI is to reject a world where we, as George Orwell’s “1984” fatefully seemed to predict, are commanded to reject the evidence of our eyes and ears.
Contact Sasha Morel at smorel@alligator.org. Follow him on X @BySashaMorel.
Sasha Morel is a freshman studying Philosophy and Politics and is a private debate coach for students across the nation. His opinion pieces for the Alligator focus on the intersection between Gainesville and the people, problems and politics that affect the city. He works to inspire structural changes through intellectually profound and empathetic analysis of current events.