Are deepfakes a distraction?
How AI disinformation works through lore-building, not deception
The discourse around AI and disinformation has fixated on deepfakes: videos so convincing we won’t be able to tell what’s real anymore. Fake videos of politicians confessing to crimes, photorealistic forgeries destabilizing elections, a world where seeing is no longer believing. These concerns aren’t unfounded: deepfakes are real, and they pose genuine risks. But our obsession with them has left another, arguably more pervasive, form of AI disinformation almost entirely unexamined.
What’s actually been happening is that the internet has been flooded with AI-generated political imagery that nobody mistakes for reality. Remember those images of Trump hugging ducks? Trump as a superhero? Politicians rendered as everything from action figures to Renaissance paintings? Nobody thought these were real photographs. Everyone knew they were AI-generated fantasies. And yet, they kind of work?
While we’ve been worrying about deception, I believe we’ve missed how AI disinformation operates through lore-building, by creating a cultural atmosphere, a mythology, that shapes perception even when everyone knows it’s “fake”.
From deepfakes to deep lore
If deepfakes are about deception, deep lore is about affective world-building – emotional atmospheres that shape how people interpret reality. It’s about creating the frame through which people interpret everything else, not by lying to them, but by giving them a repertoire of narratives, emotions, and symbolic associations that feel true even when everyone knows they’re constructed.
This is where I think AI somewhat changes what we’ve called “banal populism.” Populist leaders have always built emotional resonance through banal moments: eating sausages, playing with dogs, doing sports. These mundane images do political work precisely because they feel ordinary and relatable. But previously, politicians actually had to perform these moments, and each image was constrained by what could be photographed or filmed in real life. Now those sentiments can be endlessly reproduced without the politician doing anything at all; AI removes the bottleneck on myth production entirely. What once required institutional resources and actual performance can now be generated in seconds and repeated in hundreds of variations until it solidifies into mythology. The result is memetic storytelling that functions like oral folklore: endless variations on the same archetypes, constantly adapted and reshared.
Trump as protector appears in a hundred different scenarios. In Hungary, Orbán himself posted an AI-generated video showing him scoring a spectacular goal against the crying opposition leader. It didn’t claim to be real footage, yet it spread widely, because the point wasn’t documentary truth; it was mythic resonance. Orbán as the winner, the competent one who scores when it matters. This isn’t an individual “lie” to be debunked; it’s a folkloric tradition being constructed in real time, building lore that shapes perception without ever pretending to be real.
The return to orality
This shift toward lore-building isn’t just about technology; it reflects a deeper transformation in how we communicate, one that media theorists have been tracking for years. For most of human history, knowledge was transmitted orally. Stories and myths passed from person to person through constant retelling. In oral cultures, “truth” wasn’t primarily about factual accuracy but about resonance – a story mattered if it helped people make sense of their world, if it could be remembered and shared. Then came writing and print, which changed everything. Literacy encouraged linear thinking and the idea that truth could be fixed in authoritative texts. Information became something you could verify by checking the record.
But digital media is pulling us back. Walter Ong called this “secondary orality”: we haven’t lost literacy, but we’ve gained something alongside it that resembles oral communication. As Eric Levitz writes in his 2025 essay on “the decline of reading,” scrolling and swiping have displaced the sustained attention print demanded. Information now lives not in stable storage but in constant circulation, and as Levitz puts it, “information doesn’t stick when it’s stored; it sticks when it circulates.” Repetition, in other words, creates reality. In this environment, AI-generated political imagery operates less like factual claims and more like these circulating myths, evaluated not as true or false but as resonant or flat.
Living in the lore
We’re not (only) facing a crisis where we can’t tell what’s true anymore. We’re facing something stranger, where the truth is competing with mythic stories, and in that competition, facts are often at a disadvantage.
You can fact-check a deepfake. You can debunk a specific false claim. But how do you fact-check a vibe? How do you debunk an atmosphere? The accumulation of AI-generated lore doesn’t make falsifiable claims; it creates emotional environments and narrative frameworks that operate below the level of conscious evaluation. This is what makes lore-building so much more pervasive than deepfakes. It doesn’t need to deceive to work; it just needs to circulate, to repeat, to become part of the ambient mythology we swim in. And because AI makes myth production nearly costless, we’re all living inside multiple competing lores now, each reinforced by algorithmic abundance.
The deepfake panic imagined a future where sophisticated forgeries would destroy our ability to know what’s real. But the actual present is weirder: we’re generating fakes, we know they’re fakes, we’re sharing them anyway, and they’re shaping our reality regardless. The images don’t claim to be true. They don’t need to. They just need to feel right, to resonate, to become the stories we tell about power and who deserves it.
Maybe that’s the real shift: not that we can’t tell truth from lies, but that we’ve learned to live in mythologies that we know are constructed and that work on us anyway.