From La Fontaine to Lego: Characters as Ideological Delivery Systems

· 3081 words · 15 minute read

Cute characters as ideological delivery systems, and how AI accelerated the propaganda playbook.

Tracy Alloway nailed it: “Kind of crazy that the big propaganda medium to come out of AI wasn’t deepfakes but LEGO men and Persian cats.”

Everyone was bracing for deepfakes. The national security community spent years warning about synthetic video of world leaders saying things they never said, doctored footage designed to deceive at the pixel level. Instead, what showed up was Lego minifigures of Trump and Netanyahu set to AI-generated rap tracks, produced by an Iran-based group calling themselves the “Explosive News Team”. And it wasn’t just Iran. Chinese state media CCTV joined in with its own GenAI animal fable: “The White Eagle and Persian Cat”, a stop-motion style animation where a White Eagle Alliance dominates trade by forcing other animals to use its currency. Not trying to fool anyone into thinking the footage was real. Just trying to be catchy, shareable, and memetically sticky.

This is jestermaxxing, a term that originated around 2021 on looksmaxxing forums, where -maxxing means optimizing a single trait to its absolute limit. In its original context, to jestermaxx (Know Your Meme) is to use humor as your primary strategy for attracting attention: like a court jester, you keep your place at court only as long as you keep the king amused. Looksmaxxing is about maximizing physical appearance. Jestermaxxing is about maximizing attention through entertainment, and attention is the most expensive currency on the internet.

Repurposed for geopolitics: jestermaxxing is about maximizing the spread of your message by making it so entertaining, so absurd, so funny that people share it reflexively. Wrapping your ideological payload in humor so it rides the algorithm instead of fighting it. It works because people share things that make them laugh, not things that make them suspicious.

AI didn’t invent a new form of propaganda. It accelerated every form we already had.

This playbook is older than the internet

The reflex is to treat this as something new. It isn’t. Using cute or clever characters to deliver political messaging predates the internet by centuries.

The underlying principle is simple: one character equals one idea. Compress a complex political concept into a recognizable figure, and it becomes transferable to any audience, across languages, without explanation. La Fontaine’s animals are the French court. Orwell’s pigs are Soviet leadership. Golding’s boys in Lord of the Flies map the same way: Ralph is democratic order, Jack is authoritarianism, Piggy is rationalism. Sesame Street’s Muppets are civic values. Iran’s Lego Trump is American aggression. CCTV’s White Eagle is US imperialism. The character isn’t a metaphor for the idea. The character is the idea, in a form that propagates.

This isn’t just how propaganda works. It’s how storytelling works. It’s the fundamental unit of how humans transmit ideas through narrative. Literature, children’s education, satire, and state-sponsored information operations all use the same compression algorithm. The only difference is intent. Propaganda is just storytelling with a handler.

In 1668, Jean de La Fontaine published his Fables, animal allegories that were, beneath the surface, pointed political commentary on Louis XIV’s court. The lion was the king. The fox was the courtier. The wolf preyed on the weak. “The Animals Sick of the Plague” was a thinly veiled critique of how the powerful scapegoat the powerless. Louis XIV understood exactly what La Fontaine was doing and froze him out of the Académie française for it.

The form is structurally identical to what Iran and CCTV are producing today: animal characters carrying political messaging. The difference between La Fontaine’s fables and Iran’s Lego videos is not the medium; it’s the intent and the apparatus behind it. La Fontaine was an individual using satire to expose power from below. State information operations use the same form to project power outward. Satire wants you to see through the allegory. Propaganda wants you to share it before you think about it.

But the parallel runs deeper than form. La Fontaine’s Fables helped shape the intellectual climate of the Age of Enlightenment. They taught generations to question authority through narrative, to see political structures as contingent rather than natural. The printing press made that possible by democratizing the distribution of ideas. We may be living through an analogous transformation. AI is doing to content creation what the printing press did to text: collapsing the cost of production so dramatically that it reshapes who gets to participate in the discourse and how fast ideas propagate. The Enlightenment was, among other things, a consequence of a new distribution technology meeting a backlog of suppressed ideas. If AI and social media are this era’s printing press, the question is what kind of intellectual transformation (or manipulation) follows.

War propaganda posters in WWI, WWII, and the Cold War were the memes of their day: simple, visual, emotionally charged, designed to spread a message through repetition and appeal. Uncle Sam pointing at you. Rosie the Riveter. Soviet constructivist posters. They were short-form, high-impact, shareable (literally, they were printed and plastered everywhere). The medium changes, the mechanics don’t.

In the 1990s, USAID funneled $6 million into adapting Sesame Street for post-Soviet Russia. The story is extensively documented in Natasha Lance Rogoff’s Muppets in Moscow (Smithsonian). The result was Ulitsa Sezam (Улица Сезам), a Russian-language version of the show designed not just to teach kids the alphabet, but to promote democratic values to an entire generation of post-Soviet children. The Muppets taught sharing, tolerance, civic participation, and individual agency to kids growing up in the rubble of the Soviet Union. Catchy songs. Lovable characters. An ideology baked into every episode so subtly that it didn’t feel like ideology at all.

This was, by any honest definition, a state-funded information operation. A very successful one. The US spent the Cold War perfecting the art of cultural influence (Radio Free Europe, Voice of America, Hollywood as soft power projection) and Ulitsa Sezam was the logical extension: start them young, make it fun, let the message ride on the entertainment.

From pin-ups to archetypes

Characters aren’t the only vehicle. Propaganda has always exploited whatever captures attention, and for much of modern history, that meant people.

During WWII, the US military distributed millions of pin-up photos to troops overseas. Betty Grable’s studio alone printed five million copies of a single image. The stated purpose was morale, but the subtext was clear: remind young men what they’re fighting for. The pin-up was a recruitment and retention tool dressed up as entertainment.

That mechanic never went away. It just migrated to new platforms. As early as 2007, the Israeli consulate in New York partnered with Maxim magazine for a feature called “The Chosen Ones” (Jerusalem Post), featuring female IDF soldiers in a deliberate hasbara campaign targeting young American men. The shoot included a then-unknown Gal Gadot. The consulate was explicit about the intent: young American males had no feelings toward Israel, and attractive female soldiers in various states of undress were the solution.

That campaign evolved into what analysts now call “Combat Cuties”: female IDF soldiers posting dancing videos, thirst traps, and fitness content on TikTok and Instagram. Duke professor Rebecca Stein has described this as “entertainment militarism”: using attractive women in combat gear to humanize military operations, boost Israel’s image in public diplomacy, and make acts of violence appear justified or necessary. Academics have framed it more sharply as “sexist colonial feminism” and “imperial feminism”, the co-optation of women’s empowerment narratives for militarized propaganda, where feminism itself becomes the delivery vehicle for normalizing occupation.

Then AI entered the pipeline. “Jessica Foster”, a “beautiful Army blonde”, amassed over a million Instagram followers in three months with pro-Trump, pro-military content. She wasn’t real. The entire persona was AI-generated imagery controlled by an anonymous operator funneling conservative men toward an OnlyFans page. The Army confirmed it had no record of her. The photos were forged (incorrect American flags, bizarre uniform details), but none of that mattered. The audience wanted to believe.

And then the vehicle shifted again, from fake people to characters entirely. Iran’s Lego rap videos are structurally identical to what the US was doing with Sesame Street Muppets in Moscow. Cute characters. Catchy music. An ideological payload delivered through a medium that disarms the audience’s critical filters. The format is the psyop. The content is secondary to the distribution mechanism.

The full pipeline, unfolding in real time: static images (war posters, pin-ups) to produced video (TikToks, female IDF “Combat Cuties” dance clips) to AI-generated images (Jessica Foster) to fully GenAI-generated video (Iran’s Lego characters, CCTV’s Persian cats). Each step lowered the production cost, raised the output tempo, and made the origin harder to trace.

That last step, from people to characters, isn’t a downgrade. It might be the most significant shift of all. Carl Jung’s concept of the collective unconscious offers a framework for why. Jung argued that beneath individual consciousness lies a shared psychic layer populated by archetypes, primordial templates (the hero, the trickster, the tyrant) that recur across every culture’s myths, dreams, and symbols. These archetypes aren’t learned; they’re inherited. They’re the reason a cartoon eagle dominating other animals reads instantly as imperialism to anyone, anywhere, without a single word of explanation. Characters tap into universal narrative structures that bypass the critical filters real people activate.

Neuroscientist Anil Seth extends this from a different direction. In his framework, what we experience as “reality” is a controlled hallucination: the brain doesn’t passively receive the world but actively generates conscious experience through top-down predictions, constrained by sensory input. When enough individual hallucinations align, they form the consensual fabric we call shared reality. Jung’s archetypes, in this light, function as deep evolutionary priors, shared templates that bias how each brain constructs its model of the world, explaining why certain mythic motifs resonate universally even as each person’s experience remains a personalized simulation.

The implication for information operations is significant. Every group, tribe, or political faction is already living inside a partially distinct controlled hallucination, a reality shaped by its own priors, narratives, and in-group symbols. Propaganda has always worked by hacking these shared templates. But characters and archetypes are a more direct route to the collective unconscious than real people are. A dancing IDF soldier can be fact-checked, contextualized, criticized. A Lego Trump or a Persian cat allegory operates at the level of myth. It slots into pre-existing narrative structures before the conscious mind has a chance to evaluate it. The shift from imperial feminism to cartoon characters isn’t just cheaper. It’s deeper.

And the economics make it trivial to produce at scale. Ulitsa Sezam required $6 million in USAID funding, professional puppeteers, a production studio, broadcast distribution deals, and years of development. A single episode took weeks to produce. Distribution meant negotiating with TV networks for airtime in a single country. Iran’s operation requires a GenAI video tool, a laptop, and a social media account. They have been publishing new Lego videos almost daily, sometimes responding to events the same day they happen. That kind of turnaround used to require an entire studio and weeks of lead time. Now it takes an afternoon. And distribution is instant, global, and free: post it on Twitter/X and Telegram and the algorithm does the rest.

Deeper AND cheaper. Social media made distribution virtually free. GenAI made production virtually free. The combination means that the throughput of an information operation is no longer bottlenecked by budget or infrastructure. It’s bottlenecked by how fast you can come up with the next idea.


And there’s a full circle here worth noticing. The most technologically advanced propaganda pipeline of 2026 (GenAI video, algorithmic distribution, real-time production) landed on the exact same form La Fontaine used in 1668: animal characters carrying political allegory. The lion, the fox, and the wolf became the eagle, the Persian cat, and the Lego minifigure. Three and a half centuries of technological progress, and the most effective vehicle for ideological messaging is still a character in a fable.

Memes as the unit of propaganda

The internet’s favorite riff on Dune goes: “Who controls the memes, controls the universe.” Frank Herbert was writing about spice, not memes (Dawkins wouldn’t coin the term, for a unit of cultural transmission, until a decade after the novel), but the substitution has never read more literally than in 2026. Elon Musk put it more bluntly in 2023: memes are “the most information-dense form of communication.” The riff and the quote are describing the same weapon.

Susan Blackmore formalized the idea in The Meme Machine (1999): memes (ideas, behaviors, cultural units) replicate and evolve through imitation the same way genes do through biology. The ones that survive are the ones best adapted to spread: catchy, simple, emotionally resonant. She was writing about culture in general, but the framework maps perfectly onto information operations. A successful psyop is just a meme with a handler.

The -maxxing suffix itself is proof of concept. It jumped from niche forums to mainstream internet slang to, now, a framing device for geopolitical analysis. Marc Andreessen, co-founder of a16z, one of the most influential VC firms in AI, recently endorsed “retardmaxxing” on a podcast, describing it as his new life philosophy. When a suffix born on self-improvement forums ends up in the mouth of a billionaire venture capitalist with significant AI investments, that’s not cultural drift. That’s a meme completing its replication cycle. The trajectory (subculture to mainstream to serious discourse to Silicon Valley boardroom, carried entirely by humor and repetition) is exactly the vector that state actors are learning to exploit. Meme culture is how ideologies spread now. The format is the delivery system.

Propaganda got democratized

In the early 2000s, French internet comedian Rémi Gaillard built a following on viral prank videos and coined the slogan “C’est en faisant n’importe quoi qu’on devient n’importe qui”, or “it’s by doing anything that you become anyone.” It was a manifesto for the first generation of internet virality: one person with a camera and zero budget could become famous by being outrageous enough. With GenAI, the script has flipped. It’s no longer about anyone becoming someone by doing anything; it’s that anyone can now do anything. The creative constraint is gone. The production bottleneck is gone. What used to require talent, equipment, and time now requires intent and a prompt.

In 2020, I wrote about the tradecraft behind state-sponsored information operations on Twitter and Facebook. The modus operandi back then was networks of fake accounts, coordinated inauthentic behavior, and bulk amplification, essentially astroturfing at scale. Platforms would periodically purge these networks and publish transparency reports. The operations were labor-intensive, detectable, and expensive to sustain.

That playbook is becoming legacy. GenAI changes the economics of every step. Content creation that required teams of operators now requires a prompt. Persona management that required maintaining hundreds of accounts now requires generating hundreds of synthetic voices. And the shift from fake-accounts-pushing-talking-points to entertaining-content-that-spreads-organically makes platform detection dramatically harder. You can’t flag a Lego rap video as coordinated inauthentic behavior. It’s just a video. The inauthenticity is in the intent, not the content.

Noam Chomsky and Edward Herman described the machinery of narrative control in Manufacturing Consent (1988): mass media as a system of filters that shape public perception in service of elite interests. Their model assumed a concentrated media landscape where a handful of institutions controlled the pipeline. That concentration was the chokepoint, and the chokepoint was the leverage. GenAI blew the chokepoint open. The filters Chomsky described haven’t disappeared, but they’ve been joined by a flood of competing narratives that no single institution controls. Manufacturing consent used to require owning the media. Now it requires owning the algorithm, or just being better at feeding it.

The US spent decades building an information operations capability that required state-level resources: budgets, institutions, broadcast infrastructure, cultural expertise. That capability has been commoditized. The tooling is commercial. The distribution is free. The feedback loop (engagement metrics, shares, virality) is instantaneous. Propaganda still requires intent and coordination, but the barrier to producing it at tempo and at scale has dropped by orders of magnitude.

Did they get mogged by their own playbook?

There is an irony here that is hard to ignore. The US spent $6 million and years of development to use Muppets as a vehicle for promoting democracy in post-Soviet Russia. It worked. The approach became doctrine. And now the very thing that was being promoted (the democratization of tools, platforms, and access) is what made the playbook available to everyone else.

Propaganda for democracy fast-forwarded into the democratization of propaganda.

The specific concern isn’t that Iran made some Lego videos. It’s that the cost curve for information operations has crossed a threshold where the asymmetry that used to favor well-resourced democracies no longer holds. The US could outspend the Soviet Union on cultural influence. It cannot outspend the entire internet.

The Muppets just got open-sourced.

If the AI race is really a race for narrative control, and storytelling is the mechanism through which ideologies propagate, then whoever controls the AI controls the writing itself. The character is the delivery vehicle. The ideology is the payload. AI is the rootkit on the narrative layer: it operates below conscious discourse, shaping what stories get told, how they’re framed, and who sees them, without the audience ever knowing the kernel has been compromised.

But narrative control may not even be the endgame. They say victors write history, but you don’t need to rewrite history if nobody remembers it. Short-form content (TikTok meme videos, rapid-fire algorithmic feeds) is optimized for engagement, not retention. Musk himself called it one of the inventions that has “made humanity worse,” saying it seems to be “rotting people’s brains.” Earlier in this post, he was quoted calling memes “the most information-dense form of communication.” Both are true, and that’s the problem. Memes are the highest-bandwidth delivery mechanism and the lowest-fidelity storage format simultaneously. The information arrives perfectly compressed and then evaporates. What remains is not memory but a vague emotional residue: a feeling about an era, a gestalt impression of who the bad guys were. History becomes vibes. Causality collapses into aesthetic. When consumers can’t hold any narrative long enough to compare it against reality, their sense of the present becomes whatever the last few things they consumed told them it was. Narrative control is a renter’s game. Whoever controls memory controls everything. Short-form content is the real rootkit.