Roko's Basilisk
AI Slopocalypse Now
Plus: Real headlines, slop survival tips, and the AI music money chase.
Here’s what’s on our plate:
🧠 We’re pulling back the curtain on AI slop.
📱 Vote in today’s poll and see how your doomscrolling compares.
⚡ Musk overhauls Grok, Google faces a search shakeup, and more.
😂 A meme that sums up the struggle of finding real news.
Let’s dive in. No floaties needed…

Build your store. Run your world.
Start your online business for free, then get 3 months for just $1. With Shopify, you don’t just build a website—you launch a whole brand.
Enjoy faster checkouts, AI-powered tools, and 99.99% uptime. Whether you’re shipping lemonade or scaling globally, Shopify grows with you. Trusted by millions in 170+ countries and powering 10% of US e-commerce, it’s your turn to shine!
Plus, you’ll have 24/7 support and unlimited storage as your business takes off.
*This is sponsored content

The Laboratory
How AI slop fuels misinformation
“A giant ball of cats is coming, and there is nothing we can do to stop.” If you have come across a post like this on any social media platform, congratulations: you’ve encountered AI slop. And honestly, you would have to be living under a rock, with no access to the internet, to escape the growing tide of low-quality, lazy AI-generated content taking over the web.
And it is not just lazy and low-quality: some AI slop, while meaningless and devoid of sound reasoning or research, is increasingly becoming a tool for spreading misinformation with real-world consequences. Recently, a wave of disinformation was unleashed online after Israel began its strikes on Iran. It ranged from fake AI-generated videos exaggerating Iran’s military capabilities and images of downed Israeli aircraft to AI-generated videos of protests shared as proof of growing dissent against the Iranian regime.
According to the BBC, an organization that analyses open-source imagery described the volume of disinformation online as "astonishing" and accused some "engagement farmers" of seeking to profit from the conflict by sharing misleading content designed to attract attention online. So how did we go from asking AI to generate videos of a giant ball of cats to using the tech to spread disinformation with real-world implications? Let’s take a closer look.
What is AI slop?
The word slop, which originally meant liquid or food waste fed to animals, was first applied to AI-generated content on online message boards, according to New York Times journalist Benjamin Hoffman. It describes low-quality content produced with minimal human input. Some of it is banal and pointless: cartoonish images of celebrities, fantasy landscapes, and anthropomorphised animals. Images of women posed as virtual girlfriends you cannot truly interact with, or political fantasies playing out on video, all count as AI slop.
Some people compare AI slop to email spam, hoping platforms will eventually become efficient at filtering it out. For now, though, social media platforms may not be in the mood to filter out slop: they have a lot to gain from it populating their feeds; more on that later.
AI slop has existed since generative AI tools were first released to the public. As AI companies improve their models, reducing hallucinations and enhancing the quality of generated content, AI slop is not just taking over the internet but metamorphosing into disinformation.
The risk of careless speech
According to the Reuters Institute, one risk of slop is “careless speech,” which researchers Sandra Wachter, Brent Mittelstadt, and Chris Russell define as AI-generated output containing “subtle inaccuracies, oversimplifications or biased responses that are passed off as truth in a confident tone.”
The point of such speech is to persuade the listener, regardless of whether what is said is true. And while “careless speech” doesn’t intentionally spread misinformation or disinformation, it primes users to overlook mistakes they would otherwise have noticed.
Careless speech easily passes under the radar: because AI models are trained to be engaging, human-sounding, helpful, and profitable rather than truthful, the content they generate is more likely to be politically or factually skewed.
As generative AI tools became more efficient, new business models emerged around using AI for content farming. And since exaggerated or sensationalised material boosts engagement, creators began latching onto real-world incidents to keep their posts performing on social media platforms.
A far more dangerous use of AI slop emerged when people began using generative AI to push political agendas through websites posing as local and business news organizations, distributing thousands of algorithmically generated articles. These sites can also be used to boost advertising revenue through SEO-optimized pages. Many produce low-quality clickbait about celebrities, entertainment, and politics, but they can also spread disinformation at moments when people face binary choices, such as those raised by conflict and politics.
Clickbait gets an AI upgrade
Recently, it was reported that Facebook's parent company, Meta, has a new vision where characters powered by artificial intelligence will exist alongside actual friends and family on its social media platforms. And Meta is not the only one; most major social media platforms, including TikTok, X, and Instagram, have launched AI-powered features with an eye on boosting engagement.
And since AI content gets more eyeballs, it can garner more ad dollars. According to social media management firm Buffer, AI-assisted posts had a higher median engagement rate than regular content; their findings were based on 1.2 million posts published via its platform to sites like Facebook and LinkedIn.
Slop can warp reality
As AI slop covers more and more of the online space, it is being used to spread disinformation during conflicts. AI-generated deepfake images and videos related to the conflict between Israel and Iran have flooded social media, while platforms turn a blind eye because such content drives user engagement.
Fake images of downed F-35 jets and B-2 bombers have circulated on social media, along with false videos depicting damage from an Iranian missile strike in Tel Aviv. While some of these posts have been flagged by platforms as AI-generated, most are tagged only after they have been viewed by millions of users.
However, disinformation in the form of fake images and videos is just one side of the coin. The other side is the long-term impact, with AI slop distorting even deeply serious content. The problem becomes even more precarious when serious global crises are reduced to memes and deepfakes. This results in public understanding being replaced by distraction, distortion, or worse, manipulation. The combination of algorithmic engagement loops and generative AI blurs the line between fact and fiction in ways that are difficult to detect and even harder to reverse.


Wednesday Poll

Quick Hits
Elon Musk is preparing a major revamp of his Grok chatbot, promising to cut through the AI “garbage” and take on the growing problem of misinformation.
Google may soon be required to overhaul its search business, potentially offering users more freedom to pick alternative services and challenging its dominance in the search market.
The music industry is developing new tech to track AI songs—not to stop them, but to find ways to profit from the wave of AI creativity.

Get daily marketing genius with the swipe’s creative inspiration.
Every issue of The Swipe dissects smart campaigns, viral stunts, and proven marketing techniques that push boundaries. From punchy slogans to immersive multimedia experiences, you’ll discover how top brands captivate audiences and spark conversation.
Skip the clutter of scattered social feeds: get a lean, focused view of innovative thinking. By translating each campaign’s core strengths into takeaways, The Swipe ensures you have a toolkit of imaginative ideas at your disposal.
*This is sponsored content

Brain Snack
💡 There’s a growing demand for real-time, in-context fact-checking tools, especially those that can flag deepfakes or misleading AI-generated text. Prototype a browser extension or Slack bot that instantly highlights suspect content and links to trusted sources.
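To make the prototype idea concrete, here is a minimal sketch of the kind of scoring logic such a bot could start from. Everything here is illustrative: the phrase lists and thresholds are made-up assumptions, not a real misinformation model, and a production tool would need far more robust signals (image provenance, reverse search, source reputation).

```python
import re

# Toy heuristic: flag text that combines a confident tone with a lack of
# attribution. Phrase lists below are illustrative examples only.
CONFIDENT_PHRASES = [
    "undeniably", "everyone knows", "the truth is",
    "proven fact", "100%", "definitely",
]
SOURCE_MARKERS = ["according to", "source:", "https://", "reported by"]

def flag_suspect(text: str) -> dict:
    """Return a crude suspicion score and the reasons behind it."""
    lowered = text.lower()
    reasons = []
    score = 0
    for phrase in CONFIDENT_PHRASES:
        if phrase in lowered:
            score += 1
            reasons.append(f"confident tone: {phrase!r}")
    if not any(marker in lowered for marker in SOURCE_MARKERS):
        score += 2
        reasons.append("no source or attribution found")
    return {"suspect": score >= 3, "score": score, "reasons": reasons}

# Example: a sourced claim scores low; a confident, unsourced one scores high.
print(flag_suspect("The truth is this jet was 100% shot down"))
print(flag_suspect("According to the BBC, the images were verified."))
```

In a browser extension, a function like this would run over page text and highlight flagged passages; in a Slack bot, it would annotate incoming links with the reasons list so readers can judge for themselves.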

Meme of the Day


Rate this edition
What did you think of today's email?
