Is It Real Or Just AI?

Plus: Meta’s flirty bot, GPT-5’s manners, China’s robot games.

Here’s what’s on our plate today:

  • 🧠 Deepfakes are fueling a booming market for AI detection tools.

  • 🤖 GPT-5 chills out, Meta’s bot flirts, and robots go for gold.

  • 💡 A pro tip to sharpen your internal AI radar.

  • 🧪 Design your own AI forensics bot in today’s prompt.

Let’s dive in. No floaties needed…

Build your store. Run your world.

Start your online business for free, then get 3 months for just $1. With Shopify, you don’t just build a website—you launch a whole brand.

Enjoy faster checkouts, AI-powered tools, and 99.99% uptime. Whether you’re shipping lemonade or scaling globally, Shopify grows with you. Trusted by millions in 170+ countries and powering 10% of US e-commerce, it’s your turn to shine!

Plus, you’ll have 24/7 support and unlimited storage as your business takes off.

*This is sponsored content

The Laboratory

How deepfakes are driving the market for AI detection tools

Whenever a new technology is released for public use, threat actors find a way to misuse it for their benefit. Artificial Intelligence (AI) is no different. Since the release of Large Language Models (LLMs) capable of generating code, images, and text, there have been fears that the technology would be misused for nefarious purposes. Those fears were not unfounded. Within months of ChatGPT’s launch, threat actors were found using the chatbot to generate malicious scripts. By 2024, deepfakes were being used in attempts to sway voters in elections, and video generators were being accused of producing explicit imagery of celebrities.

On the surface, the problem may not look like much, but AI companies have struggled to contain misuse, including misuse of actively promoted features that let users generate objectionable content.

OpenAI, for instance, has repeatedly updated its chatbots to curb misuse. Others, like Grok Imagine, offer features that allow users to generate explicit images. To control misuse and help identify AI-generated content, companies have also experimented with watermarking generated media so its origins are easier to trace.
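To make the watermarking idea concrete, here is a minimal, purely illustrative Python sketch of the “statistical watermark” approach explored in text-generation research: the generating model is nudged toward a pseudo-randomly chosen “green” subset of the vocabulary, and a detector later checks whether green tokens are suspiciously over-represented. The hashing rule, threshold, and helper names here are invented for the example and do not reflect any vendor’s actual scheme.

import hashlib

def is_green(prev_token: str, token: str) -> bool:
    # Toy rule: hash the (previous token, token) pair and call roughly half
    # of all continuations "green". A watermarking generator would bias its
    # sampling toward green tokens; ordinary text hits them ~50% of the time.
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(tokens: list[str]) -> float:
    # Fraction of token transitions that land in the green list.
    pairs = list(zip(tokens, tokens[1:]))
    if not pairs:
        return 0.0
    return sum(is_green(p, t) for p, t in pairs) / len(pairs)

# A detector would flag text whose green fraction sits far above ~0.5,
# since that is unlikely to happen by chance in human-written text.
sample = "the quick brown fox jumps over the lazy dog".split()
print(f"green fraction: {green_fraction(sample):.2f}")

Real schemes operate on model logits at generation time and use proper statistical tests, but the core intuition is the same: the watermark is a measurable bias hidden in otherwise natural-looking output.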

The need for AI detection tools

As AI models grow more capable, the line between real and AI-generated content is increasingly blurred. The problem posed by deepfakes is complex, threatens trust in basic social structures, and requires more than regulatory frameworks to tackle. Because modern models can produce high-fidelity fake text, audio, and video, the need to detect AI-generated content keeps growing.

The increasing proficiency of AI models is intensifying threats such as identity fraud, voice-based scams, and disinformation campaigns. Deepfake-based scams and voice-cloning attacks have already led to multi-million-dollar frauds. In a widely reported incident, a Hong Kong-based finance worker paid out $25 million to scammers after they deepfaked the company's chief financial officer and other staff members on a video call.

Businesses and individuals increasingly rely on detection tools to verify authenticity, maintain trust, and prevent reputational or financial damage, especially as humans find it harder to distinguish real from fake. In short, the better AI gets at creating deceptive content, the more crucial detection tools become. The flood of online AI slop, a term for low-quality AI-generated content, has only added to that need. Together, these forces are driving the growth of the market for tools that detect AI-generated content and deepfakes.

Detection tech is becoming big business

According to Deloitte, analysts estimate that the global market for deepfake detection will grow by 42% annually, from $5.5 billion in 2023 to $15.7 billion in 2026. That growth is expected to be fueled by media companies and tech providers investing in content authentication solutions and consortium efforts. Other industry reports project similar growth rates.

Spherical Insights projects that the market will grow by 30–47% in the coming 5–10 years, signifying that the market is expanding rapidly, though absolute dollar forecasts vary because of differing definitions of deepfakes, detection tools, and time frames.

Some notable companies working on solutions to detect manipulated media across images, video, audio, and even AI-generated text include Breacher.ai, DuckDuckGoose AI, and Q Integrity.

While DuckDuckGoose AI and Q Integrity offer solutions for detecting deepfakes and can even scan printed documents for signs of forgery, Breacher.ai has taken a different approach. The startup simulates deepfakes and launches them at companies to expose vulnerabilities in both processes and people.

While these startups have yet to make it big, the projected expansion of the AI detection market is expected to fuel further growth. And it is not just startups: big tech companies are also investing in tools that help enterprises and individuals discern whether a piece of content was generated by a human or by an AI tool.

Giants like Microsoft and Intel have released tools aimed at detecting AI-generated content online. Even OpenAI at one point offered an AI classifier that tried to distinguish human-written from AI-generated text, but the startup had to take it offline due to its low accuracy. OpenAI’s failed attempt at building a classifier highlighted a key challenge for AI detection tools: ensuring accuracy.

Why deepfake detection still falls short

The industry is moving fast, with companies racing to release ever more powerful generative models. Detection tools have to keep pace, which makes sustained accuracy difficult.

One of the biggest challenges in AI detection is the lack of generalization to new or unseen deepfakes. Most detectors are trained on specific datasets and falter when exposed to newer generative techniques. Studies show that detection tools fail to generalize to new datasets because they often latch onto artifacts of their training data rather than genuine signs of manipulation.

Even as the tech evolves, real-world conditions and a shortage of training datasets pose a major problem. Detection tools often fail under real-world conditions such as poor lighting, compression, or low-resolution media. According to a report from TechRadar, top deepfake detectors saw accuracy drop by up to 50% on real-world data, a sign that detection tools are struggling to keep up.

Then there are biases in datasets to contend with.

Many detection systems are trained on Western-centric datasets that predominantly consist of high-quality images of Caucasian faces and English-language content. In regions like the Global South, this leads to high rates of false positives and false negatives, especially with lower-quality media captured on budget devices.

What will it take to win against the rising tide of AI fakes?

With advances in generative AI, organizations need to understand that deepfakes are not a problem of the future. They need to adapt their security protocols and reinforce their verification processes when granting access to sensitive information.

Organizations also need to understand that AI detection tools are not a silver bullet: outright reliance on opaque detector outputs can undermine the capacity to determine the truth. Businesses will also have to adopt a multi-layered defense, combining techniques such as presentation-attack detection, data provenance methods, and liveness checks to mitigate the threat posed by AI deepfakes.
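As a rough sketch of what a multi-layered check might look like in code, the snippet below combines a hypothetical detector score with provenance and liveness signals rather than trusting any single one; the field names, threshold, and decision rules are invented purely for illustration.

from dataclasses import dataclass

@dataclass
class MediaChecks:
    detector_score: float   # 0..1 output of a deepfake classifier (hypothetical)
    valid_provenance: bool  # e.g. signed content credentials verified
    passed_liveness: bool   # e.g. challenge-response during a live video call

def verdict(c: MediaChecks) -> str:
    # Layered decision: each signal is one vote, and no single signal is
    # trusted outright. Ambiguous cases go to a human reviewer.
    votes = [
        c.detector_score < 0.5,  # classifier leans "real"
        c.valid_provenance,
        c.passed_liveness,
    ]
    passing = sum(votes)
    if passing == 3:
        return "accept"
    if passing == 2:
        return "escalate to manual review"
    return "reject and require out-of-band verification"

print(verdict(MediaChecks(detector_score=0.2, valid_provenance=False, passed_liveness=True)))

The point of the layering is that a scammer has to defeat every check at once, not just fool one opaque classifier.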

The fight against deepfakes isn’t just a technological arms race; it’s a race to preserve trust in the digital age. As generative AI advances at breakneck speed, detection tools will remain essential, but their success depends on more than engineering. It will require transparent global standards, collaboration across industries, and a public that knows how to question what it sees. The next era of the internet will belong to those who can prove they’re real and prove it faster than the fakes.

Roko Pro Tip

💡 Train your eye.

Don’t just trust detection tools—develop your own radar.

Try this: Watch one real interview + one deepfake each morning.

Write down 3 subtle tells in each.

After 2 weeks, your instincts will improve dramatically.

Credit investor? You need a data strategy.

As AI rapidly advances, investors who fail to leverage its potential will soon find themselves struggling to compete against their peers.

But how do you integrate technology and automation into the traditional credit research process?

In this white paper, find out how investors are already unlocking greater performance by implementing a ‘credit data strategy’—complete with best-in-class examples.

*This is sponsored content

Prompt Of The Day

Prompt: “Anti-AI Forensic Bot”

Design an AI assistant that can sniff out AI-generated text, images, and audio with >90% accuracy. What features would it need? What human cues would it watch for? How would it improve itself?

Go wild with your answer—this might be your next startup pitch.

Bite-Sized Brains

Tuesday Poll

🗳️ Can You Spot a Deepfake? If shown a 30-second video, how confident are you that you could tell it’s fake?


Rate This Edition

What did you think of today's email?
