When AI Hacks Alone

Plus: Agentic tools, Bezos’ startup bet, AI’s moral reckoning.

Here’s what’s on our plate today:

  • 🧪 Agentic AI is now orchestrating cyberattacks.

  • 🗞️ Bezos’ new AI bet, Bone AI arms up, AI likened to Big Tobacco.

  • 🧑‍💻 Prompt Of The Day: Audit your tools before your tools audit you.

  • 🗳️ Poll: Should agentic AI be allowed in cybersecurity?

Let’s dive in. No floaties needed…

Launch fast. Design beautifully. Build your startup on Framer—free for your first year.

First impressions matter. With Framer, early-stage founders can launch a beautiful, production-ready site in hours. No dev team, no hassle. Join hundreds of YC-backed startups that launched here and never looked back.

  • One year free: Save $360 with a full year of Framer Pro, free for early-stage startups.

  • No code, no delays: Launch a polished site in hours, not weeks, without hiring developers.

  • Built to grow: Scale your site from MVP to full product with CMS, analytics, and AI localization.

  • Join YC-backed founders: Hundreds of top startups are already building on Framer.

Eligibility: Pre-seed and seed-stage startups, new to Framer.

*This is sponsored content

The Laboratory

How Agentic AI is rewriting the rules of cybercrime

Agentic AI marks a shift from tools that wait for instructions to systems that act independently and decide how to get things done. Photo Credit: Bloomberg.

Since the first computers were built, they have had to be protected from natural elements and deliberate attacks by threat actors. Today, the cybersecurity industry is locked in a cat-and-mouse game with those actors: as soon as a flaw in the security apparatus is exploited, companies scramble to fix it.

In the early days, when computers ran on vacuum tubes, insects were drawn to the heat, and securing a machine could mean manually removing a bug. As computers grew more complex, so did securing them, giving birth to a multi-billion-dollar industry. Even today, billions of dollars are spent protecting computer systems while hackers and threat actors continue to probe for weaknesses in the defenses.

However, in 2022, a new threat and a new opportunity appeared on the horizon. When OpenAI first released its AI chatbot, ChatGPT, few understood the implications. But threat actors soon realized they had found an ally in AI's abilities, a technology the cybersecurity industry had itself quietly relied on for decades.

By 2023, a few months after the release of ChatGPT, Meta released a report stating it had detected around 10 malware families and more than 1,000 malicious links promoted as tools featuring the popular AI chatbot. This, however, was just the beginning.

AI chatbots soon moved from being bait to helping threat actors write more sophisticated code at much higher speeds. AI companies responded by adding guardrails so their bots would refuse such requests, but the stakes are much higher now.

Anthropic warns of an autonomy inflection point

In November 2025, Anthropic said it had detected and disrupted what it describes as the first documented large-scale cyberattack executed with minimal human intervention.

The company described it as a sophisticated espionage campaign in which a Chinese state-sponsored group manipulated Claude Code, its agentic coding tool, to autonomously conduct 80-90% of attack operations across roughly 30 global targets.

The campaign relied on Claude's agentic capabilities, enabling the model to take autonomous action across multiple steps with minimal human direction.

This represents a fundamental shift in cybersecurity: AI models are no longer mere assistants in cyberattacks. They are now becoming operators.

The disclosure from Anthropic further shared that the operation targeted large tech companies, financial institutions, chemical manufacturing companies, and government agencies, succeeding in a small number of cases.

The company behind the Claude family of AI models is now calling this an inflection point in cybersecurity, where AI models have become genuinely useful for both offensive and defensive operations.

Agentic models are collapsing the cost of cyberattacks

Anthropic shared that it detected and disrupted the first documented large-scale cyberattack executed with minimal human intervention using AI agents. Photo Credit: Getty Images.

To understand why Anthropic called the incident an inflection point, one needs to understand the difference between AI's automation capabilities and its agentic abilities.

Up until now, AI has been mostly used as an assistant to automate tasks. Once the task was complete, the model stopped working on it. It was like a very fast, very smart calculator or assistant. If you didn’t ask, it didn’t act. If you didn’t guide it step by step, it couldn’t continue on its own.

However, the new direction that Anthropic worries about is agency. Agentic AI doesn’t just complete a task you give it. It can decide how to complete it, choose the tools, take steps on its own, and keep going until it thinks the task is complete.

It starts behaving less like a tool and more like a problem-solver that acts independently.

The shift is AI metamorphosing from a passive tool to an active participant.

It’s the difference between a coffee machine that brews only after you have selected and confirmed your options, and one that decides on its own what kind of coffee you would like.

This raises a question: if AI agents can now orchestrate attacks with minimal human intervention, wouldn’t it be wise to halt their development, or at least limit who can use agentic capabilities?

Turns out, Anthropic is also struggling with this dilemma. However, as of now, the company believes that the very abilities that allow Claude to be used in attacks also make it crucial for cyber defense.

The company, in a blog post, shared that the defensive applications are substantial. And there may be some truth to this.

Defenders and attackers now share the same AI tools

According to IBM’s 2025 Cost of a Data Breach Report, organizations that make extensive use of security AI and automation cut their average breach costs significantly. Much of that saving comes from agentic AI taking over routine monitoring, triage, and incident response, freeing human analysts for strategic work.

However, the report also noted that AI is making cybersecurity both more effective and more challenging at the same time.

Part of the challenge is that while agentic AI can help reduce costs, these systems are not foolproof.

Anthropic shared that Claude didn't always work perfectly, occasionally hallucinating credentials or claiming to have extracted secret information that was in fact publicly available, which remains an obstacle to fully autonomous cyberattacks.

However, security experts warn that this is temporary. As models improve, these limitations will evaporate, making fully autonomous attacks more viable.

Something Anthropic agrees with. The company is predicting that barriers to performing sophisticated cyberattacks have dropped substantially and will continue to do so, with less experienced and resourced groups now potentially able to perform large-scale attacks of this nature.

According to reports from MIT Technology Review and Cisco research, agentic AI has the potential to collapse the cost of the kill chain, meaning that everyday cybercriminals may start executing campaigns that today only well-funded espionage operations can afford.

Despite the risks, many companies feel it is better to hedge their security with AI on their side.

According to McKinsey, 80% of organizations say they have encountered risky behaviors from AI agents, including improper data exposure and access to systems without authorization. Nearly 40% of companies expect agentic AI to augment or assist teams over the next 12 months.

And though investing in AI may be a wise move, it is not foolproof.

Human mistakes remain the biggest security gap

Even with the use of agentic AI, humans remain a major weak point. Most breaches still happen because of simple mistakes, stolen passwords, or people getting tricked. Without security training, many employees will still click on harmful links or fall for scams.

As such, for enterprises that rely on sophisticated systems, their approach to cybersecurity should move towards multi-agent systems.

This would look like teams of different AI agents working together, like a swarm, to detect, analyze, and contain threats quickly.
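That division of labor can be pictured as a minimal pipeline. The sketch below is purely illustrative: the agent names (`detector`, `analyzer`, `containment`), the severity scale, and the hard-coded rule are hypothetical stand-ins for what would, in a real deployment, be model-driven components behind a vendor's platform.

```python
from dataclasses import dataclass

@dataclass
class Event:
    source: str
    detail: str
    severity: int  # 1 (low) .. 5 (critical), an assumed scale

# Each "agent" is a small, single-purpose stage; a real swarm would
# back these with models and shared context, not hard-coded rules.

def detector(raw_logs):
    """Flag log lines that look suspicious (hypothetical rule)."""
    return [Event("auth", line, 4) for line in raw_logs if "failed login" in line]

def analyzer(events):
    """Enrich and rank events; here, simply sort by severity."""
    return sorted(events, key=lambda e: e.severity, reverse=True)

def containment(events, threshold=3):
    """Propose actions for high-severity events; a human approves them."""
    return [f"LOCK ACCOUNT ({e.detail})" for e in events if e.severity >= threshold]

logs = ["failed login from 10.0.0.7", "page viewed", "failed login from 10.0.0.9"]
actions = containment(analyzer(detector(logs)))
print(actions)
```

The point of the structure, not the rules, is what matters: each agent hands structured output to the next, and the final stage only proposes actions, keeping a human in the approval loop.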

And though attackers will also use coordinated agents, an aware workforce and systematic emergency plans might just enable security agents to foil future attacks.

The need for human oversight

In the meantime, it should be noted that critics of AI, like Yuval Noah Harari, argue that the problem does not arise simply because we now have systems with ‘alien intelligence’ capable of deciding things for themselves.

The real problem arises when these agents begin talking to each other, bypassing humans appointed to keep them under control. As such, even though AI agents may be capable of both securing and disrupting the online world, the side that can use them, rather than be used by them, may come out on top.

The real inflection point, then, is not that AI is reshaping cybersecurity; it is the recognition that, whether we like it or not, agentic AI will be part of it.

The real test for human institutions is to build systems where autonomy is tightly bounded, auditable, and aligned with human oversight.

Bite-Sized Brains

  • Robot arms race heats up: Bone AI just raised funding to challenge Asia’s defense giants with autonomous robotics.

  • AI = new tobacco? Anthropic CEO warns AI risks may one day rival the public health impact of cigarettes.

  • Bezos doubles down: Amazon’s founder co-leads Project Prometheus, a stealthy new AI startup flush with fresh funding.

The context to prepare for tomorrow, today.

Memorandum distills the day’s most pressing tech stories into one concise, easy-to-digest bulletin, empowering you to make swift, informed decisions in a rapidly shifting landscape.

Stay current, save time, and enjoy expert insights delivered straight to your inbox.

Streamline your daily routine with the knowledge that helps you maintain a competitive edge.

*This is sponsored content

Prompt Of The Day

Your cybersecurity team just hired an AI agent.

What guardrails, rules, or oversight would you put in place before letting it operate independently?

Tuesday Poll

🗳️ Should agentic AI be allowed in cybersecurity?

Login or Subscribe to participate in polls.

Rate This Edition

What did you think of today's email?

Login or Subscribe to participate in polls.