
AI Is Your New Coworker... And Your Biggest Security Risk

OpenAI launches a new agent, EU starts enforcing its AI Act, and hackers hit Taiwan chipmakers. Plus: how your AI tools might be your biggest cybersecurity risk.

Here’s what’s on today’s plate:

  • 🔐 OpenAI’s latest agent draws headlines—so do AI-fueled hacks. We explore the double-edged sword of deploying smarter systems.

  • 🗳️ Poll: Are your AI tools actually making you more secure?

  • 🧠 Bite-Sized Brains: AI policy drama, Reddit’s agent ambitions, Meta tests AI “Truth Layers”

  • 💡 Roko Pro Tip: Patch the people, not just the code

  • 🫠 Meme of the Day: Error 404, IQ not found

Let’s plug in (safely)...

Build your store. Run your world.

Start your online business for free, then get 3 months for just $1. With Shopify, you don’t just build a website—you launch a whole brand.

Enjoy faster checkouts, AI-powered tools, and 99.99% uptime. Whether you’re shipping lemonade or scaling globally, Shopify grows with you. Trusted by millions in 170+ countries and powering 10% of US e-commerce, it’s your turn to shine!

Plus, you’ll have 24/7 support and unlimited storage as your business takes off.

*This is sponsored content

The Laboratory

Is your AI strategy opening the door to cybercrime?

Businesses worldwide are working at breakneck speed to integrate artificial intelligence into their workflows to boost productivity, and the companies building AI tools are racing just as hard to ship products that accelerate the process. OpenAI recently launched an AI agent for ChatGPT touted as capable of completing complex tasks without human intervention; it reportedly does so using its own virtual computer, equipped with tools that let it interact with the web.

That is just one example of AI automating work, and businesses see real value in deploying agents that cut operating costs. Yet while boardrooms and teams debate how to implement AI in their workflows, there is little conversation about the hidden challenge of integrating it into organizational structures: cybersecurity.

Malicious actors are known to be early adopters of the latest technologies, leveraging them to find novel ways to steal data, halt operations, and exfiltrate critical information. With AI, they gain not just an assistant that can find the chinks in security armor, fast-track malware development, and increase the frequency of attacks, but also a new weapon for finding novel ways to target businesses.

Malicious actors found uses for AI chatbots before businesses did

Soon after ChatGPT’s release in late 2022, reports emerged that cybercriminals were using the chatbot to create malware, raising concerns about the technology’s misuse. Within months of its launch, cybercriminals had figured out ways to bypass ChatGPT’s safeguards, highlighting the urgent need for stronger security measures.

By 2024, roughly two years after ChatGPT’s release, the FBI began warning individuals and businesses about the threat posed by generative AI tools, citing their increasing use by cybercriminals to conduct sophisticated phishing and social engineering attacks and voice and video cloning scams.

The warning added that cybercriminals were using both publicly available and custom-built AI tools to orchestrate highly targeted phishing campaigns, exploiting the trust of individuals and organizations alike.

Why AI is a weapon of choice for cybercriminals

At their core, AI tools attempt to determine the best way to achieve an outcome or solve a problem. They typically do this by analyzing enormous amounts of training data and then finding patterns in the data to replicate in their decision-making. This makes them ideal for cybercriminals looking to enhance the speed, scale, stealth, and sophistication of their attacks.

Cybercriminals have been known to use AI chatbots to create hyper-personalized phishing campaigns, luring individual workers into revealing sensitive information or login credentials. These are then used to spread malware within an organization’s systems or to access data that can be leveraged for ransom.

AI can also quickly scan for vulnerabilities, map out targets, and collect information from public and dark web sources much faster than a human could. And with more sophisticated AI tools, malicious actors can generate fake audio and video impersonating executives, in some cases bypassing biometric verification.

Cybercriminals have also reportedly used generative AI to build their own custom, ChatGPT-like tools designed for nefarious purposes. Not only are they building these tools, they are also advertising them to fellow bad actors, lowering the barrier to entry for cybercrime.

Malicious actors are also known to “poison” or alter the training data used by an AI algorithm to influence the decisions it ultimately makes. In short, the algorithm is being fed with deceptive information, and bad input leads to bad output.
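
To make the poisoning idea concrete, here is a minimal, hypothetical sketch in Python using scikit-learn: we train one classifier on clean labels and one on labels an attacker has flipped, then compare accuracy. The synthetic dataset and the 30% flip rate are illustrative assumptions, not drawn from any real incident.

    # Toy demonstration of training-data poisoning (illustrative assumptions only).
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Clean baseline: train and score on untouched labels.
    clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("clean accuracy:", clean.score(X_test, y_test))

    # "Poisoned" run: an attacker has flipped 30% of the training labels.
    rng = np.random.default_rng(0)
    poisoned_y = y_train.copy()
    flip = rng.random(len(poisoned_y)) < 0.30
    poisoned_y[flip] = 1 - poisoned_y[flip]

    poisoned = LogisticRegression(max_iter=1000).fit(X_train, poisoned_y)
    print("poisoned accuracy:", poisoned.score(X_test, y_test))

The poisoned model’s test accuracy drops because it learned patterns from deceptive labels: bad input, bad output.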

Open source vs proprietary: What’s safer?

For businesses, the threat from AI tools is twofold. On one hand, cybercriminals are using AI to increase the scale and scope of their attacks; on the other, implementing AI tools without proper security can itself weaken defenses.

According to a study of more than 700 technology leaders and senior developers across 41 countries, conducted by McKinsey, the Mozilla Foundation, and the Patrick J. McGovern Foundation, over half of respondents said their organizations use open-source AI technologies, often alongside proprietary tools from players such as Anthropic, OpenAI, and Google.

When choosing between open-source and proprietary AI, businesses must weigh cost benefits against security implications. Open-source AI, while cost-effective and flexible, can expose organizations to greater cybersecurity risks due to vulnerabilities that attackers may exploit in openly accessible codebases. Proprietary AI solutions, though typically costlier, often feature enhanced security protocols, regular updates, and stronger intellectual property protections. Businesses should assess these trade-offs carefully to determine which model aligns best with their risk profile.

Securing your business in the age of AI

While organizations race to implement AI tools, regulators are weighing how to shape AI development to maximize its benefits and reduce the likelihood of harm. But regulation may not be enough. As cybercriminals adapt to new tools, businesses can use similar AI tools to augment their security teams, particularly by automating routine tasks like threat detection and analysis in tandem with human expertise.

Measures that businesses can adopt include deploying comprehensive cybersecurity platforms that flag abnormal user activity or unexpected changes in the environment that may indicate an attack, and developing an incident response plan covering preparation, detection, containment, and recovery.
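
As a concrete illustration of the flag-abnormal-activity idea, here is a minimal sketch using scikit-learn’s IsolationForest. The telemetry features (login hour, megabytes transferred, failed logins) and all the numbers are made-up assumptions, not any vendor’s schema; real platforms work across far richer signals.

    # Toy anomaly detector for session telemetry (all data simulated).
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(1)
    # Simulated normal sessions: [login_hour, mb_transferred, failed_logins]
    normal = np.column_stack([
        rng.normal(10, 2, 500),    # logins cluster around mid-morning
        rng.normal(50, 15, 500),   # modest data transfer
        rng.poisson(0.2, 500),     # failed logins are rare
    ])
    suspicious = np.array([[3, 900, 12]])  # 3 a.m., huge transfer, many failures

    model = IsolationForest(contamination=0.01, random_state=1).fit(normal)
    print(model.predict(suspicious))  # -1 means flagged as anomalous

A flagged session would then feed the incident response plan’s detection and containment steps.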

Businesses can also run employee awareness training focused on realistic, convincing AI-enabled attacks, from social engineering to deepfake chat and audio scams. Additionally, they will need to invest in AI tools that automate security-related tasks, including monitoring, analysis, patching, prevention, and remediation.
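
On the automation point, even a plain script shows the shape of the task that AI-driven platforms perform at much larger scale. This hypothetical sketch checks installed Python packages against an internal list of known-bad versions; the VULNERABLE table is invented for illustration, not real advisory data.

    # Toy patch-hygiene check (the vulnerable-version list is made up).
    from importlib import metadata

    VULNERABLE = {"requests": {"2.5.0"}, "urllib3": {"1.24.1"}}  # illustrative only

    for dist in metadata.distributions():
        name = (dist.metadata["Name"] or "").lower()
        if dist.version in VULNERABLE.get(name, set()):
            print(f"PATCH NEEDED: {name}=={dist.version}")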

Securing the future of AI-powered workplaces

As artificial intelligence becomes more deeply embedded in modern business operations, organizations must recognize that the conversation around AI cannot remain limited to productivity and cost-efficiency. While the adoption of AI tools brings undeniable advantages, it also introduces a parallel set of vulnerabilities that cybercriminals are already exploiting with increasing sophistication.

The very capabilities that make AI valuable to businesses are also what make it dangerous in the hands of malicious actors. And the threats are not limited to external attacks: deploying AI tools without adequate oversight, security protocols, or an understanding of how they interact with existing infrastructure can significantly increase risks around data poisoning, compliance, and intellectual property.

As such, the goal for businesses should be to ensure that the integration of AI tools is done thoughtfully and securely. With the right investment in tools, training, governance, and culture, organizations can harness the transformative power of AI without falling prey to its darker applications.

Bite‑Sized Brains

Monday Poll

Are AI tools strengthening or weakening your company’s cybersecurity posture?


Roko Pro Tip

💡 Your firewall won’t save you from Cheryl in accounting.

Make AI security human-first: teach people to spot AI-driven phishing, fake calls, and cloned exec voices. Tech is sharp, but people still click.

Great AI starts with great people.

AI isn’t built by tools—it’s built by teams. Athyna finds you the right people to power your roadmap, from frontier model builders to infrastructure engineers.

Our talent is sourced globally and matched with AI-assisted precision, then hand-vetted to ensure technical depth and cultural fit. Most roles are filled in under 5 days. Whether you’re scaling models, shipping features, or fixing bottlenecks, we’ll help you build the team to get it done.

*This is sponsored content

Meme of the Day

Rate this edition

What did you think of today's email?
