The Hidden Cost Of AI Errors

Plus: Snap-to-order on Uber, AI eyeing hoops talent, and leaks shake TSMC.

Here’s what’s on our plate today:

  • ⚠️ AI slip-ups cost real money—governance or chaos ahead.

  • 🏀 AI scouts NBA talent, crunching pro-level moves frame-by-frame.

  • 🍔 Uber Eats tests photo-to-order: snap menu, get dish details.

  • 💰 MENA fintech Alaan lands $48M as TSMC battles IP leaks.

Let’s dive in. No floaties needed…

Make your AI roadmap a reality.

Turning your AI strategy into shipped features requires more than ambition—it takes the right people.

Athyna helps you find and onboard high-performing AI/ML engineers, data scientists, and product talent from deep global talent pools. Every candidate is hand-vetted for skill and fit, ready to work across time zones, and matched in under 5 days.

No upfront costs. No drag. Just fully supported hiring that lets you move from roadmap to real-world results.

*This is sponsored content

The Laboratory

What happens when AI makes mistakes?

In 2018, the BBC published an article titled “The commas that cost companies millions.” It catalogued some of the most famous instances where the omission or placement of a single comma was read differently by the parties involved, leading to confusion and monetary losses. The lesson: even minor ambiguities, not just outright mistakes, can translate into significant losses.

Back in 2018, confusion could be blamed on human error. But in 2025, AI hallucinations and system errors are a growing concern. AI increasingly drives decisions in sectors like healthcare, law, finance, transportation, and customer service. But what happens when these systems, often treated as authoritative, make mistakes? The consequences are no longer theoretical; they’re real, measurable, and increasingly costly.

Real-world mistakes made by AI

Recently, The Verge reported that Google’s healthcare AI, Med‑Gemini, identified an “old left basilar ganglia infarct” in its promotional research paper. However, there is no such anatomical structure as the “basilar ganglia.” The intended term was basal ganglia, a real brain region responsible for motor control and cognition. The mistake was either a hallucination by the model or a typo that went uncaught.

And it is not the only case.

In July 2025, a security breach was discovered in Amazon Q, Amazon’s AI coding assistant plugin for VS Code. The breach occurred when a hacker gained access through a GitHub token and injected a malicious prompt resembling normal code that instructed the AI to wipe local systems or delete AWS resources. The incident exposed a deep security flaw in AI tools: even seemingly harmless code submissions can carry destructive AI instructions.

In February, over-reliance on AI also cost two New York lawyers a $5,000 sanction for citing cases invented by AI in a personal injury suit against an airline. However, the cost of mistakes made by AI systems is not limited to monetary losses.

While AI companies would like to highlight the positives of AI integration, they often fail to communicate its risks, leaving consumers with misleading expectations. A prime example came when a jury found Tesla partially liable for a fatal 2019 crash in Florida involving Autopilot misuse.

Tesla was ordered to pay $243 million, with the verdict citing its failure to restrict Autopilot use on unsuitable roads and the misleading expectations it set for drivers. This raises the question: Should enterprises shy away from implementing AI workflows?

The limits of automation

Implementing AI can enhance productivity, and companies around the globe are working these systems into their workflows. However, they should not be looking to replace trained staff entirely. Swedish fintech Klarna learned this the hard way when, aiming to reduce costs, it replaced over 700 employees with AI built in partnership with OpenAI.

The experiment backfired when, despite the early optimism, the company admitted that AI negatively impacted service quality, triggering customer frustration and public backlash. Klarna announced plans to rehire human staff and restore human oversight.

Before offloading decisions to AI, users must understand the types of mistakes these systems can make and why they happen.

Understanding the root causes of AI failures

One of the main problems with current AI systems is their tendency to hallucinate: the model confidently generates false or fabricated information, as seen in Google Med-Gemini’s “basilar ganglia” error.

The second is manipulation, in which threat actors steer AI behavior through embedded prompts or model vulnerabilities, as in the Amazon Q breach. The third is miscommunication about AI capabilities, which can degrade services and, as the Tesla verdict showed, end in fatal consequences.

Internal guidelines are a must for AI

As the world moves toward automating both cognitive and manual tasks, McKinsey & Company estimates that the long-term AI opportunity stands at $4.4 trillion in added productivity growth potential from corporate use cases. With 92 percent of companies planning to increase their AI investments over the next three years, there is a clear need for robust regulatory frameworks, implemented by regulators as well as internally by businesses.

The EU’s AI Act is one such step: it categorizes AI systems by risk and demands higher accountability for high-risk ones. The U.S. has also issued executive orders requiring companies to improve transparency and implement safety benchmarks for AI systems in critical sectors.

While these regulations provide the wider framework for implementing AI systems, enterprises, with their differing requirements and levels of AI integration, also need to develop their own internal guidelines.

When creating these guidelines, companies must consider digital amplification, algorithmic bias, cybersecurity, and data privacy.

Harvard Business School, in its blog, describes digital amplification as AI enhancing the reach and influence of digital content. When AI is used in news media to recommend articles, it can surface certain stories over others, leaving readers with a lopsided view of developments. This was on display when Apple rolled out AI-generated news summaries to its users: the company had to pull the feature after it repeatedly produced inaccurate headlines and attributed them to reputable media organizations like the BBC and the Washington Post.

Enterprises must also recognize and mitigate algorithmic bias. Algorithms are the backbone of AI’s ability to streamline and optimize business operations, but they can encode bias that negatively impacts operations and employees.

In July 2023, Workday, a leading provider of HR software, faced a class-action lawsuit challenging its use of AI screening, which, according to the plaintiffs, discriminates based on race and age. The suit underscores the need for a robust legal framework and human oversight.

The outcome could set a precedent that employers can be held legally liable if they fail to prevent screening software from having a discriminatory impact.

Building accuracy, oversight, and trust in AI

The growing integration of AI into enterprise workflows promises unprecedented efficiency, scalability, and innovation. But as the examples above show, this transformation is not without pitfalls. From hallucinations in healthcare models to security flaws in coding assistants, AI systems are capable of producing mistakes that carry significant financial, legal, and reputational consequences. These aren’t just edge cases; they are warnings.

When businesses replace trained staff without implementing safeguards or overestimate the reliability of AI, the cost of error multiplies.

To mitigate these risks, enterprises must shift from viewing AI as a magic solution to treating it as a powerful, but fallible tool. This means investing not just in technology, but in AI governance: ethical frameworks, human oversight, compliance with evolving laws like the EU AI Act, and rigorous internal testing.

Whether it’s a misplaced comma or a hallucinated brain region, the lesson remains the same: accuracy, accountability, and oversight are non-negotiable. In the race to adopt AI, the businesses that prioritize responsible deployment will be the ones that ultimately build trust, retain customers, and avoid becoming cautionary tales.

Quick Bits, No Fluff

Diversify your retirement with bourbon barrels.

CaskX allows investors to diversify with aging bourbon barrels using a Self-Directed IRA from Directed Trust. As bourbon matures in the barrel, it develops greater complexity and has historically appreciated in value.

Backed by America’s bourbon legacy and growing global demand, this tangible asset offers an alternative path for retirement growth with potential tax advantages.

*This is sponsored content

Brain Snack (for Builders)

Perform a daily “AI smoke test.” Feed each model 3-5 known-answer prompts (edge cases included). Log discrepancies, then auto-flag any sudden drift for human review before it reaches prod. Two minutes that can save millions.
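
Here’s a minimal sketch of what that could look like in Python. Everything in it is an assumption for illustration: the `query_model()` wrapper stands in for whatever model API you actually call, and the golden prompts, drift threshold, and file names are placeholders, not prescriptions.

```python
# daily_smoke_test.py -- a minimal daily "AI smoke test" sketch.
# query_model(), the golden prompts, and the drift threshold are placeholders.

import json
import logging
from datetime import date

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")

# 3-5 known-answer prompts, edge cases included (illustrative only).
GOLDEN_PROMPTS = [
    {"prompt": "Which brain region handles motor control: basal ganglia or basilar ganglia?",
     "expected": "basal ganglia"},
    {"prompt": "What is 17 * 23? Answer with the number only.", "expected": "391"},
    {"prompt": "Reply with exactly the word PASS.", "expected": "PASS"},
]

DRIFT_THRESHOLD = 0.34  # flag for human review if more than ~1 in 3 checks fail


def query_model(prompt: str) -> str:
    """Placeholder: swap in a real call to your model or provider API.

    Returning a fixed string here only keeps the script runnable end-to-end.
    """
    return "PASS"


def run_smoke_test() -> dict:
    failures = []
    for case in GOLDEN_PROMPTS:
        answer = query_model(case["prompt"])
        if case["expected"].lower() not in answer.lower():
            failures.append({"prompt": case["prompt"],
                             "expected": case["expected"],
                             "got": answer})
            logging.warning("Discrepancy on prompt: %r", case["prompt"])

    failure_rate = len(failures) / len(GOLDEN_PROMPTS)
    report = {
        "date": date.today().isoformat(),
        "checked": len(GOLDEN_PROMPTS),
        "failures": failures,
        "failure_rate": round(failure_rate, 2),
        "drift_flagged": failure_rate > DRIFT_THRESHOLD,
    }

    # Persist each day's report so drift is auditable over time.
    with open(f"smoke_test_{report['date']}.json", "w") as f:
        json.dump(report, f, indent=2)

    if report["drift_flagged"]:
        logging.error("Drift flagged: send today's report to a human before anything ships.")
    return report


if __name__ == "__main__":
    run_smoke_test()
```

Run it from a daily cron job or your CI, and route any flagged report to a person before the model’s output touches production.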

Wednesday Trivia

❓ Google’s Med-Gemini demo hallucinated a non-existent brain structure. Which term did it invent?


Meme of The Day

Rate This Edition

What did you think of today's email?
