AI Eyes Everywhere

Plus: Apple sues Oppo, YouTube TV drama, and mental health fears.

Here’s what’s on our plate today:

  • 📸 AI is changing how governments watch, track, and control.

  • 🗳️ Should governments have access to AI-powered facial recognition?

  • ⚡ Apple vs. Oppo, Fox vs. YouTube TV, and a chatbot crisis for teens.

  • 🧠 Why smart surveillance demands even smarter safeguards.

Let’s dive in. No floaties needed…

Invest your retirement in bourbon.

Looking for an innovative way to diversify your retirement portfolio? Investors can now invest in bourbon barrels on the CaskX platform using a Self-Directed IRA from Directed Trust. This strategy allows portfolios to capitalize on an asset that naturally appreciates over time.

With each passing year, the bourbon inside the barrel increases in complexity and flavor. Backed by America's bourbon legacy and rising international demand, this alternative investment offers tangible asset diversification with a history of long-term appreciation.

*This is sponsored content

The Laboratory

How AI is reshaping surveillance

Back in 2023, Microsoft co-founder Bill Gates called artificial intelligence the most important technological advance in decades. At the time, few understood AI’s transformative power, and even today, many still confuse it with text, image, and video generation tools. However, these are just the consumer-facing uses of the technology. Though powerful, they represent only a small slice of AI’s capabilities and use cases.

At its core, the power of AI lies in its ability to analyze enormous volumes of data far beyond human capacity. Whether it’s spotting cancerous cells in medical scans, detecting fraudulent financial transactions, or predicting equipment failures in factories, AI’s potential applications are vast.

In essence, AI acts as a multiplier for human intelligence. While businesses are working on implementing AI tools to increase efficiency, government agencies are doing the same. AI’s ability to improve efficiency means it can be a powerful tool in detecting and deterring crime, and in assisting authorities to nab criminals.

In August 2025, in a bid to push neighborhood policing, the U.K. government announced it would deploy 10 new mobile facial recognition units. These units were to be equipped with live facial recognition (LFR) tech, powered by AI-driven facial recognition algorithms that measure biometric features (like the distance between eyes, jawline shape, etc.) and check them against stored digital profiles. This was not the first time AI tech had been deployed by authorities, and it is not the only way it is being used.
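The matching step described above is usually done by converting each face into a numeric embedding and comparing embeddings by similarity. A minimal sketch of that idea, with illustrative names and made-up vectors (no real vendor's system works exactly like this):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def match_face(probe, watchlist, threshold=0.8):
    """Return IDs from the watchlist whose stored embedding is
    similar enough to the probe face's embedding."""
    return [pid for pid, emb in watchlist.items()
            if cosine_similarity(probe, emb) >= threshold]

# Toy 2-D "embeddings" for illustration; real systems use
# hundreds of dimensions produced by a neural network.
watchlist = {"suspect_A": [0.9, 0.1], "suspect_B": [0.0, 1.0]}
print(match_face([1.0, 0.0], watchlist))  # only suspect_A is close enough
```

The `threshold` is the critical policy knob: lower it and the system flags more people, including more innocent ones.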

How AI helps

In 2024, during the Summer Olympics, France used AI-driven video surveillance technology to look for crowd surges, abnormally heavy crowds, abandoned objects, the presence or use of weapons, a person on the ground, a fire breaking out, or contravention of rules on traffic direction.

Ever since, AI has been deployed by numerous government agencies around the world, thanks to its ability to improve enforcement without adding pressure on existing personnel.

In the U.S., the Transportation Security Administration (TSA) has tested and expanded facial recognition at checkpoints despite pushback from members of Congress who want the practice halted or tightly constrained. Beyond physical surveillance, AI is also being harnessed for financial and cyber monitoring. The U.S. Internal Revenue Service (IRS) says it is using AI and advanced analytics to select complex partnership audits; based on the results, the agency opened audits of 76 of the largest partnerships in the U.S.

Beyond facial recognition, AI tools are also deployed by enforcement agencies for various natural language processing tasks such as text summarization, translation, question answering, sentiment analysis, and text generation. They can also be deployed to curate information from different sources, like social media, online forums, camera feeds, and banking history, to locate criminals.

Marc Evans, the founder of Fraud Hero, a firm specializing in fraud consulting and training services, told Thomson Reuters that the use of AI is not limited to facial recognition, object recognition, and license plate tracking. AI is particularly useful to law enforcement because it can analyze vast datasets to surface patterns: crimes often recur, sometimes in different locations or across different types of crime.

AI is also key to safeguarding the online security of users. The tech is used in digital forensics for translation and transcription, chat and audio summarization, and CSAM hash-matching, with AI-assisted scanning now helping to spare analysts from reviewing material directly and to speed up victim identification.
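The hash-matching part of that workflow is conceptually simple: known illegal files are stored in a database as hashes, and seized files can be checked against it without an analyst ever viewing them. A minimal sketch using a plain SHA-256 set lookup (real systems such as PhotoDNA use perceptual hashes that tolerate re-encoding, which this toy version does not):

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Hex digest of a file's contents."""
    return hashlib.sha256(data).hexdigest()

def flag_known_files(files, known_hashes):
    """Return names of files whose hash appears in the known-hash set.

    files: dict of filename -> raw bytes
    known_hashes: set of hex digests from a reference database
    """
    return [name for name, data in files.items()
            if sha256_of(data) in known_hashes]

# Illustrative data only.
seized = {"report.txt": b"harmless", "evidence.bin": b"known bad payload"}
database = {sha256_of(b"known bad payload")}
print(flag_known_files(seized, database))  # flags evidence.bin
```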

Beyond day‑to‑day policing, states are modernizing sensor networks and automated analysis. For example, the U.S. is overhauling its integrated undersea surveillance system using new sensing and AI‑driven analytics.

But the same qualities that make AI powerful in law enforcement also open the door to misuse.

How AI hurts

While government agencies highlight the benefits of using AI, they often ignore its limitations or downplay misuse. For instance, Dame Diana Johnson, the U.K.’s Home Office minister, told the BBC that facial recognition was "a powerful tool for policing" and it would only be used in "a very measured, proportionate way" to find individuals suspected of serious offenses. However, government agencies have reportedly used the tech to target ticket touts in Wales, without any prior announcement that automatic facial recognition was deployed at the stadium and Cardiff’s central railway station.

Beyond this lack of transparency, governments have also used the tech to curb opposition and suppress dissenters.

According to Human Rights Watch, Russian authorities are using Moscow’s video surveillance system with facial recognition technology to track down and detain draftees seeking to evade mobilization for the country’s war on Ukraine.

Similarly, Chinese firms are reportedly building software that uses artificial intelligence to sort data collected on residents, amid high demand from authorities seeking to upgrade their surveillance tools. According to a Reuters report, the system makes it easier to track individuals by tapping into big data and AI.

These examples clearly underscore the importance of ensuring robust guidelines and reasonable restrictions around the use of AI tech by government agencies. However, despite robust laws, AI is not infallible. The technology can make mistakes, and when used for surveillance and crime control, incorrect information can have far-reaching consequences.

The hidden dangers

Rights groups argue that AI surveillance enables a “dragnet” approach, where entire populations are monitored. Campaigners like Big Brother Watch in the U.K. describe police use of live facial recognition vans as “alarming” and a “significant expansion of the surveillance state.”

Even if one assumes that government surveillance would help bring down crime rates, accuracy remains a problem. Studies and real-world deployments show that facial recognition systems have higher error rates for women and people of color. For instance, the U.S. National Institute of Standards and Technology (NIST) found racial and gender disparities in false positives, raising risks of wrongful stops and arrests.
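Disparities like the ones NIST reported are found by computing error rates separately per demographic group rather than in aggregate. A minimal sketch with hypothetical records (the record format and group labels here are illustrative, not NIST's methodology):

```python
from collections import defaultdict

def false_positive_rate_by_group(records):
    """Per-group false-positive rate.

    records: iterable of (group, is_true_match, system_said_match).
    FPR for a group = false positives / all true non-matches.
    """
    false_positives = defaultdict(int)
    negatives = defaultdict(int)
    for group, is_match, predicted in records:
        if not is_match:  # only non-matching pairs can yield false positives
            negatives[group] += 1
            if predicted:
                false_positives[group] += 1
    return {g: false_positives[g] / negatives[g] for g in negatives}

# Illustrative data: an aggregate FPR would hide that
# group_1 is misidentified far more often than group_2.
records = [
    ("group_1", False, True),   # false positive
    ("group_1", False, False),
    ("group_2", False, False),
    ("group_2", False, False),
]
print(false_positive_rate_by_group(records))
```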

A real-world consequence of how bias in AI systems can lead to wrongful arrests was witnessed in the U.S., where multiple Black men were wrongly arrested due to faulty facial recognition matches, including cases in Detroit and New Jersey.

Critics of AI use for surveillance argue that labeling AI as ‘data-driven’ can obscure biases already present in society. Human rights experts also warn that constant AI monitoring of protests or public spaces can deter people from exercising freedom of assembly and expression. Amnesty International has campaigned against AI-driven facial recognition in cities like New York, citing the risk to democratic freedoms.

What comes next

As of 2025, governments around the world view the development of AI as both a challenge and an opportunity. The challenge is how to regulate this new technology, while the opportunity lies in using it to increase compliance and reduce dissent. Here, big tech companies play an important role in how they will shape the future of AI in surveillance, be it for good or bad.

In February 2025, Google's parent company, Alphabet, dropped its promise not to use artificial intelligence for purposes such as developing weapons and surveillance tools. The tech giant updated its ethical guidelines around AI to protect “national security.” This was a major shift for the company and reflected the industry sentiment on the use of AI.

Palantir, the data analytics and defense software firm, raised its annual revenue forecast for the second time within a year in August 2025. It now projects roughly $4 billion in annual revenue, with U.S. government sales reflecting sustained demand for investigative AI.

AI can enhance human ability. The technology can be used to crack down on crime and improve the living conditions of millions or, critics warn, it could lead to a surveillance state even larger than the one Edward Snowden exposed in 2013.

Wednesday Poll

🗳️ Should governments have access to AI-powered facial recognition tools?


Why U.S. companies are hiring accountants in LATAM.

Is the accounting talent shortage slowing your team down?

The US Accounting Talent Shortage report explains why hiring CPAs is harder than ever—and what you can do about it.

Learn why over 300,000 accountants have left the profession, and how that’s putting your finance team at risk.

Discover why firms are hiring in Latin America to fill critical roles with top-tier, bilingual accountants.

Download the report and protect your finance operations.

*This is sponsored content

Quick Bits, No Fluff

  • Fox vs. YouTube TV: A carriage dispute threatens to pull Fox channels—including sports—from YouTube TV, just ahead of NFL kickoff.

  • Apple sues Oppo: Apple has filed a lawsuit accusing Oppo of stealing trade secrets related to Apple Watch components and design.

  • Teens and AI chatbots: Experts warn that therapy chatbots could worsen teen mental health, calling for stricter guidelines and oversight.

Brain Snack (for Builders)

Governments want AI to make them omniscient.

As builders, it’s on us to bake ethics into the code, because when governments skip guardrails, the public pays.

Meme Of The Day

Rate This Edition

What did you think of today's email?
