Airports, Algorithms, And You

Plus: Investors eye an AI bubble, 2026’s pragmatic turn, and Gmail’s new AI inbox.

Here’s what’s on our plate today:

  • 🧪 How AI and biometrics quietly rewire borders.

  • 🧠 Bubble fears, pragmatic AI, and Gmail’s AI inbox.

  • 🗳️ Monday Poll: Who should govern biometric and border algorithms?

  • 💡 Roko’s Pro Tip: Set your own airport data red lines.

Let’s dive in. No floaties needed…

Launch fast. Design beautifully. Build your startup on Framer—free for your first year.

First impressions matter. With Framer, early-stage founders can launch a beautiful, production-ready site in hours. No dev team, no hassle. Join hundreds of YC-backed startups that launched here and never looked back.

Key value props:

  • One year free: Save $360 with a full year of Framer Pro, free for early-stage startups.

  • No code, no delays: Launch a polished site in hours, not weeks, without hiring developers.

  • Built to grow: Scale your site from MVP to full product with CMS, analytics, and AI localization.

  • Join YC-backed founders: Hundreds of top startups are already building on Framer.

Eligibility: Pre-seed and seed-stage startups, new to Framer.

*This is sponsored content

The Laboratory

How AI and biometrics are redefining global travel

AI tools now assess travelers’ risk long before they reach U.S. border checkpoints. Photo Credit: Getty Images.

Human history is full of examples of groups traveling across continents in search of greener pastures and better lives. This constant movement not only allowed civilizations to intermingle but also let cultures learn from one another. In 2022, an estimated 280 million people were living outside their country of origin, driving economic growth, dynamism, and understanding between diverse cultures.

Today, that movement is being quietly re-engineered. Borders are no longer defined solely by passports and visas, but by biometric scans, algorithmic risk scores, and automated decision systems that evaluate travelers long before they reach a checkpoint.

Facial recognition, iris scanning, AI-driven risk scoring, and automated immigration checks are becoming the default infrastructure of global travel. The scale is staggering: by early 2025, more than 2,100 CAT-2 devices with facial recognition capabilities had been deployed across over 250 federalized airports in the U.S. alone.

Effective December 26, 2025, U.S. Customs and Border Protection began collecting facial biometrics from all noncitizens upon entry and exit at airports, land ports, seaports, and other authorized points of departure, removing prior exemptions for diplomats and most Canadian visitors.

Meanwhile, the European Union launched its Entry/Exit System on October 12, 2025, with 29 European countries introducing the EES gradually or in full at their external borders over a period of six months. By April 2026, traditional passport stamps are expected to become a part of history, replaced by biometric records stored for years.

Beyond verification: AI risk scoring

Biometric identification is only part of how modern borders work. Increasingly, AI systems assess travelers long before they reach a checkpoint.

In the U.S., immigration agencies are already using AI at scale. A 2024 Department of Homeland Security inventory identified 105 active AI use cases across major immigration bodies. Some of these tools go far beyond identity checks. Babel, for example, searches and aggregates social media posts, translates documents, and uses image and object recognition to flag potential risks.

Immigration and Customs Enforcement uses tools such as the Hurricane Score to predict whether someone released from detention will comply with check-ins, alongside a Risk Classification Assessment that estimates flight risk and public safety risk.

Canada has taken an even more expansive approach. Its Travel Compliance Indicator draws on five years of Canada Border Services Agency data to analyze travel history, vehicle information, identification type, and other variables in real time. Travelers flagged by the system may be sent for secondary inspection. The system is currently deployed at six land border crossings, with plans to expand it nationwide by 2027.
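Public descriptions of these systems name the inputs (travel history, vehicle information, identification type) but not the model. A minimal, entirely hypothetical sketch of how a weighted risk score with a secondary-inspection threshold could work; every feature name, weight, and cutoff below is invented for illustration and does not describe the actual Travel Compliance Indicator:

```python
# Entirely hypothetical sketch of a weighted risk score.
# Feature names, weights, and the threshold are invented; the real
# model behind systems like the Travel Compliance Indicator is not public.

RISK_THRESHOLD = 0.6  # invented cutoff for referral to secondary inspection

# Invented weights over the kinds of variables named in public descriptions.
WEIGHTS = {
    "irregular_travel_history": 0.4,
    "vehicle_flagged": 0.35,
    "unusual_id_type": 0.25,
}

def risk_score(traveler: dict) -> float:
    """Sum the weights of whichever (invented) risk features are present."""
    return sum(w for feature, w in WEIGHTS.items() if traveler.get(feature))

def needs_secondary_inspection(traveler: dict) -> bool:
    return risk_score(traveler) >= RISK_THRESHOLD

# Two flagged features push this traveler over the invented threshold.
print(needs_secondary_inspection(
    {"irregular_travel_history": True, "vehicle_flagged": True}
))  # True
```

Even in this toy form, the design choice that worries critics is visible: the traveler never sees the features, the weights, or the threshold that routed them to secondary inspection.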

These systems raise difficult questions. They are used to establish identity, assign risk scores, and ultimately decide who can travel or enter a country. Because of that, their potential to cause harm is significant.

The accuracy debate

Supporters point to strong performance metrics. U.S. Customs and Border Protection says its facial comparison technology verifies travelers’ identities with over 98% accuracy, far outperforming manual biographic checks. Testing by the National Institute of Standards and Technology found that the best algorithms can achieve false positive rates below 0.001%.

But real-world conditions are messier than laboratory tests. An independent review of live facial recognition trials conducted by London’s Metropolitan Police found that out of 42 system matches, only eight could be confirmed as fully accurate.
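Part of the lab-versus-field gap is a base-rate effect: even a tiny error rate, applied to millions of screenings, produces a steady stream of false matches in absolute terms. A minimal sketch of the arithmetic, assuming an illustrative volume of two million screened travelers per day (that volume is our assumption, not a figure from the reporting):

```python
# Base-rate arithmetic behind the lab-vs-field accuracy gap.

# Lab figure cited above: the best algorithms reach false positive
# rates below 0.001%, i.e. roughly 1 in 100,000 comparisons.
false_positive_rate = 0.001 / 100  # 0.001% expressed as a fraction

# Assumed daily screening volume -- illustrative only.
travelers_per_day = 2_000_000

expected_false_matches = travelers_per_day * false_positive_rate
print(f"~{expected_false_matches:.0f} false matches/day")

# Field figure cited above: the Met Police trials confirmed 8 of 42 matches.
field_precision = 8 / 42
print(f"field precision = {field_precision:.0%}")
```

At lab-best error rates, that assumed volume still yields around 20 false matches every day, and the Met Police trial's confirmed-match rate works out to roughly 19%, which is why independent verification of each match matters.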

Accuracy issues also affect certain groups more than others. NIST has found that Asian, African American, and American Indian individuals generally experience higher false positive rates than white individuals. While the most advanced systems show minimal demographic differences, many algorithms currently in use still display significant disparities.

When these systems fail, the consequences can be severe. At least seven people in the U.S. have been falsely arrested after police relied on facial recognition matches without independent verification.

Robert Williams spent 30 hours in a Detroit jail after being wrongly identified as a shoplifting suspect. His case eventually led to some of the strongest restrictions on police use of facial recognition in the country, but only after years of legal action.

In another case, LaDonna Crutchfield was arrested at her home in January 2024 and accused of attempted murder. Investigators already knew the identity of their real suspect and could have easily seen that Crutchfield was several inches shorter and significantly younger. She still spent hours in jail before the mistake was corrected.

“These aren’t isolated incidents,” Williams later said. “Once the facial recognition software told them I was the suspect, it poisoned the investigation.”

The illusion of choice

To address growing concerns, citizens are told that the use of such technology is optional and that they can opt out. However, in practice, that choice is often unclear or unavailable.

A July report by the Algorithmic Justice League found that airport staff offered just one percent of travelers the option to opt out. The Privacy and Civil Liberties Oversight Board similarly found that TSA signage and officer instructions frequently fail to explain that opting out is even possible.

Travelers who do attempt to opt out often report hostile treatment. One described an agent raising his voice and insisting that photos had already been taken everywhere in the airport. Another said they were treated with “extreme rudeness” throughout the process.

Oversight remains limited. As of now, TSA has not published a comprehensive Privacy Impact Assessment covering its facial recognition programs. A May 2025 report from the Privacy and Civil Liberties Oversight Board issued 13 recommendations, including keeping facial recognition voluntary for all travelers and requiring a clear demonstration that benefits outweigh privacy risks before any expansion.

There is also no explicit congressional authorization for TSA’s biometric use on domestic travelers. Members of Congress have repeatedly asked the agency to pause deployment.

Industry vs privacy: the divergent narratives

Industry groups argue that biometrics are essential to modern air travel. They claim the technology increases airport throughput by up to 30%, cuts boarding times by a similar margin, and reduces overall wait times by as much as 60%.

When proposed legislation threatened to require explicit opt-in consent, major aviation groups warned it would undermine security and derail years of digital modernization.

However, privacy advocates see a different future. They warn that facial recognition databases could be linked to widespread surveillance networks, creating an infrastructure for mass tracking that would be difficult to dismantle.

With few U.S. regulations governing biometric data and growing pressure to connect government datasets, critics fear mission creep is not just possible, but likely. And while some jurisdictions, such as the EU, have passed laws governing the use of AI in biometrics, they have a long way to go before citizens can reliably opt out of sharing their biometric data.

Humanity has always been driven by the need and desire to travel in search of safety, opportunity, and better lives. While modern travel is far removed from the physical hardships faced by earlier generations, movement today comes with a new kind of friction in the form of continuous digital scrutiny.

Nation-states argue that these systems are necessary to protect citizens and manage borders at scale, and technology has undeniably made identification faster and more efficient. But the question is no longer whether AI can improve border security.

The real question is whether states will stop at identifying those who cross their borders, or whether the same systems will be used to monitor, score, and constrain not just undocumented migrants, but citizens themselves, especially those who challenge authority or fall outside statistical norms.

Bite-Sized Brains

  • AI bubble hangover incoming: Investors are quietly bracing for an AI correction as infra spend and sky-high valuations outrun today’s $50–100B in real revenue, with more people treating “AI bubble reckoning” as baseline, not black swan.

  • 2026: year of boring AI: TechCrunch says this year’s shift is from shiny demos to “does it actually move a KPI?”, with more agentic workflows, vertical tools, and consolidation, and far fewer one-feature wrappers riding foundation models for vibes only.

  • Gmail becomes your AI chief of staff: Google’s new AI Inbox replaces the classic message list with auto-generated to-dos, thread summaries, and suggested replies for consumer accounts, effectively putting a model between you and your email, whether you asked for it or not. 

Roko Pro Tip

💡 Before you fly, assume every frictionless lane is trading speed for biometric data. If you can, practice opting out once this year: ask for manual ID checks, note how staff react, and decide now where your own red lines are on face scans and risk scoring, before those systems become the default everywhere.

Hire globally. Pay smarter.

Why limit your hiring to local talent? Athyna gives you access to top-tier LATAM professionals who bring tech, data, and product expertise to your team—at a fraction of U.S. costs.

Our AI-powered hiring process ensures quality matches, fast onboarding, and big savings. We handle recruitment, compliance, and HR logistics so you can focus on growth. Hire smarter, scale faster today.

*This is sponsored content

Monday Poll

🗳️ Who should set the limits on AI-powered border tech like facial recognition and risk scores?


Meme Of The Day

Rate This Edition

What did you think of today's email?
