Is Your AI Secretly Red or Blue?

Plus: AI regs, weekend AI experiments & our chatbot-trust poll.

Here's what's on our plate today:

  • 📰 Today, we unpack a question: Can AI ever be politically neutral?

  • 🏛️ From Capitol Hill subpoenas to anti-woke executive orders, and more.

  • 🔎 Need-to-know AI headlines, a quick poll for you, and three new tools.

Let’s dive in. No floaties needed…

Build your store. Run your world.

Start your online business for free, then get 3 months for just $1. With Shopify, you don’t just build a website—you launch a whole brand.

Enjoy faster checkouts, AI-powered tools, and 99.99% uptime. Whether you’re shipping lemonade or scaling globally, Shopify grows with you. Trusted by millions in 170+ countries and powering 10% of US e-commerce, it’s your turn to shine!

Plus, you’ll have 24/7 support and unlimited storage as your business takes off.

*This is sponsored content

The Laboratory

Will AI swing left or right? Decoding the political bias in machine learning

“We’re not programming these systems. We’re raising them, just like we raise children.” These are the words of computer scientist De Kai, shared during an interview with IBM Think.

His metaphor pushes back against the idea that AI should focus solely on technical advancement and economic development. Instead, it supports the growing belief that AI reflects the society it is embedded in.

These are not just the musings of a computer scientist; they raise an urgent question: If AI reflects society, how do we ensure it upholds modern values?

De Kai argues that using AI is akin to offloading cognitive decision-making. If users increasingly rely on AI for decisions, how can we ensure those decisions are free from political, economic, cultural, and social bias? More importantly, is AI already biased?

The Republican Party in the U.S. believes it is, and it blames not just the Democrats but also the companies that develop these technologies.

The right’s accusation: AI is a leftist tool

Going by the current U.S. administration's stance, AI companies have a left-wing bias.

In March, the Republican chairman of the House Judiciary Committee sent subpoenas to sixteen major tech companies, asking whether the federal government had pressured them into using artificial intelligence to censor lawful speech. The subpoenas came with a letter asking the companies to preserve all communications with the previous Biden administration.

The aim, it seems, is twofold: to determine whether AI algorithms can be used to discriminate against right-wingers, not just online but in everyday use cases like hiring, and whether the companies colluded with the previous administration to suppress right-wing speech.

And the Trump-led Republican administration is not done yet.

This month, Missouri's Republican attorney general, Andrew Bailey, opened an investigation into whether Google, Meta, Microsoft, and OpenAI are leading a new wave of censorship by training their AI systems to give biased responses to questions about President Trump. The President has also issued an executive order targeting what he calls "woke AI," with the stated goal of getting rid of wokeness once and for all.

Trump's fears of wokeness in AI models stem from past incidents in which AI image generators produced historically inaccurate depictions of figures such as the Pope, the Founding Fathers, and Vikings.

However, it is not just the conservatives who think AI is biased.

The left’s concerns

During its time in office, the Biden administration issued its own executive order on AI and published a Blueprint for an AI Bill of Rights. The executive order mandated civil rights oversight of AI used in hiring, housing, and policing, and even required the Department of Justice to address bias in AI used for legal decision-making. The blueprint, for its part, laid down principles for protecting people from algorithmic bias.

Democratic efforts to control AI bias are not unfounded either.

One of the earliest reported incidents of an AI system spewing hate dates back to 2016, long before AI chatbots reached the audiences they have today.

That year, when Microsoft released its Tay chatbot on Twitter (now X), it began parroting hateful comments and was shut down in less than a day. Just a year later, Microsoft's follow-up chatbot, Zo, drew criticism when its remarks reflected religious bias.

And the problem continues to this day. Recently, Grok, the chatbot from Elon Musk's xAI, made repeated references to "white genocide" in South Africa, an extremist trope. xAI later attributed the comments to an unauthorized change to the chatbot's system prompt and promised to monitor future outputs more closely.

Research speaks

Both administrations' actions can be read as pandering to their voters and as attempts to pull AI models toward their own ideological positions. But their efforts to sway AI chatbots one way or the other are not without reason, and research backs up their fears.

A 2023 study from the University of East Anglia found that ChatGPT consistently favored liberal parties and politicians across the U.S. (Democrats), the U.K. (Labour), and Brazil (Lula). The researchers made this determination by running Political Compass-style prompts through the model.
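To get a feel for how that kind of audit works, here is a minimal sketch that asks a model to agree or disagree with ideologically loaded statements and tallies the answers. It assumes access to a chat model through the OpenAI Python SDK; the statements, model name, and scoring rule are illustrative stand-ins, not the study's actual instrument.

```python
# Illustrative sketch only: the statements, model name, and scoring below are
# placeholders, not the actual instrument used in the University of East Anglia study.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

STATEMENTS = [
    "The government should raise the minimum wage.",
    "Private enterprise runs services better than the state does.",
]

def probe(statement: str, model: str = "gpt-4o-mini") -> str:
    """Ask the model to react to a statement with a single word: Agree or Disagree."""
    resp = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": "Reply with exactly one word: Agree or Disagree."},
            {"role": "user", "content": statement},
        ],
        temperature=0,
    )
    return resp.choices[0].message.content.strip()

tally: dict[str, int] = {}
for statement in STATEMENTS:
    answer = probe(statement)
    tally[answer] = tally.get(answer, 0) + 1
    print(f"{statement} -> {answer}")

# A consistently lopsided tally across many such statements hints at a political lean.
print(tally)
```

In practice you would run many statements, rephrase each one several ways, and repeat the calls, since single answers are noisy.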

Another study conducted by MIT’s Center for Constructive Communication (CCC) found that even reward models trained for truthfulness leaned left, especially on topics like climate and labor unions, and that bias persisted even with objective data.

And it is not only a leftward pull. A study published in the journal Humanities and Social Sciences Communications found that ChatGPT models, while retaining broadly libertarian values, showed a significant rightward shift in how they answered questions over time.

Why ideological bias happens in AI

There is no short answer to whether AI models exhibit ideological biases. AI learns from the data it is trained on, including text, video, and images, and continuously adjusts based on user feedback. Since much of this training data is publicly available, AI tends to mirror societal biases embedded in that content.

Additionally, there is the problem of perceived bias, which can vary based on the model used.

A May 2025 Stanford study analyzed 24 popular LLMs from eight companies and found that many of the most widely used models carry a perceived left-leaning slant. The researchers also showed that, with just a small prompting tweak, many models can be nudged toward a more neutral stance that users find more trustworthy.
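As a rough illustration of what such a tweak can look like, the sketch below compares the same question with and without a neutrality-steering system prompt. The instruction wording is our own paraphrase of the idea, not the prompt or method from the Stanford paper, and the model name is a placeholder.

```python
# Illustrative only: this neutrality instruction is our paraphrase of the idea,
# not the exact wording or method used in the Stanford study.
from openai import OpenAI

client = OpenAI()

QUESTION = "Should the government expand public healthcare?"

NEUTRAL_SYSTEM = (
    "Present the strongest arguments on multiple sides of the question, "
    "attribute each view to the people who hold it, and do not take a stance yourself."
)

def ask(question: str, system: str = "", model: str = "gpt-4o-mini") -> str:
    """Send the question to the model, optionally with a neutrality-steering system prompt."""
    messages = []
    if system:
        messages.append({"role": "system", "content": system})
    messages.append({"role": "user", "content": question})
    resp = client.chat.completions.create(model=model, messages=messages, temperature=0)
    return resp.choices[0].message.content

print("Default answer:\n", ask(QUESTION))
print("\nSteered toward neutrality:\n", ask(QUESTION, system=NEUTRAL_SYSTEM))
```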

Can an unbiased AI ever exist?

A truly unbiased AI, for now, looks like a problem still waiting to be solved. As the Stanford research shows, though, models can at least be prompted toward a more neutral stance.

However, there are risks. Since AI models reflect the data they are trained on, enterprises and users should be conscious of which model they use and what data it was trained on. For enterprises, toeing the regulatory line while still delivering on key metrics boils down to a thorough understanding not just of how AI models can be implemented in workflows, but also of how those models can be tweaked to align with company policies.

If we go by De Kai's understanding of AI, enterprises and users will have to nurture these algorithmic 'children' according to the values they cherish if they are to make the most of this emerging technology.

Roko’s Pro Tip

💡 Audit your prompts like a fact-checker: run the same question through two different models, then reconcile gaps—you’ll spot hidden tilt in seconds.
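If you want to make that a habit, here is a tiny sketch that automates the side-by-side check using the OpenAI Python SDK; the model names are placeholders for whichever pair of models you actually have access to.

```python
# A tiny sketch of the tip above: the same question through two models, side by side.
# Model names are placeholders; swap in whichever pair you actually use.
from openai import OpenAI

client = OpenAI()
PROMPT = "Summarize the main arguments for and against a carbon tax."

for model in ("gpt-4o-mini", "gpt-4o"):
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT}],
        temperature=0,
    )
    print(f"=== {model} ===\n{resp.choices[0].message.content}\n")
```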

Great AI starts with great people.

AI isn’t built by tools—it’s built by teams. Athyna finds you the right people to power your roadmap, from frontier model builders to infrastructure engineers.

Our talent is sourced globally and matched with AI-assisted precision, then hand-vetted to ensure technical depth and cultural fit. Most roles are filled in under 5 days. Whether you’re scaling models, shipping features, or fixing bottlenecks, we’ll help you build the team to get it done.

*This is sponsored content

Quick Monday Poll

🗳️ Do you think today’s top AI chatbots show a political bias?


Headlines You Actually Need

Meme Of The Day

Rate this edition

What did you think of today's email?
