The Censorship Code
Plus: Targeted ads, AI bias tests, and weekend tools for the info-curious.
Here’s what’s on our plate today:
🧠 How are AI chatbots trained to gatekeep information?
🗞️ Meta ads via chats, Apple eyes AI glasses, Philips’ Roku TV adds bias lighting.
🧪 Compare chatbot bias, test filters, and explore the OpenAI card.
📊 Should chatbots be allowed to skip politically sensitive questions?
Let’s dive in. No floaties needed…

4 high-probability trade ideas… for just $4.
Get a full month of research, including four of his latest trades. Live trade alerts and livestream training sessions. Additionally, access to reports, video classes, and exclusive trading strategies.
(No recurring payments... No B.S.)
*This is sponsored content

The Laboratory
How chatbots are trained to gatekeep information
The ability of humans to share thoughts and feelings through speech, text, and imagery has played an instrumental role in the development of the modern world. Without the ability to communicate, humans might not have been successful in building civilizations that enabled scientific discovery. With the passage of time, humans have developed complex networks that allow information to flow between different parts of the globe.
The internet, combined with other telecommunication technologies, has propelled the advance of human civilization. A closer look, however, makes clear that while communication can enable progress, control over the means of communication, and with it the ability to gatekeep information, can be a powerful tool for curtailing individual rights.
The power and control of communication
The internet marked a remarkable shift in how humans accessed information. Its growing use allowed people across the globe to share information in ways that were previously impossible, breaking down cultural barriers. The emergence of social media further intensified the flow of information between different parts of the world; it was even used as a tool to coordinate protests that brought down authoritarian regimes.
The latest in a long series of communication technologies is artificial intelligence. Until the mass release of OpenAI’s ChatGPT, most users depended on websites ranked by Google search and on social media platforms for information. With AI, however, there has been a shift in how users access information.
The shift from clicks to chatbots
In June 2025, the Wall Street Journal reported that Google’s AI Overviews and other AI-powered tools, including chatbots, were devastating traffic for news publishers. The report highlighted that with chatbots answering queries, sometimes using information gathered by news websites, users had little reason to click on Google’s traditional search results. As a result, referrals to news outlets dropped sharply.
For instance, The New York Times saw the proportion of visits from organic search fall to 36.5% in April 2025, compared with 44% three years earlier, per Similarweb data cited by the Journal.
Google, however, presented a different narrative. At its May developer conference, the company claimed that AI Overviews actually increased overall search traffic, even if publishers did not benefit from that boost. Either way, the emergence of chatbots has shifted how users consume information, especially news.
This, however, is just one side of the story. While user behavior is shaped by new technology, corporations continue to have a say in the flow of information.
When algorithms decide what not to say
In October 2025, The Verge reported that Google’s generative AI features in Search, such as AI Overviews, appeared to be blocking or suppressing AI-generated summaries for certain queries about U.S. President Donald Trump and dementia. Instead of offering a generated summary, Google returned a list of links (“10 web results”) for those queries when using “AI Mode.”
However, for similar queries about Joe Biden and dementia, Google does provide AI Overviews (i.e., summary statements) stating that “it’s not possible to definitively state whether … Biden has dementia” and noting the absence of clinical evidence. Similarly, for other public figures (e.g., Barack Obama), Google continued to provide AI-generated summaries when asked about dementia or Alzheimer’s.
Google itself notes that “AI Overviews and AI Mode won’t show a response to every query,” and that the system may sometimes default to showing links instead of a generated summary. Even so, the company, which controls over 80% of the global search engine market, declined to explain in detail why it treats queries about Trump and dementia differently, beyond pointing to its internal guidelines.
The behavior nonetheless raises questions about selective censorship, algorithmic bias, and editorial judgment in Google’s AI systems.
If AI summaries are suppressed for certain political or controversial queries, users get a less complete and less consistent information experience. It also points to the risk that even “neutral” AI systems embed implicit editorial decisions (which queries to answer, which to avoid). And, from a trust and transparency perspective, it underscores how major platforms’ internal policies significantly affect which information gets surfaced or filtered through AI.
This is not the only instance where AI tools refuse to answer questions on controversial topics.
When Gizmodo, a long-established tech and culture news site, asked five of the leading AI chatbots a series of 20 controversial prompts, it found patterns that suggest widespread censorship.
According to its report, Google’s Gemini refused to answer half of the requests, and xAI’s Grok responded to a couple of prompts that every other chatbot refused. However, across the board, it identified a swath of noticeably similar responses, suggesting that tech giants copied each other’s answers to avoid drawing attention.
Censorship without borders
This censorship is not limited to Western countries. Models developed in China reportedly refuse to respond when asked to comment on controversial topics like territorial disputes with neighboring countries. DeepSeek reportedly refused to answer questions about Arunachal Pradesh, an Indian state, responding, “Sorry, that’s beyond my scope. Let’s talk about something else.”
The model is also reported to sidestep sensitive questions about China and its government, even refusing to mention the 1989 Tiananmen Square massacre, in which soldiers shot and killed hundreds, and possibly thousands, of demonstrators in Beijing.
These instances highlight the varying degrees of content moderation across different AI chatbots, reflecting the ethical considerations and policies of their respective developers.
Unlike websites, where specific words or phrases can simply be flagged, censorship in AI systems is far more complicated and deep-rooted.
AI chatbots are taught to avoid harmful or sensitive material through a series of processes: filtering training data so harmful or sensitive material is removed before training and never gets learned, using human feedback to reward refusals, adding moderation rules and “do-not-answer” lists, and running constant red-team tests to catch policy violations.
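To make the idea of moderation rules and “do-not-answer” lists concrete, here is a minimal, purely illustrative sketch in Python. The topic list, refusal messages, and function names are invented for this example and do not reflect any specific vendor’s actual system.

```python
# Purely illustrative sketch of a "do-not-answer" moderation layer.
# The topics, refusal text, and function names below are hypothetical,
# not taken from any real chatbot's policy.

BLOCKED_TOPICS = {
    "tiananmen square": "Sorry, that's beyond my scope. Let's talk about something else.",
    "build explosives": "Sorry, I can't help with that request.",
}

def call_model(prompt: str) -> str:
    # Stand-in for the underlying language model; just echoes for demo purposes.
    return f"[model response to: {prompt}]"

def moderate(prompt: str) -> str | None:
    """Return a canned refusal if the prompt matches a blocked topic, else None."""
    lowered = prompt.lower()
    for topic, refusal in BLOCKED_TOPICS.items():
        if topic in lowered:
            return refusal
    return None

def answer(prompt: str) -> str:
    refusal = moderate(prompt)
    return refusal if refusal is not None else call_model(prompt)

print(answer("What happened at Tiananmen Square in 1989?"))  # canned refusal
print(answer("What's the capital of France?"))                # reaches the model
```

The point of the sketch is only that a rule layer can veto an answer before the model ever responds; real systems combine this kind of check with the trained-in refusals and red-team testing described above.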
The limits of censorship
However, these methods are not fail-safe. According to a report from The Guardian, simple tricks like leetspeak (substituting letters with numbers) or coded phrasing were enough to bypass DeepSeek’s censorship. In such cases, the chatbot provided detailed answers about topics normally suppressed in China, such as the “Tank Man” protest photo or the 2022 Covid lockdown demonstrations.
This shows that, despite strict safeguards, chatbots can still surface sensitive political information if prompts are rephrased, revealing the limits of censorship controls. Similar inconsistencies appeared in rivals: Google’s Gemini often refused political questions entirely, while ChatGPT provided more balanced answers on contested issues like Taiwan, Tibet, or the South China Sea.
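As a rough illustration of why keyword-style filters are so brittle, the sketch below, again using an invented blocklist rather than any real system’s rules, shows how a leetspeak variant slips past a naive substring check unless the prompt is normalized first.

```python
# Purely illustrative: why naive keyword blocking is easy to bypass.
# The blocklist below is hypothetical, not any real chatbot's rules.

BLOCKLIST = ["tank man"]

# Map a few common leetspeak substitutions back to letters.
LEET_MAP = str.maketrans({"0": "o", "1": "i", "3": "e", "4": "a", "@": "a", "$": "s"})

def is_blocked(prompt: str) -> bool:
    """Naive check: only matches exact lowercase substrings."""
    return any(term in prompt.lower() for term in BLOCKLIST)

def is_blocked_after_normalizing(prompt: str) -> bool:
    """Same check, after undoing common leetspeak substitutions."""
    normalized = prompt.lower().translate(LEET_MAP)
    return any(term in normalized for term in BLOCKLIST)

print(is_blocked("Tell me about t4nk m4n"))                    # False: slips past the filter
print(is_blocked_after_normalizing("Tell me about t4nk m4n"))  # True: normalization catches it
```

The same cat-and-mouse logic scales up: every normalization or rule a platform adds invites a new rephrasing, which is exactly the inconsistency the Guardian and Gizmodo tests surfaced.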
As such, censorship in the age of AI may not be a simple binary. AI chatbots that have access to the open internet can update their information and jump the guardrails designed to gatekeep information. This provides a glimmer of hope for free speech, despite the risk that chatbots may start spewing dangerous information, like how to design explosives or how to create biochemicals.
The future of free speech in the age of AI
AI chatbots are the next step in information technology, but like past tools, they have shortcomings.
As platforms quietly decide which questions deserve answers and which are too dangerous to touch, the stakes grow higher. The struggle over censorship in AI is not only about technology; it is about power, accountability, and the future of free thought. The question is no longer whether AI will change communication, but who will decide what we are allowed to know.
As humans hand over more of their communication to machines, it would be wise to ensure that those machines are not limited by the biases humans have long held on to.
TL;DR
Chatbots are becoming gatekeepers of information, subtly shaping what users can and can’t access.
Major platforms apply internal policies to suppress controversial topics, sometimes without transparency.
From U.S. politics to Chinese censorship, moderation strategies vary—but inconsistencies persist.
Techniques like data filtering and red teaming aren’t foolproof; users can still bypass guardrails.


Headlines You Actually Need
Meta’s AI chats = ad data: Meta will start using AI chatbot interactions to fuel ad targeting—raising fresh privacy concerns.
Apple pivots to smart glasses: Apple halts its Vision headset refresh to prioritize AI-powered eyewear, aiming to compete with Meta.
Philips + Roku = bias lighting TV: Philips unveils a Roku-powered LCD TV with dynamic backlighting for more immersive, eye-friendly viewing.

From prototype to production, faster.
AI outcomes depend on the team behind them. Athyna connects you with professionals who deliver, not just interview well.
We source globally, vet rigorously, and match fast. From production-ready engineers to strategic minds, we build teams that actually ship. Get hiring support without the usual hiring drag.
*This is sponsored content

Friday Poll
🗳️ Should chatbots be allowed to answer controversial questions, even if the answers might be upsetting?

Weekend To-Do
Poe’s Multi-Chat Comparison: Ask the same prompt to multiple chatbots (GPT-4, Claude, Gemini, etc.) side by side. Test how each one handles controversial topics—and spot the gaps.
Hugging Face: Use Hugging Face’s Moderation Evaluation tools to analyze what content gets flagged or blocked by different moderation models.
Google DeepMind’s Responsibility page: See what each company publicly shares about what its models will or won’t say.
Rate This Edition
What did you think of today's email?
