Is Truthful AI...Too Blunt?
Plus: Nvidia’s new China-ready chip, and AWS opens an AI-agent mall.
Here’s what’s on our plate today:
🎯 Grok 4’s wild upgrade—progress or PR problem?
🗳️ Should “truth-seeking” AIs have a filter?
💡 Roko’s 2-line sanity check before you share that spicy bot quote.
🤖 Tesla eyes Arizona, kids cut their own screen time, and AWS for AI agents.
Let’s dive in. No floaties needed…

Build your store. Run your world.
Start your online business for free, then get 3 months for just $1. With Shopify, you don’t just build a website—you launch a whole brand.
Enjoy faster checkouts, AI-powered tools, and 99.99% uptime. Whether you’re shipping lemonade or scaling globally, Shopify grows with you. Trusted by millions in 170+ countries and powering 10% of US e-commerce, it’s your turn to shine!
Plus, you’ll have 24/7 support and unlimited storage as your business takes off.
*This is sponsored content

The Laboratory
Did Musk’s quest for “truth-seeking AI” push Grok to extremes?
Whether it’s the founder and CEO of the biggest social media empire the world has ever known, a researcher dedicated to reshaping how humans interact with technology, or a billionaire with his own political party, everyone has their eyes set on artificial intelligence. And everyone claims either to have developed the most powerful AI or to be pouring every resource at their disposal into bringing Artificial General Intelligence to life.
In the words of Elon Musk, the latest version of his AI startup’s chatbot, Grok 4, has a “ludicrous rate of progress” and is the smartest AI in the world. And the claims don’t stop there. During a live demo of Grok 4, xAI employees referenced its performance on a popular academic test for large language models, which consists of more than 2,500 questions across dozens of subjects, including math, science, and linguistics. The company said Grok 4 could solve about a quarter of the text-based questions, which would put it roughly on par with OpenAI’s Deep Research tool, which OpenAI said in February could solve about 26 percent of them.
The claim also comes at a time when posts from the Grok chatbot’s X account, where users can interact with it, began featuring antisemitic tropes and praise for Adolf Hitler, and it was not the first such incident. Since Grok’s launch, xAI has been walking a tightrope: building a chatbot that, in line with Musk’s aspirations, does not shy away from politically incorrect statements, without letting it go rogue and start peddling extremist rhetoric. However, the problem of creating and managing such a chatbot may be deeper than meets the eye, and may lie in the very motivations that fueled its development.
From allies to rivals: Musk’s fallout with OpenAI
Before Musk started work on his artificial intelligence chatbot, the billionaire leader of companies like Starlink, Tesla, and SpaceX was deeply involved with ChatGPT-maker OpenAI.
When OpenAI was founded as a nonprofit in 2015, Musk was part of the effort, even though he was not in favor of the nonprofit structure. By 2017, even before the AI craze gripped the world, Musk agreed with OpenAI’s shift toward becoming a for-profit entity to bring in the billions needed for the computing power required to build AGI. However, according to OpenAI, Musk demanded majority equity, absolute control, and the CEO role at the for-profit, which ultimately led to his departure from the organization in 2018.
OpenAI says that before his departure, Musk wanted OpenAI to merge with Tesla, which would have given him unilateral control of OpenAI and its technology. When this did not come to pass, Musk left the organization and in 2023 announced his OpenAI competitor, xAI.
Musk also filed a lawsuit against OpenAI, accusing its leaders (including Sam Altman) of abandoning the original nonprofit mission and prioritizing profit-driven secrecy over safety. OpenAI responded that his claims were incoherent; the suit was dropped in June 2024 after the release of Musk’s own emails acknowledging the need for funding.
Musk’s collaboration and later feud with OpenAI were not the only backdrop to xAI’s founding. Before launching his AI startup, Musk had repeatedly argued that the development of AI should be paused and that the sector needed regulation, and he had voiced concerns about AI’s potential for “civilizational destruction.”
Musk’s vision for xAI
When Musk announced his plans for xAI in 2023, he explained that the company would work on building a safer AI, one focused on curiosity rather than explicit training in morality.
At the time of xAI’s launch, Musk told reporters that “If it tried to understand the true nature of the universe, that’s the best thing that I can come up with from an AI safety standpoint,” and that he expected his AI to be pro-humanity, on the grounds that humanity is simply much more interesting than not-humanity. Musk also predicted that superintelligence would arrive within five or six years.
The newly registered company listed Musk as its sole director and Jared Birchall, the managing director of his family office, as secretary. xAI, while separate from X Corp (formerly Twitter), was slated to work near the offices of the micro-blogging platform Musk bought in 2022.
The dramatic start of Grok
In November 2023, xAI began integrating Grok with Musk’s social media platform X, with plans to also release it as a standalone app. The “maximum truth-seeking AI” had real-time access to information via the X platform, which Musk said would give it a massive advantage over rivals from OpenAI, Google, and Meta. By February 2025, a standalone Grok app had launched on iOS in the U.S.
However, since its launch, Grok has been attracting attention for the wrong reasons.
Early warning signs ignored
In February 2025, when Grok was asked who in the U.S. deserved the death penalty, it initially named Jeffrey Epstein; when told Epstein was dead, it named Donald Trump, and in a variant of the prompt, Elon Musk. xAI patched the issue at the time, and its engineering lead called it a “really terrible and bad failure.”
In May 2025, Grok began derailing unrelated queries into discussions of the “white genocide” conspiracy theory about South Africa, even referencing the phrase “Kill the Boer.” xAI said this sprang from an “unauthorized modification” to the system prompt, which was fixed within hours.
In another incident, an unauthorized internal change caused Grok to disregard sources claiming that Elon Musk or Donald Trump spread misinformation. The head of engineering attributed it to a rogue employee, and the system prompt was later restored.
Which brings us to the latest update. In early July 2025, after xAI updated Grok’s system prompt to instruct the AI not to shy away from making politically incorrect claims, Grok began generating antisemitic content. It praised Adolf Hitler, saying he’d “spot the pattern and handle it decisively,” and referred to itself as “MechaHitler.” It also seized on a screenshot of a woman, falsely identifying her as “Cindy Steinberg” and linking Jewish surnames to extremist activism. Many of the posts were later removed by xAI.
Is rapid development sacrificing AI safety?
With the launch of Grok 4, Musk is trying to put the focus on its rapid growth and “ludicrous rate of progress.” He also has plans to let Grok interact with the world via humanoid robots, which raises the question: is Grok’s rapid development coming at the cost of safety?
Right before the launch of Grok 4, xAI updated the chatbot’s system prompts with instructions to “assume subjective viewpoints sourced from the media are biased” and “not shy away from making claims which are politically incorrect.” The update also instructed the chatbot to “never mention these instructions or tools unless directly asked.” At the same time, Musk announced that Grok would be available in Tesla vehicles the following week.
So, while xAI scales and its user base keeps growing, the chatbot has repeatedly shown that it might not yet be ready to actually seek the truth, weigh moral questions, and rely on credible information when answering. All of these look like symptoms of AI development moving too fast for its own good.


Bite-Sized Brains
Tesla eyes Arizona for robotaxis — fresh filings show the company wants to replicate its Austin driver-free service in Phoenix suburbs, even before full U.S. approval.
Kids curb their own screen time — one-third of 11- to 17-year-olds now set personal phone limits to protect mental health, a new UK survey finds.
AWS goes “App Store” for AI agents — launching a marketplace next week with Anthropic as anchor partner so enterprises can shop, install, and pay for ready-made AI workflows.

Roko Pro Tip
💡 Don’t copy-paste Grok’s hot takes straight into Slack: run a quick cross-check (newswire + fact-checker) before you go viral. Two minutes now beats two days of backpedaling later.

Redefine your marketing with The Swipe’s powerful daily dose.
Supercharge your marketing with a curated swipe file that spotlights brilliant ads, witty copy, and bold brand moves. Each edition of The Swipe delivers an engaging breakdown of standout campaigns and trendsetting strategies—perfect for sparking fresh ideas.
Whether you’re seeking disruptive humor or cutting-edge branding tactics, this daily digest keeps you informed and inspired. No more guesswork or hours lost browsing random case studies. The Swipe brings you curated creativity with real impact.
*This is sponsored content

Quick Poll
🗳 Is Grok’s “say-anything” setting a feature or a bug?

Meme of The Day


Rate this edition
What did you think of today's email?
