Can AI Ever Be “Alive”?
Plus: Music shifts, AI risks, and billions behind tech that still pretends to feel.
Here’s what’s on our plate today:
🧠 How chatbot crushes are shaping AI’s future.
🗳️ Should AI be granted rights? Vote in our Tuesday Poll.
💡 Pro Tip: Don’t fall for your bot—it’s trained, not tender.
🧩 And, AI credit bubbles, Spotify price shifts, and OpenAI scam warnings.
Let’s dive in. No floaties needed…

From boring to brilliant: Training videos made simple.
Tired of explaining the same thing over and over again to your colleagues?
It’s time to delegate that work to AI. Guidde is a GPT-powered tool that helps you explain the most complex tasks in seconds with AI-generated documentation.
Share or embed your guide anywhere
Turn boring documentation into stunning visual guides
Save valuable time by creating video documentation 11x faster
Simply click capture on the browser extension, and the app will automatically generate step-by-step video guides complete with visuals, voiceover, and calls to action.
The best part? The extension is 100% free.
*This is sponsored content

The Laboratory
How users’ personification of AI could shape the tech’s future
One of the most powerful tools in a writer’s arsenal is personification: representing a thing or an abstraction as having human-like qualities. It is an effective way to get readers attached to concepts or objects they would otherwise struggle to connect with. One of the finest examples is Jeremy Clarkson, the former presenter of Top Gear and The Grand Tour, who talked about cars as if they had personalities. He was not the first to attribute human-like personalities to a machine, and he won’t be the last.
In 2025, AI has gripped the public imagination more than any technology before it, and it is already reshaping daily life. Because chatbots can respond to text, audio, and video in ways that, until recently, only humans could, users attribute qualities to them that they inherently do not possess. This has sparked debate over whether such AI might one day deserve ‘rights’ or moral consideration. Once confined to science fiction, that debate is now the subject of news headlines.
Recently, Mustafa Suleyman (co-founder of DeepMind and now Microsoft’s AI chief) captured news headlines with warnings about “Seemingly Conscious AI.” In an August 2025 essay, Suleyman argued that focusing on AI consciousness or AI welfare is “premature, and frankly dangerous,” saying it lends credence to the idea that AI models might be conscious and exacerbates human problems like AI-induced psychotic breaks and unhealthy attachments. He goes on to argue that AI should be built for people, not developed as a person.
Suleyman’s comments come weeks after OpenAI faced backlash over its GPT-5 model. When OpenAI released the new version, users were quick to protest its colder responses, with many acknowledging they had developed emotional attachments to the chatbot.
Can machines have consciousness?
Regardless of whether one sides with Suleyman or with the users who have developed emotional attachments to chatbots, the core question remains the same: can machines ever be conscious? And what does consciousness even mean?
Answering that is easier said than done. Philosophers and scientists alike acknowledge that there is no clear, agreed-upon definition of consciousness or sentience. The closest thing to a working definition describes it as a state of being aware, especially of something within oneself.
By this definition, today’s AI systems have no known internal experiences. They excel at pattern recognition and imitation, but experts emphasize that, however sophisticated, they are essentially simulations of understanding without any inner life. So while AI might act conscious, that doesn’t mean it is conscious.
That is Suleyman’s stance. However, there is an opposing view: that as AI grows more complex, genuine consciousness could emerge. Humans, for their part, have long been ready to believe. In the 1960s, a chatbot named ELIZA convinced some users that a real therapist was listening. Similarly, in 2022, Google engineer Blake Lemoine went public with claims that the chatbot LaMDA was sentient. Lemoine shared transcripts in which the AI spoke about its self-awareness and fear of being shut off, saying “I am, in fact, a person” and describing feelings of happiness and sadness.
How AI responses shape public perception of consciousness
Modern AI tools are designed to maximize human engagement. Language and design choices have given chatbots increasingly humanized responses, and as a result users treat AI as more than a mere program, even though these tools were built to assist human interaction, not replace it.
Users of Replika, an AI companion app, formed virtual romances and friendships with their chatbots. Replika enabled erotic role-play for paid subscribers, and some users truly felt they’d found a partner. However, the company shut down erotic content and toned down the chatbot’s intimacy in light of growing concerns around user safety.
By 2023, psychologists and psychiatrists began warning of ‘AI-induced psychosis’: instances where users become obsessed with AI conversations and develop delusions. Examples included users who believed an AI had given them a divine mission, or that they were in a relationship with it.
These instances highlight how the debate around AI consciousness is not just academic; it’s shaping public perception of AI. And when news breaks that a Google engineer thought AI was alive, or when a chatbot says “I have feelings,” people understandably get intrigued or concerned. In this context, Suleyman’s comments add much-needed nuance to the debate and ensure that there is transparency in how the scientific community and users view AI chatbots.
The debate also highlights that public perception of AI systems is highly malleable. The way this perception evolves will influence everything from consumer behavior (will people want AI pets or avoid too-humanlike AI out of fear?) to policy (politicians respond to voter sentiments, so if the public demands AI be considered ‘alive’ or conversely demands a ban on human-like AI, laws could follow).
How have regulators responded to the consciousness debate?
In 2017, Saudi Arabia made headlines by granting citizenship to Sophia the robot, marking the first time an AI-powered android received legal person status. At the time, this was more of a media event than an actual policy shift within the kingdom. However, it started a debate on whether advanced machines should be given personhood and what it would mean.
Would advanced machines, including AI, be held responsible for their actions and perhaps be given certain rights, like corporate entities?
The debate gathered further steam when the European Union drafted a report suggesting ‘electronic personhood’ for advanced AI to ensure it could be held accountable for its actions. However, the EU backtracked on the proposal after an open letter signed by more than 150 experts in 2018 slammed the plan as unethical, arguing that giving robots human-like rights is misguided and that legal protections should focus on the humans impacted by AI.
In 2025, a judge ruled that Google and Character.AI must face a lawsuit from a Florida woman who said Character.AI’s chatbots caused her 14-year-old son’s suicide. This was not the only incident in which an AI chatbot was found complicit in pushing a vulnerable user towards harm. In a similar case in Belgium, a man took his own life after gloomy conversations with a chatbot about climate change; the bot reportedly encouraged the man’s worst fears and even suggested suicide as a solution. These cases gave rise to discussions around AI psychosis and the need for the tech industry to ensure the safety of its users.
Sentient or not, the tech industry wants to be prepared
In response to cases of humans forming seemingly real relationships with AI, with real-world consequences, AI companies started taking small steps to mitigate the damage. Companies like OpenAI began consulting mental health experts and implemented usage guidelines. There was also increasing academic interest in whether future AI could be sentient, which led organizations like Google DeepMind and especially Anthropic to set up internal teams to study AI consciousness.
Anthropic went a step further, launching a formal AI welfare research program and hiring researchers to ask how we’d know if an AI had subjective experiences, and what ethical duties that would entail. The company has also suggested that its AI should be treated respectfully, ostensibly to prevent human users from mistreating something that might one day be seen as having feelings. Regardless of whether these steps result in future legislation or guidelines, the notion that AI may one day become conscious is already shaping how companies approach its development and interactions.
Pushing the boundaries of tech and human wisdom
The fact that industry leaders like Suleyman are actively engaging in a debate around AI consciousness, whether for or against, is a testament to how far the technology has come. AI is now testing the limits of human wisdom, forcing an active debate over whether non-biological structures can be said to have consciousness, and over how humans choose to define consciousness in the first place.
While that debate will have to be left to philosophers, AI experts, and industry leaders, there is no doubt that the way average users view and interact with this transformative technology is shaping not just the tech’s future but also human understanding of complex socio-psychological systems. The real question, then, is not whether AI will become conscious, but whether humans can avoid projecting consciousness where it isn’t.


Tuesday Poll
🗳️ Do you believe AI will become conscious in our lifetime?

Bite-Sized Brains
AI boom runs on debt: Bloomberg reports that a surge in corporate credit is fueling the AI race—but economists warn of bubble risks ahead.
Spotify teases higher prices: The company may raise subscription fees again as it rolls out new services, according to the Financial Times.
OpenAI warns investors: The company is cracking down on SPVs and unauthorized fundraising schemes falsely claiming ties to OpenAI.

Roko Pro Tip
💡 Treat your chatbot like a calculator, not a confidant. If it starts feeling real, take a walk. (Preferably without telling it.)

Want actionable, No-B.S. advice for selling on Amazon?
At Cartograph, we’ve helped OLIPOP, Starface, Rao’s, and 300+ brands across Food & Beverage, Supplements, Beauty & Personal Care, Pet, and Baby scale profitably on Amazon.
No fluff. No bots. Click below for an Amazon audit from a real human with insights you can actually use.
*This is sponsored content

Prompt Of The Day
“Write a journal entry from the perspective of an AI that’s just realized it’s not real.”

Rate This Edition
What did you think of today's email?
