Would You Trust AI Doctors?
Plus: ChatGPT goes Walmart, Gemini schedules you, and Firefox gets Perplexity.
Here’s what’s on our plate today:
🧪 AI is quietly taking over healthcare. But is it actually helping?
⚡ Firefox adds Perplexity, Gemini books meetings, and Walmart uses ChatGPT.
🗳️ Would you trust an AI diagnosis?
🧰 Experiment with healthcare-ready AI tools built for reliability.
Let’s dive in. No floaties needed…

Turn customer feedback into evidence that moves your product roadmap faster
For PMs who need buy-in fast: Enterpret turns raw feedback into crisp, evidence-backed stories.
Explore any topic across Zendesk, reviews, NPS, and social; quantify how many users are affected and why; and package insights with verbatim quotes stakeholders remember.
Product teams at companies like Canva, Notion and Perplexity use Enterpret to manage spikes, stack-rank work, and track sentiment after launches—so you can show impact, not just ship lists.
Replace hunches with data that drives planning, sprint priorities, and incident triage.
*This is sponsored content

The Laboratory
How AI is reshaping the healthcare industry
The COVID-19 pandemic exposed many hidden flaws in healthcare. The near-collapse of health systems worldwide at the height of the pandemic highlighted the need for a revamp and rethink at a time when social distancing was necessary. Digital health technologies rose to the occasion, providing a seamless, anytime, anywhere health ecosystem. Against this backdrop, when generative artificial intelligence (AI) burst onto the scene in 2022, using the tech in healthcare looked like a no-brainer. The availability of multi-modal data (genomic, economic, demographic, clinical, and phenotypic) furthered the adoption of AI to transform models of healthcare delivery.
However, the use of AI was not limited to delivery. Even before the pandemic, leaders in the tech and healthcare industries were counting on AI to supercharge discoveries in pharmaceuticals, diagnostics, and preventive care and to open new business opportunities.
In 2019, Microsoft announced a collaboration with Novartis. During the announcement, Microsoft CEO Satya Nadella said, “AI is perhaps the most transformational technology of our time, and healthcare is perhaps AI's most pressing application.” Apple CEO Tim Cook echoed the sentiment: “[Healthcare] is a business opportunity ... if you look at it, medical health activity is the largest or second-largest component of the economy.”
And it seems to have worked. Since 2019, AI tools, increasingly powered by large language models (LLMs), have steadily expanded their presence in the healthcare industry.
Why healthcare became AI’s biggest test case
By 2025, AI had made its way into clinical imaging and diagnostics, documentation and admin work, and even public health early-warning systems. Recently, NASA announced a collaboration with Google to build AI-based medical support tools for astronauts.
This leap is not just the result of clever marketing and business opportunities. The successful use of AI tools in healthcare stems from their ability to find patterns and act as assistants for trained medical practitioners.
How AI is adding value across the care spectrum
In diagnostics, AI models that can read pixels and numbers better and faster than humans are ideal candidates for tools that scan X-rays, CTs, MRIs, ultrasounds, ECGs, or skin photos and flag likely problems.
The U.S. Food and Drug Administration (FDA) has started keeping a public list of AI/ML-enabled devices cleared for marketing, a useful way to see what’s real versus hype. Such devices are witnessing increasing adoption due to their ability to reduce the number of missed findings on busy days and their potential to reach resource-limited settings where subspecialists are scarce.
Another area where AI tools excel is drafting clinical notes, summarizing charts, answering medical questions, and handling paperwork like prior authorization. LLMs are also used to predict patient no-shows, length-of-stay, readmissions, and staffing. All this reduces pressure on the existing staff, which is already facing a crunch due to changing policies in the U.S. and other Western nations.
Health systems have also started rolling out ambient AI documentation at scale, and major vendors are moving into the space. Key players include Epic, which is building its own Scribe, and Microsoft, with its DAX Copilot. Startups such as Abridge and Ambience have also emerged as strong contenders.
Additionally, AI models are assisting in medical research, and tech labs are building medical foundation models that blend text, images, and other data. Google’s Med-Gemini, which follows the company’s earlier Med-PaLM and MedLM efforts, is a recent example, with reported gains on exam-style and multimodal tasks.
However, while AI systems have witnessed increased adoption and can have a positive impact within the industry, they are not infallible. The use of AI in healthcare has raised concerns around reliability, stemming from the problem of hallucinations in AI systems, and the impact these tools can have on the proficiency of trained medical practitioners.
Bias, data, and over-reliance: AI’s fragile side
In August 2025, The Lancet Gastroenterology & Hepatology published a study suggesting that exposure to AI assistance may undermine the unaided skills of health professionals performing colonoscopies. Across roughly 1,400 patients, the detection rate of endoscopists working without AI fell from 28.4% before AI tools were introduced at their centers to 22.4% afterwards.
According to a report from the Financial Times, Marcin Romańczyk of the Academy of Silesia, one of the researchers, said the “results are concerning, given that the adoption of AI in medicine is rapidly spreading”.
While the authors acknowledged the study’s limitations (it was observational rather than a randomized controlled trial), the findings underscore the importance of treading carefully when it comes to implementing AI tech in healthcare.
There are also concerns that the scarcity of healthcare data on women and minority populations could introduce or amplify bias in AI systems, which in turn would undermine their effectiveness.
Another area of concern is that AI companies could use private personal data to train their models. Training AI algorithms requires access to vast amounts of data, and AI companies are struggling to gather datasets that can be used to train their models. The use of personal data in AI also creates the risk of exposure and raises questions around the ethical sourcing of medical data.
According to IBM Security’s Cost of a Data Breach Report for 2023, the healthcare industry reported the most expensive data breaches, with an average cost of $10.93 million. Despite this, few jurisdictions have cohesive laws designed to protect broader consumer privacy.
In response to the findings of the Lancet study, professional societies have mandated periods of non-AI work to protect human skills. However, just as one study is not reflective of overall trends, mandates alone may not be enough to ensure that medical professionals do not delegate important tasks solely to AI.
Can policy keep up with medical AI?
As mentioned above, the U.S. Food and Drug Administration (FDA), in an effort to encourage the development of AI-enabled devices, has released draft guidance for managing adaptive AI through audit trails and lifecycle policies.
Meanwhile, in the U.K., the Medicines and Healthcare products Regulatory Agency has initiated efforts to establish a global network of health regulators focused on the safe, effective use of artificial intelligence (AI) in healthcare.
Through the network, the agency aims to work with other regulators to share early warnings on safety, monitor AI tools in practice, and shape international standards. It has also created a sandbox for safely testing AI tools before they are rolled out to medical practitioners.
While these are important steps in regulating the use of AI tools in healthcare, we are a long way from a cohesive policy that dictates rules for safe use of AI in the medical field.
A crossroads for medicine and machine
Artificial intelligence in healthcare is no longer a futuristic idea but an active participant in reshaping the way care is delivered, diagnoses are made, and medical research is conducted. AI-enabled tools offer speed, scale, and efficiency that traditional healthcare systems have often struggled to achieve, particularly in resource-limited settings.
However, their implementation is not as simple as it would seem on the surface. The very strengths that make AI so attractive also introduce risks that can undermine the quality of care. And while the regulatory landscape is beginning to respond, AI’s role in healthcare will be defined not by its capabilities, but by the decisions humans make about how, where, and why it is used.
If these challenges are addressed, AI could help usher in a new era of personalized, efficient, and accessible healthcare. If ignored, we risk building a healthcare future that is faster, but not necessarily better. The next decade will determine which path we take, and the stakes could not be higher.


Quick Bits, No Fluff
Firefox adds Perplexity search: Mozilla now offers Perplexity as an official desktop search engine option, expanding user choice beyond Google and Bing.
Gemini books your meetings: Google’s Gemini AI assistant can now schedule appointments in Google Calendar directly through natural-language prompts.
Walmart joins ChatGPT: Walmart teams up with OpenAI to enable product browsing and shopping directly inside ChatGPT, live now for U.S. users.

Get daily marketing genius with The Swipe’s creative inspiration.
Supercharge your marketing with a curated swipe file that spotlights brilliant ads, witty copy, and bold brand moves.
Each edition of The Swipe delivers an engaging breakdown of standout campaigns and trendsetting strategies—perfect for sparking fresh ideas. Whether you’re seeking disruptive humor or cutting-edge branding tactics, this daily digest keeps you informed and inspired.
No more guesswork or hours lost browsing random case studies. Get curated creativity with real impact.
*This is sponsored content

Thursday Poll
🗳️ Should AI have a larger role in healthcare?

3 Things Worth Trying
Glass Health: A clinical reasoning copilot that helps doctors generate differential diagnoses and evidence-based treatment plans.
BioRender: Create professional, accurate scientific diagrams for research or medical presentations, AI-powered and vetted by scientists.
Lunit INSIGHT: A suite of FDA-cleared AI tools that analyze X-rays and CTs for early detection of diseases like cancer and tuberculosis.
Meme Of The Day
every morning before entering the office, can't let those anti crypto people see me cry
— naiive (@naiivememe)
12:36 PM • Oct 12, 2025

Rate This Edition
What did you think of today's email?




