Apple’s Quiet AI Flex
Plus: Big tech’s AI race, smarter wearables, and a toolkit for creators and coders.
Here’s what’s on our plate today:
🍏 We break down Apple’s privacy-first AI strategy and why on-device matters.
📺 Amazon’s AR glasses, YouTube’s global dubbing, and OpenAI’s $300B deal.
🧠 Try AI search widgets, Google’s image tool, and Stanford’s citation lab.
📰 AR smartwear, multi-language audio, and infrastructure for AI at scale.
Let’s dive in. No floaties needed…

AI is making scammers' lives easier.
Your name, address, phone number, and financial info can be traded online for just a few dollars. Scammers, identity thieves, and AI-powered fraudsters can buy this data to target you. And as AI gets smarter, these scams are becoming harder to spot until it’s too late. That's where Incogni Unlimited comes in.
Incogni helps eliminate the fear of your details being found online. The data removal service automatically removes your info from the sites scammers rely on.
They can’t scam you if they can’t find you. Try Incogni here and get 55% off your subscription when you use code MEMORANDUM.
*This is sponsored content

The Laboratory
Apple’s measured AI play: Hardware first, hype second
For over a decade, Apple has chosen the month of September to unveil new iPhone series and wearables. The unveiling, watched by millions worldwide, sets trends the entire smartphone industry emulates.
This year, at its ‘Awe Dropping’ event, Apple launched a new generation of the iPhone, including a sleek model dubbed the iPhone 17 Air, along with several other devices. But while Apple delivered on the hardware front, expectations in 2025 went beyond design updates to the software side, specifically AI features. Apple has a checkered past in AI: the company overpromised features the previous year, so expectations were higher this time.
Yet Apple appears to have taken a different approach. During the iPhone 17 series event, the company referenced AI only a few times, and even then it shied away from the kind of bold claims about upcoming features it had made in the past. Below the surface, however, a lot seems to have changed.
Apple’s AI goes local
While showcasing its upcoming devices, Apple mentioned AI not as a standalone feature, but as the technology powering capabilities like live translation and updated health-tracking metrics, all of which, for now, are slated to run on-device.
With the iPhone as the hub, Apple is leveraging its ecosystem to bring AI features to wearables like AirPods and the Apple Watch, giving users access to on-device AI processing.
In the AirPods Pro 3, the headline feature, real-time language translation, is enabled via on-device AI, but only on the paired iPhone (must be iPhone 15 Pro or newer with iOS 26 and Apple Intelligence). The earbuds act as receivers, not the brains.
Similarly, in its health and workout features, Apple is using built-in heart-rate sensors to gather data that’s processed by an on-device AI model in the iPhone. In the new Watch Series 11, AI-powered tools like Workout Buddy (fitness coach), automatic translation in Messages, Smart Stack suggestions, and wrist-flick gestures are all driven by on-device Apple Intelligence.
Powering these features are not just Apple’s AI models, but also its chip-design expertise, built up over years of research and experience.
Apple first introduced the ‘Neural Engine’ in its chips back in 2017, well before others were considering running AI models on their devices.
The power of the neural engine
Since the A11 chip, Apple has used the Neural Engine to power tasks like Face ID and Memoji. Over the years, it has dramatically increased its capabilities: by the A15 in the iPhone 13 Pro, the engine delivered 15.8 trillion operations per second, 26x the original’s performance. This progression underscores Apple’s long-standing commitment to powerful, efficient on-device AI.
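The throughput figures above imply a baseline for the first-generation engine. A quick back-of-the-envelope check (the numbers come from the text; the variable names are ours):

```python
# Sanity-check the Neural Engine figures quoted above: the A15 is said
# to deliver 15.8 trillion ops/sec, a 26x gain over the original A11.
a15_tops = 15.8          # A15 Neural Engine throughput, trillion ops/sec
speedup = 26             # quoted gain over the first-generation engine

a11_tops = a15_tops / speedup
print(f"Implied A11 Neural Engine throughput: {a11_tops:.2f} TOPS")
# ~0.61 TOPS, consistent with the roughly 600 billion ops/sec
# Apple cited for the A11's engine at launch
```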
With the iPhone 16 series, Apple doubled down on the Neural Engine, optimizing the A18 chip to handle large generative models and deliver up to 2× faster ML performance than its predecessor. These gains, according to Apple, come from an SoC design built on second-generation 3-nanometer technology.
At WWDC 2024, Apple announced an on-device foundation model (around 3 billion parameters) that powers Apple Intelligence locally. This let the company ship features like writing refinement, summarization, and short-form dialog without relying on servers. While these features failed to impress critics, they hint at the mindset the company has adopted for the future.
For complex queries, Apple continues to use a larger server-side model via Private Cloud Compute, maintaining data privacy while delivering power. But the ability to run most of the AI features on the device itself has allowed the company to pitch its devices as the safer alternative.
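The hybrid architecture described above boils down to a routing decision: keep a request local when the small on-device model suffices, and escalate to the larger server-side model only when needed. A minimal illustrative sketch of that pattern (not Apple’s actual API; the task names and threshold are hypothetical):

```python
# Illustrative device-vs-cloud routing, sketching the hybrid pattern
# described above. LOCAL_TASKS and needs_long_context are assumptions
# for this example, not Apple's real interfaces.
from dataclasses import dataclass

@dataclass
class Request:
    prompt: str
    needs_long_context: bool = False  # proxy for query complexity

LOCAL_TASKS = {"summarize", "rewrite", "reply"}  # hypothetical task names

def route(req: Request, task: str) -> str:
    if task in LOCAL_TASKS and not req.needs_long_context:
        return "on-device model"       # data never leaves the phone
    return "private cloud compute"     # encrypted fallback for heavy work

print(route(Request("tl;dr this email"), "summarize"))   # on-device model
print(route(Request("plan my trip", needs_long_context=True), "reply"))
# -> private cloud compute
```

The design point is that the cloud path is the exception, not the default, which is what lets Apple pitch privacy as a feature.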
A privacy-first AI strategy
Besides designing the hardware, Apple has also invested in compressing its AI models to improve performance without relying on data centers.
Apple, in a blog post, revealed that it compresses models using techniques like quantization (e.g., 2-bit decoder weights and 4-bit embeddings) to make them viable on-device, and it lets developers integrate these AI features through Swift tools and guided-generation APIs.
These efforts have let the iPhone maker keep hardware design and privacy, the twin focal points of its marketing, front and center. That, in turn, has helped the company stand out against competitors like Google and Samsung, both of which rely on flashy AI features to lure customers.
How rivals are betting on AI
Apple's AI strategy is centered on tight integration between hardware, software, and privacy. By contrast, Samsung has leaned on its Gauss AI platform, announced in late 2023, to drive features like translation, summarization, and generative image tools across Galaxy devices. Samsung markets Gauss as a flexible ecosystem that combines on-device models with larger cloud-based models when needed. But Samsung’s AI runs on a mix of Qualcomm’s Snapdragon chips and Exynos processors, depending on region, meaning it lacks Apple’s end-to-end vertical control.
Google, meanwhile, positions its Gemini family of models as both a consumer tool and a smartphone differentiator. On Pixel devices, Gemini Nano (the smallest variant) powers on-device features like summarization in Recorder and Smart Reply. Heavier Gemini models run in the cloud, deeply integrated into Google Search, Gmail, and Android. Unlike Apple, Google embraces its ecosystem of services first, with hardware as a showcase rather than the foundation.
Can Apple create a technological Eden?
Apple’s latest iPhone launch makes one thing clear: the company is betting that on-device AI, supported by its custom chip design and tightly integrated ecosystem, will help it maintain hardware supremacy in an industry increasingly defined by software and services.
While rivals like Google and Samsung are racing to outshine each other with flashy generative features, Apple is taking a characteristically measured approach that prioritizes privacy, efficiency, and control over being the first to cross an undefined finish line.
That choice comes with risks. Consumers today are primed by the generative AI boom, expecting assistants that can converse fluidly, generate rich media, and automate complex workflows. On these fronts, Apple is offering conservative updates compared to Google’s Gemini, which ties into Search and Workspace, or Samsung’s Gauss-backed Galaxy AI, which spans translation, image editing, and cross-device productivity.
But what Apple loses in immediacy, it gains in long-term defensibility. The company’s emphasis on on-device processing taps directly into consumer concerns about privacy and trust, areas where Big Tech competitors often stumble. And, with Private Cloud Compute as a fallback, Apple strikes a balance: local performance for everyday use, and cloud-based power when necessary, all under a privacy-first framework.
This strategy could also reshape consumer expectations. Apple appears to be betting that users will settle for a measured approach that places trust and reliability over flashy features, and, if past iPhone sales figures are anything to go by, that is exactly what Apple users prefer.
TL;DR
Apple unveiled the iPhone 17 series and new wearables with few overt AI promises.
Most new features are powered by on-device AI, not flashy generative tools.
Apple is prioritizing privacy and performance over hype and cloud-based AI.
Its long-term bet: tightly integrated hardware + software will outlast the AI race.
Meanwhile, rivals like Samsung and Google lean into flashy features and cloud power.


Friday Poll
🗳️ Which AI strategy do you prefer?

AI teams built for real-world impact.
AI outcomes depend on the team behind them. Athyna connects you with professionals who deliver, not just interview well.
We source globally, vet rigorously, and match fast. From production-ready engineers to strategic minds, we build teams that actually ship. Get hiring support without the usual hiring drag.
*This is sponsored content

Headlines You Actually Need
Amazon’s AR glasses might land soon: Internal leaks point to a 2025 launch for “Jayhawk,” Amazon’s long-rumored augmented reality smart glasses, with a focus on entertainment and fitness.
YouTube dubs go global: The platform’s multi-language audio feature is now available to all creators, letting them add dubbed tracks to videos and reach broader audiences.
OpenAI signs $300B mega deal with Oracle: One of the largest computing agreements ever, the deal will dramatically scale OpenAI’s infrastructure as demand for AI training soars.

Weekend To-Do
ProRata.ai—Gist Answers: Embed AI‑powered search + summaries on your site, with content attribution. Great for publishers or creators who want more control.
Nano Banana / Gemini 2.5 Flash Image Editor: Google’s image editor that keeps the subject consistent even across edits. Useful if you work with images or social content.
STORM (Stanford OVAL Lab): Make structured, cited long‑form articles from topics. Ideal for research, content creators, or people who hate writing outlines.

Rate This Edition
What did you think of today's email?
