- Roko's Basilisk
Meta Mines Its Workers
Plus: bioterror AI alarm, Jensen on AI jobs, and Musk-Altman trial begins.
Here’s what’s on our plate today:
🧪 Inside Meta's plan to turn work into training data.
📰 AI's bioterror risk, Jensen's job-creation claim, OpenAI's courtroom drama.
🛠️ Three tools worth trying: Rize, Gemini Computer Use, and Crowdin.
🗳️ Poll: Where do you land on Meta's worker tracking?
Let’s dive in. No floaties needed…

How Jennifer Aniston’s LolaVie brand grew sales 40% with CTV ads
For its first CTV campaign, Jennifer Aniston’s DTC haircare brand LolaVie had a few non-negotiables. The campaign had to be simple. It had to demonstrate measurable impact. And it had to be full-funnel.
LolaVie used Roku Ads Manager to test and optimize creatives — reaching millions of potential customers at all stages of their purchase journeys. Roku Ads Manager helped the brand convey LolaVie’s playful voice while driving omnichannel sales across both ecommerce and retail touchpoints.
The campaign included an Action Ad overlay that let viewers shop directly from their TVs by clicking OK on their Roku remote. This guided them to the website to buy LolaVie products.
Discover how Roku Ads Manager helped LolaVie drive big sales and customer growth with self-serve TV ads.
The DTC beauty category is crowded. To break through, Jennifer Aniston’s brand LolaVie worked with Roku Ads Manager to easily set up, test, and optimize CTV ad creatives. The campaign helped drive a significant lift in sales and customer growth.
*This is sponsored content

The Laboratory
TL;DR
Surveillance as training data: Meta began capturing keystrokes, mouse movements, and screenshots from U.S. employees’ laptops to train AI agents that replicate human computer use, with no opt-out.
Two continents, two rules: EU privacy law would likely block this outright, but no U.S. federal statute prohibits it on company-owned devices, so Meta harvests American worker data for products it sells globally.
Layoffs sharpen the contradiction: The program launched the same week Meta announced 8k job cuts, even as the company posted $201B in 2025 revenue and nearly doubled its AI infrastructure budget.
Labor as raw material: If this model scales, workers across industries face a future where how they work becomes a commercial asset they never agreed to sell and have no leverage to reclaim.
Inside Meta’s plan to turn work into training data
There was a time, not very long ago, when getting hired at a major Silicon Valley company felt like winning a golden ticket. Big Tech campuses had rock-climbing walls and nap pods, and many served free meals prepared by professional chefs.
The culture was built on a premise of trust: hire smart people, give them autonomy, and get out of their way. The appeal was strong enough to spawn movies and TV shows about life inside these offices and to make an entire generation of workers reorganize their careers around the hope of getting in.
However, that version of Big Tech has been fading for a while. Since the pandemic, the industry has cycled through waves of mass layoffs, aggressive return-to-office mandates, and performance-based terminations that sometimes swept up employees with strong reviews.
But a development at Meta in April 2026 marked something qualitatively different: the company began installing software on its U.S. employees’ work laptops that captures mouse movements, keystrokes, click locations, and periodic screenshots. The program, internally known as the Model Capability Initiative (MCI), is not designed to measure productivity or flag misconduct. It is designed to feed all of that behavioral data into Meta’s AI training pipeline, teaching the company’s models how humans actually use computers so that AI agents can eventually do the same work on their own.
The tracking covers a pre-approved list of work-related applications and websites, including Gmail, Google Chat, developer tools such as VS Code, and Meta’s internal AI assistant, MetaMate. It applies to full-time employees and contingent workers nationwide. The most upvoted comment on Meta’s internal discussion thread asked how to opt out; CTO Andrew Bosworth replied that there is no opt-out on a company-provided laptop.
If anything, the timing made it worse. The announcement landed in the same week that Meta disclosed plans to cut roughly 8k jobs, about 10% of its workforce, beginning May 20, a juxtaposition that drew an immediate backlash both inside the company and beyond.
The data engine inside the office
MCI exists because Meta is trying to solve a very specific technical problem. The company is racing to build AI agents that can operate a computer the way a person does, clicking through menus, switching between windows, and moving data across applications. These are the mundane, almost reflexive actions that knowledge workers perform hundreds of times a day without thinking, yet they remain surprisingly difficult for AI systems to replicate. Even OpenAI’s Computer-Using Agent, one of the more advanced efforts in this direction, has managed only a 38.1% success rate on benchmarks measuring full computer-use tasks.
Improving that performance requires a different kind of training data. Not static corpora like documents or code repositories, but a continuous, real-time record of how humans actually navigate software. Synthetic datasets and scripted contractor demonstrations tend to miss the nuance of real workflows: the hesitation before a click, the back-and-forth between tools, the improvisation that defines everyday work. Meta is betting that its own employees, simply by doing their jobs, generate exactly the kind of behavioral signal its models need.
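To make that kind of data concrete, here is a minimal, hypothetical sketch of what one session of behavioral telemetry might look like once serialized for training. The event schema, field names, and JSON format are illustrative assumptions, not Meta’s actual pipeline.

```python
import json
import time
from dataclasses import dataclass, asdict
from typing import Optional

# Hypothetical schema for one captured interaction; the fields are
# illustrative assumptions, not Meta's real telemetry format.
@dataclass
class InteractionEvent:
    timestamp: float         # when the action occurred (epoch seconds)
    app: str                 # e.g. "Gmail", "VS Code"
    action: str              # "click", "keystroke", "window_switch", ...
    x: Optional[int] = None  # cursor position, for click events only
    y: Optional[int] = None

def to_training_record(events):
    """Serialize a session into one JSON line: an ordered action
    sequence a computer-use model could be trained to imitate."""
    return json.dumps([asdict(e) for e in events])

# A tiny session: click in Gmail, type, then switch to VS Code.
session = [
    InteractionEvent(time.time(), "Gmail", "click", x=412, y=88),
    InteractionEvent(time.time(), "Gmail", "keystroke"),
    InteractionEvent(time.time(), "VS Code", "window_switch"),
]
record = to_training_record(session)
print(record[:80])
```

The point is the shape of the data: ordered, timestamped action sequences rather than static documents, which is exactly what scripted demonstrations struggle to reproduce.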
That bet sits within a broader strategic push. In June 2025, Meta invested $14.3B in Scale AI and brought its CEO, Alexandr Wang, in to lead a new division called Meta Superintelligence Labs. Scale’s core business has long been preparing training data for AI systems, and Wang’s mandate is to help Meta close the gap with rivals like OpenAI, Anthropic, and Google, all of which benefit from vast streams of behavioral data generated by their consumer products.
Meta lacks a dominant search engine, a ubiquitous productivity suite, or the most widely used AI chatbot. What it does have is a workforce of roughly 72k employees interacting with a wide range of enterprise tools every day.
There is, however, a limit to how far this strategy can go. MCI applies only to employees based in the United States, while European employees are exempt entirely, a distinction driven not by technical constraints but by regulation.
One program, two continents, different rules
The European Union’s General Data Protection Regulation (GDPR) imposes strict requirements on workplace monitoring. Any surveillance must be proportional to a legitimate business need, grounded in a valid legal basis, and preceded by a formal impact assessment. Consent in the employment context is generally not a valid legal basis because regulators recognize the power imbalance: an employee cannot refuse their employer’s request without risking consequences.
In the United States, the picture is almost the opposite. As Ifeoma Ajunwa, a Yale University law professor and author of The Quantified Worker, told Reuters, “On the U.S. side, federally, there is no limit on worker surveillance.” State-level laws in places like New York and Connecticut require written notification, and California restricts the monitoring of personal devices. Still, no federal statute prohibits the kind of keystroke and screen-capture tracking Meta is deploying on company-owned equipment, provided the company discloses it.
Valerio De Stefano, a law professor at York University in Toronto who studies technology and comparative labor law, told Reuters that European law would likely prohibit such monitoring outright, noting that in Italy, electronic monitoring of employee productivity is explicitly illegal. German courts, for their part, have heavily restricted keystroke logging.
The regulatory asymmetry creates a peculiar dynamic. Meta can harvest behavioral data from its American workforce to train AI agents it may eventually sell to customers globally, including in European markets, where the same data-collection method would be illegal.
The consent question no one can resolve
Inside Meta, the reaction to MCI was immediate. Internal discussion threads, reported by Reuters, showed that the dominant emoji response to the announcement was an angry face, and Bosworth’s no-opt-out reply drew even sharper reactions, with employees responding with crying and shocked-face emojis. The discomfort was unmistakable, even as the company held its line: Meta has monitored employee activity on work devices in some capacity for years, and MCI, it argued, is simply an extension of existing policy rather than a break from it.
That framing, however, glosses over a more consequential shift. Traditional workplace monitoring is designed to verify that employees are doing their jobs. MCI is designed to capture how those jobs are done, so that an AI system can learn to replicate them. The distinction matters since this is not data collected for oversight; it is data collected for imitation.
Ajunwa told NPR that there is significant concern that, because no regulation governs how employers use these technologies, there is ample opportunity for misuse. Meta has said MCI data will not be used for performance evaluations. Still, the same behavioral signals that train an AI agent, including keystroke speed, application-switching patterns, and task-completion sequences, are inherently useful for assessing productivity.
And since nothing stops Meta from revising its policies, future leadership could one day decide that this data should feed performance insights, determining who stays and who does not.
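The dual-use concern is easy to illustrate: the behavioral signals the article lists (keystroke speed, application-switching patterns, app usage) fall out of an interaction log with a few lines of code. The event format and metric names below are hypothetical assumptions, not any real system.

```python
from collections import Counter

# Hypothetical event format: dicts with "timestamp" (seconds elapsed),
# "app", and "action" keys, mirroring the signals described in the article.
def productivity_metrics(events):
    """Reduce a session of interaction events to simple productivity stats."""
    keystrokes = sum(1 for e in events if e["action"] == "keystroke")
    switches = sum(1 for e in events if e["action"] == "window_switch")
    span = events[-1]["timestamp"] - events[0]["timestamp"] if len(events) > 1 else 0.0
    return {
        "keystrokes_per_min": keystrokes / (span / 60) if span else 0.0,
        "window_switches": switches,
        "events_per_app": dict(Counter(e["app"] for e in events)),
    }

# A one-minute toy session: two keystrokes in Gmail, then a switch to VS Code.
events = [
    {"timestamp": 0.0, "app": "Gmail", "action": "keystroke"},
    {"timestamp": 30.0, "app": "Gmail", "action": "keystroke"},
    {"timestamp": 60.0, "app": "VS Code", "action": "window_switch"},
]
metrics = productivity_metrics(events)
print(metrics)
```

The same record that could teach a model to imitate a workflow yields, almost for free, exactly the per-worker metrics that a future policy change could turn into evaluation data.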
The layoff context sharpens that tension further. Meta reported $201B in revenue for 2025, up 22% year over year, which makes it clear this is not a company cutting costs under financial strain. At the same time, its capital expenditure guidance for 2026 is between $115B and $135B, nearly double the previous year and directed almost entirely toward AI infrastructure.
Seen together, the layoffs and the surveillance begin to look less like separate decisions and more like expressions of the same strategic shift. Meta is reorganizing itself around AI, and in that process, human labor is being treated less as the end product of the company and more as an input into the systems it is building.
The office that watches back
The moves from Meta have not come in isolation. In January 2026, Wired reported that OpenAI, working with training data firm Handshake AI, had been asking third-party contractors to upload real work products from past and current jobs, including documents, spreadsheets, presentations, and code, to help train models for office tasks.
However, the crucial difference is consent. OpenAI’s contractors participate voluntarily. Meta’s employees do not have that option.
The broader market is already moving in this direction. Employee monitoring software is projected to grow from $3.89B in 2025 to $8.29B by 2030, according to Research and Markets, and Gartner predicted that 70% of large employers would be monitoring their workers by 2025.
MCI, however, represents a distinct evolution within that trend. Monitoring is no longer primarily about oversight; it is about extraction, capturing how people work so that machines can learn to replicate it.
That shift begins to alter the nature of the employment bargain in ways existing legal and ethical frameworks are not fully equipped to address. An employee’s labor has always created value for the company. What is changing is that the process of that labor, the patterns and rhythms of how work is performed, is itself becoming a commercial asset. It becomes training data for products the company intends to sell, while the worker who generates it receives only their salary.
The Silicon Valley workplace, once held up as a more humane and trust-based alternative to traditional corporate structures, is beginning to look far more familiar, with one crucial difference: the surveillance is deeper, more continuous, and far more consequential. The people inside these companies can see the shift as it happens, not as an abstract trend but as a change in the terms of their own work. Whether they have any meaningful leverage to shape what comes next, in a labor market that has shed nearly 900k tech jobs since 2020 and an industry where AI investment continues to accelerate regardless of headcount, remains uncertain in a way that feels less temporary than structural.


Quick Bits, No Fluff
AI's bioterror risk grows: New research warns AI tools are now capable of helping bad actors design dangerous pathogens, raising fresh biosecurity alarms.
Jensen's job-creation claim: As workers brace for AI-driven layoffs, NVIDIA's Jensen Huang insists AI is generating "an enormous number" of new jobs, not destroying them.
OpenAI's courtroom drama: Musk, Brockman, and Altman are headed to trial in a case set to expose the messy origin story behind OpenAI's transformation from non-profit to power player.

Outperform the competition.
Business is hard. And sometimes you don’t really have the tools you need to be great at your job. Well, Open Source CEO is here to change that.
Tools & resources, ranging from playbooks, databases, courses, and more.
Deep dives on famous visionary leaders.
Interviews with entrepreneurs and playbook breakdowns.
Are you ready to see what it’s all about?
*This is sponsored content

Thursday Poll

3 Things Worth Trying
Rize: an AI time-tracking tool that automatically categorizes your work, useful for understanding where your hours actually go without manual logging.
Gemini Computer Use: Google's agent that operates a browser and apps for you, a hands-on way to see how close computer-use AI agents are to handling real workflows.
Crowdin: Localization platform with built-in AI translation, a clean example of human-in-the-loop AI where workers train and refine the system rather than being replaced by it.

The Toolkit
Leonardo AI: AI image and video generator with fine-grained creative controls, built for designers, marketers, and game studios who need consistent style at scale.
Modal: Serverless cloud for running Python and AI workloads, lets you spin up GPUs in seconds without touching infrastructure.
Quillbot: AI writing assistant that paraphrases, summarizes, and rewrites text on demand, useful for tightening drafts or escaping your own voice.

Rate This Edition
What did you think of today's email?






