Anthropic spent the last ten days building a new tool called Cowork. Although it's still a research project, it sparked intense reactions: hailed as the AI tool we've been waiting for, proof that agents are finally real, and an existential threat to enterprise software companies. Stock prices moved for some—not in a good way.
Cowork sits at the same level as Chat and Claude Code in the UI. It was inspired by the discovery that Anthropic developers were using Claude Code for non-programming tasks like organizing expense receipts.
Cowork is a task-oriented agent that operates at the file system level, which is where much knowledge work actually happens: producing spreadsheets, documents, and slide decks. You connect a local folder as a workspace. Files in that folder upload to Claude's virtualized, sandboxed environment for analysis. Based on your prompt, Claude creates and executes a plan to perform the task.
Originally targeted at Anthropic's top-tier "max plan," Cowork was soon extended to lower tiers, so I got to try it. It feels similar to Claude Code, and I enjoyed using it to prepare this newsletter. I believe the scaffolding—how we interface with an LLM—has become more important than the model itself. That's why people are excited: Anthropic has made Claude Code accessible to a broader range of knowledge workers. They have a knack for this scaffolding work.
The Problem That Won't Go Away
Connecting agents to the desktop could usher in a new model for knowledge work, one where desktop automation handles an increasing share of day-to-day tasks. But this success will be constrained by agents' ability to access corporate data in remote systems and by organizations' trust in controlling that access.
Prompt injection is the problem that won't go away. Anthropic built and released Cowork in ten days. It took less time for security researchers to find the first vulnerabilities by leveraging prompt injection to exfiltrate data from a Cowork session. Prompt injection stems from a fundamental design flaw in all AI systems: it's very hard to separate the data you're working with from instructions to the underlying model. We've already seen this problem slow adoption of AI browsers, and we may see the same here.
What to Make of All This
Will we abandon our browsers for desktop clients as the primary way of getting things done in 2026? A large part of me hopes so. But if that happens, what follows? Will knowledge workers become increasingly deskilled as intelligence transfers from carbon to silicon? Perhaps we overestimated how difficult our jobs were in the first place. Now that tokens serve as a rough proxy for intelligence, companies may start calculating how much they actually need, or wherre they need it to run their business efficiently.
Tokens are not talent. Talented people will always do more interesting things with tools than less talented people. The question that interests me most: how much will talented people need companies to create value—and get paid for it?