As many knowledge workers prepare to head back to work on Monday to wrestle the bear called January it is natural to wonder what the year might hold for us. From an AI perspective we saw focus shift from using AI as a companion in the form of a chatbot to its potential as something more autonomous: agents performing tasks and integrating into exisitng workflows as part of a broader strategy to automate knowledge work. Along the way we uncovered some significant challenges, corporate data resistant to our efforts to enrich our models and security challenges associated with agents interacting with external systems culminating in the understsanding of prompt injection as a systemic architectural weakness. Overhanging all this change is a simple question. How can I trust this stuff?
For the knowledge worker, 2026 and beyond is likely to be defined by the transition of knowledge work from one of creating indiviual artifacts to the orchrestration of agent teams to produce the same, essentially mimicing the standardization of physical craft that we saw during the industrial revolution as factories consistently out performed individuals by orders of magnitude.
From assistant to agent
The emerging architecture is one based on an "orchestrator" pattern where one agent directs smaller specialized agents with skills that mirror what is seen in functional silos like HR, finance or OT. This enables coordination across organizational boundaries with minimal supervision.
Companies are expected to hire "AI orchestrators" that manage multiple agents which take on mid-level specialized work. In this world opportunity is not distributed evenly as entry-level AI-savvy employees are able to punch above their weight while more senior employees can use the time freed up by standardized, automated workflows to pursue more valuable work. As ever the squeeze will be in the middle where mid-tier specialists face pressure from increasingly competent agents and greater competition for senior positions
The emerging AI attack surface
As enterprises deploy multi-agent systems where specialized AI agents coordinate autonomously, a single compromised agent can propagate malicious instructions throughout the ecosystem. The toxic combination of read capabilities (monitoring Slack) and write capabilities (GitHub commits) creates supply chain attack vectors.
Adversaries can exploit deserialization vulnerabilities by using prompt injection to trick LLMs into producing specially crafted output that mimics internal serialization formats, forcing systems to load malicious constructs and leak secrets. AI's reliance on dynamic data flows amplifies traditional deserialization vulnerabilities.