If there was one overarching technology trend in 2025, the evolution of AI from glorified search tool to a partner that can actually perform useful tasks for us is a prime contender. It seems fitting that, as the year draws to a close, agents and their trustworthiness took up a lot of space in my newsfeeds this week.
Building trust through integrity
Integrity is a critical aspect of building trust in any system. If agents are going to act on our behalf, we need to be confident that they behave in ways we are comfortable with. This is not just about technical considerations; it also encompasses social and ethical ones. Bruce Schneier wrote a piece about personal data stores: stores that individuals control but which AI systems can also draw on, an arrangement that builds trust and allows security and AI expertise to advance independently of each other.
Building trust through resilience
Resilience is another critical aspect of building trust in any system. If agents are to be trusted, they must be able to handle unexpected situations and recover from errors without compromising their integrity. To that end, OWASP published its Top 10 for agentic applications, which highlights the dangers of goal hijacking, tool misuse and context poisoning, amongst others. The fundamental issue is that we need "full stack trust" across not just the models and agents themselves but also the tools they use and the data they work with.
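One common mitigation for the tool misuse risk is to validate every tool call against an allowlist and a per-tool argument policy before dispatching it. The sketch below illustrates the idea; the tool names, policies and registry shape are hypothetical, not taken from the OWASP document.

```python
# Minimal sketch of a tool-misuse guardrail: a tool call only runs if the
# tool is allowlisted AND its arguments pass that tool's policy check.
# Tool names and policies here are illustrative assumptions.

ALLOWED_TOOLS = {
    # tool name -> predicate that approves the call's arguments
    "read_file": lambda args: args.get("path", "").startswith("/workspace/"),
    "web_search": lambda args: 0 < len(args.get("query", "")) < 200,
}

def dispatch(tool_name, args, registry):
    """Run a tool only if it is allowlisted and its arguments pass policy."""
    policy = ALLOWED_TOOLS.get(tool_name)
    if policy is None:
        raise PermissionError(f"tool {tool_name!r} is not allowlisted")
    if not policy(args):
        raise PermissionError(f"arguments rejected for tool {tool_name!r}")
    return registry[tool_name](**args)
```

The same chokepoint is a natural place for logging and rate limiting, which helps with the "full stack trust" point: the agent, the tool and the data each get an independent check.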
Building trust through transparency
It is difficult to fully trust something if you don't have any insight into what is going on inside the box. One of the benefits of open standards is that they give interested people a way to see how things work under the covers. This week saw the announcement of the Agentic AI Foundation under the umbrella of the Linux Foundation. Anthropic, OpenAI and Block are founding members, and they donated Goose, MCP and AGENTS.md to support vendor-neutral development of these core technologies.
In other news
Anthropic published a paper on maintaining progress across multiple context windows, which is important if you want agents to tackle complex tasks and workflows effectively. The solution appears to be to combine well-structured artifacts, like source control history and project logs, with iterative development techniques.
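The project-log idea can be sketched very simply: the agent appends structured progress entries to a file on disk during one session, then reloads them when a fresh context window begins, so the new session starts from the recorded state rather than from scratch. The file name and entry fields below are illustrative assumptions, not details from the paper.

```python
# Minimal sketch of carrying progress across context windows via a
# persistent, append-only project log (JSON Lines). The agent writes one
# entry per completed step and replays the log at the start of a new session.
import json
from pathlib import Path

LOG = Path("project_log.jsonl")  # hypothetical artifact location

def record_progress(step: str, outcome: str) -> None:
    """Append one structured progress entry to the persistent project log."""
    entry = {"step": step, "outcome": outcome}
    with LOG.open("a") as f:
        f.write(json.dumps(entry) + "\n")

def load_progress() -> list[dict]:
    """Reload all prior entries when a fresh context window begins."""
    if not LOG.exists():
        return []
    return [json.loads(line) for line in LOG.read_text().splitlines()]
```

In practice the same role can be played by source control history: commit messages form exactly this kind of append-only, structured record of what has been done and why.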