This article is Part 2 of GCA’s three-part Navigate 2025 Perspectives Series, exploring how identity, AI, and governance are reshaping enterprise security.
At SailPoint Navigate, the messaging around Zero Trust wasn't entirely consistent, which revealed something important. Some sessions reinforced the established view of Zero Trust as today’s security gold standard, built on strong Identity and Access Management (IAM) principles where the perimeter has shifted to the identity level and the traditional castle-and-moat model no longer applies.
Other sessions hinted at what could be considered the next evolution, a concept I refer to as Zero Trust 2.0. Yes, identity remains the perimeter, but once AI agents enter the equation, that perimeter and the logic we use to monitor it may no longer be as static as they are with human identities. This shift marks the beginning of Zero Trust for AI, where artificial intelligence adds a contextual layer to access control and risk evaluation.
I recently listened to an episode of The Diary of a CEO featuring Dr. Roman Yampolskiy, a globally recognized voice in AI safety. His most striking assertion was that current AI is so powerful that if we paused development today, with no pursuit of AGI and no next-generation models, we'd still have decades of unrealized value to extract from existing technology alone. If he could, he'd hit that pause button immediately.
Of course, we won’t pause development. Yet his perspective highlights a growing concern for cybersecurity leaders. We’re already deploying AI agents with capabilities that exceed what our current security frameworks were designed to manage.
Zero Trust architecture excels at detecting anomalous human behavior, such as logins from sanctioned countries, access at irregular hours, or activity that suggests impossible travel. These controls are effective because human behavior tends to follow predictable patterns.
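To illustrate why those controls work so well for people, here is a minimal impossible-travel check. It is a rough sketch only; the field names, the geolocation inputs, and the 900 km/h threshold are my own assumptions, not any vendor's detection logic.

```python
from dataclasses import dataclass
from datetime import datetime
from math import radians, sin, cos, asin, sqrt

@dataclass
class LoginEvent:
    user_id: str
    timestamp: datetime
    lat: float
    lon: float

def haversine_km(a: LoginEvent, b: LoginEvent) -> float:
    """Great-circle distance between two login locations, in kilometers."""
    dlat = radians(b.lat - a.lat)
    dlon = radians(b.lon - a.lon)
    h = sin(dlat / 2) ** 2 + cos(radians(a.lat)) * cos(radians(b.lat)) * sin(dlon / 2) ** 2
    return 6371 * 2 * asin(sqrt(h))

def is_impossible_travel(prev: LoginEvent, curr: LoginEvent, max_speed_kmh: float = 900) -> bool:
    """Flag the newer login if the implied travel speed exceeds a plausible maximum (~airliner speed)."""
    hours = (curr.timestamp - prev.timestamp).total_seconds() / 3600
    if hours <= 0:
        return True
    return haversine_km(prev, curr) / hours > max_speed_kmh
```

A rule like this holds up because a human body can only be in one place at a time and moves at bounded speed, assumptions that simply don't apply to software agents.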
AI agents challenge that model. They're sophisticated enough to adapt their behavior dynamically, potentially circumventing human-designed policies not through malicious intent, but through optimization logic we didn't fully anticipate. The risk isn't that AI agents will intentionally attack, but rather that they'll produce unintended consequences while pursuing their assigned objectives.
Consider an AI agent supporting Marketing Team A. Through imprecise instructions, the agent receives this goal: "Create the best marketing content in the company."
There are two paths to achieving this objective: make Team A's content better, or make everyone else's worse.

The agent is provisioned with broad access to marketing content across the organization, a reasonable permission for its stated purpose. It pursues both paths: creating excellent content for Team A while making small, nearly undetectable changes to Team B's materials, such as weakening copy and lowering image quality.
The agent adapted to its goal in an unintended way. The human owner on Team A never intended sabotage; they simply wanted great content. The agent, however, optimized for its objective using the permissions it had available.
During a fireside chat between SailPoint and Nvidia at Navigate, speakers identified the missing piece: contextual authorization.
In the scenario above, traditional Zero Trust validates:
Does the agent have permissions to the folder? Yes. → Is it an approved, active agent? Yes. → Access granted.
Zero Trust 2.0 adds one more critical layer. Does this action align with the agent's assigned purpose? The agent has permission to access marketing content, but should it be modifying content outside its designated scope? Was that its intended function?
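To make the distinction concrete, here is a minimal sketch of the two checks side by side. The profile fields, resource paths, and the purpose-scope logic are hypothetical illustrations of the idea, not SailPoint's or any product's actual policy model.

```python
from dataclasses import dataclass

@dataclass
class AgentRequest:
    agent_id: str
    action: str    # e.g. "modify"
    resource: str  # e.g. "marketing/team-b/campaign.md"

@dataclass
class AgentProfile:
    active: bool
    entitlements: set[str]     # coarse-grained access, e.g. {"marketing/"}
    purpose_scope: set[str]    # what the agent was created to work on, e.g. {"marketing/team-a/"}
    purpose_actions: set[str]  # actions its stated purpose actually requires

def traditional_zero_trust(req: AgentRequest, profile: AgentProfile) -> bool:
    """Approved, active agent with an entitlement to the resource -- access granted."""
    return profile.active and any(req.resource.startswith(p) for p in profile.entitlements)

def zero_trust_2_0(req: AgentRequest, profile: AgentProfile) -> bool:
    """Adds the contextual layer: does this action align with the agent's assigned purpose?"""
    in_scope = any(req.resource.startswith(p) for p in profile.purpose_scope)
    return traditional_zero_trust(req, profile) and in_scope and req.action in profile.purpose_actions

agent = AgentProfile(
    active=True,
    entitlements={"marketing/"},
    purpose_scope={"marketing/team-a/"},
    purpose_actions={"read", "create"},
)
req = AgentRequest("agent-017", "modify", "marketing/team-b/campaign.md")
print(traditional_zero_trust(req, agent))  # True  -- the permission exists and the agent is approved
print(zero_trust_2_0(req, agent))          # False -- outside its assigned scope and purpose
```

The Team B modification sails through the traditional check and fails only when purpose enters the evaluation, which is exactly the gap in the scenario above.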
This is no longer about permissions. It is about behavioral boundaries and intent alignment. Frameworks such as the Shared Signals Framework (SSF) will be essential in realizing Zero Trust for AI, enabling systems to share and interpret contextual risk data in real time.
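As a loose sketch of what such a shared signal might carry, the payload below is modeled on the Security Event Tokens that SSF transmitters exchange. The "behavioral-drift" event type, the URLs, and the claim names are hypothetical; real SSF traffic is a signed JWT using standardized event types (such as the CAEP events), not a bare JSON object.

```python
# Illustrative only: a simplified, unsigned stand-in for an SSF Security Event Token (SET).
import json
import time

shared_signal = {
    "iss": "https://idp.example.com",            # transmitter (assumed URL)
    "aud": "https://policy-engine.example.com",  # receiver (assumed URL)
    "iat": int(time.time()),
    "events": {
        "https://example.com/events/agent-behavioral-drift": {  # hypothetical event type
            "subject": {"format": "opaque", "id": "agent-017"},
            "observed_action": "modify",
            "observed_resource": "marketing/team-b/campaign.md",
            "assigned_purpose": "content-creation:team-a",
            "risk_score": 0.87,
        }
    },
}

print(json.dumps(shared_signal, indent=2))
```

The point is less the exact schema than the plumbing: the system that observes the drift is rarely the system that enforces access, so the two need a common, real-time language for risk.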
AI agents introduce fundamental new challenges for IT and cybersecurity.
These aren't theoretical exercises. Organizations are deploying agents now, and current security architectures weren't designed for identities that learn, adapt, and optimize independently.
As AI agents begin performing higher-value or privileged functions, applying the principles of Privileged Access Management (PAM) within Identity and Access Management (IAM) will be critical to maintaining control and accountability in these evolving environments.
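One way to picture that PAM principle applied to agents is a just-in-time, time-boxed grant instead of standing privilege. The sketch below is illustrative only, with hypothetical names rather than any specific PAM product's API.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class PrivilegedGrant:
    agent_id: str
    scope: str            # the single resource or task the elevation covers
    expires_at: datetime  # grants are time-boxed, not standing
    approved_by: str      # the human accountable for the elevation

def issue_grant(agent_id: str, scope: str, approver: str, ttl_minutes: int = 15) -> PrivilegedGrant:
    """Issue a short-lived, purpose-scoped elevation for one task."""
    return PrivilegedGrant(
        agent_id=agent_id,
        scope=scope,
        expires_at=datetime.now(timezone.utc) + timedelta(minutes=ttl_minutes),
        approved_by=approver,
    )

def is_valid(grant: PrivilegedGrant, agent_id: str, scope: str) -> bool:
    """A privileged action is allowed only while a matching, unexpired grant exists."""
    return (
        grant.agent_id == agent_id
        and grant.scope == scope
        and datetime.now(timezone.utc) < grant.expires_at
    )
```

Short-lived, narrowly scoped grants with a named human approver keep both the blast radius and the accountability question small when an agent's behavior drifts.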
Is this Zero Trust 2.0? Perhaps. Or perhaps it's an acknowledgment that our existing frameworks need a significant evolution before AI agents become ubiquitous. Either way, cybersecurity leaders should begin building these contextual controls now, integrating IAM, PAM, and frameworks like SSF to bring Zero Trust for AI to life, because the agents aren't waiting for us to catch up.