Zero-Trust for AI: What the Architecture Actually Requires

March 20, 2026 • Elena Vasquez, CTO & Co-Founder

Every major enterprise has a zero-trust initiative. Most of those initiatives were designed around humans authenticating to services. AI agents break four assumptions that zero-trust frameworks take for granted, and those breaks are not trivial to patch.

Assumption 1: Identity Is Stable

Zero-trust architectures treat identity as a stable anchor. An employee has a persistent identity that lives in your IdP. Devices have certificates that are valid for months or years. Services have service accounts that a human provisioned and a human can revoke.

AI agents are ephemeral. A busy orchestration system may spawn hundreds of agents per minute, each operating for seconds or minutes before terminating. The concept of a pre-provisioned service account does not map to this workload. By the time a human has provisioned a credential, the agent that needed it has already exited. By the time a rotation event fires, the credential has been used by thousands of distinct agent instances.

The fix is just-in-time identity issuance at spawn time. The agent requests a short-lived certificate from the identity authority as part of its startup sequence. The certificate is valid only for the expected lifespan of that agent instance. No human provisioning, no long-lived credentials sitting in environment variables.

Assumption 2: Access Requests Come from a Known User

Traditional zero-trust policy asks "which user is making this request?" and checks that user's roles and groups. AI agents act on behalf of users, but the mapping is indirect and often ambiguous. A research agent processing a physician's query at a healthcare platform is acting within the physician's context, but the agent is not the physician. Assigning the physician's full permissions to the agent is a dangerous shortcut.

AI-aware zero-trust needs a delegated identity model. The physician authorizes the agent to act on a specific task, within a specific scope, for a specific duration. The agent's identity certificate encodes that delegation chain. When the agent calls a downstream service, the service can verify: this call comes from an agent, acting for physician X, within scope Y, authorized until time T. Any of those checks can fail independently.

Assumption 3: Network Position Is Meaningless But Fixed

Zero-trust discards the old notion that traffic inside the perimeter can be trusted. The modern version says: verify every request regardless of where it originates. That is correct. But implementations often still assume that source addresses are at least deterministic — that a given service runs at a predictable set of IPs or within a predictable namespace.

AI agents in Kubernetes environments can be scheduled to any node, may use shared IP space, and may coexist with other agent instances in the same pod or namespace. Source IP as a secondary verification signal becomes unreliable. Identity-based verification must be the only trust anchor, not a primary signal supplemented by network position.

Assumption 4: A Policy Can Be Reviewed Before It Is Enforced

Most zero-trust implementations use static access policies: a security team writes them, a review process approves them, and they are enforced until someone changes them. The assumption is that the set of legitimate access patterns is knowable in advance and changes infrequently.

AI agent workloads at a software company running LangGraph-based code agents might spawn agents with different access profiles depending on which repository they are analyzing, which user initiated the session, and which external tools the user authorized. Enumerating all valid policy combinations in advance is not feasible. The policy engine needs to evaluate access dynamically, at request time, based on the combination of identity, task context, and requested resource.

Open Policy Agent (OPA) is a reasonable starting point. The policy language supports contextual inputs: you can write a rule that says "an agent may read from database X if its identity certificate includes scope=database-X and its task context was created by a user with role=analyst." That rule is evaluated at each request, not pre-compiled into a static allowlist.
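The rule described above can be sketched as a request-time predicate. In a real OPA deployment this would be written in Rego; the Python below is a stand-in that shows the same evaluation shape, with illustrative field names:

```python
def allow(request: dict) -> bool:
    """Evaluate access at request time from identity, task context,
    and requested resource -- no pre-compiled allowlist."""
    cert = request["identity_certificate"]
    ctx = request["task_context"]
    # "an agent may read from database X if its identity certificate
    # includes scope=database-X and its task context was created by a
    # user with role=analyst"
    return (request["action"] == "read"
            and request["resource"] == "database-X"
            and "database-X" in cert["scopes"]
            and ctx["created_by_role"] == "analyst")

req = {"action": "read", "resource": "database-X",
       "identity_certificate": {"scopes": ["database-X"]},
       "task_context": {"created_by_role": "analyst"}}
assert allow(req)
```

The important property is that `allow` is a function of the whole request context, so new combinations of repository, user, and tool authorization are handled without anyone enumerating them in advance.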

What Actually Has to Change

Zero-trust for AI is not a marketing rebrand of existing zero-trust products. It requires:

  • Just-in-time identity issuance tied to the orchestrator's spawn event, not to human provisioning workflows
  • Delegation chains that bind an agent's identity to the user task it is executing, not just to a generic service account
  • Certificate-based verification that treats source IP as untrusted and identity as the sole trust anchor
  • Dynamic policy evaluation at request time, using task context as a policy input alongside static roles
  • Tamper-evident audit trails that capture the full delegation chain for every access decision
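For the last requirement, one common construction for tamper evidence is a hash chain, where each log entry commits to everything recorded before it. A minimal sketch (the entry fields mirror the delegation example above and are illustrative):

```python
import hashlib
import json

class AuditLog:
    """Hash-chained log: each entry's hash covers the previous entry's
    hash, so any retroactive edit breaks every later link."""
    GENESIS = "0" * 64

    def __init__(self):
        self.entries: list[dict] = []
        self._head = self.GENESIS

    def record(self, decision: dict) -> None:
        payload = json.dumps({"prev": self._head, "decision": decision},
                             sort_keys=True)
        digest = hashlib.sha256(payload.encode()).hexdigest()
        self.entries.append({"decision": decision, "hash": digest})
        self._head = digest

    def verify(self) -> bool:
        prev = self.GENESIS
        for e in self.entries:
            payload = json.dumps({"prev": prev, "decision": e["decision"]},
                                 sort_keys=True)
            if hashlib.sha256(payload.encode()).hexdigest() != e["hash"]:
                return False  # tampering detected
            prev = e["hash"]
        return True

log = AuditLog()
log.record({"agent": "agent-42", "on_behalf_of": "physician-x",
            "scope": "read:patient-history", "allowed": True})
assert log.verify()
log.entries[0]["decision"]["allowed"] = False  # rewrite history
assert not log.verify()
```

A production system would anchor the chain head in external storage so an attacker cannot simply rebuild the whole chain, but the core property is the same: every access decision, with its full delegation chain, is either intact or detectably altered.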

Teams that try to stretch their existing zero-trust tools to cover AI agents often end up with either overly permissive policies (because granular dynamic policies are too complex to write in the legacy system) or operational friction that slows agent deployment to the point that teams route around the controls. Neither outcome is acceptable. The architecture needs to be designed for the workload, not retrofitted.


Elena Vasquez is CTO & Co-Founder of Riptides. Questions: hello@riptidesio.com
