December 18, 2025 • Márton Sereg, CEO & Co-Founder
SOC 2 auditors spent 2023 and most of 2024 treating AI systems as a sub-component of existing controls: the AI calls an API, the API has access controls, so the access controls govern the AI. That framing is changing. The auditors we are seeing in Q4 2025 are treating AI agents as a distinct control domain with their own control objectives. If you have deployed AI agents in your production environment and your next SOC 2 audit is within six months, you have specific gaps to close.
1. "How do you know which AI agents have access to production data?"
This maps to CC6.1 (logical access) and CC6.3 (removal of access). The auditor is checking whether you have an inventory of AI systems that have been granted access to sensitive data, and whether you have a process for reviewing and revoking that access.
Most teams cannot answer this question cleanly because agents are spun up dynamically and may hold temporary credentials that are not tracked in the same way service accounts are. The correct answer involves showing that agents receive identity-bound credentials at spawn time, those credentials are tracked in a log, and the log is reviewed periodically for anomalous agent types.
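As a rough illustration of that answer, here is a minimal sketch of an agent registry that issues identity-bound, short-lived credentials at spawn time and keeps an inventory that can be reviewed or revoked. All names here (`AgentRegistry`, `spawn_agent`, the agent types) are hypothetical, not a real product API:

```python
import secrets
import time
from dataclasses import dataclass

@dataclass
class AgentCredential:
    agent_id: str
    agent_type: str   # e.g. "support-bot", "etl-agent" (illustrative)
    token: str        # short-lived, identity-bound secret
    issued_at: float
    expires_at: float
    revoked: bool = False

class AgentRegistry:
    """Tracks every credential ever issued, so the inventory is auditable."""

    def __init__(self):
        self._inventory: dict[str, AgentCredential] = {}

    def spawn_agent(self, agent_type: str, ttl_seconds: int = 900) -> AgentCredential:
        now = time.time()
        cred = AgentCredential(
            agent_id=f"{agent_type}-{secrets.token_hex(4)}",
            agent_type=agent_type,
            token=secrets.token_urlsafe(32),
            issued_at=now,
            expires_at=now + ttl_seconds,
        )
        self._inventory[cred.agent_id] = cred  # recorded at spawn time
        return cred

    def active_agents(self) -> list[AgentCredential]:
        """The periodic access review starts from this list."""
        now = time.time()
        return [c for c in self._inventory.values()
                if not c.revoked and c.expires_at > now]

    def revoke(self, agent_id: str) -> None:
        self._inventory[agent_id].revoked = True

registry = AgentRegistry()
cred = registry.spawn_agent("support-bot")
print(len(registry.active_agents()))  # 1
registry.revoke(cred.agent_id)
print(len(registry.active_agents()))  # 0
```

The point the auditor cares about is that nothing in `_inventory` is ever deleted: revocation flips a flag, so the record of who held access, and when, survives for review.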
2. "How do you prevent an AI agent from taking actions beyond what the task requires?"
This maps to CC6.6 (logical and physical access restrictions) and CC7.1 (detection of anomalies). Auditors are asking for documented access policies that constrain what each category of agent can do, and evidence that those policies are enforced technically, not just by convention.
A policy document without enforcement evidence does not satisfy this control. Evidence could include: runtime policy engine configuration files, screenshots of denied requests in an access log, or a demonstration of the policy blocking an unauthorized action in a test environment.
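To make "enforced technically" concrete, here is a deliberately small sketch of a deny-by-default allowlist check of the kind a runtime policy engine performs. The agent categories, tool names, and `authorize` function are invented for illustration; real deployments would use an actual policy engine, but the shape of the evidence is the same:

```python
# Deny by default: each agent category gets an explicit allowlist of tools,
# and anything outside it is refused and written to a log that can be
# produced as audit evidence.
POLICY = {
    "support-bot": {"read_ticket", "post_reply"},
    "etl-agent":   {"read_table", "write_staging"},
}

denied_log: list[dict] = []

def authorize(agent_type: str, tool: str) -> bool:
    """Return True only if the tool is explicitly allowlisted."""
    allowed = tool in POLICY.get(agent_type, set())
    if not allowed:
        # The denial record itself is the enforcement evidence.
        denied_log.append({"agent_type": agent_type, "tool": tool})
    return allowed

print(authorize("support-bot", "read_ticket"))  # True
print(authorize("support-bot", "drop_table"))   # False, and logged
print(len(denied_log))                          # 1
```

Showing an auditor a denied request in `denied_log` (or its real-world equivalent) demonstrates that the policy document and the running system agree.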
3. "What would you find in the audit log if an AI agent exfiltrated data?"
This maps to CC7.2 (monitoring of the system for anomalies) and CC7.3 (response to security incidents). The auditor is checking whether your audit log is granular enough to reconstruct what an AI agent did and when, and whether you have alert rules that would have caught an anomalous pattern.
The right answer is a live demo: pull up the audit log for a specific agent session, show the auditor the sequence of tool calls, and point to the policy decision entries. If you cannot demonstrate this within a couple of minutes of being asked, the auditor notes a gap.
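That two-minute reconstruction only works if every log entry carries a session identifier and an ordering. A minimal sketch, with an invented log schema (`session`, `seq`, `tool`, `decision` fields are illustrative):

```python
# Structured audit log: each entry is tagged with the agent session it
# belongs to, so a single session can be reconstructed on demand.
AUDIT_LOG = [
    {"session": "s-42", "seq": 1, "event": "tool_call",
     "tool": "read_table", "decision": "allow"},
    {"session": "s-42", "seq": 2, "event": "tool_call",
     "tool": "export_csv", "decision": "deny"},
    {"session": "s-99", "seq": 1, "event": "tool_call",
     "tool": "post_reply", "decision": "allow"},
]

def reconstruct(session_id: str) -> list[str]:
    """Return the ordered tool calls and policy decisions for one session."""
    entries = sorted((e for e in AUDIT_LOG if e["session"] == session_id),
                     key=lambda e: e["seq"])
    return [f'{e["seq"]}: {e["tool"]} -> {e["decision"]}' for e in entries]

for line in reconstruct("s-42"):
    print(line)
# 1: read_table -> allow
# 2: export_csv -> deny
```

If your logs lack a stable session identifier linking tool calls to a specific agent, this query is impossible, and that is exactly the gap the auditor's question is probing.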
4. "How do you handle an AI agent that is compromised or acting anomalously?"
This maps to CC7.4 (response to security events) and A1.2 (availability and performance). Auditors want to see a defined incident response procedure for AI agents specifically: how an anomalous agent is detected, who is alerted, how the agent is terminated, and how its access is revoked immediately rather than left to expire on its own.
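One way to evidence such a procedure is a runbook expressed as code, so the detect-alert-terminate-revoke sequence is testable. The sketch below uses a crude rate-based detector and stubbed side effects; the threshold, agent names, and function signatures are all assumptions for illustration:

```python
from collections import Counter

CALLS_PER_WINDOW_LIMIT = 100  # illustrative threshold

def detect_anomalous(call_counts: Counter) -> list[str]:
    """Flag agents whose tool-call volume exceeds the per-window limit."""
    return [agent for agent, n in call_counts.items()
            if n > CALLS_PER_WINDOW_LIMIT]

def respond(agent_id: str, alert, terminate, revoke_credential) -> None:
    """Ordered response: page a human, stop the agent, then kill its access."""
    alert(f"anomalous agent: {agent_id}")
    terminate(agent_id)
    revoke_credential(agent_id)  # revoked now, not at scheduled expiry

# Usage with stub side effects standing in for real alerting and revocation:
events: list[str] = []
counts = Counter({"etl-agent-a1b2": 250, "support-bot-c3d4": 12})
for agent in detect_anomalous(counts):
    respond(agent,
            alert=lambda msg: events.append("ALERT " + msg),
            terminate=lambda a: events.append("KILL " + a),
            revoke_credential=lambda a: events.append("REVOKE " + a))
print(events)
```

The ordering matters: revoking the credential last in this sketch is a choice, and a real runbook should state explicitly whether revocation or termination happens first and why.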
The evidence bar for AI-related controls is currently lower than for traditional access controls, because auditors are still calibrating expectations; relatively basic, documented controls are being accepted.
That bar will rise. Audit firms are actively developing AI-specific control frameworks. The AICPA has published guidance on AI system trust services criteria. Teams that build the controls now, even at basic maturity, will be in a much better position than teams that wait until auditors start issuing qualified opinions on AI system controls.
Márton Sereg is CEO & Co-Founder of Riptides. Questions: hello@riptidesio.com