Technical writing from the Riptides team on AI agent security, runtime identity, and the evolving threat landscape for production AI workloads.
Most prompt injection defenses focus on filtering inputs. That addresses the symptom, not the cause. Fix the agent's permission boundary and the injection becomes irrelevant.
Zero-trust frameworks were designed around human users and fixed services. Applying them to AI agents means rethinking four core assumptions, spanning identity stability, delegation, and dynamic policy.
A 90-day API key rotation policy was designed for human developers. AI agents need credentials that match their lifetime — minutes to hours, not months.
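As a rough illustration of the idea, not a description of any specific product's implementation, a credential minted per agent task can carry its own expiry measured in minutes. The function names and the HMAC-signed token format below are purely hypothetical; a production system would use a real key-management service rather than an in-process secret.

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"demo-signing-key"  # illustration only; use a KMS-managed key in production


def mint_token(agent_id: str, ttl_seconds: int = 900) -> str:
    """Mint a credential whose lifetime matches the agent's task, not a rotation calendar."""
    claims = {"sub": agent_id, "exp": time.time() + ttl_seconds}
    body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return f"{body}.{sig}"


def verify_token(token: str) -> bool:
    """Reject tokens with a bad signature or an expiry in the past."""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    claims = json.loads(base64.urlsafe_b64decode(body))
    return time.time() < claims["exp"]


token = mint_token("agent-42", ttl_seconds=900)  # valid for 15 minutes
print(verify_token(token))                       # True while fresh
stale = mint_token("agent-42", ttl_seconds=-1)   # already expired
print(verify_token(stale))                       # False
```

The point of the sketch is the `exp` claim: when the credential dies with the task, a leaked key has a window of minutes rather than a rotation cycle of months.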
A step-by-step walkthrough of what a compromised AI agent does in the first 60 seconds, and which runtime controls stop each step before it completes.
Cooperating agents in multi-agent systems trust each other implicitly, and that implicit trust is an attack surface. Orchestration frameworks like LangGraph and AutoGen leave it wide open.
Scope an agent's permissions too narrowly and it fails on legitimate tasks; too broadly and you have no security boundary. Here is a practical framework for finding the right level for each agent type.
A log file an attacker can delete is not an audit trail. Here is what tamper-evident logging requires for AI agent workloads, from entry structure to storage architecture.
SOC 2 auditors are starting to treat AI agents as a distinct control domain. Four specific questions they are now asking, and what counts as sufficient evidence to satisfy each one.
Before you can detect anomalous agent behavior, you need a concrete baseline. Building that baseline for AI agents is harder than it sounds, and it matters more than for traditional services.
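One toy way to make "baseline" concrete: record which (tool, resource) actions an agent has historically taken, then flag anything outside that set. The event shape and field names here are entirely hypothetical, a sketch of the shape of the problem rather than a detection system.

```python
from collections import Counter


def build_baseline(history):
    """Baseline = how often this agent has performed each (tool, resource) action."""
    return Counter((e["tool"], e["resource"]) for e in history)


def is_anomalous(baseline, event, min_seen=1):
    """Flag any action the agent has never (or too rarely) taken before."""
    return baseline[(event["tool"], event["resource"])] < min_seen


# Hypothetical action history for one agent.
history = [
    {"tool": "sql.read", "resource": "orders"},
    {"tool": "sql.read", "resource": "orders"},
    {"tool": "http.get", "resource": "api.internal/status"},
]
baseline = build_baseline(history)

print(is_anomalous(baseline, {"tool": "sql.read", "resource": "orders"}))       # False
print(is_anomalous(baseline, {"tool": "sql.export", "resource": "customers"}))  # True
```

Real agent behavior is far less stable than this sketch assumes, which is exactly why building the baseline is the hard part.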
Rate limiting was built for high-volume bots. An AI agent can cause a significant breach while staying well within rate limits. The controls that actually work are different.