Blog

Technical writing from the Riptides team on AI agent security, runtime identity, and the evolving threat landscape for production AI workloads.

Prompt Injection Is an Identity Problem
April 2, 2026

Most prompt injection defenses focus on filtering inputs. That addresses the symptom, not the cause. Fix the agent's permission boundary and the injection becomes irrelevant.

Read More →
Zero-Trust for AI: What the Architecture Actually Requires
March 20, 2026

Zero-trust frameworks were designed around human users and fixed services. Applying them to AI agents requires rethinking four core assumptions about identity stability, delegation, and dynamic policy.

Read More →
Credential Rotation for AI Agents: Why 90 Days Is Too Long
March 7, 2026

A 90-day API key rotation policy was designed for human developers. AI agents need credentials that match their lifetime — minutes to hours, not months.

Read More →
What a Compromised LLM Agent Actually Does on Your Network
February 22, 2026

A step-by-step walkthrough of what a compromised AI agent does in the first 60 seconds, and which runtime controls stop each step before it completes.

Read More →
Agent-to-Agent Trust: The Problem Nobody Is Talking About
February 8, 2026

Cooperating agents in multi-agent systems trust each other implicitly. That implicit trust is an attack surface, and orchestration frameworks like LangGraph and AutoGen leave it wide open.

Read More →
How to Scope AI Agent Permissions Without Breaking Your Pipeline
January 25, 2026

Too narrow and agents fail on legitimate tasks. Too broad and you have no security boundary. Here is a practical framework for finding the right level for each agent type.

Read More →
Building Tamper-Evident AI Agent Audit Trails
January 10, 2026

A log file an attacker can delete is not an audit trail. Here is what tamper-evident logging requires for AI agent workloads, from entry structure to storage architecture.

Read More →
The SOC 2 Auditor's New Questions About AI
December 18, 2025

SOC 2 auditors are starting to treat AI agents as a distinct control domain. Here are four specific questions they are now asking, and what counts as sufficient evidence to satisfy each one.

Read More →
Behavioral Anomaly Detection for AI Agents: What Normal Looks Like
December 4, 2025

Before you can detect anomalous agent behavior, you need a concrete baseline. Building that baseline for AI agents is harder than it sounds, and it matters more than for traditional services.

Read More →
API Security for AI Agents: Beyond Rate Limiting
November 20, 2025

Rate limiting was built for high-volume bots. An AI agent can cause a significant breach while staying well within rate limits. The controls that actually work are different.

Read More →