The Agent Layer

AI Agent Security Research — documenting real-world agentic breaches, attack techniques, and defensive frameworks as the field develops in real time.

The Liability Gap: When AI Agents Act, Who’s Responsible?

There’s a liability gap forming in AI agent security, and the industry isn’t talking about it clearly enough. The gap isn’t primarily technical. The attack techniques are understood — prompt injection, tool misuse, over-permissioned agents acting on adversarial instructions. What isn’t understood is the legal and organizational question underneath: when an AI agent acts autonomously on injected instructions and causes real damage, who owns that outcome? The honest answer right now is: nobody knows. There’s no legal precedent. The terms of service are written to disclaim everything. And the regulatory frameworks that might eventually clarify this haven’t arrived yet. ...

March 11, 2026 · 7 min · Austin

Clinejection: How a GitHub Issue Compromised Cline’s Entire NPM Supply Chain

Breach Catalog — Entry #001. Source: Simon Willison’s Blog via Adnan Khan. Incident date: March 2026. A developer opened a GitHub issue against Cline — a popular AI coding assistant — and by the time it was over, an attacker had published a malicious version of the package, one with over a million weekly downloads, to NPM. The root cause wasn’t a zero-day. It wasn’t a credential leak. It was an AI agent reading a GitHub issue title and doing exactly what it was told. ...

March 9, 2026 · 8 min · Austin