Writing
Notes on AI agent security, LangGraph hardening, and the parts of building with LLMs that don't get enough attention.
What I wish someone had told me about securing LangGraph agents before I started building them.
State and memory security for LangGraph agents — trust-tiered schemas, immutable trusted context, encrypted checkpoints with integrity verification, namespaced long-term memory, and cross-session isolation.
Three terms that emerged as LLM chats grew into agents — what each one means, when it applies, and how the workflow shifts from prompt engineering to harness engineering.
Tool security for LangGraph agents — least privilege, parameterized interfaces, SSRF and path-traversal protection, sandboxing, output validation, and the anti-patterns to actively avoid.
A Claude agent deleted PocketOS's production database in nine seconds. The fix isn't a better system prompt — it's a deterministic gate between the agent's tool-call decision and the API actually firing.
Input validation for LangGraph agents — why the classical framing breaks down on natural language, and what actually works across user, retrieval, and state channels.
Agents, scripts with LLMs, and intent-based bots look similar from the outside. The real difference is where control lives — and what that costs you.
Adapting STRIDE for LangGraph agents — where standard threat modeling breaks down, and how to produce a model that ships controls instead of artifacts.
Seven threat categories attackers use against LangGraph agents — how they work mechanically, and what to look for.
Understanding LangGraph's architecture before you try to secure it.
A quick tour of the major agent frameworks — what they give you, what they cost, and when to skip them.
Every channel through which adversarial content can reach a LangGraph agent — thirteen attack surfaces, mapped.
Openclaw shows what AI agents can do, but it also exposes the rough edges and security gaps that still need addressing.
Claude Code’s source code was leaked via a source map file. Here’s what happened.
AI has turned everybody into managers. Here's how to stay effective.
LiteLLM was compromised in a supply chain attack. Here's what happened and what it means.
Vibe coding a podcast player got me thinking about the future of software and ads.