James Fan
Researching how to make LLM agents safe to deploy, with a focus on security gaps in agent frameworks such as LangGraph.
Previously cofounded two AI startups, led the Google Cloud Speech Group, taught at Columbia University, and was one of the main inventors of the IBM Watson question answering system that beat the best human contestants on Jeopardy! Now mostly thinking about what happens when you give an AI agent access to real tools.
State and memory security for LangGraph agents — trust-tiered schemas, immutable trusted context, encrypted checkpoints with integrity verification, namespaced long-term memory, and cross-session isolation.
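Two of the ideas above, an immutable trusted context and checkpoint integrity verification, can be illustrated with a minimal stdlib sketch. This is a hypothetical illustration, not LangGraph's actual checkpointer API: the state layout, the `make_state`/`checkpoint`/`restore` names, and the HMAC scheme are assumptions for demonstration, and encryption is omitted so the example covers integrity only.

```python
import hashlib
import hmac
import json
from types import MappingProxyType

# Trust-tiered state (illustrative, not a LangGraph schema): trusted
# fields (system instructions, tool policy) are frozen behind a
# read-only mapping; untrusted fields (tool outputs, retrieved text)
# stay mutable.
def make_state(trusted: dict, untrusted: dict) -> dict:
    return {
        "trusted": MappingProxyType(dict(trusted)),  # writes raise TypeError
        "untrusted": dict(untrusted),
    }

# Checkpoint integrity: sign the serialized state with an HMAC so a
# tampered checkpoint is rejected on restore. A real design would also
# encrypt the payload; that step is omitted here for brevity.
def checkpoint(state: dict, key: bytes) -> dict:
    payload = json.dumps(
        {"trusted": dict(state["trusted"]), "untrusted": state["untrusted"]},
        sort_keys=True,
    ).encode()
    tag = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return {"payload": payload.decode(), "tag": tag}

def restore(ckpt: dict, key: bytes) -> dict:
    expected = hmac.new(key, ckpt["payload"].encode(), hashlib.sha256).hexdigest()
    # Constant-time comparison; any byte flipped in the payload fails here.
    if not hmac.compare_digest(expected, ckpt["tag"]):
        raise ValueError("checkpoint integrity check failed")
    data = json.loads(ckpt["payload"])
    return make_state(data["trusted"], data["untrusted"])
```

The design point the sketch captures: agent code can freely update `state["untrusted"]`, but any attempt to overwrite the trusted tier raises at write time, and any checkpoint modified at rest fails verification before it ever reaches the agent.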
Three terms that emerged as LLM chats grew into agents — what each one means, when it applies, and how the workflow shifts from prompt engineering to harness engineering.