AI Agent vs. Scripts vs. Intent-based Bots

Recently, someone asked whether AI agents are just another name for scripts that call an LLM, which made me think: what are AI agents, and how do they differ from scripts with LLM calls or intent-based bots?

What are Agents, Scripts and Intent-based Bot Building Platforms?

Before we discuss the differences, let’s define them:

  • AI Agents: Autonomous, goal-directed systems that plan, act, and adapt using LLMs as a reasoning engine.
  • Scripts with LLM: Deterministic code that calls an LLM API at defined steps.
  • Intent-based bots: Structured dialogue platforms (e.g. Dialogflow, LUIS, Rasa) that use LLMs for intent classification to build bots that follow a predefined flow.

What are the differences?

The fundamental distinction is where control lives. In intent-based platforms, the developer defines every possible state upfront and the LLM only classifies. In scripts, the developer writes the program and the LLM is a smart subroutine.
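To make "the LLM is a smart subroutine" concrete, here is a minimal sketch of a script with an LLM. The `call_llm` function and the ticket-summarization task are hypothetical stand-ins (the post names no specific API); the point is that the developer hard-codes every step and the model is invoked at exactly one fixed place.

```python
# A script with an LLM: the developer hard-codes the control flow;
# the model is called at one fixed step, like any other subroutine.

def call_llm(prompt: str) -> str:
    """Stand-in for a real LLM API call (stubbed for illustration)."""
    return f"SUMMARY OF: {prompt[:30]}"

def summarize_ticket(ticket_text: str) -> str:
    # Step 1: deterministic preprocessing
    cleaned = ticket_text.strip()
    # Step 2: the one and only LLM call, at a point the developer chose
    summary = call_llm(cleaned)
    # Step 3: deterministic postprocessing
    return summary.lower()

print(summarize_ticket("  Printer on floor 3 is jammed again.  "))
```

No matter what the model returns, the program always runs the same three steps in the same order; the LLM never chooses what happens next.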

In agents, the LLM itself decides what to do next: the developer only sets a goal and provides tools. This is a fundamental paradigm shift. The developer no longer has explicit control over the flow of execution; the agent decides the flow on its own. This makes agents more flexible and more autonomous, but also less predictable and harder to reason about. Agents can take on open-ended, loosely defined tasks, such as "research competitors, write a summary, and save it to Drive."

How to develop agents?

For the longest time, writing code has been like writing a step-by-step cooking recipe in a language the computer understands; the goal is to have the computer follow the recipe to automate a specific workflow. If agents no longer have a fixed workflow, how do we implement them? It's actually surprisingly straightforward. An agent basically does the following in a loop:

  • Query the model
  • If the response contains any tool calls, execute the tools, append the tool results, and query the model again; otherwise, return the answer
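The loop above can be sketched in a few lines of Python. This is a minimal illustration with a stubbed model and one hypothetical tool (`get_weather`); no specific LLM SDK or message format is assumed, and a real implementation would also cap the number of iterations.

```python
# Minimal agent loop: the model decides the next step; the code just
# executes tool calls and feeds results back until the model is done.

def get_weather(city: str) -> str:
    """A hypothetical tool the agent may call."""
    return f"Sunny in {city}"

TOOLS = {"get_weather": get_weather}

def query_model(messages):
    """Stand-in for a real LLM API call (stubbed for illustration).
    Returns either a tool call or a final answer."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool_call": {"name": "get_weather", "args": {"city": "Paris"}}}
    return {"content": "It is sunny in Paris."}

def run_agent(user_input: str) -> str:
    messages = [{"role": "user", "content": user_input}]
    while True:
        reply = query_model(messages)                  # 1. query the model
        call = reply.get("tool_call")
        if call is None:                               # no tool call: done
            return reply["content"]
        result = TOOLS[call["name"]](**call["args"])   # 2. run the tool
        messages.append({"role": "tool", "content": result})  # 3. append result

print(run_agent("What's the weather in Paris?"))
```

Note that nothing in `run_agent` encodes a workflow: which tool runs, in what order, and when to stop are all decided by the model's replies.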

The model decides what the next step should be based on the user's input; it's just a matter of giving the model enough tools to carry out different functions. This also gives rise to a host of security issues, since an adversary can trick the agent/LLM into completely unexpected behaviors. See my blog series on agent security for more details.