The ReAct Loop: Think, Act, Observe
The difference between a chatbot and an agent is a loop. Understanding the ReAct loop — Reason, Act, Observe, repeat — is understanding what makes modern AI capable of taking real action in the world instead of just generating text about it.
The Problem with Pure LLMs
An LLM by itself can only produce text. It can't browse the web, run code, read your files, or call APIs. It's a sophisticated text predictor with no hands — limited to working with whatever information was in its training data or in the current context.
The solution is to give the model the ability to request actions — tool calls — that the surrounding system executes and reports back. This is what transforms a language model into an agent.
The ReAct Framework
ReAct (Reasoning + Acting) is the foundational agent pattern. The loop has four steps:

Think — the model reasons about what it needs to do.
Act — it calls a tool (web search, code execution, file read, API call).
Observe — it receives the result.
Repeat — until the task is complete or it can answer directly from what it has learned.
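The four steps can be sketched as a plain loop. Everything below (the model stub, the message format, the tool registry) is an illustrative stand-in under assumed names, not any particular framework's API:

```python
# Minimal ReAct loop sketch. The model stub, message format, and tool
# registry are hypothetical stand-ins, not a specific framework's API.

def run_agent(task, model, tools, max_steps=10):
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        step = model(messages)                    # Think: model decides the next move
        if step["type"] == "answer":
            return step["text"]                   # Done: answer directly
        result = tools[step["tool"]](step["input"])           # Act: run the tool
        messages.append({"role": "tool", "content": result})  # Observe: feed result back
    raise RuntimeError("agent did not finish within max_steps")

# Deterministic stand-in for an LLM call: requests one search, then answers.
def fake_model(messages):
    if not any(m["role"] == "tool" for m in messages):
        return {"type": "tool_call", "tool": "search", "input": "price of X"}
    return {"type": "answer", "text": "Answer based on: " + messages[-1]["content"]}

tools = {"search": lambda query: "search result for " + query}
print(run_agent("What is the price of X?", fake_model, tools))
```

Note that real implementations add retries, timeouts, and a step budget (as `max_steps` hints at), which is most of what the frameworks layer on top.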
That's the entire architecture. Every major AI framework — LangChain, CrewAI, Claude Code, AutoGPT — is a wrapper around this loop with robustness, tooling, and memory built on top. Learn the loop once and every new framework you encounter becomes easy to understand.
What Makes an Agent
An agent has three components: an LLM (the brain), tools (the hands), and a reasoning loop (the process that connects them). Remove any one and you no longer have an agent.
Tools are just functions with metadata: a name, a description, and an input schema. The model reads these descriptions to decide when and how to call each tool. This is why tool descriptions matter so much — they're the interface between the model's reasoning and the capabilities it has access to.
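As a concrete sketch, a tool spec might look like the following. The layout is a generic JSON-Schema-style shape and the tool itself is hypothetical; real provider APIs differ in the details:

```python
# A tool is a plain function plus metadata the model reads to decide
# when and how to call it. This spec layout is illustrative, not a real API.

def get_weather(city):
    # Stub implementation; a real tool would call a weather API here.
    return f"Forecast for {city}: sunny, 22C"

get_weather_tool = {
    "name": "get_weather",
    "description": (
        "Get the current weather forecast for a city. Use when the user "
        "asks about present weather conditions, not historical climate data."
    ),
    "input_schema": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name, e.g. 'Berlin'"},
        },
        "required": ["city"],
    },
    "function": get_weather,
}
```

The model only ever sees the name, description, and schema; the function itself runs on your side of the loop.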
Tool Descriptions Are Everything
The most important part of a tool isn't its code — it's the description that tells the model when to use it. 'Searches the web' is useless. 'Search the web for current information not available in training data, such as recent events, current prices, or real-time data' tells the model exactly when to reach for the tool versus when to answer directly.
If an agent keeps selecting the wrong tool, rewrite the descriptions before you change anything else. Think of writing a tool description like onboarding a coworker: describe the situation that triggers using this tool, not just what the tool does.
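To make the rewrite concrete, here are two descriptions for the same hypothetical search tool. Both strings are illustrative; only the second gives the model a trigger condition rather than a capability statement:

```python
# Same tool, two descriptions. Only the second tells the model *when*
# to reach for the tool. Both strings are illustrative examples.

vague_description = "Searches the web."

rewritten_description = (
    "Search the web for current information not available in training data, "
    "such as recent events, current prices, or real-time data. Do not use "
    "for stable general knowledge you can answer directly."
)
```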
When Not to Build an Agent
Agents add token cost, complexity, and failure modes. A single loop iteration typically costs 3–5× as much as a simple prompt call. That overhead has to buy a real capability gap over a well-designed single prompt.
Use a single LLM call for extraction, classification, generation, and summarization. Use an agent when the task requires multi-step decisions where the AI needs to reason about what to call and when — scheduling across multiple calendar APIs, researching a topic across multiple sources, debugging a system by reading logs and running tests.