ReAct
A prompting framework that interleaves reasoning traces and action calls in a single LLM output, enabling agents to think through a problem step by step while grounding that reasoning in real-world observations.
Definition
ReAct (Reasoning + Acting) is a prompting paradigm introduced in the 2022 paper ReAct: Synergizing Reasoning and Acting in Language Models (Yao et al.). It structures the agent's output as an alternating sequence of:
- Thought — the model's internal reasoning about the current situation and what to do.
- Action — a specific tool call or operation to execute.
- Observation — the result returned by the tool or environment.
This cycle repeats until the agent reaches a final answer.
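The Thought/Action/Observation cycle above can be sketched as a simple control loop. This is a minimal illustration, not the paper's implementation: `call_llm`, the `search_web` stub, and the `Action: tool("arg")` syntax are all assumptions chosen to match the example trace below.

```python
import re

def search_web(query: str) -> str:
    # Stub tool: a real agent would call a search API here.
    return "$2.50 per 1M input tokens, $10 per 1M output tokens."

# Hypothetical tool registry mapping action names to callables.
TOOLS = {"search_web": search_web}

# Matches an action call of the assumed form: Action: tool_name("argument")
ACTION_RE = re.compile(r'Action:\s*(\w+)\("([^"]*)"\)')

def react_loop(prompt: str, call_llm, max_steps: int = 5) -> str:
    """Run the ReAct cycle: generate, execute the action, append the
    observation, and repeat until the model emits a Final Answer."""
    transcript = prompt
    for _ in range(max_steps):
        segment = call_llm(transcript)  # Thought + Action, or Final Answer
        transcript += "\n" + segment
        if "Final Answer:" in segment:
            return segment.split("Final Answer:", 1)[1].strip()
        match = ACTION_RE.search(segment)
        if match:
            tool_name, arg = match.groups()
            observation = TOOLS[tool_name](arg)  # execute the action
            transcript += f"\nObservation: {observation}"
    return "No answer within step budget."
```

The key design point is that the loop, not the model, executes actions: the model only emits text, and the harness feeds each observation back into the growing transcript.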
Example ReAct Trace
Thought: I need to find the current price of GPT-4o API calls.
Action: search_web("OpenAI GPT-4o API pricing 2025")
Observation: $2.50 per 1M input tokens, $10 per 1M output tokens.
Thought: I have the information I need.
Final Answer: GPT-4o is priced at $2.50/1M input tokens and $10/1M output tokens.
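Because the trace uses fixed line prefixes, it can be parsed mechanically into structured steps, which is part of what makes ReAct traces auditable. The prefix format here is an assumption matching the example above, not a fixed standard.

```python
import re

# Matches the assumed line prefixes used in the example trace.
LINE_RE = re.compile(r'^(Thought|Action|Observation|Final Answer):\s*(.*)$')

def parse_trace(trace: str) -> list[tuple[str, str]]:
    """Split a ReAct trace into (step_type, content) pairs."""
    steps = []
    for line in trace.strip().splitlines():
        match = LINE_RE.match(line.strip())
        if match:
            steps.append((match.group(1), match.group(2)))
    return steps

trace = '''Thought: I need to find the current price of GPT-4o API calls.
Action: search_web("OpenAI GPT-4o API pricing 2025")
Observation: $2.50 per 1M input tokens, $10 per 1M output tokens.
Thought: I have the information I need.
Final Answer: GPT-4o is priced at $2.50/1M input tokens.'''

steps = parse_trace(trace)
```

A parsed trace like this can be logged, replayed, or checked step by step when debugging an agent.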
Why It Matters
Before ReAct, reasoning (chain-of-thought) and acting (tool calls) were treated as separate prompting strategies. ReAct's insight is that they are more powerful when interleaved: reasoning grounds the choice of action, and observations from actions correct and refine subsequent reasoning.
ReAct traces also significantly improve interpretability — each step of the agent's thought process is visible and auditable.
Limitations
ReAct can be verbose and token-expensive, and the reasoning quality is bounded by the underlying model. For more complex planning tasks, extensions like Tree of Thoughts or specialized planning algorithms are often combined with ReAct-style execution.