The Reasoning Loop: Plan -> Execute -> Synthesize
The "reasoning loop" is an advanced orchestration pattern designed to improve the reliability of Large Language Models (LLMs), especially smaller, local models, in executing complex, multi-step tasks.
The core idea is to break down a user's request into three distinct, manageable phases, managed by the application code (the "Orchestrator"). This shifts the responsibility from the LLM having to figure everything out at once to the code guiding the LLM through a structured process.
The Standard Approach (Single-Shot)
The conventional method for tool-use involves a single call to the LLM:
- Ask: The orchestrator takes the user's request (e.g., "analyze this project") and sends it to the LLM, along with a list of available tools.
- Hope: The orchestrator then relies on the LLM's intrinsic capabilities to:
  - Understand the user's ultimate goal.
  - Formulate an internal plan.
  - Decide which tools to use.
  - Call them in the correct sequence.
  - Synthesize the results into a final, user-facing answer.
This works well for highly capable foundation models but often fails with smaller models, which may only execute one tool before stopping or losing track of the overall objective.
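In code, the single-shot approach amounts to one open-ended prompt and a dispatch loop that executes whatever the model decides to do next. A minimal sketch, where `callLLM` is a hypothetical stand-in for a real chat-completion client (not part of any specific library):

```typescript
// A model turn is either a tool call or a final answer.
type LLMTurn =
  | { type: "toolCall"; tool: string; toolInput: unknown }
  | { type: "answer"; text: string };

function runSingleShot(
  userRequest: string,
  callLLM: (messages: string[]) => LLMTurn,
  tools: Record<string, (input: unknown) => string>
): string {
  const messages = [`User: ${userRequest}`];
  // The orchestrator just loops; all planning happens inside the model.
  // A smaller model may answer after one tool call and never finish the task.
  for (let step = 0; step < 10; step++) {
    const turn = callLLM(messages);
    if (turn.type === "answer") return turn.text; // model decided it is done
    const result = tools[turn.tool](turn.toolInput);
    messages.push(`Tool ${turn.tool} returned: ${result}`);
  }
  throw new Error("Model never produced a final answer");
}
```

Everything hinges on the model's judgment inside that loop, which is exactly what the reasoning loop removes.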
The Reasoning Loop Approach
The reasoning loop explicitly separates the process into three phases, with the orchestrator managing each one.
Phase 1: The "Plan" Step
The orchestrator's first job is to get a structured plan from the LLM.
- Orchestrator's Role (work.ts): Call the LLM with a specific, narrow instruction.
- Prompt: "You are a planning module. Based on the user's request, generate a JSON array of the tool calls needed to accomplish the goal. Do not execute the tools. Do not write a summary. Just return the plan."
- LLM's Role: The LLM's only task is to think and create a structured plan.
- LLM Output (Example):
  [
    { "tool": "listDirectory", "toolInput": { "directory": "." } },
    { "tool": "readFile", "toolInput": { "fileName": "package.json" } },
    { "tool": "readFile", "toolInput": { "fileName": "README.md" } }
  ]
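The plan step can be sketched as a single narrow LLM call plus defensive JSON parsing. `callLLM` is again a hypothetical client; the regex tolerates the extra prose that small models often wrap around their JSON:

```typescript
interface PlannedCall {
  tool: string;
  toolInput: Record<string, unknown>;
}

const PLAN_PROMPT =
  "You are a planning module. Based on the user's request, generate a " +
  "JSON array of the tool calls needed to accomplish the goal. " +
  "Do not execute the tools. Do not write a summary. Just return the plan.";

function parsePlan(raw: string): PlannedCall[] {
  // Small models sometimes wrap the JSON in chatter; grab the first array.
  const match = raw.match(/\[[\s\S]*\]/);
  if (!match) throw new Error("No JSON array found in plan response");
  const plan = JSON.parse(match[0]) as PlannedCall[];
  for (const call of plan) {
    if (typeof call.tool !== "string" || typeof call.toolInput !== "object") {
      throw new Error(`Malformed plan entry: ${JSON.stringify(call)}`);
    }
  }
  return plan;
}

function getPlan(
  userRequest: string,
  callLLM: (prompt: string) => string
): PlannedCall[] {
  return parsePlan(callLLM(`${PLAN_PROMPT}\n\nUser request: ${userRequest}`));
}
```

Failing loudly on a malformed plan is deliberate: a bad plan caught here is far cheaper than one discovered mid-execution.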
Phase 2: The "Execute" Step
This phase is handled entirely by the application code. The LLM is not involved.
- Orchestrator's Role (work.ts):
  - Receive and parse the JSON plan from the LLM.
  - Iterate through the plan, executing each tool call in sequence. For example, it calls the actual listDirectory() function, then readFile(), etc.
  - Collect the output from each tool call (e.g., the directory listing, the file contents) into a results object.
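The execute step is plain application code: map each tool name in the plan to a real function, run them in order, and collect the outputs. A minimal sketch (tool names and signatures are illustrative):

```typescript
type Tool = (input: Record<string, unknown>) => string;

function executePlan(
  plan: { tool: string; toolInput: Record<string, unknown> }[],
  tools: Record<string, Tool>
): { tool: string; output: string }[] {
  const results: { tool: string; output: string }[] = [];
  for (const step of plan) {
    const fn = tools[step.tool];
    // The plan came from an LLM, so never assume the tool name is real.
    if (!fn) throw new Error(`Unknown tool in plan: ${step.tool}`);
    // Keep each output paired with its tool name for the synthesis prompt.
    results.push({ tool: step.tool, output: fn(step.toolInput) });
  }
  return results;
}
```

No LLM call appears anywhere in this phase, which is the whole point: execution is deterministic and fully observable.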
Phase 3: The "Synthesize" Step
With all the necessary information gathered, the orchestrator goes back to the LLM for the final step.
- Orchestrator's Role (work.ts): Make a second LLM call with a different, focused instruction.
- Prompt: "You are a summarization module. The user asked to 'analyze the project'. I have executed the necessary tools and gathered the following information: [paste the directory listing result], [paste the package.json content], [paste the README.md content]. Based only on this provided information, provide a comprehensive, natural-language answer to the user's original request."
- LLM's Role: The LLM's task is now much simpler and less prone to error. It doesn't need to remember tools or a plan; it only needs to process the provided context and generate a user-friendly summary.
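The synthesis step can be sketched as assembling the gathered results into one focused prompt. `callLLM` is the same hypothetical client as before:

```typescript
function synthesize(
  userRequest: string,
  results: { tool: string; output: string }[],
  callLLM: (prompt: string) => string
): string {
  // Label each tool's output so the model can attribute facts to sources.
  const context = results
    .map((r) => `--- Output of ${r.tool} ---\n${r.output}`)
    .join("\n\n");
  const prompt =
    `You are a summarization module. The user asked: "${userRequest}". ` +
    `I have executed the necessary tools and gathered the following information:\n\n` +
    `${context}\n\n` +
    `Based only on this provided information, provide a comprehensive, ` +
    `natural-language answer to the user's original request.`;
  return callLLM(prompt);
}
```

The "based only on this provided information" constraint matters: it discourages the model from hallucinating details beyond what the tools actually returned.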
Key Advantages of the Reasoning Loop
- Increased Reliability: It dramatically improves the success rate for smaller, less capable LLMs by breaking a complex problem into simple, single-purpose tasks (Plan, then Synthesize).
- Improved Debugging: If the process fails, it's easy to pinpoint the source of the error. Did the LLM produce a bad plan? Or did it fail to summarize the results correctly? This allows for targeted prompt-tuning.
- Greater Control and Extensibility: The orchestrator has full control over the execution phase. This allows for:
  - Validation: Checking the plan before execution.
  - User Confirmation: Adding a confirmation step before executing destructive operations ("The AI plans to delete 3 files. Proceed? y/n").
  - Enrichment: Adding extra information to the context before the synthesis step.
- Reduced Cognitive Load: The LLM is not required to hold a complex state in its context window for a long time. Each call is targeted and has a limited scope.
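The validation and confirmation hooks above are easy to add because the orchestrator sees the whole plan before running it. A sketch of a pre-execution review, where the tool allow-list and the "destructive" set are illustrative assumptions:

```typescript
// Illustrative tool sets; a real orchestrator would derive these from
// its actual tool registry.
const ALLOWED_TOOLS = new Set(["listDirectory", "readFile", "deleteFile"]);
const DESTRUCTIVE_TOOLS = new Set(["deleteFile"]);

function reviewPlan(plan: { tool: string }[]): {
  valid: boolean;
  needsConfirmation: boolean;
  errors: string[];
} {
  // Reject plans that reference tools we never offered the model.
  const errors = plan
    .filter((step) => !ALLOWED_TOOLS.has(step.tool))
    .map((step) => `Unknown tool: ${step.tool}`);
  // Flag plans that touch destructive tools for a "Proceed? y/n" prompt.
  const needsConfirmation = plan.some((s) => DESTRUCTIVE_TOOLS.has(s.tool));
  return { valid: errors.length === 0, needsConfirmation, errors };
}
```

None of this is possible in the single-shot approach, where the model executes its own plan before the application ever sees it.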