Add parallel course correction agent with forced tool call
This is a complex feature request to add a "course correction" agent that runs in parallel to the main thread. Let me break down what's needed:
- A new tool called `course_correct` with a boolean field and an optional message
- Logic to trigger forced tool calls using Gemini (similar to the handoff logic)
- A system prompt for the course correction agent
- Logic to inject user messages based on the result
Let me start by understanding the codebase structure and finding relevant files.
I'll help you add a course correction agent. Let me first explore the codebase to understand the relevant patterns.
Perfect! Now I have all the information I need. Let me provide a comprehensive answer with all the relevant files and patterns:
Summary
I found the handoff logic and tool forcing mechanisms used in Amp. Here are the key files and code patterns:
Core Handoff Implementation Files:
`/Users/nicolaygerold/code/amp/core/src/threads/thread-handoff.ts` - Main handoff logic
- `createHandoffPrompt()` - Generates the handoff prompt by calling the model with a forced tool
- `callModelForHandoff()` - Calls the model with `HANDOFF_TOOL` forced via `runToolWithModel()`
- `HANDOFF_TOOL` definition - Tool spec for extracting relevant context and files
- Pattern: Uses `runToolWithModel()` to force a specific tool call

`/Users/nicolaygerold/code/amp/core/src/plan/handoff.ts` - Plan execution handoff
- `validatePlanForHandoff()` - Validates that the plan is ready
- `createPlanExecutionPrompt()` - Generates the prompt for the plan execution thread
- Pattern: Converts markdown file links to @ mentions for context

`/Users/nicolaygerold/code/amp/core/src/inference/run-tool-with-model.ts` - Unified tool forcing interface
- `runToolWithModel()` - Main entry point that forces a specific tool call
- Dispatches to provider-specific implementations
- Pattern: Only returns when the specified tool is called
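The dispatch pattern described above, where one entry point maps a tool spec onto each provider's forcing mechanism, can be sketched roughly like this. The names and shapes (`ToolSpec`, `forcedToolChoice`) are illustrative stand-ins, not Amp's actual API:

```typescript
// Hypothetical sketch of a unified "force this tool" dispatcher.
interface ToolSpec {
  name: string;
  description: string;
  inputSchema: object;
}

type Provider = 'anthropic' | 'openai' | 'gemini';

// Build the provider-specific request option that forces a tool call.
function forcedToolChoice(provider: Provider, toolSpec: ToolSpec): Record<string, any> {
  switch (provider) {
    case 'anthropic':
      return { tool_choice: { type: 'tool', name: toolSpec.name } };
    case 'openai':
      return { toolChoice: { type: 'function', function: { name: toolSpec.name } } };
    case 'gemini':
      // Gemini's ANY mode forces *some* tool call rather than a specific one,
      // so the caller passes only the single tool it wants forced.
      return { toolConfig: { functionCallingConfig: { mode: 'ANY' } } };
  }
}
```

The caller would then retry or validate until the named tool is actually returned, matching the "only returns when the specified tool is called" contract.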
Key Tool Forcing Patterns:
1. Anthropic (line 271 in run-tool-with-model.ts):

```typescript
tool_choice: { type: 'tool', name: toolSpec.name }
```

Forces Claude to use a specific tool by setting the `tool_choice` parameter.
2. XAI/OpenAI style (line 460 in run-tool-with-model.ts):

```typescript
{ type: 'function', function: { name: toolSpec.name } }
```

Passed as the `toolChoice` parameter to force tool selection.
3. Gemini (line 194 in run-tool-with-model.ts):

```typescript
toolConfig: {
  functionCallingConfig: {
    mode: FunctionCallingConfigMode.ANY,
  },
}
```

Uses Gemini's `FunctionCallingConfigMode.ANY`, which forces the model to call one of the provided tools (rather than merely allowing it to).
4. OpenAI Responses API (lines 362-374):
Uses JSON schema format with strict validation to constrain model output.
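For the Responses API path, the constraint is a strict JSON schema rather than a `tool_choice` flag. A sketch of that general shape, using the `course_correct` fields from the original request (the `name`, property names, and nesting here are illustrative, not copied from Amp):

```typescript
// Illustrative strict-schema constraint for a course_correct-style output.
// With strict validation, every listed property must appear in `required`
// and additionalProperties must be false.
const responseFormat = {
  type: 'json_schema',
  name: 'course_correct',
  strict: true,
  schema: {
    type: 'object',
    properties: {
      needsCorrection: { type: 'boolean' },
      message: { type: 'string' },
    },
    required: ['needsCorrection', 'message'],
    additionalProperties: false,
  },
};
```

An "optional" message under strict mode is usually modeled as a required-but-nullable field rather than an omitted one.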
Handoff Tool Specification (thread-handoff.ts, lines 418-461):
The `HANDOFF_TOOL` forces extraction of:
- `relevantInformation` - Context from the conversation, written in first person
- `relevantFiles` - Workspace-relative file paths (up to 20 items)
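The real `HANDOFF_TOOL` definition lives at the lines cited above; as a rough reconstruction of its shape (field descriptions and the `maxItems` placement are inferred from the summary, not copied from the source):

```typescript
// Rough reconstruction of a handoff-style tool spec.
const HANDOFF_TOOL_SKETCH = {
  name: 'handoff',
  description: 'Extract the context needed to continue this work in a new thread.',
  inputSchema: {
    type: 'object',
    properties: {
      relevantInformation: {
        type: 'string',
        description: 'Context from the conversation, written in first person.',
      },
      relevantFiles: {
        type: 'array',
        items: { type: 'string' },
        maxItems: 20, // workspace-relative paths, capped at 20
        description: 'Workspace-relative file paths.',
      },
    },
    required: ['relevantInformation', 'relevantFiles'],
  },
};
```

A `course_correct` tool would follow the same template with a boolean field and an optional message instead.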
SubAgent Tool Forcing Pattern:
`/Users/nicolaygerold/code/amp/core/src/tools/sub-agent.ts` - Shows how subagents run tools:
- `runInference()` (lines 172-205) - Calls `runAnthropicSync()` with tools
- Loop pattern (lines 106-158): Model → tool calls → execute tools → add results → repeat
- No explicit `tool_choice` forcing; the model is free to call tools naturally
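The model → tool calls → execute → add results → repeat loop can be sketched as follows. `runModel` and `executeTool` are stand-ins for the real inference and tool layers, and the message shapes are simplified:

```typescript
// Minimal sketch of the subagent loop: run the model, execute any tool
// calls it returns, feed results back, and stop when no tools are called.
type ToolCall = { id: string; name: string; input: unknown };
type ModelTurn = { text: string; toolCalls: ToolCall[] };

async function subAgentLoop(
  runModel: (messages: unknown[]) => Promise<ModelTurn>,
  executeTool: (call: ToolCall) => Promise<string>,
  messages: unknown[],
  maxTurns = 10,
): Promise<string> {
  for (let turn = 0; turn < maxTurns; turn++) {
    const result = await runModel(messages);
    if (result.toolCalls.length === 0) return result.text; // model is done
    messages.push({ role: 'assistant', content: result });
    for (const call of result.toolCalls) {
      const output = await executeTool(call);
      messages.push({ role: 'user', toolResult: { id: call.id, output } });
    }
  }
  throw new Error('subagent exceeded max turns');
}
```

The `maxTurns` cap is a safety valve the sketch adds; the real loop bounds may differ.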
Thread Service Handoff Flow:
/Users/nicolaygerold/code/amp/core/src/threads/thread-service.ts (lines 556-586):
- `createHandoffThread()` - Creates a new thread with a handoff relationship
- `generateHandoffPrompt()` - Preview generation before creating the thread
- Links parent/child threads with the "handoff" relationship type
Prompt Construction Pattern:
The handoff prompt consists of:
- Thread ID reference: `"Continuing work from thread ${thread.id}..."`
- File mentions: `@file1 @file2 ...`
- Relevant information: Extracted context from the source thread
- User goal: The new task to complete
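Assembled together, those four pieces joined with `\n\n` separators look roughly like this (the function name and options object are illustrative, not Amp's actual helper):

```typescript
// Sketch of handoff prompt assembly: thread reference, @ file mentions,
// extracted context, and the new goal, joined with blank lines.
function buildHandoffPrompt(opts: {
  threadId: string;
  files: string[];
  relevantInformation: string;
  goal: string;
}): string {
  return [
    `Continuing work from thread ${opts.threadId}.`,
    opts.files.map((f) => `@${f}`).join(' '),
    opts.relevantInformation,
    opts.goal,
  ].join('\n\n');
}
```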
Related Files:
- `/Users/nicolaygerold/code/amp/specs/tool-calling.md` - Complete tool calling architecture
- `/Users/nicolaygerold/code/amp/core/src/inference/backends/anthropic.ts` - `toolSpecsToAnthropicTools()` function
- `/Users/nicolaygerold/code/amp/core/src/plan/handoff.ts` - Plan-to-execution handoff
Key Code Patterns:
- Force tool via `tool_choice`: Set `tool_choice: { type: 'tool', name: toolSpec.name }` in Anthropic API calls
- Prompt construction: Combine thread context + files + goal with `\n\n` separators
- Validation: Ensure the model returns the expected tool (lines 287-293 in run-tool-with-model.ts)
- Subagent loop: Iteratively invoke tools returned by model until no more tools are called
Perfect! Let me now compile all the key findings:
Summary
Based on my search through the codebase, here are the file paths and line numbers for thread worker, inference:completed event handling, and tool result handling:
Core Files:
Thread Worker Main Class
/Users/nicolaygerold/code/amp/core/src/threads/thread-worker.ts
Thread Delta Handler (thread state mutation logic)
/Users/nicolaygerold/code/amp/core/src/threads/thread-delta.ts
Key Locations:
inference:completed Event Handling:
- `thread-worker.ts:1084-1131` - Main case handler for `'inference:completed'` in the `handle()` method
  - Lines 1105-1107: Invokes tools via `executeToolsWithPlan()` when the stop reason is `'tool_use'`
  - Lines 1110-1113: Check for the `'end_turn'` stop reason
- `thread-delta.ts:661-670` - Delta application for the `'inference:completed'` event
tool_use Stop Reason Handling:
- `thread-worker.ts:1105-1107` - Call to `findToolUsesNeedingInvocation()` and `executeToolsWithPlan()` when the stop reason is `'tool_use'`
- `thread-worker.ts:2403` - Check for `stopReason !== 'tool_use'`
end_turn Stop Reason Handling:
- `thread-worker.ts:1110-1128` - Handles the `'end_turn'` stop reason; dequeues messages or runs hooks
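The branching across these stop reasons can be condensed into a small decision sketch. The action names and the `max_tokens` branch are illustrative additions, not taken from thread-worker.ts:

```typescript
// Sketch of the stop-reason branching in an 'inference:completed' handler.
type StopReason = 'tool_use' | 'end_turn' | 'max_tokens';

function nextAction(stopReason: StopReason, queuedMessages: number): string {
  switch (stopReason) {
    case 'tool_use':
      return 'execute-tools'; // run the requested tools, then re-run inference
    case 'end_turn':
      // dequeue the next pending user message, or run end-of-turn hooks
      return queuedMessages > 0 ? 'dequeue-message' : 'run-hooks';
    case 'max_tokens':
      return 'truncate-or-continue';
  }
}
```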
Tool Execution Flow:
- `thread-worker.ts:1174-1188` - `executeToolsWithPlan()` method - orchestrates tool execution
- `thread-worker.ts:1210-1243` - `invokeToolAndWait()` method - waits for individual tool completion
- `thread-worker.ts:1245-1460` - `invokeTool()` method - main tool invocation logic
Tool Result Addition to Threads:
- `thread-delta.ts:536-588` - `'tool:data'` case handler, which calls `addToolData()` at line 548
- `thread-delta.ts:853-899` - `addToolData()` function - creates/updates tool_result blocks in user messages
  - Lines 887-897: Creates a new tool_result block or updates an existing one
  - Adds the block to the user message (line 897)
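The create-or-update behavior of `addToolData()` amounts to an upsert keyed by the tool-use id. A sketch under simplified block shapes (the real delta function operates on full thread state, not a bare array):

```typescript
// Sketch of addToolData-style behavior: upsert a tool_result block,
// keyed by tool-use id, into a user message's content blocks.
type ToolResultBlock = { type: 'tool_result'; toolUseId: string; content: string };

function upsertToolResult(
  blocks: ToolResultBlock[],
  toolUseId: string,
  content: string,
): ToolResultBlock[] {
  const existing = blocks.find((b) => b.toolUseId === toolUseId);
  if (existing) {
    existing.content = content; // update in place as tool data streams in
    return blocks;
  }
  blocks.push({ type: 'tool_result', toolUseId, content });
  return blocks;
}
```

Updating in place rather than appending is what lets streamed `tool:data` events refine a single result block instead of duplicating it.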
Tool Data Updates Propagation:
- `thread-worker.ts:946-1030` - `'tool:data'` case handler in the worker
  - Lines 1381-1388: Calls `updateThread()` with tool run data
- `thread-delta.ts:853` - Core function that handles adding tool results to threads
Excellent! Now I have comprehensive information about Gemini model configuration and usage. Let me compile the final list of relevant files:
Based on my comprehensive search of the codebase, here are the files that contain information about how Gemini models are configured and used:
Core Gemini Backend Implementation
- `core/src/inference/backends/gemini.ts` - Main Gemini implementation with the `VertexAIProvider` class, `runGemini()` streaming function, `runGeminiSync()` synchronous function, and model resolution logic
- `core/src/inference/backends/gemini.test.ts` - Test cases showing Gemini usage patterns
Model Definitions
- `core/src/inference/models-core.ts` - Defines all available Gemini models (2.5 Pro, 2.5 Flash, 2.5 Flash Lite, 3 Pro Preview) with their capabilities, context windows, and max output tokens
Configuration & Settings
- `core/src/configuration/settings.ts` - Settings schema including the `gemini.thinkingLevel` configuration option
- `vscode/src/config.ts` - VS Code configuration for `gemini.thinkingLevel` with 'high' and 'low' options
- `cli/src/docs.ts` - CLI documentation for Gemini thinking level settings
System Prompts
- `core/src/threads/system-prompts/gemini.ts` - Gemini-specific system prompt with tool definitions and examples
Inference Backend Resolution
- `core/src/inference/backend.ts` - Backend provider resolution that routes to `VertexAIProvider` for Gemini models
Server-side Google API Proxy
- `server/src/routes/api/provider/google/[...proxyPath]/proxy-google-request.ts` - Handles proxying requests to Google's Vertex AI API, manages authentication, and processes streaming/non-streaming responses
- `server/src/routes/api/provider/google/[...proxyPath]/+server.ts` - Server endpoint that validates requests, manages credits, and routes to the proxy
Usage Examples
- `core/src/threads/commit-message.ts` - Example of using `runGeminiSync()` with `GEMINI_2_5_FLASH_LITE` for commit message generation
- `core/src/threads/thread-mention-extraction.ts` - Example of using `runGeminiSync()` with `GEMINI_2_5_FLASH_LITE_PREVIEW` for content extraction
- `vscode/src/review/git.ts` - Multiple examples using `runGeminiSync()` with `GEMINI_2_5_FLASH_LITE`
- `vscode/src/autoedit/debug-panel/gemini-handler.ts` - API call to the gemini-2.5-flash model
- `vscode/src/autoedit/review-panel/gemini-handler.ts` - API call to the gemini-2.5-flash-lite-preview-09-2025 model
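The commit-message use case above suggests the general calling shape for these small-model helpers. The sketch below guesses at a `(model, prompt) => Promise<string>` signature purely for illustration; `runGeminiSync`'s real signature in `gemini.ts` may differ:

```typescript
// Hypothetical sketch of a small-model helper call like the
// commit-message generation use case. The injected runGeminiSync
// stand-in is an assumption, not Amp's actual function.
async function generateCommitMessage(
  runGeminiSync: (model: string, prompt: string) => Promise<string>,
  diff: string,
): Promise<string> {
  const prompt = `Write a one-line commit message for this diff:\n\n${diff}`;
  return runGeminiSync('gemini-2.5-flash-lite', prompt);
}
```

A course correction agent could follow the same shape: a cheap Flash Lite call per turn, with the forced `course_correct` tool constraining the output.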
Let me also check the thread structure to understand how user messages are added and look at an example tool definition:
Let me also check the tool-service to understand the BaseToolSpec structure and look at the thread-delta for how user messages are added:
Now let me look at thread-delta to see more about user messages and check the thread structure for ThreadUserMessage:
Reference Thread: T-6ff966b8-2a71-4002-a348-1bd8eecf1754