# Human in the Loop
Human-in-the-Loop (HITL) workflows integrate human judgment and oversight into automated processes. These workflows pause at critical points for human review, validation, or decision-making before proceeding.
- Compliance: Regulatory requirements may mandate human approval for certain actions.
- Safety: High-stakes operations (payments, deletions, external communications) need oversight.
- Quality: Human review catches errors AI might miss.
- Trust: Users feel more confident when they can approve critical actions.
| Use Case | Example |
|---|---|
| Financial approvals | Expense reports, payment processing |
| Content moderation | Publishing, email sending |
| Data operations | Bulk deletions, exports |
| AI tool execution | Confirming tool calls before running |
| Access control | Granting permissions, role changes |
The Agents SDK provides five patterns for human-in-the-loop. Choose based on your architecture:
| Use Case | Pattern | Best For |
|---|---|---|
| Long-running workflows | Workflow Approval | Multi-step processes, durable approval gates that can wait hours or weeks |
| AIChatAgent tools | needsApproval | Chat-based tool calls with server-side approval before execution |
| Client-side tools | onToolCall | Tools that need browser APIs or user interaction before execution |
| MCP servers | Elicitation | MCP tools requesting structured user input during execution |
| Simple confirmations | State + WebSocket | Lightweight approval flows without AI chat or workflows |
Use this decision tree to pick a pattern:

```text
Is this part of a multi-step workflow?
├── Yes → Use Workflow Approval (waitForApproval)
└── No → Are you building an MCP server?
    ├── Yes → Use MCP Elicitation (elicitInput)
    └── No → Is this an AI chat interaction?
        ├── Yes → Does the tool need browser APIs?
        │   ├── Yes → Use onToolCall (client-side execution)
        │   └── No → Use needsApproval (server-side with approval)
        └── No → Use State + WebSocket for simple confirmations
```

## Workflow Approval

For durable, multi-step processes with approval gates that can wait hours, days, or weeks, use Cloudflare Workflows with the `waitForApproval()` method.
Key APIs:
- `waitForApproval(step, { timeout })` — Pause the workflow until approved
- `approveWorkflow(workflowId, { reason?, metadata? })` — Approve a waiting workflow
- `rejectWorkflow(workflowId, { reason? })` — Reject a waiting workflow
Best for: Expense approvals, content publishing pipelines, data export requests
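The gate's semantics can be sketched with an in-memory stand-in. `ApprovalGate` and `runExpenseWorkflow` below are hypothetical names for illustration; the real `waitForApproval()` is durable across restarts and supports timeouts, which this sketch omits:

```typescript
// Minimal in-memory sketch of an approval gate, assuming the semantics above.
type Approval = {
  approved: boolean;
  reason?: string;
  metadata?: Record<string, unknown>;
};

class ApprovalGate {
  private resolveFn?: (approval: Approval) => void;

  // Mirrors waitForApproval(step, { timeout }): pause until a decision arrives.
  waitForApproval(): Promise<Approval> {
    return new Promise((resolve) => (this.resolveFn = resolve));
  }

  // Mirrors approveWorkflow(workflowId, { metadata? })
  approve(metadata?: Record<string, unknown>): void {
    this.resolveFn?.({ approved: true, metadata });
  }

  // Mirrors rejectWorkflow(workflowId, { reason? })
  reject(reason?: string): void {
    this.resolveFn?.({ approved: false, reason });
  }
}

// A workflow step that only pauses for amounts over an auto-approve limit.
async function runExpenseWorkflow(
  gate: ApprovalGate,
  amount: number,
): Promise<string> {
  if (amount <= 100) return "auto-approved";
  const approval = await gate.waitForApproval();
  return approval.approved
    ? "approved"
    : `rejected: ${approval.reason ?? "no reason given"}`;
}
```

The key design point carries over to the real API: the workflow suspends at the gate, and resumption is driven entirely by an external approve or reject call.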
## needsApproval

For `AIChatAgent` tools that should pause for user confirmation before executing. Define `needsApproval` on the tool — it can be a boolean or an async predicate based on the tool arguments:
```ts
tools: {
  processPayment: tool({
    description: "Process a payment",
    inputSchema: z.object({
      amount: z.number(),
      recipient: z.string(),
    }),
    // Only payments over 100 require human approval
    needsApproval: async ({ amount }) => amount > 100,
    execute: async ({ amount, recipient }) => charge(amount, recipient),
  }),
}
```

On the client, render pending approvals from message parts and call `addToolApprovalResponse`:
```tsx
const { messages, addToolApprovalResponse } = useAgentChat({ agent });

{messages.map((msg) =>
  msg.parts
    .filter(
      (part) => part.type === "tool" && part.state === "approval-required",
    )
    .map((part) => (
      <div key={part.toolCallId}>
        <p>Approve {part.toolName}?</p>
        <button
          onClick={() =>
            addToolApprovalResponse({ id: part.toolCallId, approved: true })
          }
        >
          Approve
        </button>
        <button
          onClick={() =>
            addToolApprovalResponse({ id: part.toolCallId, approved: false })
          }
        >
          Reject
        </button>
      </div>
    )),
)}
```

For custom denial messages, use `addToolOutput` with `state: "output-error"` instead of `addToolApprovalResponse`:
```ts
addToolOutput({
  toolCallId: part.toolCallId,
  state: "output-error",
  errorText: "User declined: insufficient budget for this quarter",
});
```

## onToolCall

For tools that need browser APIs (geolocation, clipboard, camera) or user interaction before returning a result. Define the tool on the server without `execute`, then handle it on the client:
```tsx
const { messages, sendMessage } = useAgentChat({
  agent,
  onToolCall: async ({ toolCall, addToolOutput }) => {
    if (toolCall.toolName === "getLocation") {
      // Wrap the callback-based Geolocation API in a promise
      const pos = await new Promise<GeolocationPosition>((resolve, reject) =>
        navigator.geolocation.getCurrentPosition(resolve, reject),
      );
      addToolOutput({
        toolCallId: toolCall.toolCallId,
        output: { lat: pos.coords.latitude, lng: pos.coords.longitude },
      });
    }
  },
});
```

When `autoContinueAfterToolResult` is true (the default), the conversation automatically continues after the client provides the tool output.
## Elicitation

For MCP servers that need to request additional structured input from users during tool execution. The MCP client renders a form based on your JSON Schema:
```ts
export class MyMcpAgent extends McpAgent {
  async init() {
    this.server.server.setRequestHandler(
      CallToolRequestSchema,
      async (request, extra) => {
        const result = await this.server.server.elicitInput({
          message: "Please confirm the transfer details",
          requestedSchema: {
            type: "object",
            properties: {
              confirmed: { type: "boolean", description: "Confirm transfer?" },
              notes: { type: "string", description: "Optional notes" },
            },
            required: ["confirmed"],
          },
        });

        if (result.action === "accept" && result.content?.confirmed) {
          return { content: [{ type: "text", text: "Transfer confirmed" }] };
        }
        return { content: [{ type: "text", text: "Transfer cancelled" }] };
      },
    );
  }
}
```

Best for: Interactive tool confirmations, gathering additional parameters mid-execution
In a workflow-based approval:
- The workflow reaches an approval step and calls `waitForApproval()`
- The workflow pauses and reports progress to the agent
- The agent updates its state with the pending approval
- Connected clients see the pending approval and can approve or reject
- When approved, the workflow resumes with the approval metadata
- If rejected or timed out, the workflow handles the rejection appropriately
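The agent-side bookkeeping in the middle of that flow is a small state transition: record the pending approval when the workflow pauses, remove it once a decision is made. A minimal sketch, where the `PendingApproval` shape and function names are assumptions rather than SDK types:

```typescript
// Hypothetical shapes for tracking pending approvals in agent state.
interface PendingApproval {
  workflowId: string;
  step: string;
  requestedAt: string; // ISO 8601 timestamp
}

interface ApprovalState {
  pendingApprovals: PendingApproval[];
}

// Workflow paused: record the pending approval so connected clients see it.
function addPending(
  state: ApprovalState,
  approval: PendingApproval,
): ApprovalState {
  return { ...state, pendingApprovals: [...state.pendingApprovals, approval] };
}

// Decision made: remove the entry alongside the approveWorkflow/rejectWorkflow call.
function resolvePending(
  state: ApprovalState,
  workflowId: string,
): ApprovalState {
  return {
    ...state,
    pendingApprovals: state.pendingApprovals.filter(
      (p) => p.workflowId !== workflowId,
    ),
  };
}
```

Keeping these transitions pure and immutable makes it straightforward to broadcast the new state to every connected client after each change.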
Set timeouts to prevent workflows from waiting indefinitely:
```ts
const approval = await this.waitForApproval(step, {
  timeout: "7 days",
});
```

Use scheduling for escalation:
```ts
// Remind after 24 hours (86400 s), escalate after 7 days (604800 s)
await this.schedule(86400, "sendApprovalReminder", { workflowId });
await this.schedule(604800, "escalateToManager", { workflowId });
```

Maintain immutable audit logs of all approval decisions using the SQL API. Record:
- Who made the decision
- When the decision was made
- The reason or justification
- Any relevant metadata
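One way to capture those fields is an append-only record built at decision time. A sketch, where the `AuditEntry` shape, helper name, and table name are assumptions:

```typescript
// Hypothetical audit record covering the fields listed above.
interface AuditEntry {
  workflowId: string;
  decision: "approved" | "rejected";
  decidedBy: string;                  // who made the decision
  decidedAt: string;                  // when, as an ISO 8601 timestamp
  reason?: string;                    // the reason or justification
  metadata?: Record<string, unknown>; // any relevant metadata
}

// Stamp the timestamp at write time so each entry is an append-only fact.
function makeAuditEntry(entry: Omit<AuditEntry, "decidedAt">): AuditEntry {
  return { ...entry, decidedAt: new Date().toISOString() };
}

// Inside an Agent, each record would then be persisted via the SQL API as an
// INSERT into a hypothetical approval_audit table; never UPDATE or DELETE
// rows, so the log stays immutable.
```

Writing the timestamp server-side, rather than accepting it from the client, keeps the log trustworthy as an audit trail.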
Human review processes do not operate on predictable timelines. A reviewer might need days or weeks to make a decision. Your system needs to maintain state consistency throughout this period — the original request, intermediate decisions, partial progress, and review history.
Human reviewers play a crucial role in evaluating and improving LLM performance:
- Decision quality assessment: Have reviewers evaluate the LLM's reasoning process and decision points.
- Edge case identification: Use human expertise to identify scenarios where performance could be improved.
- Feedback collection: Gather structured feedback that can be used to fine-tune the LLM. AI Gateway can help set up an LLM feedback loop.
Your system should gracefully handle reviewer unavailability, system outages, conflicting reviews, and timeout expiration. Implement clear escalation paths for exceptional cases and automatic checkpointing that allows workflows to resume from the last stable state after any interruption.
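Conflicting reviews in particular benefit from an explicit merge policy. One conservative choice, sketched below as an assumption rather than an SDK behavior: any rejection wins, and approval requires at least one affirmative review:

```typescript
interface Review {
  reviewer: string;
  approved: boolean;
}

// Conservative merge of possibly conflicting reviews: a single rejection
// rejects; otherwise any approval approves; no reviews means still pending.
function resolveReviews(
  reviews: Review[],
): "approved" | "rejected" | "pending" {
  if (reviews.some((r) => !r.approved)) return "rejected";
  if (reviews.length > 0) return "approved";
  return "pending";
}
```

Whatever policy you choose, encoding it in one pure function makes it easy to test and to log alongside the audit trail.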