
Build a Human-in-the-loop Agent


This guide will show you how to build and deploy an AI agent on Cloudflare Workers that implements human-in-the-loop functionality, allowing the agent to request human approval before executing certain actions.

Your Human-in-the-Loop Agent will be able to:

  • Request human approval for sensitive tool executions
  • Stream real-time responses using WebSocket connections
  • Persist conversation state across sessions
  • Differentiate between automatic and confirmation-required tools

This pattern is crucial for scenarios where human oversight and confirmation are required before taking important actions like making purchases, sending emails, or modifying data.

You can view the full code for this example here.

Prerequisites

Before you begin, you will need:

  • A Cloudflare account
  • Node.js and npm installed on your machine
  • An OpenAI API key

1. Create your project

  1. Create a new project for your Human-in-the-Loop Agent:
Terminal window
npm create cloudflare@latest -- human-in-the-loop
  2. Navigate into your project:
Terminal window
cd human-in-the-loop
  3. Install the required dependencies:
Terminal window
npm install agents @ai-sdk/openai ai zod react react-dom

2. Set up your environment variables

  1. Create a .dev.vars file in your project root for local development secrets:
Terminal window
touch .dev.vars
  2. Add your credentials to .dev.vars:
Terminal window
OPENAI_API_KEY="your-openai-api-key"
  3. Update your wrangler.jsonc to configure your Agent:
{
  "$schema": "./node_modules/wrangler/config-schema.json",
  "name": "human-in-the-loop",
  "main": "./src/server.ts",
  "compatibility_date": "2025-02-21",
  "compatibility_flags": [
    "nodejs_compat",
    "nodejs_compat_populate_process_env"
  ],
  "assets": {
    "directory": "public"
  },
  "durable_objects": {
    "bindings": [
      {
        "name": "HumanInTheLoop",
        "class_name": "HumanInTheLoop"
      }
    ]
  },
  "migrations": [
    {
      "tag": "v1",
      "new_sqlite_classes": [
        "HumanInTheLoop"
      ]
    }
  ]
}

3. Define your tools

Create your tool definitions at src/tools.ts. Tools can be configured to either require human confirmation or execute automatically:

TypeScript
import { tool } from "ai";
import { z } from "zod";
import type { AITool } from "agents/ai-react";

// Server-side tool that requires confirmation (no execute function)
const getWeatherInformationTool = tool({
  description:
    "Get the current weather information for a specific city. Always use this tool when the user asks about weather.",
  inputSchema: z.object({
    city: z.string().describe("The name of the city to get weather for")
  })
  // no execute function - requires human approval
});

// Client-side tool that requires confirmation
const getLocalTimeTool = tool({
  description: "Get the local time for a specified location",
  inputSchema: z.object({ location: z.string() }),
  execute: async ({ location }) => {
    console.log(`Getting local time for ${location}`);
    await new Promise((res) => setTimeout(res, 2000));
    return "10am";
  }
});

// Server-side tool that does NOT require confirmation
const getLocalNewsTool = tool({
  description: "Get local news for a specified location",
  inputSchema: z.object({ location: z.string() }),
  execute: async ({ location }) => {
    console.log(`Getting local news for ${location}`);
    await new Promise((res) => setTimeout(res, 2000));
    return `${location} kittens found drinking tea this last weekend`;
  }
});

// Export AI SDK tools for server-side use
export const tools = {
  getLocalTime: {
    description: getLocalTimeTool.description,
    inputSchema: getLocalTimeTool.inputSchema
  },
  getWeatherInformation: getWeatherInformationTool,
  getLocalNews: getLocalNewsTool
};

// Export AITool format for client-side use
export const clientTools: Record<string, AITool> = {
  getLocalTime: getLocalTimeTool as AITool,
  getWeatherInformation: {
    description: getWeatherInformationTool.description,
    inputSchema: getWeatherInformationTool.inputSchema
  },
  getLocalNews: {
    description: getLocalNewsTool.description,
    inputSchema: getLocalNewsTool.inputSchema
  }
};

4. Create utility functions

Create helper functions at src/utils.ts to handle tool confirmations and processing:

TypeScript
import type { UIMessage } from "@ai-sdk/react";
import type { UIMessageStreamWriter, ToolSet } from "ai";
import type { z } from "zod";

// Approval constants
export const APPROVAL = {
  NO: "No, denied.",
  YES: "Yes, confirmed."
} as const;

// Tools that require Human-In-The-Loop confirmation
export const toolsRequiringConfirmation = [
  "getLocalTime",
  "getWeatherInformation"
];

// Type guard to check if part has required properties
function isToolConfirmationPart(part: unknown): part is {
  type: string;
  output: string;
  input?: Record<string, unknown>;
} {
  return (
    typeof part === "object" &&
    part !== null &&
    "type" in part &&
    "output" in part &&
    typeof (part as { type: unknown }).type === "string" &&
    typeof (part as { output: unknown }).output === "string"
  );
}

// Check if a message contains tool confirmations
export function hasToolConfirmation(message: UIMessage): boolean {
  return (
    message?.parts?.some(
      (part) =>
        part.type?.startsWith("tool-") &&
        toolsRequiringConfirmation.includes(part.type?.slice("tool-".length)) &&
        "output" in part
    ) || false
  );
}

// Weather tool implementation
export async function getWeatherInformation(args: unknown): Promise<string> {
  const { city } = args as { city: string };
  const conditions = ["sunny", "cloudy", "rainy", "snowy"];
  return `The weather in ${city} is ${
    conditions[Math.floor(Math.random() * conditions.length)]
  }.`;
}
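
Step 5 imports a processToolCalls helper from src/utils.ts that is not defined in the snippets above (it is part of the full example linked earlier). The following is a minimal sketch of what that helper can look like: it reads the approval decision that the client wrote into the last message's tool parts, runs the matching server-side handler for approved calls, and streams the real result back to the UI. The tool-output-available chunk shape is an assumption based on the AI SDK's UI message stream types, not the example's exact implementation.

TypeScript
// Sketch only - add to src/utils.ts alongside the code above
import { getToolName, isToolUIPart } from "ai";

export async function processToolCalls(
  {
    writer,
    messages
  }: { writer: UIMessageStreamWriter; messages: UIMessage[]; tools: ToolSet },
  executions: Record<string, (input: unknown) => Promise<string>>
): Promise<void> {
  const lastMessage = messages[messages.length - 1];
  for (const part of lastMessage?.parts ?? []) {
    // Only look at completed tool parts for tools that need approval
    if (!isToolUIPart(part) || part.state !== "output-available") continue;
    const toolName = getToolName(part);
    if (!toolsRequiringConfirmation.includes(toolName)) continue;

    const execute = executions[toolName];
    // The client wrote APPROVAL.YES or APPROVAL.NO as the tool output
    const approved = part.output === APPROVAL.YES;
    const result =
      approved && execute
        ? await execute(part.input)
        : "Error: the user denied this tool execution.";

    // Stream the real tool result back to the client (chunk shape assumed)
    writer.write({
      type: "tool-output-available",
      toolCallId: part.toolCallId,
      output: result
    });
  }
}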

5. Create your Agent

Create your agent implementation at src/server.ts:

TypeScript
import { openai } from "@ai-sdk/openai";
import { routeAgentRequest } from "agents";
import { AIChatAgent } from "agents/ai-chat-agent";
import {
  convertToModelMessages,
  createUIMessageStream,
  createUIMessageStreamResponse,
  type StreamTextOnFinishCallback,
  streamText,
  stepCountIs
} from "ai";
import { tools } from "./tools";
import {
  processToolCalls,
  hasToolConfirmation,
  getWeatherInformation
} from "./utils";

type Env = {
  OPENAI_API_KEY: string;
};

export class HumanInTheLoop extends AIChatAgent<Env> {
  async onChatMessage(onFinish: StreamTextOnFinishCallback<{}>) {
    const startTime = Date.now();
    const lastMessage = this.messages[this.messages.length - 1];

    // Check if the last message contains tool confirmations
    if (hasToolConfirmation(lastMessage)) {
      // Process tool confirmations using UI stream
      const stream = createUIMessageStream({
        execute: async ({ writer }) => {
          await processToolCalls(
            { writer, messages: this.messages, tools },
            { getWeatherInformation }
          );
        }
      });
      return createUIMessageStreamResponse({ stream });
    }

    // Normal message flow - stream AI response
    const result = streamText({
      messages: convertToModelMessages(this.messages),
      model: openai("gpt-4o"),
      onFinish,
      tools,
      stopWhen: stepCountIs(5)
    });

    return result.toUIMessageStreamResponse({
      messageMetadata: ({ part }) => {
        if (part.type === "start") {
          return {
            model: "gpt-4o",
            createdAt: Date.now(),
            messageCount: this.messages.length
          };
        }
        if (part.type === "finish") {
          return {
            responseTime: Date.now() - startTime,
            totalTokens: part.totalUsage?.totalTokens
          };
        }
      }
    });
  }
}

export default {
  async fetch(request: Request, env: Env, _ctx: ExecutionContext) {
    return (
      (await routeAgentRequest(request, env)) ||
      new Response("Not found", { status: 404 })
    );
  }
} satisfies ExportedHandler<Env>;

6. Build the React frontend

Create your React chat interface at src/app.tsx:

import type { UIMessage as Message } from "ai";
import { getToolName, isToolUIPart } from "ai";
import { clientTools } from "./tools";
import { APPROVAL, toolsRequiringConfirmation } from "./utils";
import { useAgentChat, type AITool } from "agents/ai-react";
import { useAgent } from "agents/react";
import { useCallback, useEffect, useRef, useState } from "react";

export default function Chat() {
  const messagesEndRef = useRef<HTMLDivElement>(null);
  const scrollToBottom = useCallback(() => {
    messagesEndRef.current?.scrollIntoView({ behavior: "smooth" });
  }, []);

  const agent = useAgent({
    agent: "human-in-the-loop"
  });

  const { messages, sendMessage, addToolResult, clearHistory } = useAgentChat({
    agent,
    experimental_automaticToolResolution: true,
    toolsRequiringConfirmation,
    tools: clientTools satisfies Record<string, AITool>
  });

  const [input, setInput] = useState("");

  const handleSubmit = useCallback(
    (e: React.FormEvent<HTMLFormElement>) => {
      e.preventDefault();
      if (input.trim()) {
        sendMessage({ role: "user", parts: [{ type: "text", text: input }] });
        setInput("");
      }
    },
    [input, sendMessage]
  );

  // Scroll to bottom when messages change
  useEffect(() => {
    if (messages.length > 0) {
      scrollToBottom();
    }
  }, [messages, scrollToBottom]);

  // Check if there's a pending tool confirmation
  const pendingToolCallConfirmation = messages.some((m: Message) =>
    m.parts?.some(
      (part) => isToolUIPart(part) && part.state === "input-available"
    )
  );

  return (
    <div className="chat-container">
      <div className="messages-wrapper">
        {messages?.map((m: Message) => (
          <div key={m.id} className="message">
            <strong>{`${m.role}: `}</strong>
            {m.parts?.map((part, i) => {
              switch (part.type) {
                case "text":
                  return (
                    <div key={i} className="message-content">
                      {part.text}
                    </div>
                  );
                default:
                  if (isToolUIPart(part)) {
                    const toolCallId = part.toolCallId;
                    const toolName = getToolName(part);

                    // Show tool results for automatic tools
                    if (part.state === "output-available") {
                      return (
                        <div key={toolCallId} className="tool-invocation">
                          <span className="tool-name">{toolName}</span>{" "}
                          returned:{" "}
                          <span className="tool-result">
                            {JSON.stringify(part.output, null, 2)}
                          </span>
                        </div>
                      );
                    }

                    // Render confirmation UI for tools requiring approval
                    if (part.state === "input-available") {
                      const tool = clientTools[toolName];
                      if (!toolsRequiringConfirmation.includes(toolName)) {
                        return (
                          <div key={toolCallId} className="tool-invocation">
                            <span className="tool-name">{toolName}</span>{" "}
                            executing...
                          </div>
                        );
                      }
                      return (
                        <div key={toolCallId} className="tool-invocation">
                          Run <span className="tool-name">{toolName}</span> with
                          args:{" "}
                          <span className="tool-args">
                            {JSON.stringify(part.input)}
                          </span>
                          <div className="button-container">
                            <button
                              type="button"
                              className="button-approve"
                              onClick={async () => {
                                const output = tool.execute
                                  ? await tool.execute(part.input)
                                  : APPROVAL.YES;
                                addToolResult({
                                  tool: toolName,
                                  output,
                                  toolCallId
                                });
                              }}
                            >
                              Approve
                            </button>
                            <button
                              type="button"
                              className="button-reject"
                              onClick={() => {
                                const output = tool.execute
                                  ? "User declined to run tool"
                                  : APPROVAL.NO;
                                addToolResult({
                                  tool: toolName,
                                  output,
                                  toolCallId
                                });
                              }}
                            >
                              Reject
                            </button>
                          </div>
                        </div>
                      );
                    }
                  }
                  return null;
              }
            })}
          </div>
        ))}
        <div ref={messagesEndRef} />
      </div>
      <form onSubmit={handleSubmit}>
        <input
          disabled={pendingToolCallConfirmation}
          className="chat-input"
          value={input}
          placeholder="Say something..."
          onChange={(e) => setInput(e.target.value)}
        />
      </form>
    </div>
  );
}
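
The Chat component still needs a client entry point and a page to mount into; the exact setup depends on how your project template bundles frontend assets into the public directory configured in Step 2. A minimal sketch, assuming a hypothetical src/client.tsx entry that your bundler emits alongside a public/index.html containing a #root element:

TypeScript
// src/client.tsx - hypothetical entry point; your template's bundling
// setup and the matching public/index.html may differ
import { createRoot } from "react-dom/client";
import Chat from "./app";

const root = document.getElementById("root");
if (root) {
  createRoot(root).render(<Chat />);
}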

7. Test locally

Start your development server:

Terminal window
npm run dev

Your agent is now running at http://localhost:8787.

Test the approval flow

  1. Open http://localhost:8787 in your browser.
  2. Ask the agent about the weather: "What's the weather in San Francisco?"
  3. The agent will attempt to call the getWeatherInformation tool.
  4. You will see an approval prompt with Approve and Reject buttons.
  5. Click Approve to allow the tool execution, or Reject to deny it.
  6. The agent will respond with the result or acknowledge the rejection.

Test automatic tools

  1. Ask the agent for news: "What's the news in London?"
  2. The getLocalNews tool will execute automatically without requiring approval.

8. Deploy to production

  1. Before deploying, add your secrets to Cloudflare:
Terminal window
npx wrangler secret put OPENAI_API_KEY
  2. Build and deploy your agent:
Terminal window
npm run deploy

After deploying, you will get a production URL like:

https://human-in-the-loop.your-account.workers.dev

How it works

Tool approval flow

The human-in-the-loop pattern works by intercepting tool calls before execution:

  1. Tool invocation: The AI decides to call a tool based on user input.
  2. Approval check: The system checks if the tool requires human confirmation.
  3. Confirmation prompt: If approval is required, the UI displays the tool name and arguments with Approve/Reject buttons.
  4. User decision: The user reviews the action and makes a decision.
  5. Execution or rejection: Based on the user's choice, the tool either executes or returns a rejection message.

Message streaming with confirmations

The agent uses the Vercel AI SDK's streaming capabilities:

  • createUIMessageStream creates a stream for processing tool confirmations.
  • streamText handles normal AI responses with tool calls.
  • The hasToolConfirmation function detects when a message contains a tool confirmation response.

State persistence

Your agent uses Durable Objects to maintain conversation state:

  • Conversation history persists across browser refreshes.
  • Each agent instance has isolated storage (see the sketch after this list).
  • Tool confirmation states are tracked in the message history.
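
Because storage is per instance, you can pin a conversation to a stable identifier so the same history loads on every visit. A small sketch, assuming a hypothetical user identifier from your own auth layer:

TypeScript
// Pass a stable name so the same agent instance (and its stored
// conversation) is reused across sessions; "user-123" is a placeholder
const agent = useAgent({
  agent: "human-in-the-loop",
  name: "user-123"
});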

Customizing your agent

Add more tools requiring confirmation

Add new tools to the toolsRequiringConfirmation array in src/utils.ts:

TypeScript
export const toolsRequiringConfirmation = [
  "getLocalTime",
  "getWeatherInformation",
  "sendEmail", // Add your new tools here
  "makePurchase"
];
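
The array above only controls which tool calls the UI pauses for; the model can only call tools that are also defined in src/tools.ts. For a server-side tool that should always wait for approval, define it without an execute function, as in this hypothetical sendEmail definition (you would also add it to the exported tools object):

TypeScript
// Hypothetical sendEmail tool for src/tools.ts - no execute function,
// so the call pauses for human approval before any server-side handler runs
const sendEmailTool = tool({
  description: "Send an email on behalf of the user",
  inputSchema: z.object({
    to: z.string().describe("Recipient email address"),
    subject: z.string(),
    body: z.string()
  })
  // no execute function - requires human approval
});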

Implement custom tool handlers

For server-side tools that require confirmation, pass their execution handlers to processToolCalls in your agent:

TypeScript
if (hasToolConfirmation(lastMessage)) {
  const stream = createUIMessageStream({
    execute: async ({ writer }) => {
      await processToolCalls(
        { writer, messages: this.messages, tools },
        {
          getWeatherInformation,
          sendEmail: async ({ to, subject, body }) => {
            // Your email sending logic
            return `Email sent to ${to}`;
          }
        }
      );
    }
  });
  return createUIMessageStreamResponse({ stream });
}

Customize the approval UI

Enhance the confirmation interface with more context:

if (part.state === "input-available") {
  return (
    <div className="tool-approval-card">
      <h3>Action Required</h3>
      <p>
        The AI wants to execute: <strong>{toolName}</strong>
      </p>
      <pre>{JSON.stringify(part.input, null, 2)}</pre>
      <div className="approval-buttons">
        <button className="approve" onClick={() => handleApprove(part)}>
          ✓ Approve
        </button>
        <button className="reject" onClick={() => handleReject(part)}>
          ✗ Reject
        </button>
      </div>
    </div>
  );
}
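
This snippet assumes handleApprove and handleReject helpers; a minimal sketch of what they can wrap, mirroring the addToolResult calls from Step 6 (ToolUIPart is exported by the ai package):

TypeScript
// Hypothetical helpers (inside the Chat component) wrapping addToolResult
const handleApprove = (part: ToolUIPart) =>
  addToolResult({
    tool: getToolName(part),
    toolCallId: part.toolCallId,
    output: APPROVAL.YES
  });

const handleReject = (part: ToolUIPart) =>
  addToolResult({
    tool: getToolName(part),
    toolCallId: part.toolCallId,
    output: APPROVAL.NO
  });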

Use different LLM providers

Replace OpenAI with Workers AI:

TypeScript
import { createWorkersAI } from "workers-ai-provider";

export class HumanInTheLoop extends AIChatAgent<Env> {
  async onChatMessage(onFinish: StreamTextOnFinishCallback<{}>) {
    const workersai = createWorkersAI({ binding: this.env.AI });
    const result = streamText({
      messages: convertToModelMessages(this.messages),
      model: workersai("@cf/meta/llama-3-8b-instruct"),
      onFinish,
      tools
    });
    return result.toUIMessageStreamResponse();
  }
}
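
This assumes an AI binding on your environment: roughly, install the workers-ai-provider package, add "ai": { "binding": "AI" } to your Wrangler configuration, and extend the Env type so this.env.AI is typed:

TypeScript
// Extend the environment for the Workers AI binding; the Ai type
// comes from @cloudflare/workers-types
type Env = {
  OPENAI_API_KEY: string;
  AI: Ai;
};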

Best practices

  • Define clear approval workflows — Only require confirmation for actions with meaningful consequences (payments, emails, data changes).
  • Provide detailed context — Show users exactly what the tool will do, including all arguments.
  • Implement timeouts — Consider auto-rejecting tools after a reasonable timeout period (see the sketch after this list).
  • Handle connection drops — Ensure the UI can recover if the WebSocket connection is interrupted.
  • Log all decisions — Track approval/rejection decisions for audit trails.
  • Graceful degradation — Provide fallback behavior if tools are rejected.
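
For example, a client-side timeout that auto-rejects a stale confirmation could look roughly like the sketch below (placed inside the Chat component from Step 6; the 60-second window is an arbitrary choice):

TypeScript
// Sketch: auto-reject any confirmation that stays pending for 60 seconds
useEffect(() => {
  const hasPending = messages.some((m) =>
    m.parts?.some(
      (part) => isToolUIPart(part) && part.state === "input-available"
    )
  );
  if (!hasPending) return;

  const timer = setTimeout(() => {
    for (const m of messages) {
      for (const part of m.parts ?? []) {
        if (isToolUIPart(part) && part.state === "input-available") {
          addToolResult({
            tool: getToolName(part),
            toolCallId: part.toolCallId,
            output: APPROVAL.NO
          });
        }
      }
    }
  }, 60_000);
  return () => clearTimeout(timer);
}, [messages, addToolResult]);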

Next steps