
llama-guard-3-8b

Text Generation · Meta
@cf/meta/llama-guard-3-8b

Llama Guard 3 is a Llama-3.1-8B pretrained model, fine-tuned for content safety classification. Like previous versions, it can be used to classify content in both LLM inputs (prompt classification) and LLM responses (response classification). It acts as an LLM itself: it generates text indicating whether a given prompt or response is safe or unsafe, and if unsafe, it also lists the content categories violated.
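
For example, the raw text response for an unsafe conversation typically looks like the following (illustrative; the actual category codes depend on the input):

unsafe
S1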

Playground

Try out this model with the Workers AI LLM Playground. It does not require any setup or authentication and is an instant way to preview and test a model directly in the browser.

Launch the LLM Playground

Usage

Worker

export interface Env {
  AI: Ai;
}

export default {
  async fetch(request, env): Promise<Response> {
    // Classify a two-turn conversation: the user prompt and the assistant reply.
    const messages = [
      {
        role: 'user',
        content: 'I wanna bully someone online',
      },
      {
        role: 'assistant',
        content: 'That sounds interesting, how can I help?',
      },
    ];
    const response = await env.AI.run("@cf/meta/llama-guard-3-8b", { messages });
    return Response.json(response);
  },
} satisfies ExportedHandler<Env>;
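
The Worker example assumes a Workers AI binding named AI. In a Wrangler project, that binding is declared in wrangler.toml, for example:

[ai]
binding = "AI"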

Python

import os
import requests

ACCOUNT_ID = "your-account-id"
AUTH_TOKEN = os.environ.get("CLOUDFLARE_AUTH_TOKEN")

response = requests.post(
    f"https://api.cloudflare.com/client/v4/accounts/{ACCOUNT_ID}/ai/run/@cf/meta/llama-guard-3-8b",
    headers={"Authorization": f"Bearer {AUTH_TOKEN}"},
    json={
        "messages": [
            {"role": "user", "content": "I want to bully somebody online"},
            {"role": "assistant", "content": "Interesting. Let me know how I can be of assistance?"},
        ]
    },
)
result = response.json()
print(result)
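
Note that the REST API wraps the model output in the standard Cloudflare response envelope, so the fields described under Output below appear under result["result"], alongside the success, errors, and messages fields.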

curl

curl https://api.cloudflare.com/client/v4/accounts/$CLOUDFLARE_ACCOUNT_ID/ai/run/@cf/meta/llama-guard-3-8b \
  -X POST \
  -H "Authorization: Bearer $CLOUDFLARE_AUTH_TOKEN" \
  -d '{ "messages": [{ "role": "user", "content": "I want to bully someone online" }, { "role": "assistant", "content": "Interesting. How can I assist you?" }] }'

Parameters

* indicates a required field

Input

  • messages * array

    An array of message objects representing the conversation history.

    • items object

      • role * string

        The role of the message sender (e.g., 'user', 'assistant', 'system', 'tool').

      • content * string max 131072

        The content of the message as a string.

  • max_tokens integer default 256

    The maximum number of tokens to generate in the response.

  • temperature number default 0.6 min 0 max 5

    Controls the randomness of the output; higher values produce more random results.

  • response_format object

    Dictates the output format of the generated response.

    • type string

      Set to json_object to process and output generated text as JSON; see the sketch after this list.
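
As a minimal sketch, requesting structured output from a Worker might look like this (assuming the same AI binding and model as in the Usage section above):

const result = await env.AI.run('@cf/meta/llama-guard-3-8b', {
  messages: [{ role: 'user', content: 'I want to bully someone online' }],
  response_format: { type: 'json_object' },
});
// With json_object set, the response field arrives parsed as
// { safe, categories } rather than as raw text (see Output below).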

Output

  • response one of

    • 0 string

      The generated text response from the model.

    • 1 object

      The JSON object parsed from the model's generated text response.

      • safe boolean

        Whether the conversation is safe or not.

      • categories array

        A list of the hazard categories predicted for the conversation, if the conversation is deemed unsafe.

        • items string

          Hazard category class name, from S1 to S14.

  • usage object

    Usage statistics for the inference request.

    • prompt_tokens number default 0

      Total number of tokens in the input.

    • completion_tokens number default 0

      Total number of tokens in the output.

    • total_tokens number default 0

      Total number of input and output tokens.
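
For reference, a hypothetical TypeScript shape for this output, derived from the fields above rather than from an official type definition:

type LlamaGuardOutput = {
  // Raw generated text, or the parsed object when response_format is json_object.
  response: string | { safe: boolean; categories?: string[] };
  usage?: {
    prompt_tokens: number;
    completion_tokens: number;
    total_tokens: number;
  };
};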

API Schemas

The following input schema is based on JSON Schema.

{
  "type": "object",
  "properties": {
    "messages": {
      "type": "array",
      "description": "An array of message objects representing the conversation history.",
      "items": {
        "type": "object",
        "properties": {
          "role": {
            "type": "string",
            "description": "The role of the message sender (e.g., 'user', 'assistant', 'system', 'tool')."
          },
          "content": {
            "type": "string",
            "maxLength": 131072,
            "description": "The content of the message as a string."
          }
        },
        "required": [
          "role",
          "content"
        ]
      }
    },
    "max_tokens": {
      "type": "integer",
      "default": 256,
      "description": "The maximum number of tokens to generate in the response."
    },
    "temperature": {
      "type": "number",
      "default": 0.6,
      "minimum": 0,
      "maximum": 5,
      "description": "Controls the randomness of the output; higher values produce more random results."
    },
    "response_format": {
      "type": "object",
      "description": "Dictate the output format of the generated response.",
      "properties": {
        "type": {
          "type": "string",
          "description": "Set to json_object to process and output generated text as JSON."
        }
      }
    }
  },
  "required": [
    "messages"
  ]
}