discolm-german-7b-v1-awq Beta
Text Generation • thebloke • Hosted

DiscoLM German 7b is a Mistral-based large language model with a focus on German-language applications. AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization.
| Model Info | |
|---|---|
| Deprecated | 10/1/2025 |
| Context Window | 4,096 tokens |
| More information | link |
| Beta | Yes |
Playground
Try out this model with the Workers AI LLM Playground. It requires no setup or authentication and is an instant way to preview and test a model directly in the browser.
Launch the LLM Playground

Usage
Worker - Streaming

```ts
export interface Env {
  AI: Ai;
}

export default {
  async fetch(request, env): Promise<Response> {
    const messages = [
      { role: "system", content: "You are a friendly assistant" },
      {
        role: "user",
        content: "What is the origin of the phrase Hello, World",
      },
    ];

    const stream = await env.AI.run("@cf/thebloke/discolm-german-7b-v1-awq", {
      messages,
      stream: true,
    });

    return new Response(stream, {
      headers: { "content-type": "text/event-stream" },
    });
  },
} satisfies ExportedHandler<Env>;
```

Worker

```ts
export interface Env {
  AI: Ai;
}

export default {
  async fetch(request, env): Promise<Response> {
    const messages = [
      { role: "system", content: "You are a friendly assistant" },
      {
        role: "user",
        content: "What is the origin of the phrase Hello, World",
      },
    ];

    const response = await env.AI.run("@cf/thebloke/discolm-german-7b-v1-awq", {
      messages,
    });

    return Response.json(response);
  },
} satisfies ExportedHandler<Env>;
```

Python

```py
import os
import requests

ACCOUNT_ID = "your-account-id"
AUTH_TOKEN = os.environ.get("CLOUDFLARE_AUTH_TOKEN")

prompt = "Tell me all about PEP-8"
response = requests.post(
    f"https://api.cloudflare.com/client/v4/accounts/{ACCOUNT_ID}/ai/run/@cf/thebloke/discolm-german-7b-v1-awq",
    headers={"Authorization": f"Bearer {AUTH_TOKEN}"},
    json={
        "messages": [
            {"role": "system", "content": "You are a friendly assistant"},
            {"role": "user", "content": prompt},
        ]
    },
)
result = response.json()
print(result)
```

curl

```sh
curl https://api.cloudflare.com/client/v4/accounts/$CLOUDFLARE_ACCOUNT_ID/ai/run/@cf/thebloke/discolm-german-7b-v1-awq \
  -X POST \
  -H "Authorization: Bearer $CLOUDFLARE_AUTH_TOKEN" \
  -d '{ "messages": [{ "role": "system", "content": "You are a friendly assistant" }, { "role": "user", "content": "Why is pizza so good" }]}'
```

Parameters
Input

| Parameter | Type | Constraints | Description |
|---|---|---|---|
| prompt | string | required, minLength: 1 | The input text prompt for the model to generate a response. |
| lora | string | | Name of the LoRA (Low-Rank Adaptation) model to fine-tune the base model. |
| response_format | object | | |
| raw | boolean | default: false | If true, a chat template is not applied and you must adhere to the specific model's expected formatting. |
| stream | boolean | default: false | If true, the response will be streamed back incrementally using SSE, Server Sent Events. |
| max_tokens | integer | default: 256 | The maximum number of tokens to generate in the response. |
| temperature | number | default: 0.6, minimum: 0, maximum: 5 | Controls the randomness of the output; higher values produce more random results. |
| top_p | number | minimum: 0.001, maximum: 1 | Adjusts the creativity of the AI's responses by controlling how many possible words it considers. Lower values make outputs more predictable; higher values allow for more varied and creative responses. |
| top_k | integer | minimum: 1, maximum: 50 | Limits the AI to choose from the top 'k' most probable words. Lower values make responses more focused; higher values introduce more variety and potential surprises. |
| seed | integer | minimum: 1, maximum: 9999999999 | Random seed for reproducibility of the generation. |
| repetition_penalty | number | minimum: 0, maximum: 2 | Penalty for repeated tokens; higher values discourage repetition. |
| frequency_penalty | number | minimum: -2, maximum: 2 | Decreases the likelihood of the model repeating the same lines verbatim. |
| presence_penalty | number | minimum: -2, maximum: 2 | Increases the likelihood of the model introducing new topics. |
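As a sketch of how these parameters combine in a request, the Worker below overrides several sampling defaults in a single `env.AI.run()` call. The parameter names come from the table above; the specific values (and the German example prompt) are illustrative only, not recommended settings for this model:

```ts
export interface Env {
  AI: Ai;
}

export default {
  async fetch(request, env): Promise<Response> {
    // Same model binding as the Usage examples above; only the
    // generation parameters differ. All values here are examples.
    const response = await env.AI.run("@cf/thebloke/discolm-german-7b-v1-awq", {
      messages: [
        { role: "system", content: "Du bist ein hilfreicher Assistent." },
        { role: "user", content: "Erkläre AWQ-Quantisierung in zwei Sätzen." },
      ],
      max_tokens: 128, // cap the response length (default: 256)
      temperature: 0.3, // less random than the 0.6 default
      top_p: 0.9, // nucleus sampling, within the 0.001–1 range
      top_k: 40, // consider only the 40 most probable tokens
      seed: 42, // fixed seed for more reproducible output
      repetition_penalty: 1.1, // mild penalty against repeated tokens
    });

    return Response.json(response);
  },
} satisfies ExportedHandler<Env>;
```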
Output

Synchronous — Send a request and receive a complete response

| Field | Type | Description |
|---|---|---|
| response | string | The generated text response from the model |
| usage | object | Usage statistics for the inference request |
| tool_calls | array | An array of tool call requests made during the response generation |

Streaming — Send a request with `stream: true` and receive server-sent events

string, format: binary
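The streamed body is a server-sent event stream in which each event's `data:` payload is typically a JSON object carrying a `response` text fragment, terminated by a `data: [DONE]` sentinel. The sketch below shows one way a client might accumulate the fragments; `readEventStream` is a hypothetical helper, and it assumes this common Workers AI framing plus the standard fetch/streams APIs available in browsers and Workers:

```ts
// Minimal SSE reader for a streaming response. Assumes each event is a
// "data: {json}" line with a "response" text fragment and that the stream
// ends with a "data: [DONE]" sentinel.
async function readEventStream(res: Response): Promise<string> {
  const reader = res.body!.pipeThrough(new TextDecoderStream()).getReader();
  let buffered = "";
  let text = "";

  while (true) {
    const { value, done } = await reader.read();
    if (done) break;
    buffered += value;

    // Events are separated by blank lines; keep any trailing partial event.
    const events = buffered.split("\n\n");
    buffered = events.pop() ?? "";

    for (const event of events) {
      const data = event.replace(/^data: /, "").trim();
      if (data === "[DONE]") return text;
      text += JSON.parse(data).response ?? "";
    }
  }
  return text;
}
```

In a Worker you can equally forward the stream to the client unchanged, as the streaming Usage example above does, and leave parsing to the consumer.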