Example mitigation rules
The following example custom rule will block requests with an LLM prompt that tries to obtain PII of a specific category:
- When incoming requests match:

  | Field | Operator | Value |
  | --- | --- | --- |
  | LLM PII Categories | is in | Credit Card |

  If you use the Expression Editor, enter the following expression:

  `(any(cf.llm.prompt.pii_categories[*] in {"CREDIT_CARD"}))`

- Action: Block
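If you manage custom rules through the Cloudflare Rulesets API instead of the dashboard, you can deploy the same rule programmatically. The following is a minimal Python sketch, assuming a zone ID and an API token with WAF edit permission are available as environment variables (the variable names are illustrative):

```py
import os

import requests

ZONE_ID = os.environ["CLOUDFLARE_ZONE_ID"]      # illustrative env var names
API_TOKEN = os.environ["CLOUDFLARE_API_TOKEN"]  # token needs Zone WAF edit permission

# Entrypoint ruleset for the zone's custom rules phase.
URL = (
    f"https://api.cloudflare.com/client/v4/zones/{ZONE_ID}"
    "/rulesets/phases/http_request_firewall_custom/entrypoint"
)

rule = {
    "description": "Block prompts asking for credit card PII",
    "expression": '(any(cf.llm.prompt.pii_categories[*] in {"CREDIT_CARD"}))',
    "action": "block",
}

# Caution: PUT replaces the phase's entire rule list, so include any
# existing rules you want to keep alongside this one.
response = requests.put(
    URL,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json={"rules": [rule]},
)
response.raise_for_status()
print(response.json()["result"]["id"])
```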
The following example custom rule will block requests with an LLM prompt containing unsafe content of specific categories:
- When incoming requests match:

  | Field | Operator | Value |
  | --- | --- | --- |
  | LLM Unsafe topic categories | is in | S1: Violent Crimes, S10: Hate |

  If you use the Expression Editor, enter the following expression:

  `(any(cf.llm.prompt.unsafe_topic_categories[*] in {"S1" "S10"}))`

- Action: Block
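For context on how this expression behaves: `cf.llm.prompt.unsafe_topic_categories` is an array field, and the rule matches when any detected category appears in the listed set. The sketch below approximates that evaluation locally; it is illustrative only, not Cloudflare's rule engine:

```py
# Illustrative approximation of the any(... in {...}) expression above;
# this is not Cloudflare's rule engine.
BLOCKED = {"S1", "S10"}  # S1: Violent Crimes, S10: Hate

def rule_matches(detected_categories: list[str]) -> bool:
    # Fires if at least one detected category is in the blocked set.
    return any(category in BLOCKED for category in detected_categories)

assert rule_matches(["S10", "S3"])  # hate content detected -> block
assert not rule_matches(["S2"])     # other categories only -> no match
assert not rule_matches([])         # no unsafe topics detected -> no match
```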
The following example custom rule will block requests with an LLM prompt whose injection score is below 20. Lower scores indicate a higher likelihood of prompt injection, so using a low score threshold in the rule helps avoid false positives.
- When incoming requests match:

  | Field | Operator | Value |
  | --- | --- | --- |
  | LLM Injection score | less than | 20 |

  If you use the Expression Editor, enter the following expression:

  `(cf.llm.prompt.injection_score < 20)`

- Action: Block
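Because a PUT to the phase entrypoint replaces the whole rule list, you would typically deploy all three example rules in a single call. A sketch under the same assumptions as the first example:

```py
import os

import requests

ZONE_ID = os.environ["CLOUDFLARE_ZONE_ID"]
API_TOKEN = os.environ["CLOUDFLARE_API_TOKEN"]
URL = (
    f"https://api.cloudflare.com/client/v4/zones/{ZONE_ID}"
    "/rulesets/phases/http_request_firewall_custom/entrypoint"
)

# All three example rules from this section, deployed together.
rules = [
    {
        "description": "Block prompts asking for credit card PII",
        "expression": '(any(cf.llm.prompt.pii_categories[*] in {"CREDIT_CARD"}))',
        "action": "block",
    },
    {
        "description": "Block prompts with violent or hateful content",
        "expression": '(any(cf.llm.prompt.unsafe_topic_categories[*] in {"S1" "S10"}))',
        "action": "block",
    },
    {
        "description": "Block likely prompt injection attempts",
        "expression": "(cf.llm.prompt.injection_score < 20)",
        "action": "block",
    },
]

response = requests.put(
    URL,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json={"rules": rules},
)
response.raise_for_status()
```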