
Example mitigation rules

Block requests with specific PII category in LLM prompt

The following example custom rule blocks requests whose LLM prompt attempts to obtain personally identifiable information (PII) of a specific category:

  • When incoming requests match:

    Field                 Operator   Value
    LLM PII Categories    is in      Credit Card

    If you use the Expression Editor, enter the following expression:
    (any(cf.llm.prompt.pii_categories[*] in {"CREDIT_CARD"}))

  • Action: Block
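
The `any()` comparison in the expression walks the array of PII categories detected in the prompt and matches if at least one entry is in the listed set. The following Python sketch illustrates that matching logic only; it is not Cloudflare code, and the sample category lists are hypothetical.

```python
# Illustration of how the rule expression evaluates: the rule matches when
# any detected PII category appears in the blocked set.
BLOCKED_PII_CATEGORIES = {"CREDIT_CARD"}

def rule_matches(detected_categories: list[str]) -> bool:
    # Equivalent to: any(cf.llm.prompt.pii_categories[*] in {"CREDIT_CARD"})
    return any(cat in BLOCKED_PII_CATEGORIES for cat in detected_categories)

print(rule_matches(["EMAIL_ADDRESS", "CREDIT_CARD"]))  # True -> request blocked
print(rule_matches(["PHONE_NUMBER"]))                  # False -> rule does not match
```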

Block requests with specific unsafe content categories in LLM prompt

The following example custom rule blocks requests whose LLM prompt contains unsafe content in specific categories:

  • When incoming requests match:

    Field                         Operator   Value
    LLM Unsafe topic categories   is in      S1: Violent Crimes, S10: Hate

    If you use the Expression Editor, enter the following expression:
    (any(cf.llm.prompt.unsafe_topic_categories[*] in {"S1" "S10"}))

  • Action: Block
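
Custom rules like this one can also be created programmatically with the Rulesets API. The sketch below is a minimal example, assuming a zone ID and an API token with permission to edit the zone's WAF configuration; the `ZONE_ID` and `API_TOKEN` values are placeholders you must replace.

```python
import requests

API_BASE = "https://api.cloudflare.com/client/v4"
ZONE_ID = "your_zone_id"      # placeholder
API_TOKEN = "your_api_token"  # placeholder; token needs zone WAF edit permission
HEADERS = {"Authorization": f"Bearer {API_TOKEN}", "Content-Type": "application/json"}

# Look up the entry point ruleset of the custom rules phase.
entrypoint = requests.get(
    f"{API_BASE}/zones/{ZONE_ID}/rulesets/phases/http_request_firewall_custom/entrypoint",
    headers=HEADERS,
).json()["result"]

# Append the blocking rule to that ruleset.
rule = {
    "description": "Block unsafe LLM prompt topics",
    "expression": '(any(cf.llm.prompt.unsafe_topic_categories[*] in {"S1" "S10"}))',
    "action": "block",
}
resp = requests.post(
    f"{API_BASE}/zones/{ZONE_ID}/rulesets/{entrypoint['id']}/rules",
    headers=HEADERS,
    json=rule,
)
resp.raise_for_status()
```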

Block requests with prompt injection attempt in LLM prompt

The following example custom rule blocks requests with an injection score below 20. Lower scores indicate a higher likelihood that the prompt is an injection attempt, so using a low score threshold in the rule helps avoid false positives.

  • When incoming requests match:

    Field                 Operator    Value
    LLM Injection score   less than   20

    If you use the Expression Editor, enter the following expression:
    (cf.llm.prompt.injection_score < 20)

  • Action: Block
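
As a rough illustration of the threshold trade-off, the Python sketch below mimics the comparison the rule performs. Only the `score < 20` logic comes from the rule above; the example scores are made up.

```python
# Mimics the rule expression (cf.llm.prompt.injection_score < 20).
# Lower scores indicate a more likely injection attempt, so a low
# threshold blocks only high-confidence detections, which reduces
# false positives on benign prompts.
THRESHOLD = 20

def should_block(injection_score: int) -> bool:
    return injection_score < THRESHOLD

for score in (5, 19, 20, 85):  # hypothetical scores
    print(score, "block" if should_block(score) else "allow")
```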