LLM Prompt Scrubbing

Worka PII is commonly used as a safety layer in front of LLMs. The pipeline detects sensitive identifiers in the prompt and applies deterministic anonymization operators to those spans before the request is sent to an external model.

Pattern

  1. Capture user input.
  2. Run Analyzer::analyze on the text.
  3. Apply the anonymizer with your policy.
  4. Send the redacted text to the model.
  5. Log the audit items for compliance.

This keeps personal data out of requests sent to external models while preserving enough of the prompt's meaning for downstream tasks. The sketch below walks through the same five steps end to end.
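
The following is a minimal, self-contained Rust sketch of the pattern, not the Worka PII API: the only identifier taken from this page is `Analyzer::analyze`, and the `Finding` struct, the regex-based detector, the `<EMAIL_ADDRESS>` replacement tag, and the `regex` crate dependency are all illustrative assumptions standing in for the analyzer and anonymizer.

```rust
// Illustrative sketch of the scrubbing pattern above. It does NOT call the
// Worka PII API; `Finding`, the regex detector, and the replacement tags are
// stand-ins for Analyzer::analyze and the anonymizer's operators.

use regex::Regex;

/// One detected entity: where it was found and what kind it is.
struct Finding {
    start: usize,
    end: usize,
    entity: &'static str,
}

/// Step 2: detect sensitive identifiers (here, only email addresses).
fn analyze(text: &str, email_re: &Regex) -> Vec<Finding> {
    email_re
        .find_iter(text)
        .map(|m| Finding { start: m.start(), end: m.end(), entity: "EMAIL_ADDRESS" })
        .collect()
}

/// Steps 3 and 5: apply a deterministic replace operator and collect audit items.
fn anonymize(text: &str, findings: &[Finding]) -> (String, Vec<String>) {
    let mut redacted = String::from(text);
    let mut audit = Vec::new();
    // Replace from the end of the string so earlier offsets stay valid.
    for f in findings.iter().rev() {
        audit.push(format!("{} replaced at {}..{}", f.entity, f.start, f.end));
        redacted.replace_range(f.start..f.end, &format!("<{}>", f.entity));
    }
    (redacted, audit)
}

fn main() {
    // Step 1: capture user input.
    let prompt = "Contact me at jane.doe@example.com about the invoice.";

    let email_re = Regex::new(r"[\w.+-]+@[\w-]+\.[\w.-]+").unwrap();
    let findings = analyze(prompt, &email_re);
    let (redacted, audit) = anonymize(prompt, &findings);

    // Step 4: send the redacted text to the model instead of the raw prompt.
    println!("to model: {redacted}");

    // Step 5: persist the audit items for compliance review.
    for item in &audit {
        println!("audit: {item}");
    }
}
```

In real usage the detection and replacement would come from the analyzer and anonymizer with your configured policy; the point of the sketch is the ordering of the steps, in particular that redaction happens before the request leaves your boundary and that the audit trail is recorded alongside it.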