LLM Prompt Scrubbing
Worka PII is commonly used as a safety layer in front of LLMs. The pipeline detects sensitive identifiers and applies deterministic operators before requests are sent to external models.
Pattern
- Capture user input.
- Run Analyzer::analyze on the text.
- Apply the anonymizer with your policy.
- Send the redacted text to the model.
- Log the audit items for compliance.
This keeps prompts safe while preserving semantics for downstream tasks.
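Below is a minimal, self-contained sketch of this pipeline in Rust. Only Analyzer::analyze is named above; the Finding, AuditItem, and Anonymizer types, the anonymize method, and the hard-coded email detection are illustrative stand-ins for the Worka PII API, not its confirmed interface.

```rust
// Sketch of the scrub-before-send pattern. The types below are
// placeholders for the Worka PII analyzer/anonymizer (assumed, not
// the library's actual API); swap them for the real ones when wiring
// this into a production pipeline.

struct Finding {
    entity: String, // e.g. "EMAIL_ADDRESS"
    start: usize,
    end: usize,
}

struct AuditItem {
    entity: String,
    operator: String,
}

struct Analyzer;

impl Analyzer {
    // Stand-in detector: finds one hard-coded email for illustration.
    fn analyze(&self, text: &str) -> Vec<Finding> {
        text.find("alice@example.com")
            .map(|start| vec![Finding {
                entity: "EMAIL_ADDRESS".into(),
                start,
                end: start + "alice@example.com".len(),
            }])
            .unwrap_or_default()
    }
}

struct Anonymizer;

impl Anonymizer {
    // Deterministic operator: replace each finding with a typed placeholder.
    fn anonymize(&self, text: &str, findings: &[Finding]) -> (String, Vec<AuditItem>) {
        let mut out = text.to_string();
        let mut audit = Vec::new();
        // Apply replacements right-to-left so earlier offsets stay valid.
        for f in findings.iter().rev() {
            out.replace_range(f.start..f.end, &format!("<{}>", f.entity));
            audit.push(AuditItem { entity: f.entity.clone(), operator: "replace".into() });
        }
        (out, audit)
    }
}

fn main() {
    // 1. Capture user input.
    let prompt = "Summarise the email from alice@example.com about the Q3 report.";
    // 2. Run the analyzer on the text.
    let findings = Analyzer.analyze(prompt);
    // 3. Apply the anonymizer with your policy.
    let (redacted, audit) = Anonymizer.anonymize(prompt, &findings);
    // 4. Send the redacted text to the model (stubbed as a print here).
    println!("to model: {redacted}");
    // 5. Log the audit items for compliance.
    for item in &audit {
        println!("audit: {} -> {}", item.entity, item.operator);
    }
}
```

In a real integration the redacted text would go to the model client and the audit items to your compliance log rather than stdout; the right-to-left replacement order is only needed because the analyzer reports byte offsets against the original text.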