SanitizeAI

Stop leaking secrets to LLMs — local-first AI redaction

SanitizeAI is a local-first safety layer for AI workflows. It automatically detects and masks sensitive data like API keys, PII, and secrets before prompts are sent to LLMs. Unlike cloud-based solutions, all processing happens locally — no servers, no data retention, and no trust required. It works seamlessly in both your browser (ChatGPT, Claude, Gemini) and your IDE (VS Code / Cursor).
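The core idea is simple: scan a prompt against a ruleset of known secret and PII patterns on the local machine, swap each match for a placeholder, and only forward the masked text to the model. Below is a minimal TypeScript sketch of that approach; the pattern set, placeholder format, and `redactPrompt` helper are illustrative assumptions, not SanitizeAI's actual implementation or API.

```typescript
// Hypothetical sketch of local, pattern-based redaction before a prompt
// leaves the machine. Pattern names and redactPrompt() are illustrative.

type Redaction = { placeholder: string; original: string };

// A few common secret/PII shapes; a real ruleset would be far larger.
const PATTERNS: Array<{ label: string; regex: RegExp }> = [
  { label: "AWS_ACCESS_KEY", regex: /\bAKIA[0-9A-Z]{16}\b/g },
  { label: "OPENAI_API_KEY", regex: /\bsk-[A-Za-z0-9]{20,}\b/g },
  { label: "EMAIL", regex: /\b[\w.+-]+@[\w-]+\.[\w.-]+\b/g },
];

// Replace each match with a stable placeholder and keep a local map so the
// original values never leave the device.
function redactPrompt(prompt: string): { masked: string; redactions: Redaction[] } {
  const redactions: Redaction[] = [];
  let masked = prompt;
  for (const { label, regex } of PATTERNS) {
    masked = masked.replace(regex, (match) => {
      const placeholder = `[${label}_${redactions.length + 1}]`;
      redactions.push({ placeholder, original: match });
      return placeholder;
    });
  }
  return { masked, redactions };
}

// Example: only the masked text would be sent to the LLM.
const { masked, redactions } = redactPrompt(
  "Use key AKIAIOSFODNN7EXAMPLE to call the API as dev@example.com"
);
console.log(masked);
// "Use key [AWS_ACCESS_KEY_1] to call the API as [EMAIL_2]"
console.log(redactions); // local mapping, usable to restore values in responses
```

Because the mapping between placeholders and originals stays on the device, responses that mention a placeholder can be restored locally without the provider ever seeing the real value.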