
SanitizeAI
Stop leaking secrets to LLMs — local-first AI redaction
SanitizeAI is a local-first safety layer for AI workflows. It automatically detects and masks sensitive data like API keys, PII, and secrets before prompts are sent to LLMs. Unlike cloud-based solutions, all processing happens locally — no servers, no data retention, and no trust required. It works seamlessly in both your browser (ChatGPT, Claude, Gemini) and your IDE (VS Code / Cursor).
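The core idea is a pure, local transformation: detect spans that look like secrets and swap them for placeholders before the prompt leaves the machine. Below is a minimal TypeScript sketch of that shape; the pattern set, placeholder format, and `sanitize` function are illustrative assumptions, not SanitizeAI's actual rules or API.

```typescript
// A minimal sketch of local detect-and-mask redaction.
// Patterns and placeholder format are hypothetical examples,
// not SanitizeAI's real detection rules.

type Redaction = { placeholder: string; original: string };

// Illustrative patterns for a few common secret shapes.
const PATTERNS: { label: string; regex: RegExp }[] = [
  { label: "AWS_ACCESS_KEY", regex: /\bAKIA[0-9A-Z]{16}\b/g },
  { label: "OPENAI_API_KEY", regex: /\bsk-[A-Za-z0-9]{20,}\b/g },
  { label: "EMAIL", regex: /\b[\w.+-]+@[\w-]+\.[\w.-]+\b/g },
];

// Replace each match with a stable placeholder and keep the mapping
// locally, so model responses can be un-masked afterwards.
// Nothing in this function touches the network.
function sanitize(prompt: string): { masked: string; redactions: Redaction[] } {
  const redactions: Redaction[] = [];
  let masked = prompt;
  for (const { label, regex } of PATTERNS) {
    masked = masked.replace(regex, (match) => {
      const placeholder = `[${label}_${redactions.length + 1}]`;
      redactions.push({ placeholder, original: match });
      return placeholder;
    });
  }
  return { masked, redactions };
}

// Example: the key and address never appear in the text sent to the LLM.
const { masked } = sanitize(
  "Call the API with key AKIAIOSFODNN7EXAMPLE and email me at dev@example.com"
);
console.log(masked);
// -> "Call the API with key [AWS_ACCESS_KEY_1] and email me at [EMAIL_2]"
```

Because the mapping from placeholders back to originals stays on the local machine, the same layer can restore redacted values in the model's response without the provider ever seeing them.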

