QuiGuard - Self-hosted proxy that scrubs secrets from AI agent tool calls.
QuiGuard is a security layer for AI agents. It sits between your agent and the LLM to ensure sensitive data (PII, API keys) never leaves your network.
Core features:
- Tool call scrubbing: detects secrets inside JSON arguments before execution.
- Inbound filtering: sanitizes tool responses (Jira results, SQL rows) before they hit the LLM context.
- Clean logs: keeps your observability tools (LangSmith/Arize) free of PII.
- Self-hosted: Docker-ready.
Stop debugging with redacted logs. Build agents safely.
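To make "tool call scrubbing" concrete, here is a minimal sketch of the idea: walk the JSON arguments of a tool call and replace secret-shaped strings before the call executes. The patterns, function names, and `[REDACTED]` tag are illustrative assumptions, not QuiGuard's actual rules or API.

```python
import json
import re

# Assumed example patterns for common key formats (not QuiGuard's real ruleset).
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),   # OpenAI-style API keys
    re.compile(r"AKIA[0-9A-Z]{16}"),      # AWS access key IDs
    re.compile(r"ghp_[A-Za-z0-9]{36}"),   # GitHub personal access tokens
]

def scrub_tool_call(args_json: str) -> str:
    """Recursively replace secret-shaped substrings in tool-call arguments."""
    def scrub(value):
        if isinstance(value, str):
            for pattern in SECRET_PATTERNS:
                value = pattern.sub("[REDACTED]", value)
            return value
        if isinstance(value, dict):
            return {k: scrub(v) for k, v in value.items()}
        if isinstance(value, list):
            return [scrub(v) for v in value]
        return value  # numbers, booleans, null pass through untouched

    return json.dumps(scrub(json.loads(args_json)))
```

The recursive walk matters because agent frameworks nest arguments arbitrarily deep; scanning only top-level strings would miss a key buried inside a list of config objects.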

Replies
👋I built QuiGuard because I kept finding myself pasting customer data into ChatGPT for analysis and realizing — wait, this is technically a GDPR violation.
The problem is real: every time your app sends a prompt containing someone's name, email, SSN, or health info to an LLM, that data leaves your control. Even if the provider promises not to train on it, you're still transmitting PII to a third party.
QuiGuard fixes this with a simple proxy architecture:
1. Your app sends requests to QuiGuard instead of directly to the LLM
2. QuiGuard detects PII using NLP (not just regex — we use spaCy's large English model for high accuracy)
3. PII is redacted/masked/faked before forwarding to the LLM
4. The LLM response is processed to restore original values
5. Your app gets the answer it expected — without ever exposing real data
Key features:
- 19+ PII entity types detected (personal, financial, government, healthcare, technical)
- 5 action modes: redact, mask, fake, block, warn
- Custom regex patterns for domain-specific PII
- Agent tool call sanitization (for LangChain/AutoGPT/CrewAI users)
- Real-time compliance audit ledger
- No-code policy editor
- Model routing (auto-switches between fast and reasoning models)
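For a sense of how the five action modes differ, here is a hedged sketch of applying one mode to one detected entity. The function name, mode semantics, and fake-value table are assumptions for illustration, not QuiGuard's actual policy engine.

```python
# Illustrative dispatch over the five action modes (assumed behavior).
FAKE_VALUES = {"EMAIL": "user@example.com", "PHONE": "+1-555-0100"}

def apply_action(mode: str, value: str, entity_type: str) -> str:
    if mode == "redact":
        return f"[{entity_type}]"                  # drop the value entirely
    if mode == "mask":
        return value[0] + "*" * (len(value) - 1)   # keep only the first char
    if mode == "fake":
        return FAKE_VALUES.get(entity_type, "REDACTED")  # plausible stand-in
    if mode == "block":
        raise ValueError(f"{entity_type} detected; request blocked")
    if mode == "warn":
        print(f"warning: {entity_type} forwarded unmodified")
        return value                               # log it, but let it through
    raise ValueError(f"unknown mode: {mode}")
```

In practice you would pick modes per entity type: `fake` keeps prompts natural-sounding for the LLM, `mask` preserves length and shape for format-sensitive downstream code, and `block` is the right default for high-risk types like SSNs.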
Who is this for:
- AI/ML engineers building apps with sensitive data
- Security teams needing AI compliance
- Compliance officers dealing with GDPR/CCPA/HIPAA
- Support teams using AI for ticket analysis
- Anyone sending real user data to LLMs
Free for up to 1,000 requests/month. Takes 5 minutes to set up with Docker.
Happy to answer any questions! AMA 🛡️