QuiGuard is a security layer for AI Agents. It sits between your agent and the LLM to ensure sensitive data (PII, API Keys) never leaves your network.
Core Features:
Tool Call Scrubbing: Detects secrets inside JSON arguments before execution.
Inbound Filtering: Sanitizes tool responses (Jira/SQL data) before they hit the LLM context.
Clean Logs: Ensures your observability tools (LangSmith/Arize) stay clean of PII.
Self-Hosted: Docker-ready.
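Tool call scrubbing might work along these lines. This is a minimal sketch, not QuiGuard's actual implementation; the pattern set and function name are illustrative assumptions:

```python
import json
import re

# Illustrative patterns only -- a real scrubber would ship a far larger,
# well-tested rule set (API keys, tokens, phone numbers, etc.).
SECRET_PATTERNS = {
    "aws_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
}

def scrub_tool_call(arguments_json: str) -> str:
    """Replace detected secrets in a tool call's JSON arguments with placeholders."""
    args = json.loads(arguments_json)

    def scrub(value):
        # Walk the JSON structure and rewrite string leaves.
        if isinstance(value, str):
            for name, pattern in SECRET_PATTERNS.items():
                value = pattern.sub(f"[REDACTED_{name.upper()}]", value)
            return value
        if isinstance(value, dict):
            return {k: scrub(v) for k, v in value.items()}
        if isinstance(value, list):
            return [scrub(v) for v in value]
        return value

    return json.dumps(scrub(args))
```

The key idea is that the proxy rewrites the arguments before the tool executes, so neither the tool nor any downstream log ever sees the raw secret.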
Stop debugging with redacted logs. Build agents safely.
QuiGuard: Self-hosted proxy; scrubs secrets from AI Agent tool calls.
Sammegh Banjara left a comment
Hey Hunters! 👋 I built QuiGuard after realizing my own agent traces were filling up with raw secrets and customer data. I noticed that while we filtered prompts, we often ignored the data coming back from tools (like a Jira ticket or SQL row). That data was going straight into my logs and the LLM context. QuiGuard is a simple proxy that acts as a "hygiene layer." It scrubs the data, replaces it...
IronLayer is an open-source proxy that prevents data leaks to AI models. It automatically redacts PII (emails, SSN) and blocks dangerous agent actions before they leave your network.
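Automatic PII redaction of this kind can be sketched with a simple pattern pass over responses before they are forwarded to the model. The patterns and function below are assumptions for illustration, not IronLayer's actual code:

```python
import re

# Hypothetical PII patterns; the SSN form here is the US NNN-NN-NNNN layout.
PII_PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
]

def redact(text: str) -> str:
    """Redact PII from text before it leaves the network toward an LLM."""
    for pattern, placeholder in PII_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text
```

Running the pass at the proxy layer means every request is filtered regardless of which client or agent framework produced it.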
IronLayer: The Security Layer for Generative AI.
Sammegh Banjara left a comment
Hey hunters! 👋 Sammegh here. I built IronLayer because I saw a gap in AI security—companies were banning AI tools because of data leaks. This is for developers and CTOs who want to use AI safely without leaking customer data. Happy to answer any technical questions!
