AI Safeguard - Prevent sensitive data leaks in popular LLMs like ChatGPT
A Chrome extension + admin dashboard to help teams safely use popular AI tools like ChatGPT, Copilot, and Gemini. AI Safeguard prevents sensitive data (like emails, credentials, or PII) from being sent to AI models — without blocking AI use.
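To make the detection step concrete, here's a minimal sketch of what prompt scanning could look like in a content script. The pattern set, the textarea selector, and the submit-interception approach are illustrative assumptions, not AI Safeguard's actual implementation; real chat UIs often send on a button click or keypress, so per-site handling would be needed.

```typescript
// Hypothetical content-script sketch: scan a prompt for sensitive data
// before it leaves the browser. Patterns and hooks are illustrative only.
const PATTERNS: Record<string, RegExp> = {
  email: /[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}/, // email address shape
  awsKey: /\bAKIA[0-9A-Z]{16}\b/,                          // AWS access key ID shape
  ssn: /\b\d{3}-\d{2}-\d{4}\b/,                            // US SSN shape
};

function findSensitive(prompt: string): string[] {
  return Object.entries(PATTERNS)
    .filter(([, re]) => re.test(prompt))
    .map(([name]) => name);
}

// Intercept the page's submit action in the capture phase so we run first.
document.addEventListener(
  "submit",
  (event) => {
    const prompt =
      document.querySelector<HTMLTextAreaElement>("textarea")?.value ?? "";
    const hits = findSensitive(prompt);
    if (hits.length > 0) {
      event.preventDefault(); // block this request, not the AI tool itself
      event.stopPropagation();
      alert(`Blocked: prompt appears to contain ${hits.join(", ")}`);
    }
  },
  true,
);
```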
Replies
Interesting idea.
But wouldn't it be possible to implement this entirely on the client side, so that users don't have to trust yet another company?
@konrad_sx Totally fair question — and thanks for raising it!
Right now, AI Safeguard does store prompt data in our backend. This gives admins full visibility into what's being flagged and lets them audit risky activity and fine-tune their org's security policies over time. That said, we take data privacy seriously: prompts are never shared or used beyond their intended purpose, and all data is encrypted both in transit and at rest.
We're also exploring a fully client-side version for organisations that require zero prompt storage. Ultimately, though, our goal is to give teams and organisations the flexibility to choose what fits their needs best and to strike the right balance between privacy and oversight.
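Since "encrypted in transit and at rest" comes up here, a rough sketch of what the at-rest half typically looks like on a Node backend is below. This is a generic AES-256-GCM pattern, not AI Safeguard's actual code, and it assumes the 32-byte key comes from a secrets manager rather than configuration.

```typescript
// Generic encryption-at-rest sketch using Node's built-in crypto module.
// Key management (KMS, rotation, access control) is out of scope here.
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

const ALGO = "aes-256-gcm";

function encryptPrompt(plaintext: string, key: Buffer) {
  const iv = randomBytes(12); // unique IV per stored record
  const cipher = createCipheriv(ALGO, key, iv);
  const ciphertext = Buffer.concat([
    cipher.update(plaintext, "utf8"),
    cipher.final(),
  ]);
  return { iv, ciphertext, tag: cipher.getAuthTag() };
}

function decryptPrompt(
  record: { iv: Buffer; ciphertext: Buffer; tag: Buffer },
  key: Buffer,
): string {
  const decipher = createDecipheriv(ALGO, key, record.iv);
  decipher.setAuthTag(record.tag); // GCM tag authenticates the ciphertext
  return Buffer.concat([
    decipher.update(record.ciphertext),
    decipher.final(),
  ]).toString("utf8");
}
```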
@lewisdunford Thanks for your answer. I understand the need to store prompt and flagging data on your backend to make it accessible to admins. But as I understand it, the data is decrypted and processed on your backend, which I think could be avoided entirely.
@konrad_sx Thanks for your feedback!
You're right: currently we do decrypt and process the prompt data on the backend. This is necessary for flagging, auditing, and giving admins visibility and control over the data in their organisation.
That said, I see your point that this could potentially be avoided. We're definitely open to improving things and will consider a more privacy-focused, client-side version in the future for those who prefer no data storage at all.
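To sketch what that zero-storage variant might look like: detection and redaction would run entirely in the extension, and only rule-level metadata, never the prompt text, would reach the admin dashboard. The endpoint and field names below are hypothetical.

```typescript
// Hypothetical client-side-only flagging: the prompt never leaves the
// browser; admins see only which rule fired, where, and when.
interface FlagEvent {
  rule: string;      // e.g. "email", "awsKey"
  tool: string;      // e.g. "chatgpt", "gemini"
  timestamp: number; // Unix epoch ms
}

async function reportFlag(event: FlagEvent): Promise<void> {
  await fetch("https://dashboard.example.com/api/flags", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(event), // metadata only, no prompt contents
  });
}
```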
Appreciate the feedback, and I'll keep it in mind as we continue to evolve!