Lewis Dunford

AI Safeguard - Prevent sensitive data leaks in popular LLMs like ChatGPT

A Chrome extension + admin dashboard to help teams safely use popular AI tools like ChatGPT, Copilot, and Gemini. AI Safeguard prevents sensitive data (like emails, credentials, or PII) from being sent to AI models — without blocking AI use.

Lewis Dunford
Maker
Hey everyone! 👋 I’m Lewis, and I started building AI Safeguard as a way to improve my MERN stack skills while tackling a real problem I was seeing more and more: sensitive data accidentally being shared with AI tools. So many teams want to use ChatGPT, Gemini, and Copilot, but they’re often blocked by security risks or compliance worries. This tool is designed to help with that without shutting down AI usage.

AI Safeguard includes:

✅ A Chrome extension that scans AI prompts for sensitive info
✅ An admin dashboard where orgs can set rules to prevent sensitive data (like emails, credentials, or PII) from being sent to AI models, without blocking AI use

I’d love to hear what you think. Feedback is golden 💬 Thanks for checking it out!

– Lewis 🚀
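P.S. For the technically curious: here’s a simplified sketch of one way this kind of prompt scanning can work (regex-based, with illustrative rule names and patterns only; this isn’t our production code):

```typescript
// Simplified, illustrative sketch of regex-based prompt scanning.
// Rule names and patterns are examples only, not a production rule set.

interface Finding {
  rule: string;  // which rule matched
  match: string; // the offending text
  index: number; // where it appears in the prompt
}

// Example detection rules; real rules would be configurable per org.
const RULES = [
  { name: "email",   pattern: /[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}/g },
  { name: "aws-key", pattern: /\bAKIA[0-9A-Z]{16}\b/g },
  { name: "ssn",     pattern: /\b\d{3}-\d{2}-\d{4}\b/g },
];

// Scan a prompt before it leaves the browser and collect any matches.
function scanPrompt(prompt: string): Finding[] {
  const findings: Finding[] = [];
  for (const { name, pattern } of RULES) {
    for (const m of prompt.matchAll(pattern)) {
      findings.push({ rule: name, match: m[0], index: m.index ?? 0 });
    }
  }
  return findings;
}

// Example: warn (or block the send) when anything is flagged.
const findings = scanPrompt("Reach me at jane@example.com about key AKIAABCDEFGHIJKLMNOP");
if (findings.length > 0) {
  console.warn("Sensitive data detected:", findings);
}
```

In practice you’d want more than plain regexes (context awareness, entropy checks for keys, and so on), but this shows the general shape.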
Konrad S.

Interesting idea.

But would it not be possible to implement this completely on the client side so that users don't have to trust yet another company?

Lewis Dunford

@konrad_sx Totally fair question — and thanks for raising it!


Right now, AI Safeguard does store prompt data in our backend. This gives admins full visibility into what’s being flagged, allows them to audit risky activity, and fine-tune their org’s security policies over time. That said, we take data privacy seriously — prompts are never shared or used beyond their intended purpose, and all data is encrypted both in transit and at rest.


We’re also exploring a fully client-side version for organisations that require zero prompt storage. Ultimately, though, our goal is to give teams the flexibility to choose what fits their needs best and to get the balance between privacy and oversight right for them.
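To make that concrete, here’s a hypothetical sketch of what a zero-storage mode could look like: scanning and redaction happen entirely in the browser, and only metadata (which rule fired, how many times, when) would ever reach the dashboard. This is illustrative only, not our current implementation.

```typescript
// Hypothetical sketch of a zero-prompt-storage mode (not the current implementation).
// Scanning and redaction happen entirely in the browser; only metadata leaves it.

interface FlagEvent {
  rule: string;      // which rule fired
  count: number;     // how many matches it produced
  timestamp: string; // when the prompt was scanned
  // deliberately no prompt text and no matched values
}

// Example rule set; a real deployment would load org-specific rules.
const RULES = [
  { name: "email",   pattern: /[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}/g },
  { name: "aws-key", pattern: /\bAKIA[0-9A-Z]{16}\b/g },
];

// Redact sensitive matches locally and build metadata-only events.
function flagLocally(prompt: string): { redacted: string; events: FlagEvent[] } {
  const events: FlagEvent[] = [];
  let redacted = prompt;
  for (const { name, pattern } of RULES) {
    const count = [...prompt.matchAll(pattern)].length;
    if (count > 0) {
      events.push({ rule: name, count, timestamp: new Date().toISOString() });
      redacted = redacted.replace(pattern, `[REDACTED:${name}]`);
    }
  }
  return { redacted, events };
}

// Only `events` would be sent to the backend; the prompt itself never leaves the browser.
const { redacted, events } = flagLocally("Contact me at jane@example.com");
console.log(redacted); // "Contact me at [REDACTED:email]"
console.log(events);   // [{ rule: "email", count: 1, timestamp: "..." }]
```

Admins would still see which rules fire and how often, just without access to the underlying prompts, which trades some auditing depth for much stronger privacy.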

Konrad S.

@lewisdunford Thanks for your answer. I understand the need to store prompt and flagging data on your backend to make it accessible to admins. But as I understand it, the data is decrypted and processed on your backend, which I think could be avoided entirely.

Lewis Dunford

@konrad_sx Thanks for your feedback!


You're right: currently we do decrypt and process the prompt data on the backend. This is necessary for flagging, auditing, and giving admins visibility and control over the data in their organisation.


That said, I see your point that this could potentially be avoided. We're definitely open to improving things and will consider a more privacy-focused, client-side version in the future for those who prefer no data storage at all.


Appreciate the feedback, and I'll keep it in mind as we continue to evolve!