Launched this week

Sequirly
Prevent accidental data leaks while using AI tools
115 followers
Sequirly warns you before you share sensitive data with AI tools, keeping your privacy and security intact. It scans prompts and document uploads in real time, detecting API keys, credentials, and personal information before they reach Claude, ChatGPT, Gemini, or any AI tool. All scanning happens locally in your browser.
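
A minimal sketch of what local, regex-based scanning can look like, assuming a set of labeled detector patterns run against the prompt before it is submitted (the detector names and patterns below are illustrative, not Sequirly's actual rules):

// Illustrative TypeScript sketch; the detectors are assumptions, not Sequirly's rules.
type Finding = { label: string; match: string; index: number };

const DETECTORS: { label: string; pattern: RegExp }[] = [
  { label: "AWS access key", pattern: /\bAKIA[0-9A-Z]{16}\b/g },
  { label: "OpenAI-style API key", pattern: /\bsk-[A-Za-z0-9]{20,}\b/g },
  { label: "Email address", pattern: /\b[\w.+-]+@[\w-]+\.[\w.]+\b/g },
  { label: "Private key block", pattern: /-----BEGIN [A-Z ]*PRIVATE KEY-----/g },
];

// Runs entirely in the browser; the prompt text never leaves the page.
function scanPrompt(text: string): Finding[] {
  const findings: Finding[] = [];
  for (const { label, pattern } of DETECTORS) {
    for (const m of text.matchAll(pattern)) {
      findings.push({ label, match: m[0], index: m.index ?? 0 });
    }
  }
  return findings;
}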

@qsudip_bhandari
This is brilliant! As AI adoption accelerates, I've heard cases where integrating tools like Claude Code with ad accounts led to account takeovers, so a service like this feels essential.
Does Sequirly require integration with Claude Code or other AI tools to work?
@kimberly_ross
What happens if the tool accidentally flags non-sensitive data? Can I override it? And are there guarantees that sensitive data won't reach AI tools?
Sequirly
@kimberly_ross
Hi Ross. Yes, you can override flags on non-sensitive data, and custom rules are part of our premium plan. You can set your own rules for what kind of data you consider sensitive and don't want to share with AI tools (these sit on top of our built-in rule set). If non-sensitive data does get flagged, you can override the flag, and the override is saved as a new rule for you.
As for guaranteeing that sensitive data won't reach AI tools: we block the prompt from being sent until you remove the sensitive data, keeping your data safe.
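A minimal sketch of that flow, assuming user rules layered on top of built-in ones and overrides stored as an allow list (all names and shapes here are illustrative, not Sequirly's implementation):

// Assumed design: user rules sit on top of built-in ones; an override becomes an allow-list entry.
type Rule = { id: string; pattern: RegExp; source: "built-in" | "user" };

const builtInRules: Rule[] = [
  { id: "api-key", pattern: /\bsk-[A-Za-z0-9]{20,}\b/, source: "built-in" },
];
const userRules: Rule[] = [];   // e.g. company-specific codenames the user adds
const allowList: string[] = []; // values the user has explicitly overridden

// A rule blocks the prompt unless the matched value has been allow-listed.
function flaggedRules(prompt: string): Rule[] {
  return [...builtInRules, ...userRules].filter((rule) => {
    const m = prompt.match(rule.pattern);
    return m !== null && !allowList.includes(m[0]);
  });
}

// Overriding a false positive records the value so it is not flagged again.
function overrideFinding(value: string): void {
  allowList.push(value);
}

// The prompt is handed to the AI tool only once nothing is flagged.
function trySend(prompt: string, send: (p: string) => void): boolean {
  if (flaggedRules(prompt).length > 0) return false; // blocked until edited or overridden
  send(prompt);
  return true;
}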
@giulio_l
Really cool idea! Have you considered automatically replacing sensitive information with templates? As you mentioned, sensitive data often slips through when people are moving fast, so this could help them move quickly without leaking anything sensitive.
Sequirly
@giulio_l Yes, we've thought about that as well, and it's what we're working on next. Glad we're thinking in the same direction. Thank you for your feedback.
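For illustration, template-based redaction can be as simple as swapping each detected value for a numbered placeholder and keeping a local map so the original can be restored later. A sketch under those assumptions (hypothetical, not a shipped feature):

// Hypothetical template redaction: detected values become numbered placeholders.
function redactWithTemplates(text: string): { redacted: string; restoreMap: Map<string, string> } {
  const patterns = [
    { label: "API_KEY", pattern: /\bsk-[A-Za-z0-9]{20,}\b/g },
    { label: "EMAIL", pattern: /\b[\w.+-]+@[\w-]+\.[\w.]+\b/g },
  ];
  const restoreMap = new Map<string, string>(); // placeholder -> original value
  let counter = 0;
  let redacted = text;
  for (const { label, pattern } of patterns) {
    redacted = redacted.replace(pattern, (match) => {
      const placeholder = `{{${label}_${++counter}}}`;
      restoreMap.set(placeholder, match);
      return placeholder;
    });
  }
  return { redacted, restoreMap };
}
// "email admin@example.com" becomes "email {{EMAIL_1}}", with the original kept
// locally so it can be swapped back into the model's reply if needed.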
@mx_mt
This solves a real problem. With so many teams pasting data into ChatGPT and other AI tools without thinking, having a safety layer makes total sense. Does it work with self-hosted LLMs too, or just cloud-based ones?
Sequirly
@mx_mt
It works with the popular cloud-based LLMs at the moment, but self-hosted LLMs are something we could look into down the road as well.
@john_oliver11
Can you also configure certain keywords for it to flag? Planning to have leadership take a look at this since I think it's going to help a lot. And are there any discounts after the trial period? :-))
Sequirly
@john_oliver11
Hi John,
Yes, we can configure keywords based on your company's needs. We can schedule a call to discuss further. What do you say?
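As a sketch of what per-company keyword rules could look like (the configuration shape here is hypothetical), each keyword becomes a literal, case-insensitive pattern:

// Hypothetical keyword configuration; the real config format may differ.
const companyKeywords = ["Project Falcon", "Q3 revenue draft", "internal-only"];

// Escape regex metacharacters so each keyword is matched literally.
const keywordPatterns = companyKeywords.map(
  (kw) => new RegExp(kw.replace(/[.*+?^${}()|[\]\\]/g, "\\$&"), "i")
);

function flaggedKeywords(prompt: string): string[] {
  return companyKeywords.filter((_, i) => keywordPatterns[i].test(prompt));
}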
@grege_rodrigues
Putting a safety layer in place before the prompt reaches the AI tool feels like the right place to solve the problem, rather than trying to monitor it after the fact.
Sequirly
@grege_rodrigues Exactly. Instead of fixing things after the leak, we can add a safety layer that will prevent the leak in the first place.
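For anyone curious how an extension can sit in front of the prompt at all, one plausible mechanism (an assumption, not a description of Sequirly's internals) is a content script that intercepts the send action in the capture phase:

// Assumed mechanism: a content script guards the prompt form on the AI tool's page.
// Listening in the capture phase means this runs before the site's own handlers.
function installGuard(form: HTMLFormElement, hasSensitiveData: (text: string) => boolean): void {
  form.addEventListener(
    "submit",
    (event) => {
      const box = form.querySelector("textarea");
      if (hasSensitiveData(box?.value ?? "")) {
        event.preventDefault();           // the prompt never reaches the AI tool
        event.stopImmediatePropagation(); // and the page's own submit logic never runs
      }
    },
    { capture: true }
  );
}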