Forums
We ran a red teaming test before launchβhereβs what surprised us (and what weβre still debating)
Before launching our AI assistant, we worked with a red teaming vendor (let s call them L ) to check how safe our product really was.
We were expecting a few corner cases or prompt injection attempts.
What we got was a pretty eye-opening report: infinite output loops, system prompt leaks, injection attacks that bypass moderation, and even scenarios where malicious content could be inserted by users via email inputs.
p/introduce-myself
Hey everyone! I m Soomin, a Product Manager based in Seoul.
Over the past few years, I ve worked on various AI-driven and mobile services, and I m currently building a personal AI assistant that helps people get things done more effortlessly.
Lately, I ve been exploring how emerging LLM technologies and services can be meaningfully integrated into products especially to narrow the gap between intention and execution.
I love connecting with folks who are passionate about the same space.



