We ran a red teaming test before launch: here's what surprised us (and what we're still debating)
Before launching our AI assistant, we worked with a red teaming vendor (let's call them "L") to check how safe our product really was. We were expecting a few corner cases or prompt injection attempts. What we got was a pretty eye-opening report: infinite output loops, system prompt leaks, injection attacks that bypass moderation, and even scenarios where malicious content could be inserted by...
p/introduce-myself
Hey everyone! I'm Soomin, a Product Manager based in Seoul. Over the past few years, I've worked on various AI-driven and mobile services, and I'm currently building a personal AI assistant that helps people get things done more effortlessly. Lately, I've been exploring how emerging LLM technologies and services can be meaningfully integrated into products, especially to narrow the gap...



