Launching today
AegisLM

AegisLM

See how easily your AI can be broken — in seconds

4 followers

AegisLM shows how easily modern AI can be broken. Test any model for prompt injection, jailbreaks, and data leaks in seconds. Input a prompt, run attacks, and see where it fails. Designed for builders who want to stress-test AI systems under real-world conditions. Try built-in attacks or create your own.

AegisLM makers

Here are the founders, developers, designers and product people who worked on AegisLM