trending
Ayush Ahirrao

16h ago

AegisLM - See how easily your AI can be broken — in seconds

AegisLM shows how easily modern AI can be broken. Test any model for prompt injection, jailbreaks, and data leaks in seconds. Input a prompt, run attacks, and see where it fails. Designed for builders who want to stress-test AI systems under real-world conditions. Try built-in attacks or create your own.