ModelRed - Red-team any AI system in minutes

by Nabil A.
ModelRed continuously tests AI applications for security vulnerabilities. Run thousands of attack probes against your LLMs to catch prompt injections, data leaks, and jailbreaks before production. Get a simple 0-10 security score, block CI/CD deployments when thresholds drop, and access an open marketplace of attack vectors contributed by security researchers. Works with OpenAI, Anthropic, AWS, Azure, Google, and custom endpoints. Python SDK available. Stop hoping your AI is secure—know it is.
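For illustration, here is a minimal sketch of what a CI/CD gate on the 0-10 security score could look like with the Python SDK. The package name, client class, methods, and result fields below are assumptions for the sketch, not the documented ModelRed API.

```python
# Hypothetical sketch: block a deployment when the security score drops
# below a threshold. Names below (modelred, ModelRed, run_assessment,
# result.score) are illustrative assumptions, not the actual SDK surface.
import sys

from modelred import ModelRed  # assumed package and client name

THRESHOLD = 7.0  # fail the build below this 0-10 score

client = ModelRed(api_key="...")  # assumed constructor

# Assumed call: run the attack probe suite against a registered endpoint
# and return an overall 0-10 security score.
result = client.run_assessment(target="my-chatbot-endpoint")

print(f"Security score: {result.score:.1f}/10")
if result.score < THRESHOLD:
    print("Score below threshold; blocking deployment.")
    sys.exit(1)
```

Run as a step in the pipeline; a non-zero exit code stops the deployment.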


Replies

Nabil A. (Maker)

Hey Product Hunt! Back with some updates. We shipped two things since last time:

Open Marketplace - Anyone can contribute attack probes now. Started with 200+ and the community keeps adding more. It's becoming a shared library of AI vulnerabilities.

Public Leaderboard - Tested 9 major models (Claude, GPT, Mistral, etc.) and made results public, no signup needed. The security gap is massive: Claude scored 9.5/10, Mistral scored 3.3/10.

Also shipped a Python SDK so you can automate this in CI/CD. If you're finding vulnerabilities or building security stuff, would love to have you contribute to the marketplace. Happy to answer questions!
Chilarai M

Congrats on the launch!