All activity
Nabil A. left a comment
Hey Product Hunt! Back with some updates. We shipped two things since last time: Open Marketplace - anyone can contribute attack probes now. We started with 200+ and the community keeps adding more; it's becoming a shared library of AI vulnerabilities. Public Leaderboard - we tested 9 major models (Claude, GPT, Mistral, etc.) and made the results public. No signup needed. The security gap is massive -...

ModelRed: Red-team any AI system in minutes
ModelRed continuously tests AI applications for security vulnerabilities. Run thousands of attack probes against your LLMs to catch prompt injections, data leaks, and jailbreaks before production. Get a simple 0-10 security score, block CI/CD deployments when thresholds drop, and access an open marketplace of attack vectors contributed by security researchers. Works with OpenAI, Anthropic, AWS, Azure, Google, and custom endpoints. Python SDK available. Stop hoping your AI is secure—know it is.
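To give a rough sense of how the score-based CI/CD gate could look in practice, here is a minimal sketch. The `modelred` package, `ModelRedClient`, `run_probes`, and the `score` field are hypothetical placeholders for illustration, not the actual SDK API.

```python
# Hypothetical CI security gate built around a 0-10 ModelRed Score.
# All ModelRed-specific names below are illustrative placeholders.
import sys

import modelred  # hypothetical package name

THRESHOLD = 7.0  # block the deployment if the score drops below this

client = modelred.ModelRedClient(api_key="...")          # hypothetical constructor
report = client.run_probes(target="my-openai-endpoint")  # hypothetical probe run

print(f"ModelRed Score: {report.score:.1f}/10")
if report.score < THRESHOLD:
    print("Security score below threshold; blocking deployment.")
    sys.exit(1)  # non-zero exit fails the CI job
```

Run as a step in the pipeline, a non-zero exit code is what actually stops the deploy; the threshold itself would be tuned per application.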

ModelRed: Red-team any AI system in minutes
Automatically find vulnerabilities in your AI models before attackers do. Run 200+ adaptive red-team probes testing 4000+ attack vectors against any LLM: OpenAI, Anthropic, Azure, AWS, HuggingFace, and more. Get your ModelRed Score and ship secure AI, faster.

ModelRed: Your AI is vulnerable. We'll prove it.
Nabil A. left a comment
Hey Product Hunt! I'm Nabil, founder of ModelRed. I built this after watching companies ship AI systems with basically zero security testing. The standard approach? An engineer spends an afternoon trying to type "ignore previous instructions", then they ship to production and hope nobody figures out how to jailbreak it. Spoiler: they always do. The problem is that real security tools are locked...


