Shamans.dev: Securing AI-generated code - find the hidden vulnerabilities automated tools miss
About 40% of new code is now AI-generated, and 48% of that code contains vulnerabilities automated tools miss. Shamans provides expert manual security reviews for AI-generated code. We've found 378+ critical flaws, preventing an estimated $4.7M+ in incidents.
Launch special: $299 (90% off).

👋 Hey ProductHunt! I'm Darwin, founder of Shamans.
How this started:
I was reviewing security research and kept seeing the same pattern: AI tools generating vulnerable code that passes all automated tests. Then I found GitHub's own data showing Copilot repos have 40% higher secret leak rates.
That's when it clicked: we have a massive blind spot.
This isn't isolated. We've documented everything:
📊 Our comprehensive research:
- AI Code Security Analysis Report (https://shamans.dev/research/ai-...) with 160+ documented cases
- Bad Vibes Report (https://shamans.dev/bad-vibes) featuring real AI code disasters with sources
🚨 Real CVEs in AI coding tools:
- CVE-2025-54136: Cursor RCE vulnerability (CVSS 7.2)
- CVE-2025-54135: Cursor prompt injection leading to RCE (CVSS 8.6)
- CVE-2025-32711: Zero-click prompt injection in Microsoft 365 Copilot (CVSS 9.3)
- CVE-2025-53773: GitHub Copilot YOLO mode exploit
💸 Real financial impact from our database:
- $500K crypto theft via malicious Cursor extension
- $47K AWS bill from an upload endpoint with no size limit
- $2M data breach from auth bypass (50K records)
- $500K in fraud prevented by catching a CORS misconfiguration
- 39 million secrets leaked on GitHub in 2024 alone
Here's what really shocked us:
GitHub's own data shows that repos using Copilot leak secrets 40% more often (6.4% vs 4.6%). We analyzed 8,127 Copilot suggestions and found 2,702 working secrets, roughly one in every three suggestions. The scary part? This code passes every test and looks completely normal.
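To make that concrete, here's a minimal hypothetical sketch in TypeScript (ours, not a captured Copilot suggestion; the keys are AWS's documented placeholder values) of the pattern: a working credential inlined where configuration should be.

```typescript
import { S3Client } from "@aws-sdk/client-s3";

// BAD: a hardcoded credential. It compiles, passes every unit test, and
// works in production, then lives forever in git history and in any
// client bundle that imports this module.
const leakyClient = new S3Client({
  region: "us-east-1",
  credentials: {
    accessKeyId: "AKIAIOSFODNN7EXAMPLE",
    secretAccessKey: "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
  },
});

// BETTER: omit credentials and let the SDK's default provider chain
// resolve them from the environment (env vars, shared config, IAM role).
const safeClient = new S3Client({ region: "us-east-1" });
```

Pattern-based scanners catch well-known key formats like the one above; secrets without a recognizable prefix are far harder to flag automatically.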
Real examples we've caught:
- JWT verification accepting expired tokens (10K accounts at risk; see the sketch after this list)
- S3 uploads with no size limits (resulted in $47K bill)
- Admin panels with authentication bypasses
- API endpoints accessible from any domain
- AWS credentials embedded in React bundles
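To show how subtle the first item is, here's a minimal hypothetical sketch (Node.js with the jsonwebtoken library; the names and SECRET handling are ours, not a client's actual code):

```typescript
import jwt from "jsonwebtoken";

const SECRET = process.env.JWT_SECRET!;

// BAD: ignoreExpiration skips the exp claim check, so an expired or
// long-ago-stolen token still verifies. Nothing fails at compile time,
// and any test using a freshly issued token stays green.
function verifyTokenUnsafe(token: string) {
  return jwt.verify(token, SECRET, { ignoreExpiration: true });
}

// BETTER: keep the default behavior, where verify() throws
// TokenExpiredError for expired tokens, and fail closed.
function verifyToken(token: string): string | jwt.JwtPayload | null {
  try {
    return jwt.verify(token, SECRET);
  } catch {
    return null; // treat any verification failure as unauthenticated
  }
}
```

Both versions behave identically for every fresh token, which is why this class of bug sails through test suites and code review alike.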
Why "Shamans"?
Traditional shamans see spirits that others cannot. Code shamans see vulnerabilities that others miss. Just as ancient shamans divined hidden truths in the spiritual realm, we divine hidden security flaws in the digital realm. Your AI code might look perfect, but we see the bad vibes lurking beneath.
Look, we're not AI haters:
We use Copilot and Cursor every day, and they're great for productivity. But would you ship human-written code without review? AI-generated code deserves the same scrutiny. Someone needs to understand both security and how these models think. That's us.
From our database of 160+ cases:
- Average breach cost: $4.35M
- 48% of AI-generated code (roughly half of all AI suggestions) contains vulnerabilities
- 1,800 mobile apps found with hardcoded AWS keys
🔥 ProductHunt exclusive:
First 50 assessments: $299 (normally $2,997)
- 3-day expert manual review
- CVE-style vulnerability reports
- 1-hour findings walkthrough
- Full remediation guidance
Perfect for teams using:
✓ GitHub Copilot (1M+ developers)
✓ Cursor (10x monthly growth)
✓ Claude/ChatGPT for coding
✓ Any AI coding assistant
Get the full research:
Check out our reports at shamans.dev/research for all the documented cases, sources, and analysis.
Questions for the community:
1. Have you found security issues in AI-generated code?
2. What percentage of your codebase is AI-generated?
3. How do you currently review AI code for security?
Would love to hear your AI coding war stories below! 👇
Website: https://shamans.dev