Janus battle-tests your AI agents to surface hallucinations, rule violations, and tool-call/performance failures. We run thousands of AI simulations against your chat/voice agents and offer custom evals for further model improvement.
Janus
Hi, we're Jet and Shivum, and today we're launching Janus!
AI agents are breaking in production - not because companies aren't testing, but because traditional testing doesn't match real-world complexity. Static datasets and generic benchmarks miss the edge cases, policy violations, and tool failures that actual users expose.
We built Janus because we believe the only way to truly test AI agents is with realistic human simulation at scale - AI users stress-testing AI agents.
What makes Janus different?
Unlike other platforms, we don't give you canned prompts or off-the-shelf evals. Instead, we generate thousands of synthetic AI users that:
1. Think, talk, and behave like your actual customers
2. Run thousands of realistic multi-turn conversations
3. Evaluate agents with tailored, rule-aware test cases
4. Judge fuzzy qualities like realism and response quality—not just guardrail pass/fail
5. Track regressions and improvements over time
6. Provide actionable insights from advanced judge models
This is simulation-driven testing designed for your domain - not generic playgrounds.
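The workflow above can be sketched in a few lines of Python. This is a minimal illustration of the simulation-and-judge pattern, not Janus's actual API — the class names, the canned user turn, and the toy policy check are all hypothetical:

```python
from dataclasses import dataclass


@dataclass
class SyntheticUser:
    persona: str  # e.g. "impatient customer disputing a charge"
    goal: str     # what the simulated user is trying to accomplish

    def next_message(self, history):
        # In a real system an LLM would generate the next user turn from
        # the persona and conversation history; here we return a canned turn.
        return f"As a {self.persona}, I still need help with: {self.goal}"


def run_simulation(user, agent_reply, max_turns=3):
    """Drive a multi-turn conversation between a synthetic user and an agent."""
    history = []
    for _ in range(max_turns):
        msg = user.next_message(history)
        history.append(("user", msg))
        history.append(("agent", agent_reply(msg)))
    return history


def judge(history, forbidden_phrases=("guaranteed refund",)):
    """Toy rule-aware judge: flag agent turns containing a forbidden phrase."""
    violations = [
        text for role, text in history
        if role == "agent" and any(p in text.lower() for p in forbidden_phrases)
    ]
    return {"turns": len(history) // 2, "violations": violations}


# Usage: a trivial echo agent under test
user = SyntheticUser(persona="frustrated customer", goal="a refund for order 123")
transcript = run_simulation(user, lambda m: f"I hear you: {m}")
print(judge(transcript))  # → {'turns': 3, 'violations': []}
```

In practice both the synthetic user and the judge would be model-driven, and the judge would score fuzzy qualities (realism, response quality) alongside hard rule checks — but the loop structure stays the same.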
🧠 Our Vision
We believe human simulation will become the standard for AI agent evaluation. As agents become more sophisticated, only realistic human behavior can truly stress-test their capabilities and surface edge cases before users do.
🚀 Try Janus Today
Book a demo today and see Janus generate custom AI users for your specific business!
We rethought AI agent testing from the ground up with human simulation - let's make reliable AI agents the norm, not the exception.
Get started at withjanus.com
Prit
A lot of AI companies have built powerful AI models, but even their developers couldn't trust the results because of hallucinations, policy violations, etc.
I hope this helps them sleep without worry :) Congratulations!
Janus
@pritraveler Thank you so much! That's exactly why we built this - it's so easy to ship an AI agent, so why haven't evals and testing gotten easier too? Janus is our passion project to help fix that!
Janus provides exactly the kind of rigorous testing AI agents need before going live. The large-scale simulations and customizable evaluations make it a powerful ally for building more reliable systems.
Janus
@supa_l We love to hear that! You put it perfectly - right now evals need to be fundamentally rethought as conversational AI becomes more and more important in our everyday lives.
Congratulations on the launch, Jet and Shivum! Janus sounds like a game-changer for AI testing. The focus on realistic human simulation to stress-test AI agents is so crucial in addressing real-world complexities. Excited to see how this advances reliable AI development. Best of luck!
Janus
@alex_cloudstar Thank you so much! We're thrilled to hear that, and we completely agree — capturing real-world nuance is essential for building robust AI. Janus is just the beginning, and we're excited to push the boundaries of what's possible in AI testing. Appreciate the support! 🙌
Hyring
This looks interesting, @jw_12! We're currently using Coval and would like to understand how Janus is priced, as well as some of its key differentiators.
Janus
@adithyan_rk Would love to chat! Feel free to book a demo!
Geocities.live
@jw_12 We definitely need to introduce Janus in @Job for Agent 🔥
Janus
@kamilstanuch Thanks Kamil!
All the best for the launch @jw_12 & team!
Janus
@parekh_tanmay Thanks Tanmay, really appreciate the support!
Jazzberry
How do you get the thousands of synthetic AI users to behave differently, so that you cover all user paths?
Janus
@marco_dewey Great question Marco! We use a mix of data-driven techniques to make the magic happen - but there's definitely a long way to go in refining and improving our product!
We're having precisely this problem at our company right now; I'll reach out for a demo!
Janus
@manuelflara Looking forward to chatting Manuel, would love to find a way to help!
The "Jenkins for AI agents" is born 🛠️. Must-have for:
- Deterministic scenario replay 🔄
- Multi-agent collision testing 💥
- Ethical boundary stress tests ⚖️