Manouk Draisma

LangWatch Agent Simulations
Co-founder LangWatch.ai
277 points

About

Hey Product Hunters! 👋 I'm co-founder of LangWatch.ai, built out of the pain point of having limited control over LLM apps. We've built an end-to-end evaluation framework for AI engineering teams: not just observability or evals, but finding the right eval for your agents. For the past 10+ years I have been working in the start-up tech space, and what a crazy ride it has been... 🤯 I started 10 years ago at a start-up that went IPO within my first years there. Building teams, building partnerships, and connecting with users and customers is what I love. 🤝 ❤️ In the meantime, I add value wherever possible and support new product launches. Connect with me here and on LinkedIn! ✌️

Work

Founder & Leadership at LangWatch Agent Simulations

Badges

Buddy System
Plugged in 🔌
Gemologist
Top 5 Launch

Maker History

Forums

Manouk Draisma •

7mo ago

I built the world's first AI agent testing platform to run agent simulations.

Hey Product Hunt,

Manouk here, I'm the co-founder of LangWatch, and today we're incredibly excited to launch LangWatch Scenario, the first platform built for systematic AI agent testing.

Over the last 6 months, we've seen a massive shift: teams are moving from simple LLM calls to full-blown autonomous agents handling customer support, financial analysis, compliance, and more. But testing these agents is still stuck in the past.

Manouk Draisma •

7mo ago

LangWatch Scenario - Agent Simulations - Agentic testing for agentic codebases

As AI agents grow more complex (reasoning, using tools, and making decisions), traditional evals fall short. LangWatch Scenario simulates real-world interactions to test agent behavior. It's like unit testing, but for AI agents.

Use an Agent to test your Agent

How do you validate an AI agent that could reply in unpredictable ways?

My team and I have released Agentic Flow Testing, an open-source framework where one AI agent autonomously tests another through natural-language conversations.
