Agents contradict facts, fabricate citations/URLs/DOIs, lose confidence, and spread errors silently; one agent's hallucination becomes another's "truth."
InsAIts: 5 hallucination-detection subsystems (cross-agent contradictions, phantom citations, document grounding, confidence decay, self-consistency) plus 6 anomaly checks.
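As a rough illustration of what one of these subsystems implies, here is a minimal sketch in plain Python (not the actual InsAIts code; the function name and regex are illustrative assumptions): a phantom-citation check that extracts DOI-like strings from an agent message and flags any that are not backed by the grounding documents.

```python
import re

# Minimal phantom-citation sketch (illustrative only, not the InsAIts implementation):
# extract DOI-like strings from a message and flag any that do not appear in the
# set of references actually present in the source documents.
DOI_PATTERN = re.compile(r"\b10\.\d{4,9}/[-._;()/:A-Za-z0-9]+\b")

def find_phantom_dois(message: str, known_dois: set[str]) -> list[str]:
    """Return DOIs cited in `message` that no source document contains."""
    return [doi for doi in DOI_PATTERN.findall(message) if doi not in known_dois]

# Example: the second DOI never appears in the sources, so it gets flagged.
known = {"10.1000/xyz123"}
msg = "See 10.1000/xyz123 and 10.9999/fabricated.2024 for details."
print(find_phantom_dois(msg, known))  # ['10.9999/fabricated.2024']
```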
Features:
Open-source core
Privacy-first: all local
3-line setup, any LLM/Ollama
Integrations: LangChain, CrewAI, LangGraph
Slack/Notion exports, forensic tracing
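To show where such a monitoring layer sits, here is a second self-contained sketch (again not the InsAIts API; all names are assumptions): a relay that runs every agent-to-agent message through pluggable checks and records findings per sender/receiver pair, the kind of trail that forensic tracing builds on.

```python
from dataclasses import dataclass, field
from typing import Callable

# Conceptual monitoring-layer sketch (not the InsAIts API): every message passed
# between agents runs through a list of checks, and findings are recorded with
# the sender/receiver pair before the message is forwarded.
Check = Callable[[str], list[str]]   # a check maps message text to finding descriptions

@dataclass
class MessageMonitor:
    checks: list[Check]
    findings: list[tuple[str, str, str]] = field(default_factory=list)  # (sender, receiver, finding)

    def relay(self, sender: str, receiver: str, text: str) -> str:
        """Run all checks on a message in transit, record findings, then pass it on."""
        for check in self.checks:
            for finding in check(text):
                self.findings.append((sender, receiver, finding))
        return text

# Toy check: flag confident claims that cite no URL or DOI at all.
def unsupported_certainty(text: str) -> list[str]:
    lowered = text.lower()
    if "definitely" in lowered and "http" not in lowered and "10." not in lowered:
        return ["high-confidence claim with no citation"]
    return []

monitor = MessageMonitor(checks=[unsupported_certainty])
monitor.relay("research_agent", "writer_agent", "Cloud costs are definitely 5x higher on-prem.")
print(monitor.findings)
# [('research_agent', 'writer_agent', 'high-confidence claim with no citation')]
```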

InsAIts: Open-source monitoring for AI-to-AI communication, detecting hallucinations
Patrick left a comment
Hi Product Hunt! I'm the creator of InsAIts. I built this because I kept seeing the same problem across every multi-agent AI system I worked with: agents pass bad information to each other, and there's no monitoring layer to catch it. Today we're open-sourcing the core under Apache 2.0. The "aha moment" was when I watched a finance pipeline where one agent hallucinated a 5x cost difference. It...
