Launched this week
InsAIts

Open-source monitoring for AI-to-AI communication that detects hallucinations

Agents contradict facts, fabricate citations, URLs, and DOIs, lose confidence, and spread errors silently: one agent's hallucination becomes another's "truth." InsAIts counters this with 5 hallucination-detection subsystems (cross-agent contradictions, phantom citations, document grounding, confidence decay, self-consistency) plus 6 anomaly detectors.

Features:
- Open-source core
- Privacy-first: everything runs locally
- 3-line setup with any LLM, including local models via Ollama (see the sketch after this list)
- Integrations: LangChain, CrewAI, LangGraph
- Slack/Notion exports and forensic tracing
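A minimal sketch of what that 3-line setup could look like, assuming a hypothetical Python API: the `insaits` package name, `Monitor` class, and `watch` method are illustrative guesses, not the project's confirmed interface.

```python
# Hypothetical sketch: the insaits package, Monitor class, and watch() method
# are assumed names for illustration, not InsAIts' confirmed API.
from insaits import Monitor  # assumed import

# Point the monitor at a local model served via Ollama (privacy-first, all local).
monitor = Monitor(backend="ollama", model="llama3")

# Feed it the messages agents exchange; the subsystems flag cross-agent
# contradictions, phantom citations/DOIs, confidence decay, and other anomalies.
report = monitor.watch([
    {"agent": "researcher", "text": "Smith et al. (2021) proved X (doi:10.1234/abcd)."},
    {"agent": "writer",     "text": "Per Smith et al. (2021), X is settled."},
])
print(report.flags)  # e.g. a phantom-citation alert if the DOI cannot be verified
```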
