Built a middleware to catch LLM hallucinations in RAG apps
I've been working on RAG applications lately and the biggest issue I keep running into is trust. The models sound confident, but the facts are often slightly off, and manually checking logs is not sustainable.
So I built AgentAudit to solve this for my own projects.
It is a middleware API built with Node.js and TypeScript that sits between your LLM and the user. It uses PostgreSQL with pgvector to cross-reference the AI's response against your source documents in real time. If the resulting "Trust Score" falls below a threshold, the response is flagged before it reaches the frontend.
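The core check is basically a nearest-neighbor lookup plus a similarity threshold. Here's a stripped-down sketch of that idea (the table name `source_chunks`, the `auditResponse` function, and the OpenAI embedding call are illustrative, not the actual internals):

```ts
import { Pool } from "pg";
import OpenAI from "openai";

const pool = new Pool({ connectionString: process.env.DATABASE_URL });
const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

const TRUST_THRESHOLD = 0.75; // illustrative cutoff; tune per corpus

async function embed(text: string): Promise<number[]> {
  const res = await openai.embeddings.create({
    model: "text-embedding-3-small",
    input: text,
  });
  return res.data[0].embedding;
}

// Compare the LLM's answer against the nearest stored source chunk.
// `source_chunks(content text, embedding vector)` is a hypothetical
// pgvector table holding your embedded source documents.
async function auditResponse(llmResponse: string) {
  const vec = await embed(llmResponse);
  // pgvector's <=> operator is cosine distance, so 1 - distance ≈ similarity
  const { rows } = await pool.query(
    `SELECT 1 - (embedding <=> $1::vector) AS similarity
       FROM source_chunks
      ORDER BY embedding <=> $1::vector
      LIMIT 1`,
    [JSON.stringify(vec)]
  );
  const trustScore = rows[0]?.similarity ?? 0;
  return { trustScore, flagged: trustScore < TRUST_THRESHOLD };
}
```

Scoring the whole response against a single nearest chunk is the simplest version; splitting the answer into sentences and scoring each one separately catches partial hallucinations that a whole-response score would average away.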
AgentAudit - The "Lie Detector" API for RAG & AI Agents
Stop AI hallucinations. AgentAudit is a middleware API that acts as a semantic firewall for your agents. It intercepts LLM responses and verifies them against the source context in real time. Catch silent failures before they reach your users. Built with TypeScript & pgvector.
