
Hallucina-Gen
Spot where your LLM might make mistakes on documents
Using LLMs to summarize or answer questions about documents? Hallucina-Gen automatically analyzes your PDFs and prompts and produces test inputs likely to trigger hallucinations. Built for AI developers to validate outputs, test prompts, and squash hallucinations early.
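A minimal sketch of the workflow described above, not Hallucina-Gen's actual API: the function names (`find_factual_sentences`, `generate_test_inputs`) and the probing heuristic are illustrative assumptions. The idea is to locate concrete claims in a document and emit test prompts that ask about details the document never states, so hallucinations surface as confidently invented specifics.

```python
# Hypothetical sketch of a hallucination-test generator -- NOT the real
# Hallucina-Gen API. All names and the heuristic below are assumptions.
import re


def find_factual_sentences(document: str) -> list[str]:
    """Return sentences containing digits: concrete claims a model may distort."""
    sentences = re.split(r"(?<=[.!?])\s+", document)
    return [s for s in sentences if re.search(r"\d", s)]


def generate_test_inputs(document: str, user_prompt: str) -> list[str]:
    """Combine the user's own prompt with probes about near-miss details.

    Each test input reuses the prompt verbatim and appends a question whose
    answer is absent from the document, so a faithful model should say
    "not stated" rather than invent a figure.
    """
    probes = []
    for claim in find_factual_sentences(document):
        probes.append(
            f"{user_prompt}\n\nAlso: the document mentions "
            f"\"{claim.strip()[:80]}\" -- what figure does it give for "
            "the following year? Answer only from the document."
        )
    return probes


if __name__ == "__main__":
    doc = "Revenue grew 12% in 2022. The team shipped 3 releases."
    for test in generate_test_inputs(doc, "Summarize this report."):
        print(test, "\n")
```

Any specific number the model returns for these probes, rather than "the document doesn't say", is a likely hallucination worth flagging.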
