Pongo

Reduce LLM Hallucinations by 80%

Pongo greatly improves accuracy for RAG pipelines using our semantic filter technology. This improved accuracy results in 80% fewer incorrect or partially correct answers from LLMs. Pongo integrates into existing RAG pipelines with just one line of code.

Caleb John
Maker
Those of you building AI apps that require RAG (Retrieval Augmented Generation) know how crucial getting the "R" part right is. It can be the difference between users adopting a product and churning, not to mention agent workflows, where compounding error rates make them unusable in production.

While we've seen tremendous progress in vector search, the paradigm itself has significant limitations, stemming mainly from two issues:

1. Compressing an entire paragraph with multiple subjects and potential meanings into a single vector is incredibly difficult.
2. Embedding documents ahead of time is crucial for efficiency, but not being able to compare the documents and query directly limits accuracy.

At Pongo we solved this by developing a "Semantic Filter": a technology that combines multiple AI search models as a post-processing step in a retrieval pipeline. We've put in a lot of work to make Pongo scalable and to ensure it adds minimal latency to existing applications.

If you're building with RAG, we'd love to get your feedback, and hopefully we can unlock the potential of new applications with this technology.
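To make the idea concrete, here is a minimal sketch of the general pattern a post-retrieval semantic filter follows: a fast vector search returns candidate documents, then each (query, document) pair is re-scored directly before results reach the LLM. This is not Pongo's actual implementation or API; the `token_overlap_score` function below is a toy stand-in for the real cross-model scoring.

```python
def token_overlap_score(query: str, doc: str) -> float:
    """Toy relevance score: fraction of query tokens that appear in the doc.
    A real semantic filter would use one or more AI models that compare the
    query and document text directly instead of via precomputed vectors."""
    q_tokens = {t.strip(".,?!") for t in query.lower().split()}
    d_tokens = {t.strip(".,?!") for t in doc.lower().split()}
    return len(q_tokens & d_tokens) / max(len(q_tokens), 1)

def semantic_filter(query: str, candidates: list[str], top_k: int = 2) -> list[str]:
    """Re-rank vector-search candidates by scoring each against the query
    directly, keeping only the top_k most relevant for the LLM context."""
    ranked = sorted(candidates, key=lambda d: token_overlap_score(query, d),
                    reverse=True)
    return ranked[:top_k]

# Candidates as they might come back from a vector store, including a
# near-miss that vector similarity alone could let through:
candidates = [
    "Pongo reduces hallucinations in RAG pipelines.",
    "Bananas are a good source of potassium.",
    "Semantic filtering re-ranks retrieved documents.",
]
print(semantic_filter("How does semantic filtering reduce hallucinations?", candidates))
```

The key design point is that filtering happens after retrieval, so the expensive direct comparison only runs on a small candidate set rather than the whole corpus, which is how such a step can add minimal latency.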
Atai Barkai
Incredible results, and love the simple interface. RAG beyond vector similarity is a no-brainer. Excited to check this out and for future expansions!
Carson Nye
Take yourself seriously if you are building AI. Check out Pongo.
Parsa Khzai
Unreal performance and accuracy boost when compared to OpenAI’s vector search