GitHub

Sifaka adds reflection & reliability to LLM applications

Sifaka improves AI-generated text through iterative critique using research-backed techniques. Instead of hoping your AI output is good enough, Sifaka provides a transparent feedback loop where AI systems validate and improve their own outputs.

Evan Volgas
Maker
I build Sifaka as an experiment in reducing hallucinations through iterative feedback using a different LLM. As a began researching it, I found several research papers that deal with LLM "reflection" -- having an LLM reflect on its output to improve it. The idea is incredibly fascinating to me, and I used the insights of those papers to develop what is now Sifaka. I believe Sifaka has the most promise in reviewing legal documents (using the ollama/legal_model for example. I think it might also be useful for highly regulated documents such as government policy docs, or highly technical ones like product documentation. I'm looking for contributors to the project, if you're interested :) Right now, I'm working on evaluating improvement quality. I have a few ideas, but I'd love to come up with additional methods to evaluate it.