Nikhil Pareek

Open-Sourced my AI Eval Library :)

Hi everyone, excited to be here! We just released our AI Evaluation Library - a lightweight, enterprise-grade, open-source tool to help teams build trustworthy GenAI systems. It integrates real-time evals covering hallucination rate, factual accuracy, tone consistency, red teaming, prompt injection, and more, and it's built for agentic workflows, voice, vision, and RAG pipelines. Happy to share a demo or collaborate on benchmarks. Our GitHub is open for issues, PRs, and feature ideas. Feedback welcome!

Please do check it out: https://github.com/future-agi/ai-evaluation
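
To give a flavour of what I mean by a real-time eval, here is a minimal, hypothetical sketch of a naive hallucination-rate check for a RAG answer. It uses a simple lexical-overlap heuristic in plain Python and is not our library's actual API; please check the repo for the real interface.

```python
import re

def sentence_grounding_scores(answer: str, context: str) -> dict:
    """Score each answer sentence by word overlap with the retrieved context (0..1)."""
    context_words = set(re.findall(r"\w+", context.lower()))
    scores = {}
    for sentence in re.split(r"(?<=[.!?])\s+", answer.strip()):
        words = set(re.findall(r"\w+", sentence.lower()))
        if not words:
            continue
        scores[sentence] = len(words & context_words) / len(words)
    return scores

def hallucination_rate(answer: str, context: str, threshold: float = 0.5) -> float:
    """Fraction of answer sentences that look ungrounded in the context."""
    scores = sentence_grounding_scores(answer, context)
    if not scores:
        return 0.0
    ungrounded = [s for s, score in scores.items() if score < threshold]
    return len(ungrounded) / len(scores)

if __name__ == "__main__":
    ctx = "The Eiffel Tower is 330 metres tall and stands in Paris."
    ans = "The Eiffel Tower is 330 metres tall. It was completed in 1923."
    # The second sentence has little overlap with the context, so it is flagged.
    print(f"hallucination rate: {hallucination_rate(ans, ctx):.2f}")  # -> 0.50
```

In practice you would swap the overlap heuristic for an LLM-judge or entailment check and wire the score into your pipeline as a gate or a logged metric; this sketch only shows the shape of the hook.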


Replies

Lorenzo Castelli
Hi @nikhilpareek. I'd like to get in touch… I'm trying to understand whether what I have built with AI Model Match could be considered complementary to your solution. I focus mainly on release strategies for prompt configurations, and I see it as very connected with what you are doing! I leverage evals (done by other solutions) to automatically release new prompt configurations ;) Please take a look at my solution and, if you think it is valuable, we can discuss this topic.