Launching today

Lenz
Fact-check any statement with source-backed, multi-model AI
25 followers
Lenz is an AI fact-checker. Paste any claim and Lenz searches for independent sources, has multiple AI models argue both sides, and has a separate AI panel review the evidence before reaching a scored verdict, with every source, argument, and step of the process visible to you. Most AI tools give you one model's best guess from memory. Lenz orchestrates a multi-model research system grounded in real sources so no single model's blind spots drive the conclusion. Try it free at lenz.io/ph

Happy launching. I have a question. How do you quantify and mitigate correlated bias across your ‘independent’ AI models and sources, especially when those models are trained on overlapping datasets and the sources themselves may share upstream information pipelines, so that your multi-model consensus doesn’t simply reinforce systemic errors instead of reducing them?
@bogomep Deep question. The overall concept is to pick the highest-quality available sources/evidence and work with that data (plus the LLM background knowledge, which is explicitly listed as a separate source in the output). From that perspective, we don't eliminate correlated bias in the sources, but we try to identify diverse sources and apply rules such as looking for sources that support both sides of the argument and looking for local/regional sources -- that's why if you check a claim about China, for example, you will get Chinese sources alongside the Western ones. Each source is graded for authority, recency, etc., and retracted research papers are excluded.
With respect to the models, there are two key intermediate steps: (1) an adversarial debate between two different models, aimed at crystallizing the positions and arguments, which is also valuable for the user to see the strongest opposing arguments, and (2) a panel of three different AI models that reviews the evidence and the debates in parallel, with each model evaluating a different axis to detect logical fallacies, biases, context issues, etc. Each of the three panel models is instructed to analyze the data from a different perspective. Then a vote makes the final call, and the executive summary is written. This does not entirely solve correlated bias from shared training data, but it helps. And this orchestrated multi-model, multi-axis AI pipeline of frontier models has a higher likelihood of producing higher-quality output than a single frontier model alone.
All sources, citations, debates, panel reviews, and voting scores are available to the user to inspect.
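The debate → panel → vote flow described above could be sketched roughly like this. Model names, axes, and scores here are illustrative placeholders, not Lenz's actual internals:

```python
from statistics import mean

DEBATERS = ["model_a", "model_b"]  # two models argue opposing sides
PANEL_AXES = {                     # each panel model reviews a different axis
    "model_c": "logical_fallacies",
    "model_d": "bias",
    "model_e": "context",
}

def run_debate(claim, evidence):
    # Each debater builds the strongest case for one side; placeholder text
    # stands in for real model calls.
    sides = ["for", "against"]
    return {m: f"{m}: strongest case {sides[i]} the claim"
            for i, m in enumerate(DEBATERS)}

def panel_review(claim, evidence, debate):
    # Each panel model scores the claim along its own axis.
    # Fixed illustrative scores here instead of real model calls.
    return {"model_c": 7, "model_d": 6, "model_e": 8}

def verdict(claim, evidence):
    debate = run_debate(claim, evidence)
    scores = panel_review(claim, evidence, debate)
    return {
        "claim": claim,
        "score": round(mean(scores.values()), 1),  # the "voting" step
        "panel": scores,
        "debate": debate,
    }
```

The point of the structure is that the final score is never one model's opinion: the debate surfaces the strongest opposing arguments, and the panel scores are aggregated rather than taken individually.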
If multiple models are arguing both sides, how do you ensure the final verdict isn’t just "averaging opinions" but actually getting closer to truth?
@lak7 After the debates, we run another step in which 3 other models in parallel review the sources/evidence and the debates along three different axes and make the final decision. They flag things like biases and logical fallacies in the arguments (also shown in the final report), and are pretty good at detecting the weak ones.
Amazing! When do we get SKILL + CLI or an MCP so our agents can do fact checking for us? Or a chrome extension
@milko_slavov There's an API available: https://lenz.io/api/v1/docs/ -- API keys are managed from the Account menu. Drop me a note if you need any support. There's also a bookmarklet (https://lenz.io/install when accessed from desktop) as a first step before the Chrome extension.
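A minimal client might look like the sketch below. The docs URL above is real, but the endpoint path (`/checks`), the JSON field names, and the Bearer auth scheme are all guesses; check https://lenz.io/api/v1/docs/ for the actual contract:

```python
import json
import urllib.request

BASE = "https://lenz.io/api/v1"  # real docs at https://lenz.io/api/v1/docs/

def build_request(claim: str, api_key: str) -> urllib.request.Request:
    # "/checks", the {"claim": ...} payload, and Bearer auth are assumptions,
    # not the documented Lenz API shape.
    return urllib.request.Request(
        f"{BASE}/checks",
        data=json.dumps({"claim": claim}).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

def check_claim(claim: str, api_key: str) -> dict:
    # Sends the request and parses the JSON verdict.
    with urllib.request.urlopen(build_request(claim, api_key)) as resp:
        return json.load(resp)
```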
@kostaj Great! Here is the skill + cli: https://github.com/mslavov/lenz-cli
@milko_slavov Wow! Amazing! Welcome to the Lenz team :) I will play with the skill & cli later today.
Congratulations!
Does it rely on a single- or multi-model approach to perform the fact checking? Any plans for specialized / domain-focused checks -- for example, solving historical mysteries?
@stefan_lilov Lenz relies on a pipeline that includes several steps. At the key steps (debates and panel reviews), several different frontier models from different providers are run in parallel. We push the models not to rely on their internal factual knowledge, but to focus on evaluating the selected high-quality sources/evidence instead. From that perspective, we don't use domain-trained/focused models in the pipeline, but it's a strong idea to introduce the knowledge of such models at the source level. We already include LLMs' internal knowledge as one of the sources (and it's highlighted for the user), but it's not currently coming from a domain-focused model.
When a user starts chatting with Lenz (the follow-up questions), we use a domain-trained model that also has full access to the source/evidence materials and is restricted from discussing topics outside its narrow domain.
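One common way to implement that kind of restriction is a system prompt that pins the chat model to the fact-check's own evidence. This is a hypothetical sketch of the guardrail idea, not Lenz's actual mechanism:

```python
def build_system_prompt(sources: list[str]) -> str:
    # Hypothetical guardrail prompt: the model may only discuss the listed
    # evidence and must decline anything outside that narrow domain.
    evidence = "\n".join(f"- {s}" for s in sources)
    return (
        "You answer follow-up questions about one fact-check only.\n"
        "Use only the evidence listed below. If a question falls outside "
        "this evidence, decline and say the topic is out of scope.\n"
        f"Evidence:\n{evidence}"
    )
```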
Hi PH, Vicky here (co-founder, non-tech side)
Sharing a bit more on the “why” behind Lenz.
We didn’t build it for obvious fake news, which is easy to spot. We built it for the convincing stuff that sounds right, the kind you don’t think to question.
Misinformation moves faster than fact-checking. Most fact-checkers publish a handful of articles a week, while a single claim you stumble upon online can take 30+ minutes and 20+ tabs to research.
Lenz is, in a way, the tool I wish I didn’t need.
Unexpected side effect: I argue less with my teenagers 😄 Instead of “that’s not true” → “let’s check it.”
Curious to hear what you think.
I’ve used Lenz to fact-check scholarly materials in philosophy. More specifically, contentious topics like Heidegger, where personal opinions can bias the way he’s interpreted. I’ve found that Lenz has a solid way of grading truth claims on a 1-to-10 scale, which really helps separate where on the spectrum a claim lies.
An interesting use case, @ivan_spasov, thanks for sharing!
Philosophy is exactly where things get tricky, as it’s less about clear facts and more about interpretation (and bias :)
Really glad the truthfulness score was useful - sometimes claims sit on a spectrum.
Were there any results that surprised you?