Liminary - Ground your AI in saved knowledge as you work
Liminary turns everything you’ve saved into working memory for AI. Unlike chatbots, meeting tools, or project-based notebooks, it gives your knowledge one shared memory across writing, meetings, and research. It surfaces relevant context automatically as you work, helping expert knowledge workers reuse their best thinking, avoid starting from scratch, and produce source-grounded work with traceable citations.



Replies
Liminary
Hey Product Hunt 👋 I'm Sarah, founder of Liminary.
I led ML engineering for Dropbox: semantic search, retrieval, and Dropbox's first generative AI integrations. I built Liminary out of personal frustration: storage is archival. I couldn't save articles, meeting notes, and useful AI conversations in one place, and even when I did, I'd never see any of it again. Lost in closed tabs, various note-taking apps, emails, and AI chats.
AI tool proliferation made it worse, not better. Every new model meant re-benchmarking, redoing workflows, re-feeding context. As a builder, I believe users should get the best model for the job, not chase whichever one shipped this week.
But there's a deeper problem beneath both of those: every AI tool you use is working from what the model thinks is relevant. Trained on the internet, guessing at your context. Not what you've decided matters. That's the gap.
Our team at Liminary is all ex-Dropbox and ex-Google. We built Liminary to close that gap: the memory layer for your AI work. You decide what goes in: files, web pages, YouTube videos, LLM transcripts, Gmail threads. Your AI works from that. Always.
Liminary lives across the surfaces where you work: a browser extension, a writing sidekick in Google Docs, a meetings layer, and a place where everything you save lives and connects.
Three things Liminary does that no other tool can:
Proactive recall. The right knowledge surfaces at the moment of work. You don't search. It finds you.
In-context fact-checking and gap detection. As you write in Google Docs, Liminary validates claims against your own library and finds what's missing from the research you already did or the information your clients already shared with you. Not the web, not training data.
Meeting recall, live. No bot in the room. When someone says "Project Atlas," your notes already read "Project Atlas with Alice and Bob [source]." Other meeting tools take notes. Liminary connects what's said to everything you already know.
Built for people who bill for their perspective: independent consultants, fractional leaders, VC analysts and strategists. In a world where everyone uses the same models, your edge is what those models are grounded in.
The work looks like this: you keep ambient context on a small set of clients, accounts, companies, or topics you think about repeatedly. You research them. You meet about them. You produce deliverables about them. Liminary connects all three, so the research, the meetings, and the writing all work from the same knowledge.
What's the one piece of context you wish your AI actually remembered?
Early days. Honest feedback welcome: liminary.io
~ Sarah and the Liminary Team
This feels powerful, but I'm curious how often it pulls "technically relevant" context that's actually not useful in practice.
Liminary
@charlotte_reed1 good question! I’ve been using Liminary for a month in my consulting work and what I’ve noticed is that the more context you give it, the sharper it gets. For example, it recently linked a specific question a client asked in a meeting to a comment from a strategy audit I did weeks ago.
The technical reason it doesn't just pull "keyword matches" is that it uses more than just text similarity. It treats your access patterns and recency as signals too. If you're looking for something new that's related to an older note, the system treats that as a sign that the older note is still "alive" in your thinking. It also lets you dismiss or mark things as outdated, so the retrieval actually learns from what you find useful versus what’s just noise.
Basically, it’s designed to prioritize your current thinking over just anything that looks similar on paper.
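For the curious, here's a toy sketch of how signals like these might combine. This is not our actual code; the weights, half-life, and field names are invented purely for illustration:

```python
import math
import time

def relevance_score(similarity, last_accessed_ts, access_count,
                    dismissed, now=None, half_life_days=30.0):
    """Toy scoring sketch: blend text similarity with recency and
    access signals, and let explicit user feedback veto a result.
    All weights here are made up for illustration."""
    if dismissed:
        # User marked this note as noise or outdated: never surface it.
        return 0.0
    now = now if now is not None else time.time()
    age_days = max(0.0, (now - last_accessed_ts) / 86400.0)
    # Exponential decay: a note loses half its recency weight every half-life.
    recency = 0.5 ** (age_days / half_life_days)
    # Repeated access keeps a note "alive"; log dampens heavy-use outliers.
    liveness = min(math.log1p(access_count) / math.log1p(10), 1.0)
    return 0.6 * similarity + 0.25 * recency + 0.15 * liveness
```

With a blend like this, a dismissed note is excluded outright, and of two equally similar notes, the one you touched recently outranks the stale one.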
Liminary
@charlotte_reed1 +1 to Kevin. The other thing I'd add is more about how we approach the design problem than the technical one. Precision in practice is genuinely hard to measure: there's no clean metric for "useful" vs. "technically related," and it also depends on each user's individual preferences. So we lean on a couple of things instead.
First, we let you tell the system when something isn't useful (dismiss it, or mark it as not relevant), and that feedback shapes what gets surfaced next time. The system learns your standard for "useful" as you use it.
Second, we deliberately surface less rather than more. It's easier to build something that throws every "related" note at you, but that's how you get the noise problem. Restraint is a design choice.
Still imperfect, and we're tuning it constantly. The bar we hold ourselves to is that a surfaced note that doesn't earn its place is worse than no note at all.
I wonder if users end up trusting the surfaced context too much, even when it's slightly off.
Liminary
@hudson_blake That's a fair concern and honestly one we think about a lot. The research we did with consultants actually surfaced something counterintuitive: as AI models get better, hallucinations become harder to spot, not easier. When errors were frequent, people checked everything. Now that output quality is generally higher, the temptation to trust it goes up, but the stakes haven't changed. A single wrong stat in a client deliverable is still a professional liability.
The way we've approached this is to make verification the default, rather than an afterthought. Everything Liminary surfaces is tied back to a source you actually saved. So if something looks slightly off, you're one click away from the original document.
We're not asking you to trust the AI. We're trying to make it fast and easy to check it. The goal is to collapse what our users call the "reconciliation loop" - that painful cycle of generating output, hunting down sources, and verifying every line before it goes anywhere near a client.
Liminary
@hudson_blake +1 to everything Kevin said. The other piece I'd add: Liminary is a content-first platform, not chat-first like most AI tools. That distinction matters a lot here. We don't respond from an LLM's general knowledge, we respond from the content you've already saved. And what you save required your judgment and expertise in the first place.
So everything downstream of that, what gets surfaced, what gets synthesized, what gets cited, is grounded in a corpus you already vetted. The AI isn't introducing new claims for you to trust or distrust. It's pulling from sources you chose, and showing you exactly which one. That's also why citations are a core part of the product, not just a polish item.
Does the system ever surface too much context and slow down decision-making instead of helping it?
Liminary
@jack_sullivan5 It's a real tension. The design challenge is balancing relevance and timing against volume.
What we're building is closer to a research assistant who speaks up when something is relevant to what you're working on right now. We've noticed that decision-making tends to be faster when context is fully cited and drawn from sources you've deliberately saved.
If something comes up that isn't helpful, that's useful signal too - Liminary will learn from your feedback. The goal is a system that gets sharper over time.
Interesting idea, but I keep thinking about whether "always-on context" actually improves thinking or just adds more noise.
Liminary
@easton_grant Always-on implies a constant feed, which would absolutely add noise. What we're building at Liminary is closer to ambient context.
Our goal is to minimize the cognitive overhead - focused, high-quality context when you need it, not a constant stream to distract you. The quality of the context that is surfaced is where Liminary can prove its value.
Liminary
@easton_grant Always-on is definitely a challenge, and we spent a lot of time balancing the utility vs distraction question. Like Kevin said, the goal isn't to be on for the sake of being on, but to be available in-context when you actually need it.
A good example: we built controls so users decide when to run fact-check or gap detection while writing, instead of firing those pre-emptively. Surfacing them uninvited can break flow, even when the insight is useful.
Still something we're learning and want to tune per user, because everyone's threshold for "helpful nudge" vs "get out of my way" is different.
Liminary
@easton_grant Hi Easton! It's a big design challenge to present more information in a way that's not distracting, yet there when you need it. Because our goal is providing value rather than asking for engagement, we're at an advantage. We're constantly simplifying language, creating clear information hierarchies, and tweaking our model's instructions so users can choose what's helpful to them.
In real workflows, do people actually maintain structured "knowledge sets," or does it become messy over time?
Liminary
@cody_spencer In reality, it gets messy. That's just how knowledge work typically happens: nobody has time to file things perfectly in the moment.
What we found talking to consultants is that the maintenance burden itself is what kills most knowledge systems. People start with good intentions and a clean folder structure, then three weeks into a busy engagement it's already out of date and they've stopped trusting it.
Liminary is built around that reality. It reduces the overhead of maintaining a knowledge system over time: you save things as you go, and the system does the organizational work in the background. The knowledge set emerges from your actual workflow rather than requiring a separate maintenance routine to keep it alive.
Feels like the hardest part here is not retrieval, but knowing what not to bring into the moment.
Liminary
@dylan_hayes2 Yes, you've identified what we think is actually the harder design problem. Retrieval is largely solved. Knowing what's relevant to the moment - and what isn't - is where the real work is.
It connects back to something Sarah said early on: people don't always know how to describe what they want, but it's almost always inferable from the context they're operating in. That's the principle Liminary is built around.
And when it doesn't have a confident answer, it says so rather than filling the gap with something plausible-sounding.
Mailwarm
Congratulations!
The real value, to me, is not saving knowledge, but making past thinking reusable at the exact moment it matters.
how do you handle memory hygiene over time, especially when old context becomes outdated or no longer reflects the user’s current thinking?
Liminary
Thanks @thamibenjelloun! And yeah, you nailed it. Storage for the sake of saving isn't the point. Finding the most relevant thinking at the right moment is the problem we're trying to solve.
We're carrying over a lot of lessons from working on retrieval at Dropbox. A couple of big ones: recency is a strong signal, but so is access. When you look for something new that's related to an older note, that tells the system the older note is still alive in your thinking, even if you haven't touched it in months. It earns its way back up.
Also, as users curate collections, we treat those as living, not append-only, so pruning and regrouping are part of the workflow, not a chore bolted on top. Updates supersede instead of piling up: when you rewrite a note, the new version is what gets retrieved, and the old framing doesn't keep haunting you. And you stay in the loop: when Liminary surfaces something, you can dismiss it, edit it, or mark it as outdated, and that feedback shapes what shows up next time.
Honestly hygiene is a hard, ongoing problem, and I'd rather make curation lightweight and continuous than pretend the system can fully self-clean.
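To make the "updates supersede" idea concrete, here's a minimal sketch (again, illustrative only, not our implementation): old versions are kept, but only the latest one is eligible for retrieval.

```python
class NoteStore:
    """Minimal sketch of supersede-on-update retrieval.
    Illustrative only; names and structure are invented."""

    def __init__(self):
        # note_id -> list of versions, newest last
        self.versions = {}

    def save(self, note_id, text):
        # Appending preserves history instead of destroying it.
        self.versions.setdefault(note_id, []).append(text)

    def retrievable(self):
        # Only the latest version of each note is eligible for retrieval;
        # older framings are superseded, not deleted.
        return {nid: vs[-1] for nid, vs in self.versions.items()}
```

Rewriting a note simply calls `save` again, so retrieval always reflects your current framing while the history stays available.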
Finally something that actually works to bring together the context mess I've created across my digital universe!
Liminary
@matthew_barclay Thank you Matthew, this means a lot. The "context mess" framing really resonates; it's the exact problem that got me to start building this. I hope Liminary lives up to that promise as you actually use it, and please tell me when it doesn't.
How does it handle conflicting versions of the same idea across different notes or time periods?
Liminary
@caleb_hunter_guahip It's a genuinely hard technical problem, and one the team has thought carefully about. When you save sources into Liminary, they don't just sit in a file store waiting to be retrieved. The system runs an extraction process immediately, building an understanding of the content that includes the relationships between sources - where things corroborate each other, and where they contradict.
So if you saved a client interview from six months ago and a more recent one where the same person has changed their view, Liminary doesn't flatten those into a single answer and present whichever ranks highest. It surfaces both, with enough context for you to see where the tension sits.
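The "surface both sides" behavior can be sketched roughly like this. A hypothetical example, with invented names and a deliberately simplified stand-in for extracted meaning:

```python
from dataclasses import dataclass

@dataclass
class Claim:
    subject: str
    stance: str   # simplified stand-in for the extracted position
    source: str   # provenance: which saved document made the claim
    date: str

def surface(claims, subject):
    """Illustrative sketch: if saved sources disagree about a subject,
    return every side with its provenance instead of only the top hit."""
    hits = [c for c in claims if c.subject == subject]
    stances = {c.stance for c in hits}
    if len(stances) > 1:
        return hits        # conflicting views: show all, with sources
    return hits[-1:]       # consensus: the most recent claim suffices
```

So two interviews where the same person takes opposite positions both come back, each pointing at its source, rather than the system silently picking a winner.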