Launching today

Weavable
Give every AI agent persistent work context
80 followers
Weavable gives AI agents persistent, live work context from the tools your business already runs on. Through a single MCP endpoint, it turns scattered updates, relationships, and system changes into a usable context layer so agents can reason more accurately without constantly re-ingesting data. The result is lower token usage, better outputs, and more reliable agent behavior across real business workflows.
Hey Product Hunt 👋 I'm Abesh, co-founder of That Works, and today we are launching Weavable.
The Problem
Teams building agentic workflows are sitting on a goldmine of work context: decisions, relationships, pipeline data, and support history spread across every tool they use. Getting that context into agents reliably is still harder than it should be.
Most approaches follow one of two flawed paths:
❌ Direct app connections - raw API and MCP responses flood the model, token costs balloon, and the agent burns its context window figuring out what matters instead of acting on it.
❌ Static knowledge bases or RAG - context goes stale the moment it's captured. Agents work from the last snapshot and confidently get things wrong.
So we built Weavable.
The difference is measurable: one-tenth the tokens compared to direct app connections, with outputs preferred 85% of the time in LLM-as-a-judge evals.
How Weavable is Different 🔌
Weavable is context infrastructure for AI agents. Instead of dumping raw data on the model or freezing a snapshot, Weavable maintains a continuously updated changelog across your actual work tools, so the knowledge graph your agents reason from is always mapped, reconciled, and up to date.
🔷 Connect your tools: one OAuth flow covers HubSpot, Slack, Zendesk, Jira, GitHub, email. Scoped access, no broad permissions.
🔷 Define shared contexts: customer health might live across your HubSpot pipeline, Zendesk queue, and a Slack channel. Weavable pulls that together into a single context your whole team's agents reason from. No per-agent app connections, no duplicated permissions, no visibility gaps.
🔷 Plug it in: one MCP endpoint into Claude, Cursor, n8n, or any client you're running. Live in a few minutes.
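For anyone wondering what "plug it in" looks like in practice, here's a rough sketch of a typical MCP client setup, using Claude Desktop's `claude_desktop_config.json` as the example (the endpoint URL below is a placeholder, not our real one, and clients that support remote MCP servers natively can skip the `mcp-remote` bridge):

```json
{
  "mcpServers": {
    "weavable": {
      "command": "npx",
      "args": ["-y", "mcp-remote", "https://your-weavable-endpoint.example"]
    }
  }
}
```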
Who is this for?
If you're building or operating agentic workflows on top of real work data, and you're tired of silent failures, token blowout, and context that's always slightly wrong - Weavable is built for you.
🚀 Get started today
Start free for 30 days with full access, no card required, at weavable.ai
— Abesh & Varun
Nice! I especially like the activity graph/changelog approach because it treats context as something dynamic.
Curious: how do you actually reduce token usage by 90%?
@grantmac_ Thanks! Indeed, nothing about data is ever static!
On the tokens: pulling raw data into the context window costs you twice. Once on ingestion, then again on reasoning, while the LLM connects records, sorts by recency, and figures out what matters. Weavable's graph already knows the relationships and what changed when. The agent queries for the specific signals it needs, and the model only reasons over those.
The 90% is what we see on realistic workflows like pre-meeting briefs, renewal analysis, pipeline summaries, compared to the same thing built on raw MCP calls.
@varunn very cool guys!
This is a really interesting point of view. The activity-graph approach makes sense — context should reflect what's happening, not just what was recorded.
One question from our experience building Faindo: we connect to multiple AI models (ChatGPT, Perplexity, Gemini) and one challenge we keep hitting is that each model interprets the same context differently depending on how it was trained. Does Weavable normalize context before it hits the MCP endpoint, or does it stay model-agnostic and let the agent handle interpretation?
Congrats on the launch, following the progress closely.
@nerijusrimdzius Thanks, and a great question!
Weavable structures context before it hits the MCP endpoint. We rank it, denoise it, and resolve the connections that carry the most signal, so the downstream model gets a high-quality, ready-to-reason-over view rather than raw records to make sense of.
One thing we've noticed: because of how we construct the context, models in the same class tend to reason about it in similar ways. The structure is unambiguous enough that interpretation converges. Different classes of model still unlock different capabilities on top of that, but the floor moves up everywhere, and the variance within a class drops noticeably.
Curious where you've seen the biggest gaps across the three you're running at Faindo. That's exactly the kind of cross-model signal we want to be informed by.
Jumping in as the other maker.
Here’s the bet underneath everything we built: work isn’t just documents or records. It’s activity. The things people and agents do over time. A renewal slips because three signals lined up across CRM, support, and Slack that nobody connected. A deal closes because of a conversation in a thread, not a field.
The record is the residue. The work is what moved.
Most AI context tools either flatten all of that into a snapshot, or stitch together a handful of MCPs that make endless calls against flat records, pollute the context window, and still don’t know what changed or why. We thought both were wrong.
So we built Weavable on a deterministic engine that tracks how information changes, builds a changelog of every meaningful update, and stitches it into an activity graph. That graph is what your agent queries through the MCP endpoint. Not a summary, not a vector blob. A structured, time-aware picture of what’s actually happening. And because your agent can query for the specific signals it needs, it doesn’t ingest an entire workspace to find them. Less context window, less cost, sharper answers.
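To make the idea concrete, here's a toy sketch of what a changelog stitched into an activity graph could look like. This is not our actual engine, just the shape of the primitive: every meaningful update becomes a timestamped change event tied to an entity, and an agent queries for the recent signals on one entity instead of ingesting the whole workspace.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Change:
    """One meaningful update captured from a connected tool."""
    at: datetime
    source: str   # e.g. "hubspot", "zendesk", "slack"
    entity: str   # e.g. "customer:acme"
    summary: str

@dataclass
class ActivityGraph:
    changelog: list[Change] = field(default_factory=list)

    def record(self, change: Change) -> None:
        self.changelog.append(change)

    def signals(self, entity: str, limit: int = 5) -> list[Change]:
        """Return only the most recent changes for one entity,
        newest first, rather than the entire changelog."""
        related = [c for c in self.changelog if c.entity == entity]
        return sorted(related, key=lambda c: c.at, reverse=True)[:limit]

g = ActivityGraph()
g.record(Change(datetime(2024, 5, 1), "hubspot", "customer:acme",
                "Renewal stage moved to At Risk"))
g.record(Change(datetime(2024, 5, 2), "zendesk", "customer:acme",
                "P1 ticket opened"))
g.record(Change(datetime(2024, 5, 2), "slack", "customer:beta",
                "Pricing question in #sales"))

# The agent asks for acme's recent signals, not a workspace dump.
for c in g.signals("customer:acme"):
    print(c.at.date(), c.source, "-", c.summary)
```

The real system does far more (reconciliation, ranking, cross-tool identity resolution), but the core trade is the same: the graph pays the cost of connecting records once, so the model doesn't pay it in tokens on every call.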
Would love to hear from anyone who’s tried to solve this differently. We think the activity-graph approach is the right primitive, but we’re early enough that we want to be wrong out loud if we are.
DayOne
Yes! I've been trying to solve this problem for months with various (often questionable) hacks. Love it.
One thing I’ve been thinking about a lot with agentic systems is context governance.
Most teams have hugely different sensitivity levels across their data: customer conversations, board discussions, HR issues, commercial terms, etc. How does Weavable handle permissions and context boundaries so agents only reason from the information that specific users or teams should actually be able to see?
Massive congrats on the launch! 🚀 One-tenth the tokens vs direct app connections, with 85% preference in LLM-as-judge evals — that's a serious pair of numbers to lead with, and it maps to a real pain. Most agent setups I've seen either drown in raw API output or reason from a snapshot that's already wrong. Treating context as live infra rather than a dump or a freeze is the right call. Signing up.
@dyballnoble Thank you! "Live infra rather than a dump or a freeze" is a sharper way of putting it than we've managed ourselves 😀
The whole bet behind the activity graph is that context has to be a live, structured view of what's happening, not a dump of static data handed to the model where reasoning happens at a language level. Work is about things happening, not just the end-state artifact. Looking forward to hearing what you think once you're in.
@varunn Jumping in this week, will share notes!
Epsilla (YC S23)
Congratulations, and happy product launch @abesh_thakur!
Thanks so much for your support @huisong_li!