Yavy

Turn any website into an MCP server for AI


Yavy turns any public website into an MCP server. Paste a URL, and we crawl, index, and serve your content to AI tools like Claude, Cursor, and any MCP-compatible assistant. No more copy-pasting docs into chat. No more hallucinated answers. Your AI gets accurate, up-to-date information from your actual content. Perfect for developer docs, help centers, blogs, and knowledge bases. Set up in minutes - no code required. Organize multiple sources and share access with your team.
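For context, the servers Yavy generates expose search tools that MCP clients can call. Below is a minimal sketch of that shape using the official Python MCP SDK (the `mcp` package); the tool name, index contents, and keyword matching are illustrative assumptions, not Yavy's actual implementation.

```python
# Minimal sketch of the kind of search tool an MCP server can expose, using the
# official Python MCP SDK ("mcp" package). The index and tool below are
# hypothetical stand-ins, not Yavy's actual API.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("docs-search")

# Hypothetical in-memory index standing in for crawled, chunked site content.
INDEX = {
    "getting-started": "Install the CLI, then run init to scaffold a project.",
    "authentication": "Pass your API key in the Authorization header.",
}

@mcp.tool()
def search_docs(query: str) -> str:
    """Return indexed chunks that mention the query (simple keyword fallback)."""
    hits = [f"[{page}] {text}" for page, text in INDEX.items()
            if query.lower() in text.lower()]
    return "\n".join(hits) or "No matching documentation found."

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio, which MCP clients can attach to
```

An MCP-compatible client (Claude, Cursor, etc.) pointed at a server like this can call `search_docs` instead of relying on what the model remembers.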

Vildan Bina
Hey Product Hunt! 👋

I built Yavy out of pure frustration. Every day I'd find myself copying chunks of documentation into Claude or Cursor, asking "how does X work?" - then doing it again 5 minutes later for a different page. The AI would sometimes hallucinate answers that sounded right but weren't in the actual docs.

When MCP came out, I saw the solution: what if any documentation could become an MCP server? Your AI assistant could just... search the real docs directly.

So I built Yavy. Paste any public URL - framework docs, help centers, blogs - and it crawls, indexes with semantic embeddings, and serves it via MCP. Now Claude, Cursor, and other AI tools can search your actual content instead of guessing.

What I learned building this:
- Chunk-based indexing beats full-page indexing for accuracy
- Semantic search changes everything - find by meaning, not keywords (see the sketch after this comment)
- Developers want one place to connect ALL their docs (hence organizations & multi-project support)

I'm using Yavy myself every day now. No more copy-paste workflow. No more hallucinated API methods.

Would love your feedback - what docs would you index first?
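To make the chunking and semantic-search points above concrete, here is a minimal sketch of chunk-based embedding search; the model choice, chunk size, and helper names are illustrative assumptions, not Yavy's pipeline.

```python
# Sketch of chunk-based semantic search: split pages into chunks, embed them,
# and rank by cosine similarity. Model and chunk size are assumptions.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

def chunk(text: str, size: int = 400) -> list[str]:
    # Fixed-size character chunks; production systems usually split on
    # headings or paragraphs instead.
    return [text[i:i + size] for i in range(0, len(text), size)]

def build_index(pages: dict[str, str]):
    chunks = [(url, c) for url, page in pages.items() for c in chunk(page)]
    vectors = model.encode([c for _, c in chunks], normalize_embeddings=True)
    return chunks, vectors

def search(query: str, chunks, vectors, k: int = 3):
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = vectors @ q  # cosine similarity, since embeddings are normalized
    top = np.argsort(scores)[::-1][:k]
    return [(chunks[i][0], chunks[i][1], float(scores[i])) for i in top]
```

Because matches are scored by embedding similarity, a query like "how do I log in" can surface a chunk about authentication even with no keyword overlap - that's the "find by meaning" point.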
Victor Grdr

This is super cool - I've been dealing with MCP hell lately for my own startup, and honestly the setup process is brutal. Curious how you guys are handling the crawling and indexing - are you doing real-time updates when sites change, or is it more of a snapshot thing? Also wondering about rate limits and how you deal with sites that don't want to be crawled. We've been trying to connect our agents to docs and wikis and it's way harder than it should be, so I'm really excited to see someone tackling this properly. Congrats on the launch!

Vildan Bina

@victor_eth Really good questions!

Crawling: staleness-based snapshots, not real-time. Each page has a refresh frequency (daily by default). We only re-crawl what's actually stale and skip re-indexing if the content hash hasn't changed.

Rate limits: 100ms delays between requests, max 5 concurrent jobs per project, depth/URL caps, and we identify ourselves with a custom User-Agent.

robots.txt: yes, we respect it. We check for Sitemap directives first and prefer using the site's own sitemap over recursive crawling.

Would love to hear more about your use case!
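For readers curious what those rules look like in practice, here is a rough sketch of the staleness check, content-hash skip, politeness delay, and robots.txt check described above; the User-Agent string and all helper names are assumptions, not Yavy's code.

```python
# Sketch of the staleness + politeness rules described above: daily refresh by
# default, skip re-indexing on unchanged content hash, 100ms delay between
# requests, custom User-Agent, and robots.txt respected. Illustrative only.
import hashlib
import time
import urllib.robotparser
from datetime import datetime, timedelta

import requests

USER_AGENT = "ExampleDocsBot/1.0"  # hypothetical; Yavy's real UA differs

def allowed_by_robots(url: str, robots_url: str) -> bool:
    rp = urllib.robotparser.RobotFileParser()
    rp.set_url(robots_url)
    rp.read()
    # rp.site_maps() also exposes any Sitemap directives, which can be
    # preferred over recursive crawling, as described above.
    return rp.can_fetch(USER_AGENT, url)

last_crawled: dict[str, datetime] = {}  # url -> last fetch time
seen_hashes: dict[str, str] = {}        # url -> content hash at last index

def crawl_if_stale(url: str, refresh: timedelta = timedelta(days=1)) -> str | None:
    now = datetime.utcnow()
    if url in last_crawled and now - last_crawled[url] < refresh:
        return None  # still fresh: skip re-crawling entirely
    time.sleep(0.1)  # 100ms politeness delay between requests
    resp = requests.get(url, headers={"User-Agent": USER_AGENT}, timeout=10)
    last_crawled[url] = now
    digest = hashlib.sha256(resp.content).hexdigest()
    if seen_hashes.get(url) == digest:
        return None  # content unchanged: skip re-indexing
    seen_hashes[url] = digest
    return resp.text  # changed (or new) content goes back through indexing
```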

Daniele Packard

Very cool! Will it automatically collect content from all subdirectories (e.g., starting from a parent docs page, collect the content of all docs)? This is a huge pain to do manually.

Vildan Bina
@daniele_packard Yes - for the web crawl discovery type, Yavy performs a recursive crawl across all documentation pages.
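As a rough illustration of what recursive discovery from a parent docs page involves, here is a breadth-first, same-host crawl sketch; the depth and URL caps are placeholder values, not Yavy's actual limits.

```python
# Sketch of recursive discovery: start from a parent docs URL, follow same-host
# links breadth-first, and stop at depth/URL caps. Caps here are placeholders.
from collections import deque
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup

def discover(start: str, max_depth: int = 3, max_urls: int = 500) -> list[str]:
    host = urlparse(start).netloc
    queue = deque([(start, 0)])
    seen = {start}
    found: list[str] = []
    while queue and len(found) < max_urls:
        url, depth = queue.popleft()
        found.append(url)
        if depth >= max_depth:
            continue
        html = requests.get(url, timeout=10).text
        for link in BeautifulSoup(html, "html.parser").find_all("a", href=True):
            nxt = urljoin(url, link["href"]).split("#")[0]
            if urlparse(nxt).netloc == host and nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, depth + 1))
    return found
```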