Launching today
Hyperterse

Connect your data to your agents.

Unlike existing data tools that rely on insecure text-to-SQL or tedious hand-written APIs, Hyperterse treats data access as declarative infrastructure. Define queries once, and it auto-generates secure Model Context Protocol (MCP) tools and REST endpoints. Standout features include "Security-by-Abstraction" (agents never see raw SQL), automatic input validation, and real-time generation of LLM-friendly documentation. It bridges the "Data Access Gap" for your Postgres, MySQL, and Redis data.

Samrith Shankar
Samrith Shankar
Hi Product Hunt! 👋 I’m Samrith, the creator of Hyperterse.

We are witnessing a massive shift from passive chatbots to active Agentic AI, a market projected to hit over $50B by 2030. But as many of you know, there is a massive "Data Access Gap" holding us back. To make agents useful, you have to connect them to production data, and right now, that process is broken.

The Problem: The "Integration Nightmare"
If you want to give Claude or a custom agent access to your PostgreSQL or MySQL database, you usually have two bad options:
1. Risky Text-to-SQL: Letting an LLM write raw SQL is a security minefield (hello, prompt injection) and a reliability trap.
2. Manual Boilerplate: Hand-coding API wrappers, validating inputs, and maintaining llms.txt files takes hours of "toil", contributing to the 5+ hours a week developers already lose to unproductive tasks.

The Solution: Hyperterse
Hyperterse is an open-source runtime that treats data access as declarative infrastructure. You define your queries once in a simple config file, and we handle the rest.

✨ Key Features:
- 🔌 MCP Native: Automatically generates Model Context Protocol (MCP) tools that agents (like Claude or Cursor) can discover and use instantly.
- 🛡️ Security-by-Abstraction: Agents never see your raw SQL or schema. They only see typed, validated tools. This kills SQL injection risks.
- ⚡ Zero-Boilerplate: We auto-generate REST endpoints, OpenAPI specs, and LLM-friendly documentation in real time.
- 🗄️ DB Agnostic: Works with PostgreSQL, MySQL, and Redis out of the box.

Why Open Source?
We believe the "interface" between AI and data should be a standard, not a silo. With the explosion of MCP adoption, developers need a reliable bridge that just works.

I’d love to hear your feedback! Give us a star on GitHub if you love this project; that goes a long way in helping us out! ⭐️
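To make the "define once" idea concrete, here is a sketch of the kind of MCP tool definition a runtime like this can generate from a declared query. The query name and fields below are just illustrative, not Hyperterse's actual output; the envelope follows the MCP `tools/list` result shape, where each tool carries a JSON Schema describing its inputs:

```python
# Sketch: turning a declared query into an MCP-style tool definition.
# The query name/fields are hypothetical; the envelope follows the MCP
# tools/list format (name, description, inputSchema as JSON Schema).

def tool_from_query(name: str, description: str, inputs: dict) -> dict:
    """Build an MCP-style tool definition from a declared query."""
    return {
        "name": name,
        "description": description,
        "inputSchema": {
            "type": "object",
            "properties": {k: {"type": t} for k, t in inputs.items()},
            "required": list(inputs),
        },
    }

tool = tool_from_query(
    "get_users_by_city",  # hypothetical query name
    "Return users (with profile and address) living in a given city.",
    {"city": "string"},
)
print(tool["inputSchema"]["required"])  # ['city']
```

The agent never sees the SQL behind the tool, only this typed, self-describing interface.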
mostafa kh

@samrith nice approach to a real pain point. i've burned hours writing boilerplate wrappers just to let an agent query a database safely, and text to sql is always sketchy in production.

the security by abstraction idea is smart. agents only seeing typed tools instead of raw schema removes a whole class of problems.

couple of questions: how does it handle complex joins or multi step queries? like if an agent needs data from 3 tables, do you predefine that as one tool or can it chain multiple tools together? also curious about performance overhead compared to direct db queries.

Samrith Shankar

@topfuelauto Hello Mostafa, thank you so much!

Since it uses native DB drivers and not ORMs, your "statements" are just SQL strings. You can easily perform multi-table joins. For example, this is a statement I have in one of my queries:

SELECT DISTINCT
    u.id AS user_id,
    u.email,
    up.first_name,
    up.last_name,
    a.city,
    a.state,
    a.country
FROM users u
JOIN user_profiles up ON u.id = up.user_id
JOIN addresses a ON up.address_id = a.id
WHERE a.city = {{ inputs.city }}
ORDER BY u.email

The best part about descriptions in the Hyperterse config is that they tell LLMs what to expect and what to do. Agents are inherently good at chaining multiple tool calls if they are given apt context alongside the tools.
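For the curious, the general technique behind placeholders like {{ inputs.city }} looks roughly like this (an illustrative sketch, not Hyperterse's actual internals): compile the template into a driver-level parameterized query, so agent input is passed as a bind parameter and is never concatenated into the SQL string.

```python
import re

# Sketch of the general technique: compile {{ inputs.x }} placeholders
# into positional bind parameters (psycopg-style %s shown here), so the
# database driver treats agent input as data, never as SQL.
# Illustrative only, not Hyperterse's actual implementation.

PLACEHOLDER = re.compile(r"\{\{\s*inputs\.(\w+)\s*\}\}")

def compile_statement(template: str, inputs: dict):
    """Replace {{ inputs.x }} with %s placeholders and collect bind params."""
    params = []

    def sub(match):
        params.append(inputs[match.group(1)])
        return "%s"

    return PLACEHOLDER.sub(sub, template), params

sql, params = compile_statement(
    "SELECT u.email FROM users u WHERE u.city = {{ inputs.city }}",
    {"city": "Berlin'; DROP TABLE users;--"},  # hostile input stays inert
)
# sql: "SELECT u.email FROM users u WHERE u.city = %s"
# params: ["Berlin'; DROP TABLE users;--"]
```

Because the hostile string rides along as a parameter, the driver escapes it; there is no code path where it becomes part of the SQL text.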

The performance overhead is minimal, since it is as close to a direct DB query as possible. The entire thing uses native drivers for all databases, so the only latency you really face is network latency. Hyperterse itself is super light. I don't have benchmarks yet, but in my local tests, Hyperterse calls themselves took an average of 2-5ms per request.

Victor Grdr

@samrith such a great idea! upvoted:) super curious about how are you handling the actual mcp protocol implementation? we've had so many issues with mcps being flaky or not maintaining connections properly. also curious about latency - are you caching query results or is every agent call hitting the db directly? and last thing, how do you handle when different agents need different levels of access to the same data? like some should only read, others can write, etc. we're trying to figure out the right architecture for this and would love to know how you're approaching it. congrats on the launch!

Samrith Shankar

@victor_eth Hey Victor,

Thank you so much!

  • The entire MCP protocol is implemented from scratch as JSON-RPC 2.0 per the spec, directly within the Hyperterse engine, over Streamable HTTP.

    • Just curious, what kind of flakiness are you facing? Or where exactly? I have been running tons of tests, and haven't really run into it a lot. What are you using for your MCP server?

  • I am not caching results right now, but plan to add it in the near future (by March 2026), so unfortunately all queries hit the DB directly.

    • Latency is so far a non-issue. I don't have proper benchmarks yet, but in my tests I have averaged a tool-call time of 2-5ms locally, and a round-trip time of about 15ms on hosted MCPs.

  • Agent-level access is something I have thought of and am currently brainstorming on. The way I handle it right now is to run separate MCP servers, one for reads and another for writes, and give each agent access to the relevant one.

  • One thing that is in the plan, though, is a gateway that handles routing for you. This is where I envision all auth, roles, etc. being handled. Until we have something like that, you can put your MCP server behind an API gateway with a token exchange that works like a real user's, and distinguish callers via user-agent or agent-identity headers.
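To make the protocol layer concrete: on the wire, a single tool invocation is just a JSON-RPC 2.0 request POSTed over Streamable HTTP. The tool name and argument below are hypothetical; the envelope follows the MCP spec's tools/call method:

```python
import json

# Sketch of a single MCP tool call on the wire: a JSON-RPC 2.0 request
# body using the spec's tools/call method. Tool name and arguments are
# hypothetical examples.

def tools_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build a JSON-RPC 2.0 `tools/call` request body."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

body = tools_call(1, "get_users_by_city", {"city": "Berlin"})
```

A gateway sitting in front of the server only needs to inspect this envelope (plus the caller's headers) to route or deny the call.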

Daniele Packard

Powerful to see it create MCP tools and documentation at the same time - congrats!

Samrith Shankar

@daniele_packard Thank you so much, please do try it and let me know if you have any feedback!