Launching today
Hyperterse
Connect your data to your agents.
73 followers
Hyperterse treats data access as declarative infrastructure, unlike existing data tools that rely on insecure text-to-SQL or tedious hand-written APIs. Define queries once, and we auto-generate secure Model Context Protocol (MCP) tools and REST endpoints. Standout features include "Security-by-Abstraction" (agents never see raw SQL), automatic input validation, and real-time generation of LLM-friendly documentation. It bridges the "Data Access Gap" for your Postgres, MySQL, and Redis data.
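A minimal sketch of what "Security-by-Abstraction" implies, assuming a typed-tool wrapper (the `QueryTool` class and all names here are illustrative, not Hyperterse's actual API): the SQL stays server-side, the agent only sees a tool name plus typed inputs, and inputs are validated before the database is touched.

```python
# Hypothetical sketch: agents call a typed tool; the raw SQL never leaves
# the server. Class and method names are illustrative, not Hyperterse's API.
class QueryTool:
    def __init__(self, name, sql, params):
        self.name = name
        self._sql = sql        # raw SQL stays server-side
        self._params = params  # {param_name: expected_type}

    def describe(self):
        # What the agent sees: the tool name and typed inputs, no SQL.
        return {"name": self.name,
                "inputs": {k: t.__name__ for k, t in self._params.items()}}

    def run(self, db_execute, **kwargs):
        # Validate inputs against the declared types before hitting the DB.
        for key, expected in self._params.items():
            if key not in kwargs or not isinstance(kwargs[key], expected):
                raise ValueError(f"invalid input: {key}")
        return db_execute(self._sql, kwargs)

orders_by_user = QueryTool(
    name="orders_by_user",
    sql="SELECT id, total FROM orders WHERE user_id = :user_id",
    params={"user_id": int},
)
print(orders_by_user.describe())
# → {'name': 'orders_by_user', 'inputs': {'user_id': 'int'}}
```

Because the model only ever sees the `describe()` output, prompt-injected SQL has nothing to attach to: malformed or mistyped arguments are rejected before any query runs.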






@samrith nice approach to a real pain point. i've burned hours writing boilerplate wrappers just to let an agent query a database safely, and text to sql is always sketchy in production.
the security by abstraction idea is smart. agents only seeing typed tools instead of raw schema removes a whole class of problems.
couple of questions: how does it handle complex joins or multi step queries? like if an agent needs data from 3 tables, do you predefine that as one tool or can it chain multiple tools together? also curious about performance overhead compared to direct db queries.
@topfuelauto Hello Mostafa, thank you so much!
Since it uses native DB drivers and not ORMs, your "statements" are just SQL strings. You can easily perform multi-table joins. For example, this is a statement I have in one of my queries:
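A representative statement of that shape (table and column names hypothetical) might be a plain three-table join with a named parameter:

```python
# Illustrative only: a three-table join as a plain SQL string, the kind of
# value a Hyperterse "statement" could hold. Table names are hypothetical.
statement = """
SELECT u.name, o.id AS order_id, p.title
FROM users u
JOIN orders o   ON o.user_id = u.id
JOIN products p ON p.id = o.product_id
WHERE u.id = :user_id
"""
```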
The best part about descriptions in the Hyperterse config is that they tell LLMs what to expect and what to do. Agents are inherently good at chaining multiple tool calls if they're given apt context alongside the tools.
The performance overhead is very minimal since it is as close to a direct DB query as possible. The entire thing uses native drivers for all databases. The only latency you really face is network latency; Hyperterse itself is super light. I don't have benchmarks yet, but in my local tests, Hyperterse calls themselves took an average of 2-5ms per request.
@samrith such a great idea! upvoted:) super curious about how are you handling the actual mcp protocol implementation? we've had so many issues with mcps being flaky or not maintaining connections properly. also curious about latency - are you caching query results or is every agent call hitting the db directly? and last thing, how do you handle when different agents need different levels of access to the same data? like some should only read, others can write, etc. we're trying to figure out the right architecture for this and would love to know how you're approaching it. congrats on the launch!
@victor_eth Hey Victor,
Thank you so much!
The entire MCP protocol is implemented from scratch as JSON-RPC 2.0 per the spec, directly within the Hyperterse engine, over Streamable HTTP.
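Concretely, an MCP tool invocation over that transport is a JSON-RPC 2.0 request with method `tools/call`, POSTed as a JSON body to the server's MCP endpoint (the tool name and arguments below are hypothetical):

```python
import json

# Shape of an MCP tool call as JSON-RPC 2.0, per the MCP spec.
# The tool name and arguments are illustrative.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "orders_by_user",
        "arguments": {"user_id": 42},
    },
}
body = json.dumps(request)  # sent as the HTTP POST body
```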
Just curious, what kind of flakiness are you facing? Or where exactly? I have been running tons of tests, and haven't really run into it a lot. What are you using for your MCP server?
I am not caching results right now, but plan to add it in the near future (by March 2026), so unfortunately all queries hit the DB directly for now.
Latency is so far a non-issue. I don't have proper benchmarks yet, but from my tests I have averaged a tool-call time of 2-5ms locally, and a round-trip time of about 15ms on hosted MCPs.
Agent-level access is something I have thought about and am currently brainstorming. The way I handle it right now is to run different MCP servers, one for read and another for write, and give each agent access to the relevant one.
One thing that is in the plan, though, is a gateway that handles routing for you. This is where I envision all auth, roles, etc. being handled. Until we have something like that, you can put your MCP behind an API gateway and do a token exchange that works much like it would for a real user, resolving access via user-agent or agent-identity headers.
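That interim setup can be sketched as a tiny routing layer (header name, upstream URLs, and role table are all hypothetical): the gateway reads an agent-identity header populated by the token exchange and forwards to the read-only or read-write MCP server accordingly.

```python
# Hypothetical sketch of header-based routing in front of two MCP servers.
# All names (header, upstreams, roles) are illustrative.
READ_ONLY_UPSTREAM = "http://mcp-read:8080"
READ_WRITE_UPSTREAM = "http://mcp-write:8080"

AGENT_ROLES = {  # populated during your token-exchange step
    "reporting-agent": "read",
    "ops-agent": "write",
}

def route(headers: dict) -> str:
    """Pick an upstream MCP server from the agent-identity header."""
    agent = headers.get("x-agent-identity")
    role = AGENT_ROLES.get(agent)
    if role == "write":
        return READ_WRITE_UPSTREAM
    if role == "read":
        return READ_ONLY_UPSTREAM
    raise PermissionError(f"unknown agent: {agent!r}")
```

The same lookup is where roles and auth would naturally live once a first-class gateway exists.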
Cloudthread
Powerful to see it create MCP tools and documentation at the same time - congrats!
@daniele_packard Thank you so much, please do try it and let me know if you have any feedback!