
OLLM.COM
The Confidential AI Gateway
78 followers
OLLM is a privacy-first AI gateway that offers a curated selection of popular LLMs deployed on confidential computing hardware such as Intel SGX and NVIDIA confidential GPUs. Its zero-knowledge architecture means zero data visibility, retention, or training use, and data stays encrypted during processing, not just in transit or at rest. As an extra layer of verifiable privacy, OLLM gives users cryptographic proof that their requests were processed inside a TEE (trusted execution environment).
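For the technically curious, that proof flow can be pictured as the enclave signing a digest of each request and the client checking the signature against the enclave's attested public key. Here is a minimal sketch in Python, assuming an Ed25519 signature over a SHA-256 digest; the key type, message format, and names below are assumptions for illustration, not OLLM's documented scheme:

```python
# Illustrative sketch only: OLLM's real attestation format is not documented
# in this thread. We assume the TEE signs sha256(request_body) with Ed25519
# and that the public key was obtained from a verified attestation quote.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey


def verify_tee_proof(attested_pubkey: bytes, signature: bytes, request_body: bytes) -> bool:
    """Return True if the TEE's signature covers our exact request bytes."""
    digest = hashlib.sha256(request_body).digest()
    key = Ed25519PublicKey.from_public_bytes(attested_pubkey)
    try:
        key.verify(signature, digest)  # raises InvalidSignature on tampering
        return True
    except InvalidSignature:
        return False
```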

Very nice and much needed. Many people are wondering the same thing: how truly private are their data and code once they go through the network and the result comes back? We can't be sure our data is protected at every step. This would apparently solve that issue.
The docs button is not working on the website. I would love to understand how to actually verify that the data is really encrypted. Right now you report that it is, but why should we trust what you report? How can users actually verify everything?
Thank you for trying out the product @true_alter. For now we have made a demo to showcase how to access the TEE proofs while we work on the docs.
https://app.supademo.com/demo/cmjbgf0ye2xhmf6zps7aftnn3?utm_source=link
@shadid_io Thanks, will check it out
Hey team, first of all, super cool concept! I do have to say that, even though the UI is pretty sleek, the login flow and the way you generate your API key are not very intuitive. I'd suggest a very clear "login" button when you enter the console that redirects you straight to the dashboard.
@marina_romero There’s a fine line between user feedback and internal review notes - context tends to blur it 😄
I gave OLLM a real try for coding with Zed, routing requests to DeepSeek-V3.1 on NEAR, and wanted to share honest feedback.
First: this is not just a landing-page idea, it actually works. I ran fairly large coding contexts (20–25k input tokens per request), and inference was stable. Latency varied (a few seconds for short completions, ~20s or more for heavier ones), which is expected with confidential compute plus large contexts. Costs were transparent and predictable, which I appreciated.
What really stands out is the request-level transparency. Being able to inspect individual requests with:
TEE attestation
Intel TDX + NVIDIA GPU verification
cryptographic signatures
…is something I haven’t seen in any mainstream coding assistant. This feels like a fundamentally different trust model than “just believe our privacy policy.”
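To make that concrete, here's a rough sketch of the kind of per-request record this exposes. The field names below are my own illustration, not OLLM's actual schema; the thread only confirms that attestation, TDX/GPU verification, and signatures are visible per request:

```python
# Illustrative only: field names are invented for this sketch.
import json

record = json.loads("""
{
  "request_id": "req_0123",
  "model": "deepseek-v3.1",
  "tee":  {"type": "intel-tdx", "attestation_verified": true},
  "gpu":  {"type": "nvidia",    "attestation_verified": true},
  "signature": "ZmFrZS1zaWduYXR1cmU..."
}
""")

checks = {
    "CPU TEE (Intel TDX) attested": record["tee"]["attestation_verified"],
    "NVIDIA GPU attested":          record["gpu"]["attestation_verified"],
    "response signed":              bool(record["signature"]),
}
print(record["request_id"], "->", all(checks.values()), checks)
```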
That said, wearing an enterprise / security hat, I’m curious about what’s coming next beyond the tech (which is impressive):
Will there be contractual assurances suitable for enterprises?
DPA / SCCs / indemnities?
Audit rights?
SLAs and incident response commitments?
An SSO login option for corporate clients?
It would also be nice to see a success story, e.g. an actual app using this gateway, or how the founder is using it.
Those are usually the last blockers for adoption in regulated orgs, even when the tech is strong.
One small UX note from actual usage:
When you have many past requests, it’s hard to find a specific historical request. Having the request ID visible in the dashboard + a search-by-ID in the explorer/scanner would make debugging and audits much smoother.
Overall: this feels like a serious attempt to move AI from “trust us” to “verify it yourself”. Curious to see how it evolves beyond MVP.
OLLM is one of the first AI platforms that actually closes the trust gap instead of just asking you to “believe the privacy policy.” It lets you run open‑source LLMs inside confidential computing environments (TEEs) and gives you cryptographic attestation for every call, so you can prove where and how your data was processed. This makes it a genuinely usable option for teams handling sensitive code, financial data, or patient records who want modern AI workflows without sacrificing compliance or peace of mind.
Great initiative and a much-needed direction. At a time when trust in AI is mostly based on promises, an approach grounded in verifiable, cryptographic proof is a real game changer. Wishing OLLM strong momentum and looking forward to seeing how the product evolves.
Which open-source models are you planning to support next, and are you considering any code-focused LLMs specifically?