Launching today

Tinfoil
AI chat and API that keeps your conversations fully private
76 followers
Don’t want OpenAI seeing all your conversations? We don’t either. That’s why we built Tinfoil, an AI that keeps your conversations strictly between you and the AI model; everyone else is locked out. It’s like a local AI, but running in the cloud on secure hardware. Tinfoil leverages hardware security features available on NVIDIA GPUs to deliver verifiable privacy. No pinky promises required: you can check for yourself that your conversations are end-to-end private.
Tinfoil
Hi there! I’m Sacha, cofounder of Tinfoil. Excited to share what we’ve been building!
Tinfoil gives you a familiar AI chat (browser and iOS) and an inference API, featuring the latest open-source models like DeepSeek V4, Gemma 4, Kimi K2.6, and GLM 5.1. However, with Tinfoil, all data is stored end-to-end encrypted and processed privately, even during inference.
Backstory: my cofounders and I have always been very aware of how important privacy is, especially with tools like AI chatbots that we use daily for personal discussions and to process our thoughts. We strongly believe nobody should be privy to these chats.
I did my PhD in cryptography and internet privacy, and was an early user of ChatGPT. My cofounders and I quickly realized that the amount of control we were giving up to get access to powerful AI like ChatGPT was simply unprecedented, and frankly creepy. We found ourselves hesitating when sharing certain things or wondering if our deleted conversations would end up in the next training cycle. Today, we’re all leaking our brains to AI labs. Tinfoil is the Flex Tape to stop that.
The latest NVIDIA GPUs have built-in support for secure enclaves. These are security mechanisms built into the hardware that allow running LLMs in a way that keeps data private during processing. Nobody, not even the operators of the GPUs, can see the data being processed. Secure enclaves also allow you to perform remote attestation, which means the chip can return a signed fingerprint of the code and security configuration currently running inside the enclave. This means you don’t have to take our word that your data is secure; you can actually check it yourself:
All the code running in the enclave is open source, and its fingerprint is pinned to a transparency log, Sigstore. You can inspect this code yourself and verify that it’s secure.
When you connect to our chat or inference API, the client fetches the attestation report from the enclave it is connecting to.
The client checks that the fingerprints match. If they do, the server is running exactly the code that we claimed it would be running.
This whole process happens automatically, so you always know that you are connecting to a trustworthy service. If you’re curious, you can read more about that in our docs: https://docs.tinfoil.sh/verification/verification-in-tinfoil
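The core of the check described above is a fingerprint comparison: the client takes the code fingerprint reported in the enclave's attestation and compares it against the digest pinned in the transparency log. Here is a minimal conceptual sketch in Python; it is not the real Tinfoil client (which also verifies the hardware vendor's signature on the report), and the blob and digests are purely illustrative:

```python
import hashlib
import hmac

def verify_attestation(attested_fingerprint: str, pinned_fingerprint: str) -> bool:
    """Accept the connection only if the enclave's reported code
    fingerprint matches the fingerprint pinned in the transparency log."""
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(attested_fingerprint, pinned_fingerprint)

# Illustrative values: in practice the attested fingerprint comes from a
# hardware-signed attestation report, and the pinned digest from Sigstore.
code_blob = b"open-source inference server build"
pinned = hashlib.sha256(code_blob).hexdigest()
attested = hashlib.sha256(code_blob).hexdigest()

assert verify_attestation(attested, pinned)        # fingerprints match
assert not verify_attestation("deadbeef", pinned)  # tampered code rejected
```

If the comparison fails, the client refuses to send any data, so a server running modified code never sees a conversation.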
Apple, Meta and others have been using secure enclaves to build private AI for their own apps and services, but Tinfoil’s goal is to give everyone else the ability to build verifiably private AI applications with state-of-the-art open source models. We put a lot of effort into removing the friction that security & privacy tends to introduce, so we’re excited to hear what you think!
Pricing:
Chat: $20/month but you can try it out for free!
API: $5 in free credits when you sign up.
I'm building proprietary systems, and didn't want my IP sitting in a mainstream AI provider's logs or training pipeline. Tinfoil has been a huge unlock for me.
I leaned on it heavily during design and prototyping to have a strong LLM thinking partner. Tinfoil has let me keep my existing LLM workflow without running a local model that would roast my laptop.
The architecture they're using (hardware-enforced privacy via secure enclaves, with client-side attestation of the CPU/GPU inference server) is the best approach I've seen for capable and private AI.
A few specifics:
In-session UX is comparable to mainstream non-private chat services.
Passkey/recovery flow had a couple rough edges early on, but the team is actively improving this and Sacha was super responsive to help me.
Also recommend the documentation if you're the type of person who wants a primer on secure enclaves and a detailed breakdown of their architecture. I had a fun time nerding out on it.
Tinfoil
@andrew_forman1 Thank you for the feedback!
cubic
Love this. Really thoughtful product and the verification piece is especially compelling.
One question I had: for teams or developers building on the API, where have you seen the biggest tradeoff between verifiable privacy and product usability or performance, and how have you tried to minimise that?
Feels like that balance is probably where a lot of adoption decisions get made.
Tinfoil
@paul_sangle_ferriere1 thanks for the question! We really believe that privacy should be seamless to ensure usability. We made our API OpenAI-compatible, so moving from an inference provider like OpenRouter to Tinfoil is a one-line import change. We also have SDKs in several languages, like Python and TypeScript, that automatically verify the hardware attestation and security configuration on each connection for you. So if you're using an OpenAI SDK right now, swapping the import for something like `import TinfoilAI` should be all you need.
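To illustrate what "OpenAI-compatible" means in practice, here is a small Python sketch using only the standard library: the same OpenAI-style chat-completions request works against any compatible endpoint, so only the base URL and API key change. The `api.tinfoil.sh` host, model id, and keys below are placeholders for illustration, not confirmed values:

```python
import json
from urllib.request import Request

# One OpenAI-style chat payload, reused unchanged across providers.
payload = {
    "model": "deepseek-v4",  # hypothetical model id
    "messages": [{"role": "user", "content": "Hello"}],
}

def chat_request(base_url: str, api_key: str) -> Request:
    """Build a chat-completions request for any OpenAI-compatible API."""
    return Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

openai_req = chat_request("https://api.openai.com", "sk-placeholder")
tinfoil_req = chat_request("https://api.tinfoil.sh", "tk-placeholder")  # placeholder URL

# The request bodies are byte-identical; only the host differs.
assert openai_req.data == tinfoil_req.data
assert openai_req.host != tinfoil_req.host
```

The Tinfoil SDKs layer the attestation check described earlier on top of this, so the verification happens before any payload leaves the client.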