Lapis

Secure, offline AI chat assistant on your device

Lapis brings a unique twist by running powerful open-source LLMs locally on your device, giving you a fully offline, private AI chat experience. Unlike cloud-based assistants, Lapis never sends your data to servers or collects it. You can load any Hugging Face model (like Gemma, Phi, LLaMA, GPT-OSS…) by pasting its URL. Manage your model library, optimize performance, and chat securely, all for free with no subscription required.
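To make the "paste a Hugging Face URL" flow concrete, here is a minimal sketch of how such a URL could be resolved into a repository id and filename before downloading the weights. This is an illustration under stated assumptions, not Lapis's actual code; the helper name and example URL are hypothetical.

```python
from urllib.parse import urlparse

def parse_hf_file_url(url: str) -> tuple[str, str]:
    """Split a huggingface.co '.../resolve/<revision>/<file>' URL
    into (repo_id, filename)."""
    parts = urlparse(url).path.strip("/").split("/")
    # Expected path shape: <owner>/<repo>/resolve/<revision>/<path...>
    if len(parts) < 5 or parts[2] != "resolve":
        raise ValueError(f"not a Hugging Face file URL: {url}")
    repo_id = "/".join(parts[:2])
    filename = "/".join(parts[4:])
    return repo_id, filename

# Hypothetical example: a quantized GGUF file hosted on Hugging Face.
url = ("https://huggingface.co/bartowski/gemma-2-2b-it-GGUF"
       "/resolve/main/gemma-2-2b-it-Q4_K_M.gguf")
print(parse_hf_file_url(url))
# ('bartowski/gemma-2-2b-it-GGUF', 'gemma-2-2b-it-Q4_K_M.gguf')
```

With the repo id and filename in hand, an app could fetch the file once and keep it in a local model library for offline use.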

Álvaro García
I created Lapis because I wanted an AI assistant I could truly trust, one that ran entirely on my own device, without sending data to any server or relying on subscriptions. Most AI tools today trade convenience for privacy, and I felt there had to be a better way. Lapis is my attempt to bring powerful, open-source models to everyone in a simple, secure, fully offline experience. Excited to finally share it with you all!
Hannah Adam

Looks promising! Congrats on the launch!

Any plans on supporting android users in the future?

Álvaro García

Thanks, @info_team3! Not in the near future. The inference runs on native Swift solutions.

Harkirat Singh

Congrats on the launch, @lvrpiz! Curious: how smoothly does Lapis handle bigger workloads while running fully offline on-device?

Álvaro García

@harkirat_singh3777 It depends on the model you're using and the device running the inference. I've tested it on an iPhone 11 and get 75 tokens/s with some models.
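To put that figure in context, a quick back-of-the-envelope sketch (the function name is illustrative, not from Lapis):

```python
def response_latency_s(tokens: int, tokens_per_s: float) -> float:
    """Time to stream a full response at a given decode rate."""
    return tokens / tokens_per_s

# At the ~75 tokens/s reported on an iPhone 11, a 300-token reply
# streams in about 4 seconds.
print(round(response_latency_s(300, 75.0), 1))  # 4.0
```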