Lora is a local LLM for mobile, with performance comparable to GPT-4o-mini. It offers seamless SDK integration, guarantees complete privacy with no data logging, and works even in airplane mode. Try our app, or build your own Lora-powered one.
@parekh_tanmay Thank you so much for your kind words and support! 😊 Your encouragement truly means a lot to us and gives our team a huge boost on this special day. 🚀 We’ll keep pushing forward—really appreciate you being part of our journey! 🙌
@hanzala_siddiqui7 Thank you! With Lora, everyone can now experience the power of LLM anytime and anywhere, without limitations. It opens the door to a new world of on-device technology, making advanced AI more accessible and bringing innovative possibilities right to your fingertips. Welcome to this exciting new era!
@hanzala_siddiqui7 It really feels like on-device AI is becoming a part of our daily lives! 🚀 Thank you so much for your support—it truly means a lot! 😊
@hanzala_siddiqui7 I'm glad you think so! What you say aligns with our mission, which is to empower AI everywhere!
Congrats on the launch, @seungwhan! Offering a local LLM with easy SDK integration sounds incredibly useful—how does Lora handle updates and improvements without relying on cloud-based processing?
@adamyork Hello! Thank you for the good question! Lora is designed to ensure seamless updates and continuous improvements while maintaining a fully on-device experience. Updates are delivered efficiently through lightweight patches, allowing users to keep their models up to date without depending on cloud-based processing. This approach not only preserves privacy and minimizes latency but also ensures that developers can easily integrate improvements via the SDK without interrupting their workflows.
@adamyork Great question—thank you! 😊 We’re continuously improving the model and will enable dynamic downloads for updates. This way, users can leverage Lora’s local LLM on-device while still receiving the latest LLM improvements through patches, all without relying on cloud-based processing. 🚀
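The replies above describe the update flow only at a high level (download a lightweight patch, apply it to the local model, never touch the cloud). As a rough illustrative sketch of that patch-then-verify idea, not Lora's actual SDK API, here is what applying a binary delta with checksum verification could look like; all names here are hypothetical:

```python
import hashlib

def apply_model_patch(model: bytes, patch: list[tuple[int, bytes]],
                      expected_sha256: str) -> bytes:
    """Apply a lightweight delta patch to on-device model weights.

    `patch` is a list of (offset, replacement_bytes) edits. The result is
    verified against a published checksum so a corrupted download never
    replaces a working model.
    """
    updated = bytearray(model)
    for offset, chunk in patch:
        updated[offset:offset + len(chunk)] = chunk
    digest = hashlib.sha256(updated).hexdigest()
    if digest != expected_sha256:
        raise ValueError("patch verification failed; keeping current model")
    return bytes(updated)

# Toy example: patch 4 bytes of a stand-in "model" blob.
base = b"\x00" * 16
patch = [(4, b"\x01\x02\x03\x04")]
expected = hashlib.sha256(
    b"\x00" * 4 + b"\x01\x02\x03\x04" + b"\x00" * 8
).hexdigest()
new_model = apply_model_patch(base, patch, expected)
```

The key design point the team describes is that only the delta travels over the network, so users on metered mobile connections get the latest weights without re-downloading the full model, and the checksum gate keeps a flaky connection from corrupting the local copy.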
Interesting! Since this is a local LLM, can I use it without internet? If so, that would be so helpful. A few weeks ago I visited Yosemite National Park, and there was no signal. It would be great to be able to talk with an AI in situations like that.
@psycoder That must have been really inconvenient with the unstable internet. We'll keep working hard on research and development to ensure the service works without problems even in wide areas with unreliable connectivity. Thank you so much for your warm encouragement :)
Wow, congratulations to the launch! 🥳 🚀 🎉
I'm an enthusiastic consumer of conversational AI services and am excited to try the private AI assistant "Lora" app and also to share feedback soon! 🥳 ✨
Thank you very much! 😊
I wish you a lot of success and all the best! 🤩 🍀 🎉
@chiaracervetta Thank you so much for your kind words and enthusiasm! We’re thrilled that you’re excited to try Lora and truly appreciate your support. Looking forward to your feedback!
Wishing you all the best as well, and thanks again for your encouragement!
@chiaracervetta Since you love conversational AI services, you must be a true tech early adopter! 😊 Your feedback will be incredibly valuable for us. Thank you so much for your kind and encouraging comment! 🚀🎉
@seungwhan, you're most welcome! 🥳 💐
Thank you so much for your kindness and I'm looking forward to giving feedback about the Lora chat AI soon. 😊 🙌🏻
I wish you and also the entire team all the best too and I'm so grateful to you all! 🥳 🍀 💐
@hansol_nam, you're most welcome and I'm incredibly grateful for your kindness! 😊 🥳 💐
I'm looking forward to giving feedback about the Lora chat AI soon. 🤩 🙌🏻
I wish you and also the entire team all the best and I'm so grateful to you all! 😊 🍀 💐
Really excited for this! As someone who’s always been mindful of privacy and security, this approach feels like a game changer! 🔥 wishing you great success with the launch!!! 🚀👏
@jeongmin_park1 Sharing sensitive or confidential information on a server-based LLM can definitely feel burdensome. Thank you for your kind and warm comment! 😊
@izakotim Thank you!! In our tests, any device with 8GB of RAM can run Lora (for example, the Galaxy Note 9 or OnePlus 5). It's just a matter of how long inference takes :) On our best test device, the iPhone 15 Pro, it takes only 1.2 seconds to start inference!
@izakotim Thank you so much for your deep interest in asking about the minimum spec devices! 😊 We’re continuously optimizing our model to ensure the service works across a wide range of devices. 🚀
@jmarten Thank you so much for your kind words! 🎉 Your support truly means a lot! 😊 Next, we’re focusing on making Lora even faster, lighter, and more versatile while keeping privacy at its core. Excited for what’s ahead—stay tuned! 🚀🙌