Seunghwan

Lora - Local LLM for Mobile: Fast, Secure, and Zero-Cost.

Lora is a local LLM for mobile with performance comparable to GPT-4o-mini. It offers seamless SDK integration, guarantees complete privacy with no data logging, and works even in airplane mode. Try our app and build your own Lora-powered app.

Replies
Tanmay Parekh
All the best for the launch @seungwhan & team!
Woobeen Back
@parekh_tanmay Thank you! Lora will let everyone use an LLM anytime, anywhere. Welcome to the world of on-device AI!
Hansol Nam
@parekh_tanmay Thank you so much for your kind words and support! 😊 Your encouragement truly means a lot to us and gives our team a huge boost on this special day. 🚀 We’ll keep pushing forward—really appreciate you being part of our journey! 🙌
Seunghwan
@parekh_tanmay Thank you so much. We're building something truly outstanding. Stay tuned!
Junghwan Seo
@seungwhan SUPER THANKS ☺️
Hanzala Siddiqui
Great addition to the AI world 👏🏽
Woobeen Back
@hanzala_siddiqui7 Thank you! With Lora, everyone can now experience the power of LLM anytime and anywhere, without limitations. It opens the door to a new world of on-device technology, making advanced AI more accessible and bringing innovative possibilities right to your fingertips. Welcome to this exciting new era!
Hansol Nam
@hanzala_siddiqui7 It really feels like on-device AI is becoming a part of our daily lives! 🚀 Thank you so much for your support—it truly means a lot! 😊
Junghwan Seo
@hanzala_siddiqui7 We'll keep improving to deliver more value to everyone! Please support us ☺️
Seunghwan
@hanzala_siddiqui7 I'm glad you think so! What you say aligns with our mission statement, which is to empower AI everywhere!
Adam Y.
Congrats on the launch, @seungwhan! Offering a local LLM with easy SDK integration sounds incredibly useful—how does Lora handle updates and improvements without relying on cloud-based processing?
Woobeen Back
@seungwhan @adamyork Hello! Thank you for the good question! Lora is designed to deliver seamless updates and continuous improvements while staying fully on-device. Updates are shipped as lightweight patches, so users can keep their models up to date without depending on cloud-based processing. This preserves privacy, minimizes latency, and lets developers integrate improvements via the SDK without interrupting their workflows.
Hansol Nam
@adamyork Great question—thank you! 😊 We’re continuously improving the model and will enable dynamic downloads for updates. This way, users can leverage Lora’s local LLM on-device while still receiving the latest LLM improvements through patches, all without relying on cloud-based processing. 🚀
Seunghwan
@adamyork @woobeen_back Our team members have answered in detail. If you'd like to use Lora, please contact us.
Junghwan Seo
@adamyork For more information, see my teammates' replies above 🫡
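The "lightweight patches" idea from the replies above can be illustrated with a small, self-contained sketch. This is not Lora's actual update mechanism (its patch format isn't public); it only shows the general principle: ship just the changed bytes of a model file plus an integrity check, and apply them locally with no cloud processing.

```python
import hashlib

def make_patch(old: bytes, new: bytes) -> list[tuple[int, int]]:
    # Record only the byte positions that changed (toy format; assumes equal length).
    assert len(old) == len(new)
    return [(i, new[i]) for i in range(len(old)) if old[i] != new[i]]

def apply_patch(old: bytes, patch: list[tuple[int, int]], expected_sha256: str) -> bytes:
    # Patch the local copy in place, then verify integrity before accepting it.
    buf = bytearray(old)
    for offset, value in patch:
        buf[offset] = value
    out = bytes(buf)
    if hashlib.sha256(out).hexdigest() != expected_sha256:
        raise ValueError("patched model failed integrity check")
    return out

# Toy "model weights": only a few bytes differ between versions,
# so the patch is far smaller than the full model.
old_model = bytes(range(256)) * 4
new_model = bytearray(old_model)
new_model[10] = 0xFF
new_model[700] = 0x00
new_model = bytes(new_model)

patch = make_patch(old_model, new_model)
digest = hashlib.sha256(new_model).hexdigest()
updated = apply_patch(old_model, patch, digest)
assert updated == new_model
```

Real on-device updaters typically use a binary diff format such as bsdiff or zstd dictionaries rather than a byte list, but the trade-off is the same: download size scales with the change, not the model.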
Aleksandr Milenovich
Interesting! Good luck with the launch @seungwhan!
Woobeen Back
@seungwhan @ficus_26 Thank you for your interest and support 🤗 You can use our demo app right now!
Hansol Nam
@ficus_26 Thank you so much for your support! 😊 It truly means a lot to us and gives our team a huge boost! 🚀🔥
Seunghwan
@ficus_26 Thank you so much.
Junghwan Seo
@ficus_26 Thank you for your comment 😊
Sanguk Park
Interesting! Since this is a local LLM, can I use it without internet? If so, it must be so helpful. A few weeks ago I visited Yosemite National Park, and there was no signal. It would be so helpful to be able to talk with AI in those situations.
Junghwan Seo
@psycoder You can also use it when you go into space or scuba diving, so try it out there and let us know 🤣
Seunghwan
@psycoder It's so true! That's one of the reasons why a local LLM is needed.
Hansol Nam
@psycoder It must have been really inconvenient with such an unstable connection. We'll keep working hard on research and development to make sure the service works without problems even in areas with unstable internet. Thank you so much for your warm encouragement :)
Chiara Maria Cervetta
Wow, congratulations to the launch! 🥳 🚀 🎉 I'm an enthusiastic consumer of conversational AI services and am excited to try the private AI assistant "Lora" app and also to share feedback soon! 🥳 ✨ Thank you very much! 😊 I wish you a lot of success and all the best! 🤩 🍀 🎉
Seunghwan
@chiaracervetta Thank you so much for your kind words and enthusiasm! We’re thrilled that you’re excited to try Lora and truly appreciate your support. Looking forward to your feedback! Wishing you all the best as well, and thanks again for your encouragement!
Hansol Nam
@chiaracervetta Since you love conversational AI services, you must be a true tech early adopter! 😊 Your feedback will be incredibly valuable for us. Thank you so much for your kind and encouraging comment! 🚀🎉
Chiara Maria Cervetta
@seungwhan, you're most welcome! 🥳 💐 Thank you so much for your kindness and I'm looking forward to giving feedback about the Lora chat AI soon. 😊 🙌🏻 I wish you and also the entire team all the best too and I'm so grateful to you all! 🥳 🍀 💐
Chiara Maria Cervetta
@hansol_nam, you're most welcome and I'm incredibly grateful for your kindness! 😊 🥳 💐 I'm looking forward to giving feedback about the Lora chat AI soon. 🤩 🙌🏻 I wish you and also the entire team all the best and I'm so grateful to you all! 😊 🍀 💐
Jeongmin Park
Really excited for this! As someone who’s always been mindful of privacy and security, this approach feels like a game changer! 🔥 wishing you great success with the launch!!! 🚀👏
Seunghwan
@jeongmin_park1 Thanks a lot! I’d love to hear your feedback on the app soon.
Hansol Nam
@jeongmin_park1 Sharing sensitive or confidential information on a server-based LLM can definitely feel burdensome. Thank you for your kind and warm comment! 😊
Isaac Martin Otim
This is amazing! What are the minimum mobile hardware requirements for this to work on device?
Woobeen Back
@izakotim Thank you!! In our tests, ANY device with 8 GB of RAM can run Lora (for example, the Galaxy Note 9 or OnePlus 5). It's just a matter of how long inference takes :) On our best test device, the iPhone 15 Pro, it takes only 1.2 seconds to start inference!
Hansol Nam
@izakotim Thank you so much for your deep interest in asking about the minimum spec devices! 😊 We’re continuously optimizing our model to ensure the service works across a wide range of devices. 🚀
Seunghwan
@izakotim We believe that hardware performance will increase exponentially in the future.
Junghwan Seo
@izakotim If you want it to run on lower-spec devices... ask @hansol_nam and @woobeen_back 🤣
Zeshan Ather
@seungwhan, Many Congrats on your launch.
Seunghwan
@zeshan_ather Thank you for your support. I’d love to hear your feedback on the app soon.
Hansol Nam
@zeshan_ather Thank you for your kind and warm comment! 😊✨ It really means a lot! 🚀🎉
Junghwan Seo
@zeshan_ather Thank you for your support ☺️
Jan Ahrend
All the best for the launch! What’s planned next for Lora? 🎉
Seunghwan
@jmarten Thank you for your support! Our next plan is to find product-market fit and build a revenue stream.
Hansol Nam
@jmarten Thank you so much for your kind words! 🎉 Your support truly means a lot! 😊 Next, we’re focusing on making Lora even faster, lighter, and more versatile while keeping privacy at its core. Excited for what’s ahead—stay tuned! 🚀🙌
Junghwan Seo
@jmarten We are now prioritizing listening to our customers to understand what they truly need 😘