Pocket LLM

The fastest neural search for your documents

5.0 (1 review) · 148 followers

Memorize 1000s of pages of PDFs & documents and search through them. Powered by AI and LLMs. Trained on your laptop. Fully private. Fully free.
This is the 2nd launch from Pocket LLM.
ThirdAI PocketLLM

Personalized search and Q&A with AI on your own documents
ThirdAI PocketLLM is your personal knowledge search engine powered by AI. Memorize 1000s of pages from PDFs & documents, scrape URLs, index your inbox, and more. Chat with your personal knowledge. Fully private. Trained on your laptop.
Pricing: Free


Lyndon
I have a large, but not unreasonably large, Chrome bookmark list. I exported it as a CSV, and when I tried to load the file, the app hung and was using 6 GB of memory at one point before I killed it. Now it has been resource-limited by Windows and hangs while loading the CSV of bookmarks. The mail-reading function is not useful: many of my Gmail messages have non-text characters at the beginning of the descriptive text, which makes it pretty much useless. Limiting the trial to 200 emails isn't great either. Pricing is missing from your page; the only way I found it was by attempting to start a subscription. The UI needs a lot of work as well: just a blank page with two or three sentences and an input line. It needs to be able to search and index all files on the machine, not require adding them individually.
Ye Cao
@lyndon1 Really appreciate the feedback. As you can see, PocketLLM is optimized for privacy: everything is done on the user's computer, with no private data going to the cloud. In other words, AI model training and inference both take place on the user's machine, so you may sometimes see high memory usage. One simple way to address your concern is to tune some of the AI model's hyperparameters. I will make sure these issues are resolved in a timely manner. I'd like to invite you to our Discord channel, or connect with me privately: https://discord.com/invite/thirdai
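For anyone hitting the same memory ceiling with a large bookmark export, a generic workaround (independent of the in-app hyperparameter tuning mentioned above, and not a PocketLLM feature) is to split the CSV into smaller files and add them in batches. A minimal Python sketch; the file name and batch size are assumptions:

```python
# Hypothetical workaround: split a large bookmarks CSV into smaller files
# so they can be indexed one at a time, keeping peak memory lower.
import csv

def split_csv(path: str, rows_per_chunk: int = 500) -> list[str]:
    """Write `path` out as numbered part files, each repeating the header row."""
    parts = []
    with open(path, newline="", encoding="utf-8") as src:
        reader = csv.reader(src)
        header = next(reader)          # keep the column names for every part
        rows, index = [], 0
        for row in reader:
            rows.append(row)
            if len(rows) == rows_per_chunk:
                parts.append(write_part(path, index, header, rows))
                rows, index = [], index + 1
        if rows:                       # flush the final, possibly short, part
            parts.append(write_part(path, index, header, rows))
    return parts

def write_part(path: str, index: int, header: list[str], rows: list[list[str]]) -> str:
    out_path = f"{path}.part{index:03d}.csv"
    with open(out_path, "w", newline="", encoding="utf-8") as dst:
        writer = csv.writer(dst)
        writer.writerow(header)
        writer.writerows(rows)
    return out_path

# Example: parts = split_csv("bookmarks.csv", rows_per_chunk=500)
# Each part file can then be added to PocketLLM separately.
```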
TA
Before PocketLLM I tried DocFetcher, which has issues with PDFs over 80 MB in size. You can raise DocFetcher's RAM limit, but I never dared to throw gigabytes upon gigabytes of data at it. Paperless is another tool, good in theory, but its search engine, "Whoosh", is too fuzzy for my liking. NVIDIA's Chat with RTX gave surprisingly good search results, and being able to use the GPU is neat if you have the system for it and can stomach the electricity bills. Pitfalls of NVIDIA's solution are: it needs your browser for its GUI; it takes a lot of space and installs a couple of dependencies; there is no apparent import/export option for already built indexes; and it does not let you open a found result at the page where it was found inside your PDFs.

PocketLLM comes in and does things better. I recommend it if you want an easy-to-use "grep" or RAG (Retrieval-Augmented Generation) solution that opens indexed PDFs and highlights the search result. PocketLLM uses less space and, thanks to running on CPU + RAM, can be used on cheaper hardware. It's lovely that PocketLLM doesn't rely on Acrobat Reader. PocketLLM and its viewer can easily handle ~270 MB PDF files. When I contacted the ThirdAI team regarding privacy and handling of data for business cases, they helped me out.