Gregory Seko left a comment
Launching Ollama or LM Studio is not a problem; my issue is integrating memory for a local LLM. Instead of focusing on another launcher for Ollama or LM Studio, maybe you could dedicate time to building a program that serves as memory for an LLM. You know… instead of another LLM program, make a program that functions as memory: you install the memory layer, connect it to Ollama or LM Studio,...
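To make the suggestion concrete, here is a minimal sketch of what such a memory layer might look like: a small store that persists notes and recalls the most relevant ones by keyword overlap, then builds the message list you would send to a local runtime such as Ollama's `/api/chat` endpoint. The class and function names are hypothetical, the keyword-overlap retrieval is a deliberate simplification (real memory systems typically use embeddings), and the actual HTTP call to the LLM is left out.

```python
import json
from pathlib import Path


class MemoryStore:
    """Tiny persistent memory for a local LLM (illustrative sketch).

    Notes are kept in a JSON file; recall ranks them by how many
    words they share with the query. Real systems would use
    embedding similarity instead of keyword overlap.
    """

    def __init__(self, path="llm_memory.json"):
        self.path = Path(path)
        self.notes = (
            json.loads(self.path.read_text()) if self.path.exists() else []
        )

    def remember(self, text):
        # Append a note and persist the whole store to disk.
        self.notes.append(text)
        self.path.write_text(json.dumps(self.notes))

    def recall(self, query, k=3):
        # Return up to k notes sharing at least one word with the query.
        q = set(query.lower().split())
        scored = sorted(
            self.notes,
            key=lambda n: len(q & set(n.lower().split())),
            reverse=True,
        )
        return [n for n in scored[:k] if q & set(n.lower().split())]


def build_prompt(store, user_message):
    """Prepend recalled memories as a system message.

    The returned list is shaped like the `messages` field you would
    POST to a local chat endpoint (e.g. Ollama's /api/chat); the
    network call itself is omitted here.
    """
    context = "\n".join(store.recall(user_message))
    system = "Relevant memories:\n" + context if context else "No memories yet."
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_message},
    ]
```

Under these assumptions, the launcher would only need to call `build_prompt` before each request, so the memory layer stays independent of whichever runtime (Ollama, LM Studio, ...) serves the model.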
I'm working on a tool to launch AIs locally as easily as launching Spotify. I'd love your feedback!
Enzo Ghillain