It's one of the most convenient and lightweight tools for working with LLMs locally. The interface is clean and thoughtfully designed, and new models are added almost immediately after release. It can run in server mode with remote access, and it automatically selects the largest model variant that fully fits into available memory. It runs stably and natively on macOS, which makes it well suited for use as a remote server on a Mac Studio. An excellent choice for everyday users who want to run models locally.
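Server mode exposes an OpenAI-compatible HTTP API, so the loaded model can be queried from any machine that can reach the host. A minimal sketch in Python, assuming the default port (1234) and a hypothetical model id; substitute the id of whatever model you actually have loaded:

```python
# Minimal sketch: query a model served by LM Studio's server mode.
# Assumes the server is started from within LM Studio and listens on the
# default port 1234; the model id below is a placeholder.
import requests

response = requests.post(
    "http://localhost:1234/v1/chat/completions",  # OpenAI-compatible endpoint
    json={
        "model": "deepseek-r1-distill-qwen-7b",   # hypothetical id; use yours
        "messages": [
            {"role": "user", "content": "Explain KV caching in one paragraph."}
        ],
        "temperature": 0.7,
    },
    timeout=120,
)
print(response.json()["choices"][0]["message"]["content"])
```

The same request works remotely, e.g. against a Mac Studio on your network, by replacing `localhost` with the host's address.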
- 1. Download LM Studio for your operating system from lmstudio.ai.
- 2. Click the search (🔎) icon in the sidebar and search for "DeepSeek".
- 3. Pick an option that will fit on your system. For example, if you have 16GB of RAM, you can run the 7B or 8B parameter distilled models. If you have ~192GB+ of RAM, you can run a heavily quantized build of the full 671B parameter model (see the sizing sketch after this list).
- 4. Load the model in the chat, and start asking questions!
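To gauge which variant fits before downloading, a back-of-envelope estimate is parameter count times bytes per weight at the chosen quantization, plus some headroom for the KV cache and OS. A rough sketch; the `approx_memory_gb` helper and the 1.2 overhead factor are illustrative assumptions, not LM Studio's internal selection logic:

```python
# Rule of thumb: memory ≈ parameters × (bits per weight / 8) × overhead,
# where overhead covers the KV cache, runtime buffers, and the OS.
def approx_memory_gb(params_billion: float, bits_per_weight: float,
                     overhead: float = 1.2) -> float:
    return params_billion * (bits_per_weight / 8) * overhead

for name, params, bits in [("8B @ 4-bit", 8, 4),
                           ("70B @ 4-bit", 70, 4),
                           ("671B @ 4-bit", 671, 4)]:
    print(f"{name}: ~{approx_memory_gb(params, bits):.0f} GB")
# 8B @ 4-bit:   ~5 GB  -> fits comfortably in 16 GB of RAM
# 70B @ 4-bit:  ~42 GB -> needs a large workstation
# 671B @ 4-bit: ~403 GB -> fits in ~192 GB only at ~2-bit or lower quantization
```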
Of course, you can also run other models locally using LM Studio, such as Llama 3.2, Mistral, Phi, Gemma, DeepSeek, and Qwen 2.5.