Zac Zuo

26d ago

Ollama v0.19 - Massive local model speedup on Apple Silicon with MLX

Ollama v0.19 rebuilds Apple Silicon inference on top of MLX, bringing much faster local performance for coding and agent workflows. It also adds NVFP4 support and smarter cache reuse, snapshots, and eviction for more responsive sessions.
Alex Ablazevics

15d ago

Ollama Explainer

The recent upgrade has been huge -- I made a quick explainer video on what Ollama does, hope you all like it! :)

Zac Zuo

9mo ago

Ollama Desktop App - The easiest way to chat with local AI

Ollama's new official desktop app for macOS and Windows makes it easy to run open-source models locally. Chat with LLMs, use multimodal models with images, or reason about files, all from a simple, private interface.
Zac Zuo

11mo ago

Ollama multimodal engine - Run leading vision models locally with the new engine

Ollama v0.7 introduces a new engine for first-class multimodal AI, starting with vision models like Llama 4 and Gemma 3. The new engine offers improved reliability, accuracy, and memory management for running LLMs locally.
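For context, vision models run through the same local REST API as text models; images are sent as base64 strings. A minimal sketch, assuming a default Ollama server on port 11434 (the model name `gemma3` and the image file are illustrative placeholders):

```python
import base64
import json

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_vision_request(model: str, prompt: str, image_bytes: bytes) -> dict:
    """Build a JSON payload for Ollama's /api/generate endpoint.

    Images are passed as a list of base64-encoded strings in the
    "images" field alongside the text prompt.
    """
    return {
        "model": model,
        "prompt": prompt,
        "images": [base64.b64encode(image_bytes).decode("ascii")],
        "stream": False,  # return one complete response instead of a token stream
    }

# Sending the request (requires a running server and a pulled vision model):
# import urllib.request
# payload = build_vision_request("gemma3", "What is in this picture?",
#                                open("photo.png", "rb").read())
# req = urllib.request.Request(OLLAMA_URL,
#                              data=json.dumps(payload).encode(),
#                              headers={"Content-Type": "application/json"})
# print(json.loads(urllib.request.urlopen(req).read())["response"])
```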
Chris Messina

2yr ago

Ollama - The easiest way to run large language models locally

Run Llama 2 and other models on macOS, with Windows and Linux coming soon. Customize and create your own.
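The "customize and create your own" part works through a Modelfile, which bakes a base model, parameters, and a system prompt into a named model. A minimal sketch (the parameter value and system prompt are arbitrary examples; the base model must already be pulled):

```
# Modelfile -- illustrative example
FROM llama2

# Sampling parameter (value is an arbitrary example)
PARAMETER temperature 0.7

# Custom system prompt baked into the new model
SYSTEM "You are a concise assistant that answers in one paragraph."
```

Built and run with `ollama create my-model -f Modelfile` and then `ollama run my-model`.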