Sequence-LLM - Manage multiple local LLMs with simple commands.

Sequence-LLM is a developer CLI that helps you run and switch between local AI models without dealing with ports, processes, or server management. You define model profiles once, then switch instantly:

/brain
/coder
/status

The tool automatically handles:
• Starting and stopping model servers
• Port management
• Config loading
• Cross-platform support (Windows, macOS, Linux)

Built for developers experimenting with local AI on limited hardware. Early stage (v0.1), feedback welcome.
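To make the profile-switching idea concrete, here is a minimal sketch of how a tool like this might track profiles and swap the running server. Everything here is hypothetical (the profile names, ports, and `ProfileManager` class are invented for illustration and are not Sequence-LLM's actual code or API):

```python
import subprocess

# Hypothetical profiles: each maps a command name to a model and port.
PROFILES = {
    "brain": {"model": "llama-3-70b", "port": 8080},
    "coder": {"model": "deepseek-coder", "port": 8081},
}

class ProfileManager:
    """Sketch of switching between local model servers, one active at a time."""

    def __init__(self, profiles):
        self.profiles = profiles
        self.active = None   # name of the currently active profile
        self.proc = None     # handle to the running server process

    def switch(self, name):
        profile = self.profiles[name]
        if self.proc is not None:
            # Stop the previous model server before starting the next one.
            self.proc.terminate()
            self.proc = None
        # Launching the real server would happen here, e.g. via subprocess.Popen
        # with whatever runtime serves the model (placeholder, not run in this sketch):
        # self.proc = subprocess.Popen(["my-llm-server", profile["model"],
        #                               "--port", str(profile["port"])])
        self.active = name
        return profile["port"]
```

A `/brain` command would then reduce to something like `manager.switch("brain")`, with the manager owning process lifetime and port assignment so the user never touches either.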