Sequence-LLM

Manage multiple local LLMs with simple commands.

Sequence-LLM is a developer CLI for running and switching between local AI models without juggling ports, processes, or server management. You define model profiles once, then switch instantly with commands like /brain, /coder, and /status.

The tool automatically handles:

• Starting and stopping model servers
• Port management
• Config loading
• Cross-platform support (Windows, macOS, Linux)

Built for developers experimenting with local AI on limited hardware. Early stage (v0.1); feedback welcome.
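To make the "define profiles once" idea concrete, a profile file might look like the sketch below. The file path, key names, and model identifiers are illustrative assumptions for this example, not Sequence-LLM's documented schema:

```yaml
# Hypothetical profiles file (path and schema are assumptions,
# not Sequence-LLM's actual configuration format).
profiles:
  brain:
    model: llama-3.1-8b-instruct   # general-purpose model for /brain
    port: 8080                     # port the tool would manage for you
  coder:
    model: qwen2.5-coder-7b        # code-focused model for /coder
    port: 8081
```

Under a setup like this, switching with /coder would mean the tool stops the currently running server, starts the coder model on its configured port, and loads the matching config, so the user never touches ports or processes directly.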
