xavier_schahl left a comment
Hi Product Hunt! I'm Xavier, maker of vAquila. vAquila is an open-source orchestrator for running vLLM models with a simple developer experience:
- one-command model launches
- Docker-native runtime
- GPU/CPU support
- local Web UI for monitoring and control

We're launching in public beta today. Core workflows are stable, and we're actively improving VRAM estimation/tuning. If you test it, I'd...

vAquila: Deploy local LLMs with smart and auto GPU management
vAquila is an open-source AI model inference manager. It combines the simplicity of a CLI with the production performance of vLLM and the isolation of Docker, with smart, automated GPU management that orchestrates everything for you. Like an eagle soaring over your infrastructure, it analyzes your GPU state in real time, calculates the right memory ratio, and deploys the vLLM Docker container invisibly and securely.
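For a sense of what that memory-ratio step involves, here is a minimal Python sketch of the general technique, not vAquila's actual code: it reads free and total VRAM via nvidia-smi, derives a conservative utilization ratio, and passes it to vLLM's real --gpu-memory-utilization flag on the official vllm/vllm-openai Docker image. The helper name, headroom value, and model choice are illustrative assumptions; it also assumes nvidia-smi, Docker, and the NVIDIA Container Toolkit are installed.

```python
import subprocess

def free_vram_fraction(headroom_mib: float = 1024.0) -> float:
    """Illustrative helper: query free/total VRAM on GPU 0 via
    nvidia-smi and derive a safe gpu-memory-utilization ratio,
    keeping some headroom free for other processes."""
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=memory.free,memory.total",
         "--format=csv,noheader,nounits"],
        text=True,
    )
    free_mib, total_mib = map(float, out.strip().splitlines()[0].split(","))
    # Clamp so a busy GPU never yields a degenerate or oversized ratio.
    return max(0.1, min(0.95, (free_mib - headroom_mib) / total_mib))

ratio = free_vram_fraction()

# Launch the official vLLM OpenAI-compatible server in Docker,
# forwarding the computed ratio to vLLM's --gpu-memory-utilization flag.
subprocess.run([
    "docker", "run", "--rm", "--gpus", "all", "-p", "8000:8000",
    "vllm/vllm-openai",
    "--model", "Qwen/Qwen2.5-0.5B-Instruct",  # example model, not a default
    "--gpu-memory-utilization", f"{ratio:.2f}",
], check=True)
```

Clamping the ratio guards against over-allocating a GPU that other processes already occupy, which is the failure mode automatic VRAM estimation is meant to avoid.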
