Torrix

Self-hosted LLM observability. Every token. Every dollar.

Most LLM observability tools send your prompts to their cloud. Torrix runs on your own server. Add two lines of Python, or route any HTTP client through the proxy with no code changes at all. Every AI call is logged instantly: tokens, cost, latency, and the full prompt trace. Works with OpenAI, Anthropic, Gemini, Groq, Azure, Mistral, SAP AI Core, n8n, and any HTTP API. The Community edition is free forever, and your data never leaves your infrastructure.
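The proxy route described above can be sketched as follows. The proxy address and port are assumptions for illustration, not the actual Torrix defaults; the mechanism itself relies on the standard proxy environment variables that most HTTP libraries (requests, httpx, urllib) honor, which is why the application code itself needs no changes.

```python
import os

# Assumed local Torrix proxy address -- check your deployment for the real one.
TORRIX_PROXY = "http://localhost:8080"

# Standard proxy environment variables picked up by most HTTP clients.
# Any OpenAI/Anthropic/etc. SDK call made by the process is then routed
# through the proxy, where Torrix can log tokens, cost, and latency.
os.environ["HTTP_PROXY"] = TORRIX_PROXY
os.environ["HTTPS_PROXY"] = TORRIX_PROXY

print(os.environ["HTTPS_PROXY"])
```

Setting the variables in the process environment (or the shell before launch) is what makes this a zero-code-change integration: the HTTP client library does the routing, not your application logic.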

Torrix makers

Here are the founders, developers, designers, and product people who worked on Torrix.