Langfuse Operator - Self-host Langfuse on Kubernetes, properly
Langfuse Operator makes self-hosting Langfuse on Kubernetes actually production-ready. Deploy the full stack from one custom resource, automate upgrades and migrations, and avoid the usual self-hosting glue. Built for platform teams that want control, repeatability, and a cleaner path to running LLM observability in production.
We built Langfuse Operator because self-hosting Langfuse on Kubernetes should feel production-ready by default, not like stitching together a deployment and then spending weeks hardening it.
Langfuse is already a strong open-source LLM engineering platform for traces, evals, prompt management, and metrics. We wanted to make running the full stack on Kubernetes much more operationally native: one custom resource to deploy Web, Worker, PostgreSQL, ClickHouse, Redis, and Blob Storage, with automated upgrades, migrations, secret rotation, observability, and security-first defaults built in.
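To make the single-custom-resource idea concrete, here is a rough sketch of what such a resource could look like. The API group, kind, and every field name below are illustrative assumptions, not the operator's actual CRD schema — consult the project's documentation for the real spec.

```yaml
# Hypothetical Langfuse custom resource (field names are assumptions,
# not the operator's published schema). One manifest declares the whole
# stack; the operator reconciles deployments, migrations, and secrets.
apiVersion: langfuse.example.com/v1alpha1
kind: Langfuse
metadata:
  name: langfuse
  namespace: observability
spec:
  web:
    replicas: 2          # Langfuse Web (UI + API)
  worker:
    replicas: 2          # async ingestion/eval workers
  postgresql:
    storage: 20Gi        # transactional metadata store
  clickhouse:
    storage: 100Gi       # analytics store for traces/metrics
  redis: {}              # queue + cache, defaults assumed
  blobStorage:
    type: s3             # event/media payloads
```

Applying a manifest like this (e.g. `kubectl apply -f langfuse.yaml`) would stand in for the usual pile of Helm values, Jobs for migrations, and hand-rolled secret wiring.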
A big part of the value is that platform teams shouldn’t have to choose between moving fast and running a setup they can trust in production. Langfuse Operator is designed to close that gap and make Langfuse easier to operate across vanilla Kubernetes, OpenShift, EKS, GKE, and AKS.
We’d love feedback from platform engineers, MLOps teams, and anyone running Langfuse in self-hosted environments: what part of operating your AI observability stack is still more painful than it should be?