Michael Seibel

Langfuse - Open source tracing and analytics for LLM applications

Langfuse provides open-source observability and analytics for LLM apps.
Observability: Explore and debug complex logs & traces in a visual UI.
Analytics: Improve costs, latency and response quality using intuitive dashboards.

Replies
Clemens Rawert
@mwseibel - thank you for the hunt! Hi Product Hunt community πŸ‘‹πŸ‘‹πŸ‘‹

I’m Clemens, co-founder of Langfuse together with @marc_klingen and @max_deichmann. We're proud to launch our open source observability and analytics platform for LLM apps.

Getting to an MVP using LLMs is lightning fast. But productionizing an app and going from 80 to 100% is hard work. Langfuse is built to support makers on this journey through insights into their production data. We’ve experienced this ourselves as makers and built the product that we wanted for ourselves.

We rethink observability & analytics from the ground up for LLM applications. Langfuse can visualize & analyze even complex LLM executions such as agents, nested chains, embedding retrieval and tool usage.

πŸ“Š Log traces: Integrate via SDKs (Python, JS/TS), API or πŸ¦œπŸ”— Langchain to asynchronously log production traces & attach metadata.

πŸ” Inspect & debug: Use our visual UI to inspect & debug production traces.

πŸ’― Score: Attach scores to traces through user feedback or manual scoring in our UI. Filter for high- and low-quality traces to improve your application quickly.

πŸ”§ Dashboards:
- Cost: Track token counts and compute costs. Break down by user, feature, session etc. to understand your unit economics.
- Latency: Measure total latency and its constituents to improve UX.
- Quality: Monitor output quality over time, users and features to ensure your users see value.

πŸ— Open source: We’re open source, model-agnostic, and our fully async SDKs can integrate with any LLM.

πŸƒ Get started: We’re excited to hear your thoughts & feedback in the comments. To see Langfuse in action today, give our interactive demo a spin at: https://langfuse.com/docs/demo

Offer: Start using Langfuse Cloud for free today and track unlimited events (up to 1GB storage) in our free tier. You can find self-hosting instructions in our docs: https://langfuse.com/docs/deploy...
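The cost breakdowns described under "Dashboards" can be sketched in a few lines. This is a stand-alone illustration, not the Langfuse data model or SDK: the event dicts, field names, and per-1k-token prices are hypothetical stand-ins for logged generations.

```python
from collections import defaultdict

# Hypothetical logged generations: each carries token counts, latency,
# and the user the trace belongs to (field names are illustrative).
events = [
    {"user": "alice", "model": "gpt-4", "prompt_tokens": 120, "completion_tokens": 80, "latency_ms": 900},
    {"user": "alice", "model": "gpt-3.5-turbo", "prompt_tokens": 40, "completion_tokens": 30, "latency_ms": 300},
    {"user": "bob", "model": "gpt-4", "prompt_tokens": 200, "completion_tokens": 150, "latency_ms": 1500},
]

# Illustrative per-1k-token prices; real pricing varies by model and over time.
PRICE_PER_1K = {"gpt-4": 0.06, "gpt-3.5-turbo": 0.002}

def cost_by_user(events):
    """Aggregate token cost (USD) and summed latency per user."""
    totals = defaultdict(lambda: {"cost_usd": 0.0, "latency_ms": 0})
    for e in events:
        tokens = e["prompt_tokens"] + e["completion_tokens"]
        totals[e["user"]]["cost_usd"] += tokens / 1000 * PRICE_PER_1K[e["model"]]
        totals[e["user"]]["latency_ms"] += e["latency_ms"]
    return dict(totals)
```

The same aggregation keyed by feature or session instead of user gives the per-feature unit economics mentioned above.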
Heiki Riesenkampf
@mwseibel @marc_klingen @max_deichmann @clemo_sf exciting stuff! Glad to see more open source LLM projects in the monitoring space!
David
So excited to see Langfuse go live β€”Β we've been a happy user for 4 weeks now. Most detailed latency and analytics in the market. Highly recommend for anyone using complex chains or with user-facing chat applications, where latency becomes crucial. Congrats, Clemens, Marc, and Max!
Clemens Rawert
@david_paffenholz Thank you, David. It's been great fun to work with you and Ishan. You've built an amazing product at juicebox.work and we're super proud to be able to support you.
Fengjiao Peng
Easy to integrate and very intuitive to use!! Highly recommend for anyone developing ChatGPT products. Amazing founding team who are super detail-oriented and understand LLM products deeply, and cool to see the evolution of the open source project every week!
Clemens Rawert
@fengjiao_peng1 Thank you, FJ. Diving into Stable Diffusion and image gen with you has been great & we're looking forward to offering more support for multi-modal use cases in the future.
Rajtilak Bhattacharjee
Awesome!!! First, congratulations on the launch. Making LLM apps cost-effective is one of the priorities for the service that I am working on. This should help. Just a quick question, can this be used in tandem with Langsmith?
Clemens Rawert
@rajtilakjee Hi Rajtilak, yes, you can use Langfuse and Langsmith at the same time. We also have a Langchain integration that gets you set up in minutes via callback handlers. We wrote a blog post about it: https://langfuse.com/blog/langch... Let me know if you have any further questions! Clemens
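The callback-handler pattern Clemens refers to can be sketched framework-agnostically: a handler object receives start/end hooks around each LLM call and records timed spans. The class and method names below only mirror the general shape of such an interface; this is not the actual Langfuse or Langchain integration, which forwards records to a backend asynchronously.

```python
import time

class TracingHandler:
    """Minimal callback handler: records one timed span per LLM call."""

    def __init__(self):
        self.spans = []
        self._start = None

    def on_llm_start(self, prompt):
        # Open a span when the framework is about to call the model.
        self._start = time.monotonic()
        self.spans.append({"prompt": prompt, "completion": None, "latency_s": None})

    def on_llm_end(self, completion):
        # Close the span with the model output and measured latency.
        span = self.spans[-1]
        span["completion"] = completion
        span["latency_s"] = time.monotonic() - self._start

handler = TracingHandler()
handler.on_llm_start("What is observability?")
handler.on_llm_end("Understanding a system's state from its outputs.")
```

Because the tracing lives entirely in the handler, it can run alongside any other callback-based tool attached to the same chain, which is why two tracing products can coexist.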
Tobias Kemkes
Langfuse was a huge help in getting our product off the ground. Especially the tracing, gaining insights into how our product is used and where we need to improve it. Congrats on the launch, guys!
Marc Klingen
Thanks @tobiaskemkes, your and your team's feedback was instrumental in building Langfuse!
Jasdeep Chhabra
This looks fantastic! Great work, @clemo_sf and team. As a developer who regularly uses LLMs, I'm particularly excited about the integration via SDKs and API. Just curious - are there any specific use-cases you had in mind while building Langfuse? Additionally, I see that you guys are open source. Has the community contributed significantly to the development or evolution of the platform?
Marc Klingen
Hi @jasdeep_chhabra, teams building applications with LLMs that do more than a single inference call get the most value out of adopting Langfuse, as their traces are more complex (see demo) and analytics need to be broken down by the single steps of a complex task. We open sourced Langfuse because we built this version based on feedback from more than 200 teams. Currently the three of us are the primary maintainers, but we are working to make it easier to contribute to Langfuse. Re use cases: mostly analytics on quality/cost/latency broken down by multiple dimensions (e.g. for A/B testing), and traces to debug applications.
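The per-step breakdown Marc describes can be made concrete: given a trace of timed steps, compute each step's share of total latency. The step names and dict structure here are illustrative, not the Langfuse trace schema.

```python
def latency_shares(steps):
    """Return each step's fraction of the trace's total latency."""
    total = sum(s["latency_ms"] for s in steps)
    return {s["name"]: s["latency_ms"] / total for s in steps}

# A hypothetical retrieval-augmented generation trace with three steps.
trace = [
    {"name": "embed_query", "latency_ms": 50},
    {"name": "vector_search", "latency_ms": 150},
    {"name": "llm_generation", "latency_ms": 800},
]
```

In this example the generation step accounts for 80% of total latency, which is exactly the kind of per-step insight a flat, single-call log cannot surface.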
Tristan Berguer
oooh looks nice πŸ”₯
Marc Klingen
@tberguer thanks Tristan
Gustaf AlstrΓΆmer
This is awesome - congrats! :)
Marc Klingen
Thank you @gustaf, we appreciate your support immensely
Michael Vandi
@clemo_sf Big congrats on the launch! If Langfuse were an LLM app itself, what "Aha!" insight do you think it would've thrown at you during its development?
Marc Klingen
Thanks @michael_vandi! It would probably have suggested the insights our users get from using Langfuse: which product use cases have high or low quality; which step of a complex chain actually contributes the most to overall latency; which changes, based on existing experiments, actually led to quality improvements; and, if you are using multiple models, how they compare re quality/speed/reliability.
Felix Roesner
Congrats Langfuse Team! πŸš€ Insights on unit economics on feature and user level will be super valuable
Marc Klingen
@felix_roesner Agree, and that's what we are hearing from many users. Right now you get some aggregated metrics, but we are testing detailed breakdowns by individual users as part of our analytics alpha program. DM me on Discord for access: https://langfuse.com/discord