Michael Seibel

Langfuse - Open source tracing and analytics for LLM applications


Langfuse provides open-source observability and analytics for LLM apps.
Observability: Explore and debug complex logs & traces in a visual UI.
Analytics: Improve costs, latency and response quality using intuitive dashboards.

Clemens Rawert
@mwseibel - thank you for the hunt! Hi Product Hunt community 👋👋👋

I'm Clemens, co-founder of Langfuse together with @marc_klingen and @max_deichmann. We're proud to launch our open source observability and analytics platform for LLM apps.

Getting to an MVP using LLMs is lightning fast. But productionizing an app and going from 80% to 100% is hard work. Langfuse is built to support makers on this journey with insights into their production data. We've experienced this ourselves as makers and built the product that we wanted for ourselves.

We rethink observability & analytics from the ground up for LLM applications. Langfuse can visualize & analyze even complex LLM executions such as agents, nested chains, embedding retrieval and tool usage.

📊 Log traces: Integrate via our SDKs (Python, JS/TS), API or 🦜🔗 Langchain to asynchronously log production traces & attach metadata.

🔍 Inspect & debug: Use our visual UI to inspect & debug production traces.

💯 Score: Attach scores to traces through user feedback or manual scoring in our UI. Filter for high- and low-quality traces to improve your application quickly.

🔧 Dashboards:
Cost: Track token counts and compute costs. Break down by user, feature, session etc. to understand your unit economics.
Latency: Measure total latency and its constituents to improve UX.
Quality: Monitor output quality over time, users and features to ensure your users see value.

🏗 Open source: We're open source, model-agnostic, and our fully async SDKs can integrate with any LLM.

🏃 Get started: We're excited to hear your thoughts & feedback in the comments. To see Langfuse in action today, give our interactive demo a spin at: https://langfuse.com/docs/demo

Offer: Start using Langfuse Cloud for free today and track unlimited events (up to 1 GB storage) in our free tier. You can find self-hosting instructions in our docs: https://langfuse.com/docs/deploy...
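
The nested execution model the launch post describes (agents, chains, retrieval and tool steps inside one trace) can be sketched with plain dataclasses. This is an illustration of the data shape only, assuming a simple trace/span hierarchy; it is not the Langfuse SDK, and all names here are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Span:
    """One step of an LLM execution: a chain step, tool call, or retrieval."""
    name: str
    start_ms: int
    end_ms: int
    children: List["Span"] = field(default_factory=list)

    def duration_ms(self) -> int:
        return self.end_ms - self.start_ms

@dataclass
class Trace:
    """One full production execution, e.g. a single agent run, made of nested spans."""
    name: str
    spans: List[Span] = field(default_factory=list)

    def total_latency_ms(self) -> int:
        # Total latency spans from the earliest start to the latest end.
        return max(s.end_ms for s in self.spans) - min(s.start_ms for s in self.spans)

# A hypothetical agent run: embedding retrieval, then an LLM call that uses a tool.
trace = Trace("qa-agent", spans=[
    Span("embedding-retrieval", 0, 120),
    Span("llm-call", 120, 900, children=[Span("tool:search", 200, 450)]),
])
print(trace.total_latency_ms())  # 900
```

Modeling steps as nested spans is what lets a trace viewer attribute latency and cost to individual parts of a complex execution rather than to the run as a whole.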
Heiki Riesenkampf
@mwseibel @marc_klingen @max_deichmann @clemo_sf exciting stuff! Glad to see more open source LLM projects in the monitoring space!
David
💎 Pixel perfection
So excited to see Langfuse go live; we've been a happy user for 4 weeks now. Most detailed latency and analytics in the market. Highly recommended for anyone using complex chains or building user-facing chat applications, where latency becomes crucial. Congrats, Clemens, Marc, and Max!
Clemens Rawert
@david_paffenholz Thank you, David. It's been great fun to work with you and Ishan. You've built an amazing product at juicebox.work and we're super proud to be able to support you.
Fengjiao Peng
💎 Pixel perfection
Easy to integrate and very intuitive to use!! Highly recommend for anyone developing ChatGPT-style products. Amazing founding team who are super detail-oriented and understand LLM products deeply, and it's cool to see the evolution of the open source project every week!
Clemens Rawert
@fengjiao_peng1 Thank you, FJ. Diving into Stable Diffusion and image gen with you has been great & we're looking forward to offering more support for multi-modal use cases in the future.
Rajtilak Bhattacharjee
Awesome!!! First, congratulations on the launch. Making LLM apps cost-effective is one of the priorities for the service that I am working on. This should help. Just a quick question, can this be used in tandem with Langsmith?
Clemens Rawert
@rajtilakjee Hi Rajtilak, yes, you can use Langfuse and Langsmith at the same time. We also have a Langchain integration that sets you up in minutes via callback handlers. We wrote a blogpost about it: https://langfuse.com/blog/langch... Let me know if you have any further questions! Clemens
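
The callback-handler pattern that integration relies on can be sketched in a few lines: the framework fires hooks around each LLM call, and a handler records them as trace events. This is a minimal local illustration of the pattern only; the class and function names are hypothetical, not the actual Langfuse or Langchain API.

```python
class TracingCallbackHandler:
    """Collects LLM lifecycle events, the way a tracing integration would."""

    def __init__(self):
        self.events = []

    def on_llm_start(self, prompt: str) -> None:
        self.events.append(("llm_start", prompt))

    def on_llm_end(self, completion: str) -> None:
        self.events.append(("llm_end", completion))

def run_chain(prompt: str, callbacks) -> str:
    # Stand-in for a chain run: fire start hooks, do the work, fire end hooks.
    for cb in callbacks:
        cb.on_llm_start(prompt)
    completion = f"echo: {prompt}"
    for cb in callbacks:
        cb.on_llm_end(completion)
    return completion

handler = TracingCallbackHandler()
run_chain("What is Langfuse?", callbacks=[handler])
print(len(handler.events))  # 2
```

Because the handler only observes events, two tracing tools can be attached to the same run side by side, which is why using both at once works.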
Tobias Kemkes
💎 Pixel perfection
Langfuse was a huge help in getting our product off the ground. Especially the tracing: it gave us insights into how our product is used and where we need to improve it. Congrats on the launch, guys!
Marc Klingen
Thanks @tobiaskemkes, your and your team's feedback was instrumental in building Langfuse!
Jasdeep Chhabra
This looks fantastic! Great work, @clemo_sf and team. As a developer who regularly uses LLMs, I'm particularly excited about the integration via SDKs and API. Just curious - are there any specific use-cases you had in mind while building Langfuse? Additionally, I see that you guys are open source. Has the community contributed significantly to the development or evolution of the platform?
Marc Klingen
Hi @jasdeep_chhabra, teams building applications with LLMs that do more than a single inference call get the most value out of Langfuse, as their traces are more complex (see demo) and analytics need to be broken down by the individual steps of a complex task. We open sourced Langfuse because we built this version based on feedback from more than 200 teams. Currently three of us are the primary maintainers, but we aim to make it easier to contribute to Langfuse. Re use cases: mostly analytics on quality/cost/latency broken down by multiple dimensions (e.g. for A/B testing), and traces to debug applications.
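
The "metrics broken down by multiple dimensions" idea mentioned above is a plain group-by aggregation. A minimal sketch, assuming hypothetical per-request records that carry an A/B variant label plus cost and latency:

```python
from collections import defaultdict

# Hypothetical per-request records, as an analytics backend might store them.
records = [
    {"variant": "A", "cost_usd": 0.004, "latency_ms": 820},
    {"variant": "A", "cost_usd": 0.006, "latency_ms": 900},
    {"variant": "B", "cost_usd": 0.002, "latency_ms": 610},
    {"variant": "B", "cost_usd": 0.003, "latency_ms": 650},
]

def breakdown(rows, dimension, metric):
    """Average a metric grouped by one dimension (e.g. variant, user, feature)."""
    sums, counts = defaultdict(float), defaultdict(int)
    for row in rows:
        key = row[dimension]
        sums[key] += row[metric]
        counts[key] += 1
    return {k: sums[k] / counts[k] for k in sums}

print(breakdown(records, "variant", "latency_ms"))  # {'A': 860.0, 'B': 630.0}
```

Swapping the `dimension` argument for a user or feature field gives the other breakdowns the comment mentions without changing the aggregation logic.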
Tristan Berguer
oooh looks nice 🔥
Marc Klingen
@tberguer thanks Tristan
Gustaf Alstrรถmer
This is awesome - congrats! :)
Marc Klingen
Thank you @gustaf, we appreciate your support immensely
Michael Vandi
@clemo_sf Big congrats on the launch! If Langfuse were an LLM app itself, what "Aha!" insight do you think it would've thrown at you during its development?
Marc Klingen
Thanks @michael_vandi! It would probably have surfaced the insights our users get from Langfuse: which product use cases have high or low quality; which step of a complex chain actually contributes the most to overall latency; which changes in past experiments actually led to quality improvements; and, if you are using multiple models, an overview of how they compare on quality, speed and reliability.
Felix Roesner
Congrats Langfuse Team! 🚀 Insights on unit economics at the feature and user level will be super valuable
Marc Klingen
@felix_roesner Agreed, and that's what we are hearing from many users. Right now you get aggregated metrics, but we are testing detailed breakdowns by individual users as part of our analytics alpha program. DM me on Discord for access: https://langfuse.com/discord
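
The per-user unit economics discussed here boil down to token counts times a per-token price, summed per user. A sketch under stated assumptions: the usage records and the per-1k-token prices below are illustrative placeholders, not real model pricing.

```python
# Illustrative prices per 1,000 tokens (placeholders, not actual model pricing).
PRICE_PER_1K_TOKENS = {"prompt": 0.0015, "completion": 0.002}

# Hypothetical token usage records, one per LLM call.
usage = [
    {"user": "alice", "prompt_tokens": 12000, "completion_tokens": 4000},
    {"user": "alice", "prompt_tokens": 8000,  "completion_tokens": 2000},
    {"user": "bob",   "prompt_tokens": 3000,  "completion_tokens": 1000},
]

def cost_per_user(rows):
    """Sum compute cost per user: tokens / 1000 * price per 1k tokens."""
    totals = {}
    for r in rows:
        cost = (r["prompt_tokens"] / 1000 * PRICE_PER_1K_TOKENS["prompt"]
                + r["completion_tokens"] / 1000 * PRICE_PER_1K_TOKENS["completion"])
        totals[r["user"]] = totals.get(r["user"], 0.0) + cost
    return totals

print(cost_per_user(usage))
```

Grouping the same records by a feature or session field instead of `user` yields the other unit-economics breakdowns mentioned in the launch post.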
Christophe Pasquier
Super exciting, kudos for the launch folks!
Marc Klingen
@christophepas thanks for your support Christophe!
Mehdi Djabri
This is something we were looking for, and it will be very useful in allowing our users and ourselves to track spend and compute in our upcoming AI product. Congrats Clemens and team!
Clemens Rawert
@mehdidjabri Thank you, Mehdi. Appreciate it & looking forward to working together. Can't wait to see how we can help @iteration_x
Max Schulz
Congrats on the launch. Excited to try it out myself! :)
Marc Klingen
@max_schulz1 thanks Max, feel free to ask us in the chat with any questions you might have
Viet Le
Super cool, congrats on your launch!
Marc Klingen
@viet_le thanks for your support Viet!
Stefan Gerbes
Congrats on the launch, guys :) Super excited for what's to come!
Marc Klingen
@stefan_gerbes thanks Stefan!
Mika Sagindyk
Congrats on the launch, this looks great! 🔥
Marc Klingen
@mikasagi thanks Mika!
Simon
Awesome product! 🚀
Marc Klingen
@shoeferlin thank you 🙏
Max T.Pham
Congrats Clemens & team on the launch!
Marc Klingen
@maxtpham Thanks Max!
Dan
Congrats on the launch! Excited to try it out.
Marc Klingen
@dan_meier1 Thanks Dan, excited to see what you'll launch!
Guys, congratulations on the Product Hunt launch. I love open source products, especially analytics tools! Wish you luck 🚀
Marc Klingen
Thanks @richmal, looking forward to seeing your launch soon!