


How do you benchmark your local LLM performance? 🤔
Hey everyone! 👋 I've been running a lot of local LLMs (Llama, Mistral) and Diffusers pipelines on my machine lately, but I always struggle to measure their performance accurately. Usually I just eyeball the "tokens/sec" readout in the terminal, but the numbers feel inconsistent between runs. 😅 How do you benchmark your local AI setup? Do you use any specific tools, or just rely on vibes? I'm actually building an open-source tool...
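For context on why terminal readouts feel inconsistent: a single run includes one-off warmup costs (model load, cache fills), so averaging a few timed runs after a warmup pass gives steadier numbers. Here's a minimal sketch of that idea; `fake_generate` is a hypothetical stand-in for whatever streaming call your runtime exposes, not a real llama.cpp or Ollama API.

```python
import time

def fake_generate(n_tokens=100, delay=0.001):
    """Hypothetical stand-in for a real streaming LLM call
    that yields tokens one at a time."""
    for _ in range(n_tokens):
        time.sleep(delay)  # simulate per-token latency
        yield "tok"

def benchmark(generate, warmup=1, runs=3):
    """Average tokens/sec over several runs, discarding warmup runs
    (which absorb model-load and cache effects)."""
    rates = []
    for i in range(warmup + runs):
        start = time.perf_counter()
        n_tokens = sum(1 for _ in generate())
        elapsed = time.perf_counter() - start
        if i >= warmup:
            rates.append(n_tokens / elapsed)
    return sum(rates) / len(rates)

print(f"{benchmark(fake_generate):.1f} tokens/sec")
```

Swapping `fake_generate` for your runtime's streaming generator is all the adaptation this sketch needs; the timing logic stays the same.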

A non-dev pretending to be a dev — feedback wanted 😆
Hey PH! I’m a non-developer cosplaying as a developer, and I’m about to launch a lightweight Korean learning PWA. It’s super simple, supports 11 interface languages, is fully open-source, and works offline. Before the launch, I’d love honest (but gentle 😭😂) feedback from anyone willing to try it. Thanks!
