박기철

2mo ago

How do you benchmark your local LLM performance? 🤔

Hey everyone!

I've been running a lot of local LLMs (Llama, Mistral) and Diffusers models on my machine lately, but I always struggle to measure their performance accurately.

Usually I just look at the "tokens/sec" figure in the terminal, but it feels inconsistent from run to run.

How do you guys benchmark your local AI setup? Do you use any specific tools, or just rely on vibes?
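For context, here's roughly what I've been doing by hand: a minimal timing sketch with a warmup pass and a median over several runs (the `generate` callable here is a hypothetical stand-in for whatever model API you're using, not any specific library):

```python
import time
import statistics

def benchmark_tokens_per_sec(generate, prompt, runs=5, warmup=1):
    """Time a token-generating callable and report median tokens/sec.

    `generate` is any callable taking a prompt and returning a list of
    tokens -- a placeholder for your actual model's generate call.
    """
    # Warmup runs: let caches fill / kernels compile before measuring.
    for _ in range(warmup):
        generate(prompt)

    rates = []
    for _ in range(runs):
        start = time.perf_counter()
        tokens = generate(prompt)
        elapsed = time.perf_counter() - start
        rates.append(len(tokens) / elapsed)

    # Median is more robust to one-off slow runs than the mean.
    return statistics.median(rates)
```

Even with the warmup and median, results still drift between sessions (thermals, background load), which is why I'm asking about proper tools.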

박기철

2mo ago

PKC MARK - Open-source local AI benchmark tool for LLMs & Diffusers

Curious how fast your local rig runs the latest AI? Meet PKC Mark, the open-source benchmarking tool for devs and AI enthusiasts! Measure LLMs (like Llama) and Diffusers directly on your hardware without spending a dime.

Key Highlights:

⚡ Local Testing: Zero API costs. Purely your GPU/CPU.
📊 Deep Metrics: Detailed performance reports.
🛠 Multi-Model: Supports LLMs, Stable Diffusion & more.
🐍 Open Source: Python-based, easy to install & extend.

Stop guessing and start benchmarking today! 🚀