→ 100% local inference — a Whisper-optimized model runs entirely on your hardware
→ Model selection — from tiny (75 MB, CPU-friendly) to large-v3 (3 GB, CUDA)
→ Correction dictionary — teach it your vocabulary; corrections apply retroactively to transcript history
→ Toggle and push-to-talk modes
→ Full transcript history with search and CSV export
→ Compact floating widget (95px collapsed)
→ CUDA GPU acceleration auto-detected
→ Cross-platform — Linux, macOS, Windows
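The retroactive correction feature above can be sketched roughly like this: keep a phrase-to-replacement dictionary and re-run it over stored transcripts whenever an entry is added. This is a minimal illustration, not the app's actual implementation; all names here are hypothetical.

```python
import re

# Hypothetical correction dictionary: misrecognized phrase -> intended text
corrections = {"kube control": "kubectl", "pie torch": "PyTorch"}

def apply_corrections(text: str, corrections: dict) -> str:
    # Replace whole-word matches, case-insensitively; try longer
    # phrases first so overlapping entries don't clobber each other.
    for wrong in sorted(corrections, key=len, reverse=True):
        pattern = re.compile(r"\b" + re.escape(wrong) + r"\b", re.IGNORECASE)
        text = pattern.sub(corrections[wrong], text)
    return text

# Applying the dictionary retroactively over existing history:
history = ["open pie torch docs", "run kube control get pods"]
history = [apply_corrections(t, corrections) for t in history]
```

Running the dictionary over the whole history on every update keeps old transcripts consistent with newly taught vocabulary.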
Free demo available. Full version: CA$29.99 one-time. No subscription.