visualstudioblyat left a comment:
I wanted to build a pure Rust GGUF parser that is more secure than anything available, especially with CVEs piling up in the C++ implementations. I also find joy in complexity. No local runtime can prove which model produced a response -- Yule signs every inference. Everywhere else, model files run with full system access -- Yule sandboxes the process at the kernel level. I built this for myself initially, like most of my...

Yule: Run models locally. Prove what ran.
Every local AI runtime is C++ bindings wrapped in Python, and llama.cpp alone has 15+ CVEs. Yule is written from scratch in pure Rust -- no llama.cpp, no CUDA, no C++. Every tensor is Merkle-verified, every inference is Ed25519-signed, and the model process is sandboxed. 12 Vulkan compute shaders deliver an 8.5x GPU speedup with no NVIDIA lock-in.
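To illustrate what "Merkle-verified tensors" means in practice, here is a minimal std-only Rust sketch of the idea: hash each tensor chunk, fold the hashes pairwise into a single root, and recompute that root on load so any flipped byte is detected. This is not Yule's actual code; it uses `DefaultHasher` purely as a stand-in for a real cryptographic hash (e.g. SHA-256 or BLAKE3), and the chunk names are invented for the example.

```rust
// Sketch of Merkle-root verification over model tensor chunks.
// ASSUMPTION: DefaultHasher stands in for a cryptographic hash;
// a real implementation would use SHA-256/BLAKE3 and domain-separate
// leaf vs. node hashes (as in RFC 6962).
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

fn leaf_hash(chunk: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    chunk.hash(&mut h);
    h.finish()
}

fn node_hash(left: u64, right: u64) -> u64 {
    let mut h = DefaultHasher::new();
    (left, right).hash(&mut h);
    h.finish()
}

/// Fold a level of hashes pairwise until one root remains.
/// An odd leftover hash is promoted unchanged to the next level.
fn merkle_root(mut level: Vec<u64>) -> u64 {
    assert!(!level.is_empty());
    while level.len() > 1 {
        level = level
            .chunks(2)
            .map(|p| if p.len() == 2 { node_hash(p[0], p[1]) } else { p[0] })
            .collect();
    }
    level[0]
}

fn main() {
    // Hypothetical tensor chunks; in a real runtime these would be
    // the raw tensor bytes read from the GGUF file.
    let chunks: Vec<&[u8]> = vec![b"tensor-0", b"tensor-1", b"tensor-2"];
    let expected = merkle_root(chunks.iter().map(|c| leaf_hash(c)).collect());

    // Re-hash on load: any corrupted chunk changes the root.
    let recomputed = merkle_root(chunks.iter().map(|c| leaf_hash(c)).collect());
    assert_eq!(expected, recomputed);
    println!("merkle root ok: {:x}", expected);
}
```

The design point is that the runtime only needs to pin one root hash per model file; verifying a download or an mmap'd region then reduces to recomputing the tree.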

