
Yule
Run models locally. Prove what ran.
Nearly every local AI runtime is C++ bindings wrapped in Python, and llama.cpp alone has 15+ CVEs on record. Yule is written from scratch in pure Rust -- no llama.cpp, no CUDA, no C++. Every tensor is Merkle-verified. Every inference is Ed25519-signed. The model process is sandboxed at the kernel level. 12 Vulkan compute shaders deliver an 8.5x GPU speedup with no NVIDIA lock-in.
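To make "Merkle-verified tensors" concrete, here is a minimal sketch of hashing a tensor's raw bytes into a Merkle root in Rust, assuming the `sha2` crate; the chunk size and tree layout are illustrative, not Yule's actual scheme:

```rust
use sha2::{Digest, Sha256};

/// Hash fixed-size chunks of a tensor's raw bytes into leaf digests,
/// then fold pairs of digests upward until a single Merkle root remains.
fn merkle_root(tensor_bytes: &[u8], chunk_size: usize) -> [u8; 32] {
    let mut level: Vec<[u8; 32]> = tensor_bytes
        .chunks(chunk_size)
        .map(|chunk| Sha256::digest(chunk).into())
        .collect();
    if level.is_empty() {
        level.push(Sha256::digest(b"").into());
    }
    while level.len() > 1 {
        level = level
            .chunks(2)
            .map(|pair| {
                let mut hasher = Sha256::new();
                hasher.update(pair[0]);
                // Duplicate the last digest when a level has an odd count.
                hasher.update(pair.get(1).unwrap_or(&pair[0]));
                hasher.finalize().into()
            })
            .collect();
    }
    level[0]
}

fn main() {
    // Placeholder tensor data; a real runtime would hash each tensor as it loads.
    let fake_tensor = vec![0u8; 4096];
    let root = merkle_root(&fake_tensor, 1024);
    let hex: String = root.iter().map(|b| format!("{b:02x}")).collect();
    println!("merkle root: {hex}");
}
```

A root computed this way can be compared against a pinned value before any tensor is handed to the GPU, so a tampered model file fails loudly instead of running silently.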

I wanted to build a pure Rust GGUF parser that is more secure than anything currently available, especially with CVEs piling up in the C++ ecosystem. I also find joy in complexity.
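As a rough illustration of what a pure Rust GGUF parser involves, here is a hedged sketch that reads the fixed GGUF header (magic, version, tensor count, metadata key/value count) with only the standard library; field widths follow recent published GGUF versions, and everything beyond the header is assumed:

```rust
#[derive(Debug)]
struct GgufHeader {
    version: u32,
    tensor_count: u64,
    metadata_kv_count: u64,
}

fn parse_gguf_header(bytes: &[u8]) -> Result<GgufHeader, &'static str> {
    // GGUF files start with the ASCII magic "GGUF", then little-endian fields.
    if bytes.len() < 24 || &bytes[0..4] != b"GGUF" {
        return Err("not a GGUF file");
    }
    let version = u32::from_le_bytes(bytes[4..8].try_into().unwrap());
    let tensor_count = u64::from_le_bytes(bytes[8..16].try_into().unwrap());
    let metadata_kv_count = u64::from_le_bytes(bytes[16..24].try_into().unwrap());
    Ok(GgufHeader { version, tensor_count, metadata_kv_count })
}

fn main() {
    // Hypothetical path; a real run would point at an actual .gguf model file.
    if let Ok(bytes) = std::fs::read("model.gguf") {
        match parse_gguf_header(&bytes) {
            Ok(header) => println!("{header:?}"),
            Err(e) => eprintln!("parse error: {e}"),
        }
    }
}
```

Parsing with explicit bounds checks and no unsafe pointer casts is the whole point: a malformed file gets an error, not memory corruption.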
No other local runtime can prove which model produced a response -- Yule signs every inference. Everywhere else, model files run with full system access -- Yule sandboxes the model process at the kernel level.
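To make "signs every inference" concrete, here is a minimal sketch of signing and verifying an inference receipt, assuming the `ed25519-dalek` crate (v2 API, with its `rand_core` feature) and `rand`; the receipt format and key handling are placeholders, not Yule's real scheme:

```rust
use ed25519_dalek::{Signature, Signer, SigningKey, Verifier, VerifyingKey};
use rand::rngs::OsRng;

fn main() {
    // In a real runtime the key would be generated once and stored securely.
    let signing_key = SigningKey::generate(&mut OsRng);
    let verifying_key: VerifyingKey = signing_key.verifying_key();

    // Hypothetical receipt binding model identity, prompt, and completion in one payload.
    let receipt = b"model=<model-hash> prompt=\"hello\" completion=\"hi there\"";
    let signature: Signature = signing_key.sign(receipt);

    // Anyone holding the public key can later check which runtime signed this output.
    assert!(verifying_key.verify(receipt, &signature).is_ok());
    println!("inference receipt verified");
}
```

The signature travels with the response, so "which model produced this?" becomes a verification check rather than a matter of trust.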
I built this for myself initially, like most of my projects, but businesses and developers may find it useful too.