Yule

Run models locally. Prove what ran.

Most local AI runtimes are C++ bindings wrapped in Python, and llama.cpp alone has accumulated 15+ CVEs. Yule is written from scratch in pure Rust: no llama.cpp, no CUDA, no C++. Every tensor is Merkle-verified, every inference is Ed25519-signed, and the model process runs in a sandbox. Its 12 Vulkan compute shaders deliver an 8.5x GPU speedup with no NVIDIA lock-in.
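The blurb does not spell out Yule's actual verification scheme, but the idea behind Merkle-verifying a tensor can be sketched in a few lines of Rust: hash fixed-size chunks of the weight bytes into leaves, fold pairs of hashes up to a single root, and compare that root against a pinned value at load time. The sketch below is illustrative only; it uses `std`'s `DefaultHasher` as a stand-in for a real cryptographic hash, and the chunk size and function names are assumptions, not Yule's API.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hash one chunk of tensor bytes. Stand-in for a cryptographic hash
// (e.g. SHA-256); DefaultHasher is NOT collision-resistant and is used
// here only to keep the example dependency-free.
fn leaf_hash(data: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    data.hash(&mut h);
    h.finish()
}

// Combine two child hashes into a parent hash.
fn node_hash(left: u64, right: u64) -> u64 {
    let mut h = DefaultHasher::new();
    left.hash(&mut h);
    right.hash(&mut h);
    h.finish()
}

// Compute a Merkle-style root over fixed-size chunks of a tensor's bytes.
// An odd node at the end of a level is carried up unchanged.
fn merkle_root(tensor: &[u8], chunk: usize) -> u64 {
    let mut level: Vec<u64> = tensor.chunks(chunk).map(leaf_hash).collect();
    if level.is_empty() {
        return leaf_hash(&[]);
    }
    while level.len() > 1 {
        level = level
            .chunks(2)
            .map(|pair| {
                if pair.len() == 2 {
                    node_hash(pair[0], pair[1])
                } else {
                    pair[0]
                }
            })
            .collect();
    }
    level[0]
}

fn main() {
    // Hypothetical flow: pin the root when the model is packaged,
    // recompute and compare it when the weights are loaded.
    let packaged_weights = vec![0u8; 4096];
    let pinned_root = merkle_root(&packaged_weights, 256);

    let loaded_weights = vec![0u8; 4096];
    assert_eq!(merkle_root(&loaded_weights, 256), pinned_root);
    println!("tensor verified");
}
```

Chunked hashing is what makes the scheme practical for large models: a corrupted or tampered chunk changes the root, and a Merkle proof lets a loader verify individual chunks without rehashing the whole file.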