AI characters evolve visually as emotions change during conversation.
**How it works:**
- 6-axis emotion model
- LLM drives conversations with persistent memory (1,000-char rolling context + trauma system)
- ComfyUI generates a new image when emotions peak and the situation changes
- Gacha mechanics add unpredictability (watch x5.0 at 0:12)
- Dual-core freedom: Use any LLM (Ollama/OpenRouter) + any ComfyUI checkpoint/LoRA (SDXL, QWEN, Z-image)
- 100% local option with Ollama
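The emotion-driven image trigger can be sketched roughly as follows. The six axis names, the decay factor, and the peak threshold are all illustrative assumptions (the list above doesn't name them), not the project's actual values:

```python
from dataclasses import dataclass, field

# Hypothetical axis labels for the 6-axis emotion model; the actual
# axes are not specified above, so these are placeholders.
AXES = ("joy", "sadness", "anger", "fear", "surprise", "affection")

@dataclass
class EmotionState:
    values: dict = field(default_factory=lambda: {a: 0.0 for a in AXES})

    def apply(self, deltas: dict, decay: float = 0.9) -> None:
        # Decay prior values, add new impulses, clamp to [0, 1].
        for axis in AXES:
            v = self.values[axis] * decay + deltas.get(axis, 0.0)
            self.values[axis] = max(0.0, min(1.0, v))

    def peak(self, threshold: float = 0.8):
        # Return the dominant axis once it crosses the threshold --
        # the moment a new ComfyUI render would be requested.
        axis = max(self.values, key=self.values.get)
        return axis if self.values[axis] >= threshold else None

state = EmotionState()
state.apply({"joy": 0.5})   # mild reaction: 0.5, below threshold
assert state.peak() is None
state.apply({"joy": 0.6})   # 0.5 * 0.9 + 0.6 = 1.05, clamped to 1.0
assert state.peak() == "joy"
```

When `peak()` fires, the app would hand the dominant axis (plus the current situation) to a ComfyUI workflow as prompt conditioning.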
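The 1,000-character rolling context mentioned above could be implemented as a simple length-bounded message buffer; this is a minimal sketch, not the project's code, and the separate trauma system is omitted:

```python
class RollingMemory:
    # Keeps only the newest messages whose combined length fits the
    # character budget (1,000 in the description above).
    def __init__(self, limit: int = 1000):
        self.limit = limit
        self.messages: list[str] = []

    def add(self, text: str) -> None:
        self.messages.append(text)
        # Evict the oldest messages until the total fits the budget.
        while sum(len(m) for m in self.messages) > self.limit:
            self.messages.pop(0)

    def context(self) -> str:
        # Joined buffer, ready to prepend to the next LLM prompt.
        return "\n".join(self.messages)

mem = RollingMemory(limit=20)
mem.add("hello there")   # 11 chars, fits
mem.add("how are you")   # total 22 > 20, oldest message evicted
assert mem.messages == ["how are you"]
```

A character-count budget is a crude stand-in for token counting, but it keeps the sketch free of tokenizer dependencies.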