Official Z.ai platform for experiencing our new, MIT-licensed GLM models (Base, Reasoning, Rumination). The simple UI keeps the focus on model interaction. Free to use.
This is the 10th launch from Z.ai.

GLM-5V-Turbo
Launching today
GLM-5V-Turbo is Z.ai's first multimodal coding model. It understands images, video, files, and UI layouts, then turns that visual context into runnable code, debugging help, and stronger agent workflows with Claude Code and OpenClaw.




Flowtica Scribe
Hi everyone!
GLM-5V-Turbo is one of the more interesting coding model releases lately because it is not just "vision bolted onto a code model." @Z.ai is clearly positioning it as a native multimodal coding model that can understand screenshots, design drafts, videos, document layouts, and real interfaces, then turn that input into code, debugging steps, and agent actions.
"Seeing the screen and writing the code" is a very real workflow, and GLM-5V is built exactly for that.
It is also deeply adapted for @Claude Code and @OpenClaw style loops, which makes it feel much more relevant than a generic VLM with some coding demos on top.
Try it on chat.z.ai or plug in the official API.
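For the API route, a minimal sketch of a vision-plus-code request might look like the following. This assumes an OpenAI-compatible chat completions endpoint; the base URL, model id (`glm-5v-turbo`), and `ZAI_API_KEY` variable are my placeholders, so check Z.ai's official API docs for the real values:

```python
import json
import os
import urllib.request

# Hypothetical endpoint and model id -- confirm both against Z.ai's API docs.
API_URL = "https://api.z.ai/v1/chat/completions"
MODEL = "glm-5v-turbo"


def build_vision_request(prompt: str, image_url: str) -> dict:
    """Build an OpenAI-style multimodal chat payload: one user turn
    containing a text part and an image part."""
    return {
        "model": MODEL,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }


def send(payload: dict, api_key: str) -> dict:
    """POST the payload as JSON with a bearer token; return the parsed reply."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


payload = build_vision_request(
    "Turn this screenshot of a login form into a React component.",
    "https://example.com/login-screenshot.png",
)
print(payload["messages"][0]["content"][0]["type"])  # text

if os.environ.get("ZAI_API_KEY"):  # only hits the network when a key is set
    reply = send(payload, os.environ["ZAI_API_KEY"])
    print(reply["choices"][0]["message"]["content"])
```

The same payload shape should work through any OpenAI-compatible client library; only the base URL and model name change.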