A2UI - A safe way for AI to build UIs your app can render
A2UI is an open protocol by Google enabling agents to generate rich, interactive UIs. Instead of risky code execution, agents send declarative JSON that clients render natively (Flutter/Web/Mobile). Secure, framework-agnostic, and designed for LLMs.
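To make the "declarative JSON instead of code execution" idea concrete, here is a minimal sketch of the pattern. The component names and fields below are illustrative assumptions, not the actual A2UI schema: the agent sends a JSON tree, and the client walks it and maps each node to a native widget it already trusts, so no agent-authored code ever runs.

```python
import json

# An agent emits a declarative description of the UI as plain JSON...
agent_message = json.dumps({
    "root": {
        "type": "Column",
        "children": [
            {"type": "Text", "text": "Pick a size"},
            {"type": "Button", "label": "Small", "action": "select_small"},
            {"type": "Button", "label": "Large", "action": "select_large"},
        ],
    }
})

# ...and the client walks the tree, mapping each node to a widget it
# already knows how to render. Here "rendering" is just indented text.
def render(node, depth=0):
    pad = "  " * depth
    if node["type"] == "Text":
        return f"{pad}[text] {node['text']}"
    if node["type"] == "Button":
        return f"{pad}[button] {node['label']} -> {node['action']}"
    if node["type"] == "Column":
        lines = [f"{pad}[column]"]
        lines += [render(child, depth + 1) for child in node["children"]]
        return "\n".join(lines)
    # Unknown component types are rejected, never executed.
    raise ValueError(f"unsupported component: {node['type']}")

print(render(json.loads(agent_message)["root"]))
```

Because the client owns the component-to-widget mapping, the same message can render as Flutter widgets, web components, or native mobile views, which is the framework-agnostic part of the pitch.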
Replies
Flowtica Scribe
Hi everyone!
A2UI tackles the specific problem of safely sending UI components across trust boundaries. We have already seen this concept in action with Gemini's Visual Layout, which inspired this protocol, and it is now powering Gemini Enterprise and Opal.
But protocols are only useful when they connect things.
The team at @CopilotKit (makers of @AG-UI) has a great tutorial showing how to bring this stack to life. It demonstrates connecting an @A2A Protocol backend speaking A2UI directly to the frontend. It really shows how you can deliver a full-stack agentic experience where the UI is just as dynamic as the conversation.
Raycast
This is pretty interesting; curious to see how this will compete with OpenAI's widget model.
I've started adopting A2UI in my intent-based shopping assistant project, and what stands out isn't just the UI rendering—it's the clarity it brings to agent output. Having a shared, declarative way for an agent to express intent as structure (instead of ad-hoc JSON or text conventions) has already reduced a lot of glue logic on our side.
It works particularly well for intent-based search and discovery flows, which is probably what Google is aiming for. It still feels early, but promising—especially for teams that already have solid intent detection and recommendation logic and are looking for a cleaner contract between agent reasoning and user interaction.
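The "cleaner contract" point above can be sketched as a thin validation layer: the client accepts only component types it knows, so any agent output that drifts outside the agreed structure is rejected before the UI layer sees it. All names and fields here are invented for illustration and are not part of the A2UI specification.

```python
# Hypothetical contract for an intent-based shopping flow: the client
# whitelists the components it can render.
ALLOWED_COMPONENTS = {"ProductCard", "FilterChips", "Text"}

def validate(node):
    """Accept only known component types, recursing into children."""
    if node.get("type") not in ALLOWED_COMPONENTS:
        raise ValueError(f"unknown component: {node.get('type')}")
    for child in node.get("children", []):
        validate(child)
    return node

# Agent output for a search result, expressed as structure rather than
# free text or ad-hoc JSON conventions:
payload = {
    "type": "FilterChips",
    "options": ["under $50", "in stock"],
    "children": [
        {"type": "ProductCard", "title": "Trail Runner", "price": 49.0},
    ],
}

validate(payload)  # raises if the agent drifted outside the contract
```

The glue-logic reduction comes from this single choke point: instead of parsing bespoke agent conventions per feature, one validator guards one shared schema.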