This is the 498th launch from Google.
A2UI

A safe way for AI to build UIs your app can render
A2UI is an open protocol by Google enabling agents to generate rich, interactive UIs. Instead of risky code execution, agents send declarative JSON that clients render natively (Flutter/Web/Mobile). Secure, framework-agnostic, and designed for LLMs.
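To picture the declarative-JSON approach, here is a hypothetical message of the kind an agent might send; this is an illustrative sketch only, not the actual A2UI schema (the component names and fields below are invented for the example):

```json
{
  "surface": "chat",
  "components": [
    {
      "type": "Card",
      "children": [
        { "type": "Text", "text": "Choose a delivery window" },
        {
          "type": "Select",
          "id": "delivery_window",
          "options": ["Morning", "Afternoon", "Evening"]
        },
        { "type": "Button", "label": "Confirm", "action": "submit" }
      ]
    }
  ]
}
```

Because the agent only ever emits data like this, the client stays in control of how (and whether) each component is rendered natively, which is what makes the approach safe across trust boundaries compared with executing agent-generated code.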

Zac Zuo

Hi everyone!

A2UI tackles the specific problem of safely sending UI components across trust boundaries. We have already seen this concept in action with Gemini's Visual Layout, which inspired this protocol, and it is now powering Gemini Enterprise and Opal.

But protocols are only useful when they connect things.

The team at @CopilotKit (making @AG-UI) has a great tutorial showing how to bring this stack to life. It demonstrates connecting an @A2A Protocol backend speaking A2UI directly to the frontend. It really shows how you can deliver a full-stack agentic experience where the UI is just as dynamic as the conversation.

Chris Messina

This is pretty interesting; curious to see how this will compete with OpenAI's widget model.

Jorge AlcΓ‘ntara
Good inspiration! A common protocol for ephemeral UI elements is something that everyone building conversational experiences has been after for a decade. It does seem early, and I'll be interested to see widget libraries grow to eventually make this a turnkey solution. Good work!
Ahmed Ali

I've started adopting A2UI in my intent-based shopping assistant project, and what stood out isn't just the UI rendering; it's the clarity it brings to agent output. Having a shared, declarative way for an agent to express intent as structure (instead of ad-hoc JSON or text conventions) has already reduced a lot of glue logic on our side.

It works particularly well for intent-based search and discovery flows, which is probably what Google is aiming for. It still feels early, but promising, especially for teams that already have solid intent detection and recommendation logic and are looking for a cleaner contract between agent reasoning and user interaction.