A2UI is an open protocol by Google enabling agents to generate rich, interactive UIs. Instead of risky code execution, agents send declarative JSON that clients render natively (Flutter/Web/Mobile). Secure, framework-agnostic, and designed for LLMs.
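To make "declarative JSON instead of code execution" concrete, here is a minimal sketch of the kind of message an agent might emit. The component names and field names are hypothetical, chosen for illustration — they are not the actual A2UI schema.

```typescript
// Hypothetical example of the kind of declarative JSON an agent could emit.
// Component and field names are illustrative, not the real A2UI schema.
const message = JSON.parse(`{
  "surface": "chat",
  "root": {
    "type": "Card",
    "children": [
      { "type": "Text", "text": "Your order has shipped." },
      { "type": "Button", "label": "Track package", "action": "track" }
    ]
  }
}`);

// Because this is plain data, the client can inspect and validate it
// before rendering: no code from the agent ever executes.
console.log(message.root.type);            // "Card"
console.log(message.root.children.length); // 2
```

The key property is that the payload is inert data the client interprets, which is what makes it safe to pass across a trust boundary.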
A2UI tackles the specific problem of safely sending UI components across trust boundaries. We have already seen this concept in action with Gemini's Visual Layout, which inspired this protocol, and it is now powering Gemini Enterprise and Opal.
But protocols are only useful when they connect things.
The team at @CopilotKit (makers of @AG-UI ) has a great tutorial showing how to bring this stack to life. It walks through connecting an @A2A Protocol backend that speaks A2UI directly to the frontend, delivering a full-stack agentic experience where the UI is just as dynamic as the conversation.
This is pretty interesting; curious to see how this will compete with OpenAI's widget model.
I've started adopting A2UI in my intent-based shopping assistant project, and what stood out isn’t just the UI rendering—it’s the clarity it brings to agent output. Having a shared, declarative way for an agent to express intent as structure (instead of ad-hoc JSON or text conventions) has already reduced a lot of glue logic on our side.
It works particularly well for intent-based search and discovery flows, which is probably what Google is aiming for. It still feels early, but promising—especially for teams that already have solid intent detection and recommendation logic and are looking for a cleaner contract between agent reasoning and user interaction.
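The "agent expresses intent as structure, client renders natively" contract described above can be sketched in a few lines. This is an illustrative client-side renderer under assumed component names (`Column`, `Text`, `Button`) — not the real A2UI component catalog — showing why unknown types are rejected rather than executed.

```typescript
// Illustrative sketch: component names are assumptions, not the A2UI schema.
type Component =
  | { type: "Column"; children: Component[] }
  | { type: "Text"; text: string }
  | { type: "Button"; label: string; action: string };

// The client maps declarative structure onto native widgets it already
// trusts. Anything outside the known set fails closed instead of running.
function renderToHtml(c: Component): string {
  switch (c.type) {
    case "Column":
      return `<div>${c.children.map(renderToHtml).join("")}</div>`;
    case "Text":
      return `<p>${c.text}</p>`;
    case "Button":
      return `<button data-action="${c.action}">${c.label}</button>`;
    default:
      throw new Error("Unknown component type");
  }
}

// Example: a shopping-assistant result expressed as structure, not prose.
const ui: Component = {
  type: "Column",
  children: [
    { type: "Text", text: "Found 3 matches for 'trail shoes'" },
    { type: "Button", label: "Show more", action: "show_more" },
  ],
};
```

Because the agent only describes *what* to show, the glue logic on the client collapses to a single dispatch like this — which matches the reduced-glue-logic observation above.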
Good inspiration! A common protocol for ephemeral UI elements is something everyone building conversational experiences has been after for a decade.
It does seem early, and I'll be interested to see widget libraries grow around it to eventually make this a truly turnkey solution.
Good work!
About A2UI on Product Hunt
“A safe way for AI to build UIs your app can render”
A2UI launched on Product Hunt on December 24th, 2025 and earned 265 upvotes and 4 comments, placing #4 on the daily leaderboard.
A2UI was featured in Open Source (68.3k followers), User Experience (364.7k followers), Artificial Intelligence (466.2k followers) and GitHub (41.2k followers) on Product Hunt. Together, these topics include over 146.6k products, making this a competitive space to launch in.
Who hunted A2UI?
A2UI was hunted by Zac Zuo. A “hunter” on Product Hunt is the community member who submits a product to the platform — uploading the images, the link, and tagging the makers behind it. Hunters typically write the first comment explaining why a product is worth attention, and their followers are notified the moment they post. Around 79% of featured launches on Product Hunt are self-hunted by their makers, but a well-known hunter still acts as a signal of quality to the rest of the community. See the full all-time top hunters leaderboard to discover who is shaping the Product Hunt ecosystem.