I was impressed by how quickly I could get up and running. In just a few minutes, I uploaded an e-commerce inventory, configured goals and guardrails, and had a functional sales agent. During testing, the agent handled interactive conversations smoothly, including the full transaction flow and purchase process.
Thesys
MCP integration in 2 lines is a strong hook — the real friction in generative UI isn't rendering components, it's getting the LLM to emit the right structure reliably. We build LLM workflows for structured financial data and the jump from "returns JSON" to "returns interactive UI" is where most teams get stuck. Curious: how does C1 handle edge cases where the model's UI intent is ambiguous — do you fall back gracefully or does the developer define constraints upfront?
Thesys
@slavaakulov We enforce a strict schema on the model's output; when the schema check fails, we retry internally so the interaction still completes.
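The retry-on-schema-failure pattern described above can be sketched roughly as follows. This is a hypothetical illustration, not C1's actual implementation: `validate_ui_payload`, `call_llm`, and the component names are all made up for the example.

```python
import json

MAX_RETRIES = 3

def validate_ui_payload(payload: dict) -> bool:
    # Hypothetical schema check: a UI payload must declare a known component type.
    return payload.get("component") in {"table", "card", "chart"}

def call_llm(prompt: str, attempt: int) -> str:
    # Stand-in for a real model call; here it returns valid JSON on the second try.
    if attempt < 2:
        return "not json"
    return json.dumps({"component": "table", "rows": []})

def generate_ui(prompt: str) -> dict:
    for attempt in range(1, MAX_RETRIES + 1):
        raw = call_llm(prompt, attempt)
        try:
            payload = json.loads(raw)
        except json.JSONDecodeError:
            continue  # malformed output: retry internally
        if validate_ui_payload(payload):
            return payload  # schema satisfied, hand off to the renderer
    raise RuntimeError("schema never satisfied after retries")

print(generate_ui("show inventory")["component"])  # table
```

The key idea is that the retry loop is invisible to the developer: the caller only ever sees a payload that passed validation, or an explicit error after the retry budget is exhausted.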
What types of LLMs and frameworks are currently supported by C1's 2-line integration, and how does it handle UI rendering consistency across different platforms?
Thesys
@mordrag C1 was designed to be LLM- and framework-agnostic, though we currently recommend GPT-5 and Sonnet 4 for production use. C1 only supports the web today, but we plan to add support for native mobile apps in the coming months.
How does OpenUI handle the challenge of adapting to different UI design paradigms and aesthetic preferences between various AI model outputs, given that these can vary significantly?
Thesys
@zhukmax Interesting point. OpenUI is currently not opinionated about any design paradigm. We recommend C1 by Thesys, which has been extensively tested to follow your preferences.
This is one of those "why didn't this exist sooner" ideas. I'm so tired of AI responses being giant walls of text when what I actually need is a table or a card. The fact that it works with GPT, Claude, and Google ADK with just 2 lines of code is really appealing — nobody wants to be locked into one model these days. One question: how does it handle streaming? Like if the AI is generating a chart in real-time, does it render progressively or wait for the full response? That'd be a dealbreaker for chat-style apps where latency matters.
Thesys
@sparkuu OpenUI is built to be streaming-native. Users can expect to see the first render within 500ms to 1s.
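The progressive-rendering idea behind a streaming-native UI can be sketched like this. It is a simplified assumption, not OpenUI's actual renderer: chunks of a streamed response are accumulated, and a render is attempted whenever the buffer parses; real implementations typically use incremental parsers that can render partial structures rather than re-parsing the whole buffer.

```python
import json

def stream_chunks():
    # Stand-in for a streamed model response arriving in fragments.
    yield '{"component": "chart", '
    yield '"points": [1, 2, '
    yield '3]}'

def render_progressively(chunks):
    buffer = ""
    frames = []
    for chunk in chunks:
        buffer += chunk
        try:
            frames.append(json.loads(buffer))  # render as soon as the buffer parses
        except json.JSONDecodeError:
            pass  # still a partial payload: keep accumulating
    return frames

frames = render_progressively(stream_chunks())
```

In this naive sketch only the final chunk produces a renderable frame; the point of a streaming-native format is to make earlier frames parseable too, which is what drives the sub-second first render.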
Thesys
Really proud of what we shipped here.
For a year we watched the same three problems surface across 10,000+ developers building AI-generated interfaces: slow rendering, broken output, hard-to-integrate designs. We kept patching. The problems kept coming back.
Turns out they were all symptoms of the same root cause: the format we were using didn't fit how LLMs think.
So we built one that did. The results were immediate. 3x faster, 67% fewer tokens, dramatically more reliable.
Open source and free. Hope it helps.
Thesys
@orateur Hey Ossy, what do you mean by ACC testing?