We've been building Tambo for about a year, and just released our 1.0.
We're making it easier to register React components with Zod schemas and build an agent that picks the right one and renders the right props.
We handle many of the complications of building generative user interfaces: managing state between the user, the agent, and the React component; rendering partial props; and handling auth between your user and MCP. We also support adding MCP servers and most of the spec.
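Roughly, registering a component looks like this (a simplified sketch; the exact package and prop names may differ from the current API, so check the docs):

```
import type { ReactNode } from "react";
import { z } from "zod";
import { TamboProvider } from "@tambo-ai/react"; // package/export names are illustrative

// An ordinary React component you already have.
function WeatherCard({ city, tempC }: { city: string; tempC: number }) {
  return <div>{city}: {tempC}°C</div>;
}

// Register it with a Zod schema so the agent knows which props are valid.
const components = [
  {
    name: "WeatherCard",
    description: "Shows the current weather for a city",
    component: WeatherCard,
    propsSchema: z.object({ city: z.string(), tempC: z.number() }),
  },
];

// The agent picks a registered component and streams props that match its schema.
export function App({ children }: { children: ReactNode }) {
  return <TamboProvider components={components}>{children}</TamboProvider>;
}
```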
We are 100% open source, with 8k+ GitHub stars, thousands of developers, and over half a million messages processed by our hosted service.
If you're building AI agents with generative UI, we'd like to hear from you.
Edit: the announcement post is clearer: https://tambo.co/blog/posts/introducing-tambo-generative-ui
Can it also generate new components?
Developers are using it to build agents that actually solve user needs with their own UI elements, instead of replying with text instructions or taking actions with minimal visibility for the user.
We're building out a generative UI library, but as of right now it doesn't generate any code (that could change).
We do have a skill you can give your agent to create new UI components:
```
npx skills add tambo-ai/tambo
```
/components
Basically it's just... agreeing upon a description format for UI components ("put the component C with params p1, p2, ... at location x, y") using JSON / zod schema etc... and... that's it?
Then the agent just uses a tool "putComponent(C, params, location)" which just renders the component?
I'm failing to understand how it would be more than this?
On one hand, I agree that if we "all" find a standard way to describe those components, then we can integrate them easily into multiple tools so we don't have to redo it each time. At the same time, it seems like this is just a "nice render-based wrapper" over MCP / tool calls, no? Am I missing something?
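For what it's worth, the mental model I'm describing fits in a few lines of TypeScript (names like putComponent are made up for illustration, not taken from Tambo):

```
import { z } from "zod";

// One agreed-upon description format for "render component C with props P at location L".
const putComponentArgs = z.object({
  component: z.enum(["WeatherCard", "InvoiceTable"]), // names of registered components
  props: z.record(z.string(), z.unknown()),           // validated against that component's own schema
  location: z.string().optional(),                    // e.g. a named slot in the layout
});
type PutComponentArgs = z.infer<typeof putComponentArgs>;

// The hypothetical tool the agent calls; the host app validates and renders.
function putComponent(raw: unknown): PutComponentArgs {
  // Validate the agent's tool call, then look up the registered React component by name
  // and render it with these props at this location (rendering elided).
  return putComponentArgs.parse(raw);
}
```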
Maybe I'm misunderstanding, but isn't generating UI just-in-time kind of risky, because the AI can get it wrong? Whereas you can generate/build an MCP App once that is deterministic, always returns a working result, and is just as AI-native.
I like to think about how much time I spend clicking different nav links and drop-downs trying to find the functionality I need.
It's just a new way for the app to surface what the user needs when they need it.
It sounds promising, because from the outside it's reproducible, deterministic component generation done in a modern fashion, as far as I understand it.
I built a large platform using a methodologically comparable approach, I suppose, albeit in pre-AI times, and that's why I want to take a closer look at the inner workings and results of your project - curiosity, so to say.
You appear to be the only solid and promising endeavor in the GenUI domain, with an approach that goes beyond simply relying on an LLM and instead combines math with AI.
Good luck!
Our use case is to allow users to build lightweight internal apps within their chat workspace (say, an applicant tracking system per hire, etc.).
Is this in the same category as CopilotKit? CPK is an AGUI proxy for similar use cases, but here there seems to be more emphasis on linked components?
The major difference is that we provide an agent - you don't need to bring your own agent or framework. A lot of our developers are using our agent and are really happy with it, and we have a bunch of upcoming features to make it even better out of the box.
Release: http://blog.modelcontextprotocol.io/posts/2026-01-26-mcp-app... . Announcement: http://blog.modelcontextprotocol.io/posts/2025-11-21-mcp-app... . Submission: https://news.ycombinator.com/item?id=46020502
But our use case is a little different. MCP Apps embed interfaces into other agents. Tambo is an embedded agent that can render your UI. There's overlap for sure, but many of the developers using us don't see themselves putting their UI inside ChatGPT or Claude. That's just not how users use their apps.
That said, we're thinking about how we could make it easy to build an embedded agent and then selectively expose those UI elements over MCP Apps where it makes sense.
My agents need a UI and I'm in the market for a good framework to land on, but as is always the case with these kinds of interfaces, I strongly suspect a standard, inter-compatible protocol will emerge underneath that can connect many kinds of agents to many kinds of frontends. What's your take on that?
The way we elevator-pitch Tambo is "an agent that understands your UI" (which, admittedly, doesn't say much about the implementation details). We've spent our time on letting components (whether pre-existing or purpose-built) be registered as tools that can be controlled and rendered either in-chat or out in your larger application. The chat box shouldn't be the boundary.
Personally, my take on standards like A2UI is that they could prove useful, but models have to understand them easily, or else you burn additional context explaining the protocol. Models already understand tool calling, so we're making use of that for now.
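Concretely, "making use of tool calling" can be as simple as deriving one tool definition per registered component from its Zod schema, so picking a tool is picking a component and the arguments become the props. A rough sketch of that pattern (not necessarily our exact internals; names are illustrative):

```
import { z } from "zod";
import { zodToJsonSchema } from "zod-to-json-schema";

// Props schema for a registered component (same shape as the registration example above).
const weatherCardProps = z.object({ city: z.string(), tempC: z.number() });

// One tool per component: choosing the tool is choosing the component,
// and the tool arguments are exactly the component's props.
const weatherCardTool = {
  type: "function" as const,
  function: {
    name: "render_WeatherCard",                    // hypothetical naming convention
    description: "Shows the current weather for a city",
    parameters: zodToJsonSchema(weatherCardProps), // JSON Schema the model already understands
  },
};

// weatherCardTool can be passed to any LLM API that accepts OpenAI-style tool definitions,
// so no extra protocol has to be explained in the prompt.
```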