0x01C - Gen UI 🎨
Gen UI 🎨
Also known as Generative UI, Dynamic User Experiences, or Adaptive UI.
- Full-stack/frontend developers.
- Product managers.
- Backend developers who want the UI to be handled by AI.
TL;DR:
- Problem: Current user interfaces aren’t personalized.
- Solution: AI that generates dynamic UI to fit each user’s request.
- In Sum: With the rise of AI capabilities, we can now generate user interfaces in real time, providing a much more personal user experience for every single user of our products. However, this approach comes with some downsides we need to address.
How does it work? 💡
The whole concept is pretty new, so there isn’t a formal standard to explain this topic yet, but I’ll try to give it my best shot.
When a user sends input to your system, an AI agent analyzes the input to determine the most suitable user interface for the response. It may be an image, a graph, or even a full-fledged interactive <UIComponent />. These components are generated on the fly and streamed back to the user.
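To make that flow a bit more tangible, here is a minimal sketch in React/TypeScript. Everything in it is hypothetical - classifyIntent, the data fetchers, and streamToClient just stand in for whatever your agent, data layer, and streaming transport actually are:
// Hypothetical server-side flow: classify the request, pick a UI, stream it back.
async function handleUserInput(input: string) {
  // 1. An LLM (or a cheaper classifier) decides what the user is actually asking for.
  const intent = await classifyIntent(input); // e.g. 'weather' | 'price-sensitive' | 'other'
  // 2. Pick the response surface that best fits that intent.
  const ui =
    intent === 'weather' ? <Weather data={await getForecast(input)} /> :
    intent === 'price-sensitive' ? <MapPricing data={await getListings(input)} /> :
    <p>{await generateTextAnswer(input)}</p>;
  // 3. Stream the component back to the user as it resolves.
  return streamToClient(ui);
}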
I would break Gen UI into 3 levels:
- Dynamic text - think ChatGPT right now.
- Dynamic components - return an interactive UI component that best fits the input.
- Full app control - imagine asking the app to do something, and it processes the request seamlessly, with no extra UI clicks.
Level 2 example, imagine a flight booking service. One of the service’s users (note how we refer to an individual, and not a cohort, here) might be price-sensitive and really want a nice warm weekend retreat. When they type in “weekend retreat in May”, the system returns an interactive map including a price filter (remember? price-sensitive) with a weather forecast display.
A code example to better explain level 2:
// Inside the chat renderer: map the agent's chosen tool/action to a UI component.
if (role === 'tool') {
  if (action === 'weather') {
    content = <Weather data={data} />; // forecast widget
  } else if (action === 'price-sensitive') {
    content = <MapPricing data={data} />; // interactive map with a price filter
  } else {
    content = <div>{data}</div>; // unknown action: fall back to plain output
  }
} else {
  content = <div>{data}</div>; // non-tool messages stay as plain text
}
Level 3 example, imagine a fully interactive application like an email client. At that level, you should be able to tell Gmail something concrete yet “wide”, like “unsubscribe from all of my unread newsletters” - and it will do your bidding (this example, by the way, came to me after thinking about the limitations in Gmail that caused things like Inbox Zero by Elie to, well, be a thing).
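Here’s a rough sketch of what level 3 could mean under the hood: the model turns a “wide” request into a plan over existing app actions. The EmailClient interface and the query string below are made up purely for illustration:
// Hypothetical actions the agent is allowed to drive - level 3 is less about new UI
// and more about the model operating existing product capabilities for you.
interface EmailClient {
  listThreads(query: string): Promise<{ id: string; sender: string }[]>;
  unsubscribe(threadId: string): Promise<void>;
}
// "unsubscribe from all of my unread newsletters" might compile down to:
async function runCommand(client: EmailClient) {
  const newsletters = await client.listThreads('is:unread category:newsletters');
  for (const thread of newsletters) {
    await client.unsubscribe(thread.id); // each step could still surface a confirmation UI
  }
}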
Another critical aspect is providing a truly personalized experience. For instance, if the system knows a user has bad eyesight, all responses can be displayed with bolder and larger fonts. For a user who is highly detail-oriented, the returned components can automatically expand to provide more detailed information. This level of personalization ensures that each user receives a response that best fits their unique needs and preferences.
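A minimal sketch of applying such a profile, assuming the generator consults a per-user preference object before rendering (the UserProfile shape and the Expanded wrapper are invented for illustration):
import React from 'react';
// Hypothetical per-user preferences the generator checks before rendering.
interface UserProfile {
  lowVision: boolean;      // prefers bolder, larger text
  detailOriented: boolean; // prefers components expanded by default
}
function personalize(content: React.ReactNode, profile: UserProfile) {
  return (
    <div style={profile.lowVision ? { fontSize: '1.4rem', fontWeight: 600 } : undefined}>
      {/* <Expanded> is an imaginary wrapper that renders the detailed view */}
      {profile.detailOriented ? <Expanded>{content}</Expanded> : content}
    </div>
  );
}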
For a deeper technical intro, I would recommend checking out Vercel’s announcement of open sourcing their generative UI SDK - which helps us stream components in real-time without client-side bundling headaches.
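If you want a feel for it, here’s a rough sketch of the streaming pattern using the SDK’s createStreamableUI helper from 'ai/rsc' - that was the API at the time of writing and may have changed since, and Spinner, Weather, and fetchForecast are placeholders:
import { createStreamableUI } from 'ai/rsc';
// Server action (in Next.js, a file marked 'use server'):
// send a placeholder immediately, then swap in the real component once the data is ready.
export async function getWeatherUI(city: string) {
  const ui = createStreamableUI(<Spinner />); // streamed to the client right away
  (async () => {
    const forecast = await fetchForecast(city); // placeholder data fetch
    ui.done(<Weather data={forecast} />);       // replaces the spinner when resolved
  })();
  return ui.value;
}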
Questions ❔
- Gen UI vs traditional UI? Gen UI is best for situations where a user can get back a lot of different responses from your product for a single question/request they send.
- Is this AI-assisted design? No, in AI-assisted design the AI helps a human generate the UI, and then humans implement it in the product. In Gen UI the UI is generated automatically on the fly without any human intervention.
- How will I know what components to provide? Currently, you’ll need to map out the outcomes and intents of your users and try to find components that address those needs - preferably ones that work in tandem. In the future, components will most likely be generated by AI.
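One low-tech way to start that mapping exercise is a plain intent-to-component registry with a safe fallback. The intents and component names below are placeholders for whatever your own product needs:
// Map the user intents you've identified to the components that serve them.
const componentRegistry = {
  'weather':         Weather,         // forecast for a place and date
  'price-sensitive': MapPricing,      // map with a price filter
  'compare-options': ComparisonTable, // side-by-side comparison
} as const;
type Intent = keyof typeof componentRegistry;
function componentFor(intent: string) {
  // Fall back to a plain text answer for intents you haven't mapped yet.
  return componentRegistry[intent as Intent] ?? GenericTextAnswer;
}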
Why? 🤔
- Personalization: Give your users tailored experiences that address their intentions and needs. No more need to hard-code user journeys.
- Richer responses: Instead of returning only static responses, we can now serve dynamic responses (think ChatGPT) or widgets (streamed UI components) that users can actively interact with to address their needs.
- Simpler interaction: Picture typing “add a new lead” into a CRM, and it automatically displays only the necessary fields, eliminating the need to click through numerous buttons and fields like in Salesforce (see the sketch after this list).
- Complex journeys: Gen UI can significantly enhance products with complicated user journeys. Consider booking a flight with various parameters; a personalized UI can be created to match your exact requirements, making the experience smoother and more intuitive.
- Accessibility: Automatically adapt interfaces to address accessibility (for example, a voice-based Gen UI serves visually impaired users better than text).
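To make the CRM example from the list above a bit more concrete, here’s a hedged sketch of rendering only the fields the model decided matter for “add a new lead” - the field list and form are illustrative, not any real CRM’s API:
// The model returns just the fields relevant to "add a new lead" for this user.
type Field = { name: string; label: string; required: boolean };
const leadFields: Field[] = [
  { name: 'fullName', label: 'Full name', required: true },
  { name: 'company',  label: 'Company',   required: true },
  { name: 'email',    label: 'Email',     required: false },
];
// Render a minimal form instead of the usual wall of CRM fields.
function DynamicForm({ fields }: { fields: Field[] }) {
  return (
    <form>
      {fields.map((f) => (
        <label key={f.name}>
          {f.label}
          <input name={f.name} required={f.required} />
        </label>
      ))}
    </form>
  );
}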
Why not? 🙅
- Education: Like with every new technology shift, the UI is one of the last places you want to innovate. People who aren’t used to generative UI might need some time to learn how it functions and how to make it work for them, not against them. Make sure to hold those users’ hands so your churn will not explode. Also, a product that is too inconsistent might pose a UX problem.
- Security and QA headaches: Generating components on the fly sounds cool to us, but it doesn’t sound cool at all to your QA and security people. I’m sure we are going to see some serious bugs and security incidents with this new UI paradigm.
- Early stage: There isn’t much of an ecosystem built around Gen UI at the moment, so finding robust and complete libraries and frameworks can be tough. I would also be prepared to write a bunch of code myself to make things work in the real world.
- Costs: Processing and generating Gen UI responses will incur LLM costs, which may not make sense unit-economically for many applications.
- Slow: Generating AI responses and interactions takes longer than returning static ones; you need to take that into consideration when designing your Gen UI application.
- Accidents: Some Gen UI interactions might result in the wrong outcome - imagine accidentally deleting your account - so you’ll need to put mitigations in place to stop problematic outcomes.
Tools & players 🛠️
- AI-SDK - (By Vercel) The first SDK I saw for creating Gen UI applications - streaming components without bundling headaches.
- Horizon UI - A shadcn boilerplate with baked-in Gen UI AI components.
- v0 - (By Vercel) One of the best examples of Gen UI right now: it generates UI from a prompt, and all of the UI is streamed and runs as client-side components.
- Coframe - Automatically optimizes your website’s copy, images (and soon UI) with AI.
- ChatBotKit - A node SDK for generating AI chats, with the ability to utilize Gen UI responses.
Forecast 🧞
- Gen UI frameworks v2: Right now, we need to predefine the components that can be returned (level 2). I can see someone creating frameworks that compose smaller compatible components into novel, coherent ones based on the input, or even write the code for new components from scratch.
- Proliferation: As more and more products embrace Gen UI, I can see how users will demand better UX from tools that don't implement Gen UI.
- Enabler for AI Agents: GenUI adds another output agents can utilize - shaping a future where agents become ubiquitous.
- Voice-based Gen UI: Writing can be slower and more cumbersome for many users - imagine interacting with UIs via voice. This seems like an amazing use case - J.A.R.V.I.S. with a HUD that auto-generates itself.
- Gen UI companies: I can see some companies taking on the QA/security/accidents aspects of Gen UI. More ideas that pop to mind: Gen UI boilerplates? Gen UI libraries? A React & Python framework for AI?
- Gen UI experts: I’m sure we will start seeing UI/UX and frontend specializations in Gen UI soon. Courses, Boot camps and maybe even University degrees?
- “Gen API”: Bear with me as this one might be stupid. Imagine interacting with an API via natural text? No more REST API mistakes or type sharing shenanigans. Ask in natural language and get the response you want, no integration time wasted. Why are we limiting ourselves to UIs?
- Death of product experts: Over time, I can see high-touch products becoming obsolete if we can just write or say what we want to happen and the AI does it for us. This will level the playing field between experts and novices.
Extra ✨
Additional information that is related:
- Building Generative UI with Next.js (Jared Palmer).
- A great intro to the topic (YouTube).
- If you liked v0, there is an open source alternative called openv0.
- Gen UI from a designer perspective.
- A16z’s take on Gen UI.
Thanks 🙏
I wanted to thank Tom Granot (who edits every issue, and owns a TPM - Technical Product Marketing - consultancy for deep-tech startups).
EOF
(Where I tend to share unrelated things).
I’ve already referenced Inbox Zero in this post (no affiliate link) - a great productivity booster for anything email-related. You should totally check out the codebase too, as well as Elie’s “Learn from open source” YouTube channel.