Last week, I attended Config 2025, the user conference of design industry darling Figma. At the surprisingly huge event, thousands of designers, developers, and product thinkers gathered to explore the future of collaborative design. This year’s event was Figma’s largest yet, reflecting the scale of the company’s ambitions (including a recent S-1 filing that signals a desire to take the company public). In keeping with that drive, Figma announced four new products, a striking pace for a company that had launched only four products in its previous ten-year history. The headline announcement that most caught my attention was Figma Make, a prompt-to-code tool powered by Anthropic’s Claude 3.7 Sonnet model. Make lets anyone generate live, editable prototypes or apps from natural language or existing designs, blurring the line between design and development like never before.
Much of the crowd saw Figma Make as something akin to vibe-coding for experience design: text-to-prototype, if you will. What struck me, though, is how profound the implications of tools like Figma Make are for conversational AI. With Make, for example, teams can rapidly prototype, test, and iterate on conversational interfaces, including chatbots, voice assistants, and more, directly within Figma’s collaborative environment. By integrating Make with conversational AI tools, designers and product teams can rethink how dialogue flows are designed, how prototypes are validated with real users, and how cross-functional teams collaborate.
More importantly, this fusion could help brands completely rethink how conversational experiences look, feel, and behave, moving beyond bolt-on chatbots to unified, innovative experiences conceived from the very start of the design process. Who knows what conversation-enabled experiences will look like once overall experience design becomes that much easier?
From a practical perspective, here are three ways I think this combination could reshape the design and delivery of conversational AI in the near term:
- Conversational prototyping and instant testing
Figma Make’s AI-powered prototyping can be integrated directly with conversational AI tools so that designers can build, test, and refine chatbot or voice assistant flows in real time. Designers could prompt Figma Make to generate interactive prototypes of conversational interfaces, simulate the look and flow of dialogues, and instantly visualize how the AI responds in the designed environment, enabling rapid iteration and immediate feedback without writing code. By attaching Figma components and prompting the AI, multiple stakeholders, including designers, developers, and conversation designers, can iterate on dialogue logic, UI elements, and responses together, ensuring alignment and reducing handoff friction. (A minimal sketch of what such a dialogue flow might look like follows this list.)
- Rapid exploration of diverse conversation designs
Figma Make’s ability to generate multiple design variations from a single prompt empowers conversational AI teams to quickly explore a wide range of interface and interaction styles for chatbots or voice assistants. By attaching a design or describing a conversational scenario, designers can prompt Figma Make to produce several distinct layouts, flows, or visual themes in seconds, enabling fast comparison and selection of the most effective approaches. This accelerates the creative process, helps uncover novel solutions, and supports rapid iteration, all without the need for manual redesign or extensive coding. Conversation designers and traditional designers can become “deciders,” selecting the design ideas that best meet the moment.
- Dynamic publishing and live user testing
Because Figma Make can publish functional prototypes as live web apps, teams can pair them with conversational AI to put interactive chatbot demos or voice interfaces in front of real users for live testing. Feedback can be gathered directly within the published prototype, enabling continuous improvement and real-world validation of conversational experiences before full-scale development.
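To make the first idea concrete, here is a minimal, hypothetical sketch of the kind of dialogue-flow model a prompt-generated prototype might embed so that stakeholders can click through conversation branches and leave feedback. It does not reflect Figma Make’s actual output or any Figma API; the `DialogueNode`, `simulateTurn`, and `recordFeedback` names are illustrative assumptions.

```typescript
// Hypothetical sketch only: a tiny dialogue-flow model that a generated
// prototype could embed for click-through testing. Nothing here reflects
// Figma Make's actual API or output.

interface DialogueNode {
  id: string;
  botMessage: string;                   // what the assistant says at this step
  quickReplies: Record<string, string>; // user reply label -> next node id
}

// A small returns flow a conversation designer might want to test.
const returnsFlow: Record<string, DialogueNode> = {
  start: {
    id: "start",
    botMessage: "Hi! I can help with orders or returns. What do you need?",
    quickReplies: { "Track my order": "track", "Start a return": "return" },
  },
  track: {
    id: "track",
    botMessage: "Sure. What's your order number?",
    quickReplies: { "It's 12345": "done" },
  },
  return: {
    id: "return",
    botMessage: "No problem. Which item would you like to return?",
    quickReplies: { "The blue sweater": "done" },
  },
  done: {
    id: "done",
    botMessage: "Got it. Anything else I can help with?",
    quickReplies: {},
  },
};

// Walk one turn of the flow: given the current node and the reply the
// tester clicked, return the next node (or stay put on an unknown reply).
function simulateTurn(
  flow: Record<string, DialogueNode>,
  currentId: string,
  reply: string
): DialogueNode {
  const current = flow[currentId];
  const nextId = current.quickReplies[reply];
  return nextId ? flow[nextId] : current;
}

// Collect lightweight feedback from testers inside the published prototype.
interface TurnFeedback {
  nodeId: string;
  rating: "up" | "down";
  comment?: string;
}

const feedbackLog: TurnFeedback[] = [];

function recordFeedback(entry: TurnFeedback): void {
  feedbackLog.push(entry); // a real prototype might POST this to a webhook
}

// Example walkthrough a tester might take.
let node = returnsFlow["start"];
node = simulateTurn(returnsFlow, node.id, "Start a return");
console.log(node.botMessage); // "No problem. Which item would you like to return?"
recordFeedback({ nodeId: node.id, rating: "up", comment: "Clear next step" });
```

Keeping the flow in a small, declarative structure like this is also what would let conversation designers act as “deciders”: alternative flows or themes could be generated, compared, and swapped without touching the rendering layer.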
As tools like Figma Make redefine how teams collaborate, prototype, and ship digital experiences, they also signal a deeper shift in the future of work—where conversation becomes not just an interface, but a medium for creation. In this new era, hopefully tech vendors won’t just design for brands—they’ll design with them, leveraging real-time interaction, AI co-creation, and collaborative prototyping to continuously evolve experiences. The boundary between maker and user, designer and developer, conversation and code is rapidly dissolving—and in its place we expect to see a more dynamic, participatory model of customer engagement.