<rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>AI on Smashing Magazine — For Web Designers And Developers</title><link>https://www.smashingmagazine.com/category/ai/index.xml</link><description>Recent content in AI on Smashing Magazine — For Web Designers And Developers</description><generator>Hugo -- gohugo.io</generator><language>en-us</language><lastBuildDate>Thu, 25 Dec 2025 10:32:38 +0000</lastBuildDate><item><author>Paul Boag</author><title>Giving Users A Voice Through Virtual Personas</title><link>https://www.smashingmagazine.com/2025/12/giving-users-voice-virtual-personas/</link><pubDate>Tue, 23 Dec 2025 10:00:00 +0000</pubDate><guid>https://www.smashingmagazine.com/2025/12/giving-users-voice-virtual-personas/</guid><description>Turn scattered user research into AI-powered personas that give anyone consolidated multi-perspective feedback from a single question.</description><content:encoded><![CDATA[
          <html>
            <head>
              <meta charset="utf-8">
              <link rel="canonical" href="https://www.smashingmagazine.com/2025/12/giving-users-voice-virtual-personas/" />
              <title>Giving Users A Voice Through Virtual Personas</title>
            </head>
            <body>
              <article>
                <header>
                  <h1>Giving Users A Voice Through Virtual Personas</h1>
                  
                    
                    <address>Paul Boag</address>
                  
                  <time datetime="2025-12-23T10:00:00&#43;00:00" class="op-published">2025-12-23T10:00:00+00:00</time>
                  <time datetime="2025-12-23T10:00:00&#43;00:00" class="op-modified">2025-12-25T10:32:38+00:00</time>
                </header>
                
                

<p>In my <a href="https://www.smashingmagazine.com/2025/09/functional-personas-ai-lean-practical-workflow/">previous article</a>, I explored how AI can help us create functional personas more efficiently. We looked at building personas that focus on what users are trying to accomplish rather than demographic profiles that look good on posters but rarely change design decisions.</p>

<p>But creating personas is only half the battle. The bigger challenge is getting those insights into the hands of people who need them, at the moment they need them.</p>

<p>Every day, people across your organization make decisions that affect user experience. Product teams decide which features to prioritize. Marketing teams craft campaigns. Finance teams design invoicing processes. Customer support teams write response templates. All of these decisions shape how users experience your product or service.</p>

<p>And most of them happen without any input from actual users.</p>

<h2 id="the-problem-with-how-we-share-user-research">The Problem With How We Share User Research</h2>

<p>You do the research. You create the personas. You write the reports. You give the presentations. You even make fancy infographics. And then what happens?</p>

<p>The research sits in a shared drive somewhere, slowly gathering digital dust. The personas get referenced in kickoff meetings and then forgotten. The reports get skimmed once and never opened again.</p>

<p>When a product manager is deciding whether to add a new feature, they probably do not dig through last year’s research repository. When the finance team is redesigning the invoice email, they almost certainly do not consult the user personas. They make their best guess and move on.</p>

<p>This is not a criticism of those teams. They are busy. They have deadlines. And honestly, even if they wanted to consult the research, they probably would not know where to find it or how to interpret it for their specific question.</p>

<p>The knowledge stays locked inside the heads of the UX team, who cannot possibly be present for every decision being made across the organization.</p>


<h2 id="what-if-users-could-actually-speak">What If Users Could Actually Speak?</h2>

<blockquote>What if, instead of creating static documents that people need to find and interpret, we could give stakeholders a way to consult all of your user personas at once?</blockquote>

<p>Imagine a marketing manager working on a new campaign. Instead of trying to remember what the personas said about messaging preferences, they could simply ask: <em>“I’m thinking about leading with a discount offer in this email. What would our users think?”</em></p>

<p>And the AI, drawing on all your research data and personas, could respond with a consolidated view: how each persona would likely react, where they agree, where they differ, and a set of recommendations based on their collective perspectives. One question, synthesized insight across your entire user base.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/giving-users-voice-virtual-personas/1-user-research-personas.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="496"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/giving-users-voice-virtual-personas/1-user-research-personas.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/giving-users-voice-virtual-personas/1-user-research-personas.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/giving-users-voice-virtual-personas/1-user-research-personas.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/giving-users-voice-virtual-personas/1-user-research-personas.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/giving-users-voice-virtual-personas/1-user-research-personas.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/giving-users-voice-virtual-personas/1-user-research-personas.png"
			
			sizes="100vw"
			alt="Personas"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      You can ask personas how they would react to different scenarios, based on the research available. (<a href='https://files.smashing.media/articles/giving-users-voice-virtual-personas/1-user-research-personas.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>This is not science fiction. With AI, we can build exactly this kind of system. We can take all of that scattered research (the surveys, the interviews, the support tickets, the analytics, the personas themselves) and turn it into an <strong>interactive resource</strong> that anyone can query for multi-perspective feedback.</p>

<h2 id="building-the-user-research-repository">Building the User Research Repository</h2>

<p>The foundation of this approach is a centralized repository of everything you know about your users. Think of it as a single source of truth that AI can access and draw from.</p>

<p>If you have been doing user research for any length of time, you probably have more data than you realize. It is just scattered across different tools and formats:</p>

<ul>
<li>Survey results sitting in your survey platform,</li>
<li>Interview transcripts in Google Docs,</li>
<li>Customer support tickets in your helpdesk system,</li>
<li>Analytics data in various dashboards,</li>
<li>Social media mentions and reviews,</li>
<li>Old personas from previous projects,</li>
<li>Usability test recordings and notes.</li>
</ul>

<p>The first step is gathering all of this into one place. It does not need to be perfectly organized. AI is remarkably good at making sense of messy inputs.</p>

<p>If you are starting from scratch and do not have much existing research, you can use AI deep research tools to establish a baseline.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/giving-users-voice-virtual-personas/2-user-research-perplexity.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="599"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/giving-users-voice-virtual-personas/2-user-research-perplexity.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/giving-users-voice-virtual-personas/2-user-research-perplexity.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/giving-users-voice-virtual-personas/2-user-research-perplexity.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/giving-users-voice-virtual-personas/2-user-research-perplexity.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/giving-users-voice-virtual-personas/2-user-research-perplexity.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/giving-users-voice-virtual-personas/2-user-research-perplexity.png"
			
			sizes="100vw"
			alt="Research with perplexity"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Online deep research with a tool like Perplexity can be invaluable as a starting point for user research. (<a href='https://files.smashing.media/articles/giving-users-voice-virtual-personas/2-user-research-perplexity.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>These tools can scan the web for discussions about your product category, competitor reviews, and common questions people ask. This gives you something to work with while you build out your primary research.</p>

<h2 id="creating-interactive-personas">Creating Interactive Personas</h2>

<p>Once you have your repository, the next step is creating personas that the AI can consult on behalf of stakeholders. This builds directly on <a href="https://www.smashingmagazine.com/2025/09/functional-personas-ai-lean-practical-workflow/">the functional persona approach I outlined in my previous article</a>, with one key difference: these personas become <strong>lenses</strong> through which the AI analyzes questions, not just reference documents.</p>

<p>The process works like this:</p>

<ol>
<li>Feed your research repository to an AI tool.</li>
<li>Ask it to identify distinct user segments based on goals, tasks, and friction points.</li>
<li>Have it generate detailed personas for each segment.</li>
<li>Configure the AI to consult all personas when stakeholders ask questions, providing consolidated feedback.</li>
</ol>
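
<p>To make steps 1 through 3 concrete, here is a minimal sketch of that pipeline in Python. It assumes an OpenAI-style chat completions client and plain-text research files in a <code>research</code> folder; the model name and file layout are placeholders rather than recommendations.</p>

<pre><code class="language-python">
# A rough sketch, not production code: draft personas from a folder
# of research documents using an OpenAI-style client.
from pathlib import Path

from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Step 1: gather the research repository into a single text blob.
repository = "\n\n".join(p.read_text() for p in Path("research").glob("*.txt"))

# Steps 2 and 3: identify segments and draft a persona for each.
response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": (
                "Identify distinct user segments in this research based on "
                "goals, tasks, and friction points, then write a detailed "
                "persona for each segment."
            ),
        },
        {"role": "user", "content": repository},
    ],
)
print(response.choices[0].message.content)
</code></pre>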

<p>Here is where this approach diverges significantly from traditional personas. Because the AI is the primary consumer of these persona documents, they do not need to be scannable or fit on a single page. Traditional personas are constrained by human readability: you have to distill everything down to bullet points and key quotes that someone can absorb at a glance. But AI has no such limitation.</p>

<p>This means your personas can be considerably <strong>more detailed</strong>. You can include lengthy behavioral observations, contradictory data points, and nuanced context that would never survive the editing process for a traditional persona poster. The AI can hold all of this complexity and draw on it when answering questions.</p>

<p>You can also create <strong>different lenses or perspectives within each persona</strong>, tailored to specific business functions. Your “Weekend Warrior” persona might have a marketing lens (messaging preferences, channel habits, campaign responses), a product lens (feature priorities, usability patterns, upgrade triggers), and a support lens (common questions, frustration points, resolution preferences). When a marketing manager asks a question, the AI draws on the marketing-relevant information. When a product manager asks, it pulls from the product lens. Same persona, different depth depending on who is asking.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/giving-users-voice-virtual-personas/3-persona-lenses.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="568"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/giving-users-voice-virtual-personas/3-persona-lenses.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/giving-users-voice-virtual-personas/3-persona-lenses.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/giving-users-voice-virtual-personas/3-persona-lenses.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/giving-users-voice-virtual-personas/3-persona-lenses.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/giving-users-voice-virtual-personas/3-persona-lenses.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/giving-users-voice-virtual-personas/3-persona-lenses.png"
			
			sizes="100vw"
			alt="Persona Lenses"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Personas can have different lenses relevant to different functions within the business. (<a href='https://files.smashing.media/articles/giving-users-voice-virtual-personas/3-persona-lenses.png'>Large preview</a>)
    </figcaption>
  
</figure>
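
<p>What might a lens-based persona document look like in practice? Below is one hypothetical shape for it, expressed as a Python dictionary; every field name is illustrative rather than a fixed schema, and structured text would work just as well.</p>

<pre><code class="language-python">
# A hypothetical persona document with function-specific lenses, so the
# AI can draw on whichever lens matches the person asking.
weekend_warrior = {
    "name": "Weekend Warrior",
    "goals": ["finish small projects in a single weekend"],
    "lenses": {
        "marketing": {
            "messaging_preferences": "practical, outcome-focused copy",
            "channels": ["email", "YouTube tutorials"],
        },
        "product": {
            "feature_priorities": ["quick setup", "sensible defaults"],
            "upgrade_triggers": ["hitting a hard limit mid-project"],
        },
        "support": {
            "common_questions": ["How do I undo a bulk change?"],
            "resolution_preferences": "self-serve docs first",
        },
    },
}
</code></pre>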

<p>The personas should still include all the functional elements we discussed before: goals and tasks, questions and objections, pain points, touchpoints, and service gaps. But now these elements become the basis for how the AI evaluates questions from each persona’s perspective, synthesizing their views into actionable recommendations.</p>

<div class="partners__lead-place"></div>

<h2 id="implementation-options">Implementation Options</h2>

<p>You can set this up with varying levels of sophistication depending on your resources and needs.</p>

<h3 id="the-simple-approach">The Simple Approach</h3>

<p>Most AI platforms now offer project or workspace features that let you upload reference documents. In ChatGPT, these are called Projects. Claude has a similar feature. Copilot and Gemini call them Spaces or Gems.</p>

<p>To get started, create a dedicated project and upload your key research documents and personas. Then write clear instructions telling the AI to consult all personas when responding to questions. Something like:</p>

<blockquote>You are helping stakeholders understand our users. When asked questions, consult all of the user personas in this project and provide: (1) a brief summary of how each persona would likely respond, (2) an overview highlighting where they agree and where they differ, and (3) recommendations based on their collective perspectives. Draw on all the research documents to inform your analysis. If the research does not fully cover a topic, search social platforms like Reddit, Twitter, and relevant forums to see how people matching these personas discuss similar issues. If you are still unsure about something, say so honestly and suggest what additional research might help.</blockquote>

<p>This approach has some limitations. There are caps on how many files you can upload, so you might need to prioritize your most important research or consolidate your personas into a single comprehensive document.</p>
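
<p>If you hit those caps, or want the same setup available outside the chat interface, the instructions above can drive a small script instead. A minimal sketch, assuming an OpenAI-style client; <code>PERSONA_DOCS</code> stands in for your persona documents.</p>

<pre><code class="language-python">
# A minimal sketch: route a stakeholder question through all personas.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

INSTRUCTIONS = (
    "You are helping stakeholders understand our users. Consult all of "
    "the personas below and provide: (1) how each persona would likely "
    "respond, (2) where they agree and where they differ, and (3) "
    "recommendations based on their collective perspectives. If you are "
    "unsure, say so and suggest what additional research might help."
)

PERSONA_DOCS = ["...persona one...", "...persona two..."]  # placeholders

def ask_personas(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system",
             "content": INSTRUCTIONS + "\n\n" + "\n\n".join(PERSONA_DOCS)},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask_personas("What would our users think of leading with a discount offer?"))
</code></pre>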

<h3 id="the-more-sophisticated-approach">The More Sophisticated Approach</h3>

<p>For larger organizations, or for more sustained use, a tool like <a href="https://www.notion.com/">Notion</a> offers advantages because it can hold your entire <strong>research repository</strong> and has AI capabilities built in. You can create databases for different types of research, link them together, and then use the AI to query across everything.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/giving-users-voice-virtual-personas/4-notion-user-research.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="599"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/giving-users-voice-virtual-personas/4-notion-user-research.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/giving-users-voice-virtual-personas/4-notion-user-research.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/giving-users-voice-virtual-personas/4-notion-user-research.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/giving-users-voice-virtual-personas/4-notion-user-research.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/giving-users-voice-virtual-personas/4-notion-user-research.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/giving-users-voice-virtual-personas/4-notion-user-research.png"
			
			sizes="100vw"
			alt="Notion homepage"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Notion is a powerful tool for user research with built-in AI functionality that can refer to all your personas as well as your entire research repository. (<a href='https://files.smashing.media/articles/giving-users-voice-virtual-personas/4-notion-user-research.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>The benefit here is that the AI has access to much <strong>more context</strong>. When a stakeholder asks a question, it can draw on surveys, support tickets, interview transcripts, and analytics data all at once. This makes for richer, more nuanced responses.</p>
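
<p>And because the repository lives in ordinary Notion databases, it is also reachable from your own scripts. A rough sketch of pulling records through Notion&rsquo;s public API; the token, database ID, and the <code>Type</code> property below are placeholders you would swap for your own.</p>

<pre><code class="language-python">
# A rough sketch: fetch interview records from a Notion research database.
import requests

NOTION_TOKEN = "secret_your_integration_token"  # placeholder
DATABASE_ID = "your-database-id"                # placeholder

response = requests.post(
    f"https://api.notion.com/v1/databases/{DATABASE_ID}/query",
    headers={
        "Authorization": f"Bearer {NOTION_TOKEN}",
        "Notion-Version": "2022-06-28",
        "Content-Type": "application/json",
    },
    json={"filter": {"property": "Type", "select": {"equals": "Interview"}}},
)
response.raise_for_status()
for page in response.json()["results"]:
    print(page["id"], page["url"])
</code></pre>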

<h2 id="what-this-does-not-replace">What This Does Not Replace</h2>

<p>I should be clear about the limitations.</p>

<blockquote class="pull-quote">
  <p>
    <a class="pull-quote__link" aria-label="Share on Twitter" href="https://twitter.com/share?text=%0aVirtual%20personas%20are%20not%20a%20substitute%20for%20talking%20to%20real%20users.%20They%20are%20a%20way%20to%20make%20existing%20research%20more%20accessible%20and%20actionable.%0a&url=https://smashingmagazine.com%2f2025%2f12%2fgiving-users-voice-virtual-personas%2f">
      
Virtual personas are not a substitute for talking to real users. They are a way to make existing research more accessible and actionable.

    </a>
  </p>
  <div class="pull-quote__quotation">
    <div class="pull-quote__bg">
      <span class="pull-quote__symbol">“</span></div>
  </div>
</blockquote>

<p>There are several scenarios where you still need primary research:</p>

<ul>
<li>When launching something genuinely new that your existing research does not cover;</li>
<li>When you need to validate specific designs or prototypes;</li>
<li>When your repository data is getting stale;</li>
<li>When stakeholders need to hear directly from real humans to build empathy.</li>
</ul>

<p>In fact, you can configure the AI to recognize these situations. When someone asks a question that goes beyond what the research can answer, the AI can respond with something like: <em>“I do not have enough information to answer that confidently. This might be a good question for a quick user interview or survey.”</em></p>

<p>And when you do conduct new research, that data feeds back into the repository. The personas evolve over time as your understanding deepens. This is much better than the traditional approach, where personas get created once and then slowly drift out of date.</p>

<div class="partners__lead-place"></div>

<h2 id="the-organizational-shift">The Organizational Shift</h2>

<p>If this approach catches on in your organization, something interesting happens.</p>

<blockquote class="pull-quote">
  <p>
    <a class="pull-quote__link" aria-label="Share on Twitter" href="https://twitter.com/share?text=%0aThe%20UX%20team%e2%80%99s%20role%20shifts%20from%20being%20the%20gatekeepers%20of%20user%20knowledge%20to%20being%20the%20curators%20and%20maintainers%20of%20the%20repository.%0a&url=https://smashingmagazine.com%2f2025%2f12%2fgiving-users-voice-virtual-personas%2f">
      
The UX team’s role shifts from being the gatekeepers of user knowledge to being the curators and maintainers of the repository.

    </a>
  </p>
  <div class="pull-quote__quotation">
    <div class="pull-quote__bg">
      <span class="pull-quote__symbol">“</span></div>
  </div>
</blockquote>

<p>Instead of spending time creating reports that may or may not get read, you spend time ensuring the repository stays current and that the AI is configured to give helpful responses.</p>

<p>Research communication changes from push (presentations, reports, emails) to pull (stakeholders asking questions when they need answers). <strong>User-centered thinking</strong> becomes distributed across the organization rather than concentrated in one team.</p>

<p>This does not make UX researchers less valuable. If anything, it makes them more valuable because their work now has a wider reach and greater impact. But it does change the nature of the work.</p>

<h2 id="getting-started">Getting Started</h2>

<p>If you want to try this approach, start small. If you need a primer on functional personas before diving in, I have written a <a href="https://boagworld.com/usability/personas/">detailed guide to creating them</a>. Pick one project or team and set up a simple implementation using ChatGPT Projects or a similar tool. Gather whatever research you have (even if it feels incomplete), create one or two personas, and see how stakeholders respond.</p>

<p>Pay attention to what questions they ask. These will tell you where your research has gaps and what additional data would be most valuable.</p>

<p>As you refine the approach, you can expand to more teams and more sophisticated tooling. But the core principle stays the same: <strong>take all that scattered user knowledge and give it a voice that anyone in your organization can hear.</strong></p>

<p>In my previous article, I argued that we should move from demographic personas to functional personas that focus on what users are trying to do. Now I am suggesting we take the next step: from static personas to interactive ones that can actually participate in the conversations where decisions get made.</p>

<p>Because every day, across your organization, people are making decisions that affect your users. And your users deserve a seat at the table, even if it is a virtual one.</p>

<h3 id="further-reading-on-smashingmag">Further Reading On SmashingMag</h3>

<ul>
<li>“<a href="https://www.smashingmagazine.com/2014/08/a-closer-look-at-personas-part-1/">A Closer Look At Personas: What They Are And How They Work | 1</a>”, Shlomo Goltz</li>
<li>“<a href="https://www.smashingmagazine.com/2018/04/design-process-data-based-personas/">How To Improve Your Design Process With Data-Based Personas</a>”, Tim Noetzel</li>
<li>“<a href="https://www.smashingmagazine.com/2025/10/how-make-ux-research-hard-to-ignore/">How To Make Your UX Research Hard To Ignore</a>”, Vitaly Friedman</li>
<li>“<a href="https://www.smashingmagazine.com/2023/01/build-strong-customer-relationships-user-research/">How To Build Strong Customer Relationships For User Research</a>”, Renaissance Rachel</li>
</ul>

<div class="signature">
  <img src="https://www.smashingmagazine.com/images/logo/logo--red.png" alt="Smashing Editorial" width="35" height="46" loading="lazy" decoding="async" />
  <span>(yk)</span>
</div>


              </article>
            </body>
          </html>
        ]]></content:encoded></item><item><author>Paul Boag</author><title>How UX Professionals Can Lead AI Strategy</title><link>https://www.smashingmagazine.com/2025/12/how-ux-professionals-can-lead-ai-strategy/</link><pubDate>Mon, 08 Dec 2025 08:00:00 +0000</pubDate><guid>https://www.smashingmagazine.com/2025/12/how-ux-professionals-can-lead-ai-strategy/</guid><description>Lead your organization’s AI strategy before someone else defines it for you. A practical framework for UX professionals to shape AI implementation.</description><content:encoded><![CDATA[
          <html>
            <head>
              <meta charset="utf-8">
              <link rel="canonical" href="https://www.smashingmagazine.com/2025/12/how-ux-professionals-can-lead-ai-strategy/" />
              <title>How UX Professionals Can Lead AI Strategy</title>
            </head>
            <body>
              <article>
                <header>
                  <h1>How UX Professionals Can Lead AI Strategy</h1>
                  
                    
                    <address>Paul Boag</address>
                  
                  <time datetime="2025-12-08T08:00:00&#43;00:00" class="op-published">2025-12-08T08:00:00+00:00</time>
                  <time datetime="2025-12-08T08:00:00&#43;00:00" class="op-modified">2025-12-25T10:32:38+00:00</time>
                </header>
                
                

<p>Your senior management is excited about AI. They’ve read the articles, attended the webinars, and seen the demos. They’re convinced that AI will transform your organization, boost productivity, and give you a competitive edge.</p>

<p>Meanwhile, you’re sitting in your UX role wondering what this means for your team, your workflow, and your users. You might even be worried about your job security.</p>

<p>The problem is that the conversation about how AI gets implemented is happening right now, and if you’re not part of it, <strong>someone else will decide how it affects your work</strong>. That someone probably doesn’t understand user experience, research practices, or the subtle ways poor implementation can damage the very outcomes management hopes to achieve.</p>

<p>You have a choice. You can wait for directives to come down from above, or you can take control of the conversation and lead the AI strategy for your practice.</p>

<h2 id="why-ux-professionals-must-own-the-ai-conversation">Why UX Professionals Must Own the AI Conversation</h2>

<p>Management sees AI as efficiency gains, cost savings, competitive advantage, and innovation all wrapped up in one buzzword-friendly package. They’re not wrong to be excited. The technology is genuinely impressive and can deliver real value.</p>

<p><strong>But without UX input, AI implementations often fail users in predictable ways:</strong></p>

<ul>
<li>They automate tasks without understanding the judgment calls those tasks require.</li>
<li>They optimize for speed while destroying the quality that made your work valuable.</li>
</ul>

<p>Your expertise positions you perfectly to guide implementation. You understand users, workflows, quality standards, and the gap between what looks impressive in a demo and what actually works in practice.</p>


<h3 id="use-ai-momentum-to-advance-your-priorities">Use AI Momentum to Advance Your Priorities</h3>

<p>Management’s enthusiasm for AI creates an opportunity to advance priorities you’ve been fighting for unsuccessfully. When management is willing to invest in AI, you can connect those long-standing needs to the AI initiative. Position user research as essential for training AI systems on real user needs. Frame usability testing as the validation method that ensures AI-generated solutions actually work.</p>

<p>How AI gets implemented will shape your team’s roles, your users’ experiences, and your organization’s capability to deliver quality digital products.</p>

<h2 id="your-role-isn-t-disappearing-it-s-evolving">Your Role Isn’t Disappearing (It’s Evolving)</h2>

<p>Yes, AI will automate some of the tasks you currently do. But someone needs to decide which tasks get automated, how they get automated, what guardrails to put in place, and how automated processes fit around real humans doing complex work.</p>

<p>That someone should be <em>you</em>.</p>

<p>Think about what you already do. When you conduct user research, AI might help you transcribe interviews or identify themes. But you’re the one who knows which participant hesitated before answering, which feedback contradicts what you observed in their behavior, and which insights matter most for your specific product and users.</p>

<p>When you design interfaces, AI might generate layout variations or suggest components from your design system. But you’re the one who understands the constraints of your technical platform, the political realities of getting designs approved, and the edge cases that will break a clever solution.</p>

<p><strong>Your future value comes from the work you’re already doing:</strong></p>

<ul>
<li><strong>Seeing the full picture.</strong><br />
You understand how this feature connects to that workflow, how this user segment differs from that one, and why the technically correct solution won’t work in your organization’s reality.</li>
<li><strong>Making judgment calls.</strong><br />
You decide when to follow the design system and when to break it, when user feedback reflects a real problem versus a feature request from one vocal user, and when to push back on stakeholders versus find a compromise.</li>
<li><strong>Connecting the dots.</strong><br />
You translate between technical constraints and user needs, between business goals and design principles, between what stakeholders ask for and what will actually solve their problem.</li>
</ul>

<p>AI will keep getting better at individual tasks. But you’re the person who decides which solution actually works for your specific context. The people who will struggle are those doing simple, repeatable work without understanding why. Your value is in understanding context, making judgment calls, and connecting solutions to real problems.</p>

<h2 id="step-1-understand-management-s-ai-motivations">Step 1: Understand Management’s AI Motivations</h2>

<p>Before you can lead the conversation, you need to understand what’s driving it. Management is responding to real pressures: cost reduction, competitive pressure, productivity gains, and board expectations.</p>

<p><strong>Speak their language.</strong><br />
When you talk to management about AI, frame everything in terms of ROI, risk mitigation, and competitive advantage. <em>“This approach will protect our quality standards”</em> is less compelling than <em>“This approach reduces the risk of damaging our conversion rate while we test AI capabilities.”</em></p>

<p><strong>Separate hype from reality.</strong><br />
Take time to research what AI capabilities actually exist versus what’s hype. Read case studies, try tools yourself, and talk to peers about what’s actually working.</p>

<p><strong>Identify real pain points.</strong><br />
Look for problems AI might legitimately address in your organization. Maybe your team spends hours formatting research findings, or accessibility testing creates bottlenecks. These are the problems worth solving.</p>

<div class="partners__lead-place"></div>

<h2 id="step-2-audit-your-current-state-and-opportunities">Step 2: Audit Your Current State and Opportunities</h2>

<p>Map your team’s work. Where does time actually go? Look at the past quarter and categorize how your team spent their hours.</p>

<p><strong>Identify high-volume, repeatable tasks versus high-judgment work.</strong><br />
Repeatable tasks are candidates for automation. High-judgment work is where you add irreplaceable value.</p>

<p><strong>Also, identify what you’ve wanted to do but couldn’t get approved.</strong><br />
This is your opportunity list. Maybe you’ve wanted quarterly usability tests, but only get budget annually. Write these down separately. You’ll connect them to your AI strategy in the next step.</p>

<p>Spot opportunities where AI could genuinely help:</p>

<ul>
<li><strong>Research synthesis:</strong><br />
AI can help organize and categorize findings.</li>
<li><strong>Analyzing user behavior data:</strong><br />
AI can process analytics and session recordings to surface patterns you might miss.</li>
<li><strong>Rapid prototyping:</strong><br />
AI can quickly generate testable prototypes, speeding up your test cycles.</li>
</ul>

<h2 id="step-3-define-ai-principles-for-your-ux-practice">Step 3: Define AI Principles for Your UX Practice</h2>

<p>Before you start forming your strategy, establish principles that will guide every decision.</p>

<p><strong>Set non-negotiables.</strong><br />
User privacy, accessibility, and human oversight of significant decisions. Write these down and get agreement from leadership before you pilot anything.</p>

<p><strong>Define criteria for AI use.</strong><br />
AI is good at pattern recognition, summarization, and generating variations. AI is poor at understanding context, making ethical judgments, and knowing when rules should be broken.</p>

<p><strong>Define success metrics beyond efficiency.</strong><br />
Yes, you want to save time. But you also need to measure quality, user satisfaction, and team capability. Build a balanced scorecard that captures what actually matters.</p>

<p><strong>Create guardrails.</strong><br />
Maybe every AI-generated interface needs human review before it ships. These guardrails prevent the obvious disasters and give you space to learn safely.</p>

<h2 id="step-4-build-your-ai-in-ux-strategy">Step 4: Build Your AI-in-UX Strategy</h2>

<p>Now you’re ready to build the actual strategy you’ll pitch to leadership. <strong>Start small</strong> with pilot projects that have a clear scope and evaluation criteria.</p>

<p><strong>Connect to business outcomes management cares about.</strong><br />
Don’t pitch <em>“using AI for research synthesis.”</em> Pitch <em>“reducing time from research to insights by 40%, enabling faster product decisions.”</em></p>

<p><strong>Piggyback your existing priorities on AI momentum.</strong><br />
Remember that opportunity list from Step 2? Now you connect those long-standing needs to your AI strategy. If you’ve wanted more frequent usability testing, explain that AI implementations need continuous validation to catch problems before they scale. AI implementations genuinely benefit from good research practices. You’re simply using management’s enthusiasm for AI as the vehicle to finally get resources for practices that should have been funded all along.</p>

<p><strong>Define roles clearly.</strong><br />
Where do humans lead? Where does AI assist? Where won’t you automate? Management needs to understand that some work requires human judgment and should never be fully automated.</p>

<p><strong>Plan for capability building.</strong><br />
Your team will need training and new skills. Budget time and resources for this.</p>

<p><strong>Address risks honestly.</strong><br />
AI could generate biased recommendations, miss important context, or produce work that looks good but doesn’t actually function. For each risk, explain how you’ll detect it and what you’ll do to mitigate it.</p>

<h2 id="step-5-pitch-the-strategy-to-leadership">Step 5: Pitch the Strategy to Leadership</h2>

<p>Frame your strategy as de-risking management’s AI ambitions, not blocking them. You’re showing them how to implement AI successfully while avoiding the obvious pitfalls.</p>

<p><strong>Lead with outcomes and ROI they care about.</strong><br />
Put the business case up front.</p>

<p><strong>Bundle your wish list into the AI strategy.</strong><br />
When you present your strategy, include those capabilities you’ve wanted but couldn’t get approved before. Don’t present them as separate requests. Integrate them as essential components. <em>“To validate AI-generated designs, we’ll need to increase our testing frequency from annual to quarterly”</em> sounds much more reasonable than <em>“Can we please do more testing?”</em> You’re explaining what’s required for their AI investment to succeed.</p>

<p><strong>Show quick wins alongside a longer-term vision.</strong><br />
Identify one or two pilots that can show value within 30-60 days. Then show them how those pilots build toward bigger changes over the next year.</p>

<p><strong>Ask for what you need.</strong><br />
Be specific. You need a budget for tools, time for pilots, access to data, and support for team training.</p>

<div class="partners__lead-place"></div>

<h2 id="step-6-implement-and-demonstrate-value">Step 6: Implement and Demonstrate Value</h2>

<p>Run your pilots with clear before-and-after metrics. Measure everything: time saved, quality maintained, user satisfaction, team confidence.</p>

<p><strong>Document wins and learning.</strong><br />
Failures are useful too. If a pilot doesn’t work out, document why and what you learned.</p>

<p><strong>Share progress in management’s language.</strong><br />
Monthly updates should focus on business outcomes, not technical details. <em>“We’ve reduced research synthesis time by 35% while maintaining quality scores”</em> is the right level of detail.</p>

<p><strong>Build internal advocates by solving real problems.</strong><br />
When your AI pilots make someone’s job easier, you create advocates who will support broader adoption.</p>

<p><strong>Iterate based on what works in your specific context.</strong><br />
Not every AI application will fit your organization. Pay attention to what’s actually working and double down on that.</p>

<h2 id="taking-initiative-beats-waiting">Taking Initiative Beats Waiting</h2>

<p>AI adoption is happening. The question isn’t whether your organization will use AI, but whether you’ll shape how it gets implemented.</p>

<p>Your UX expertise is exactly what’s needed to implement AI successfully. You understand users, quality, and the gap between impressive demos and useful reality.</p>

<p><strong>Take one practical first step this week.</strong><br />
Schedule 30 minutes to map one AI opportunity in your practice. Pick one area where AI might help, think through how you’d pilot it safely, and sketch out what success would look like.</p>

<p>Then start the conversation with your manager. You might be surprised how receptive they are to someone stepping up to lead this.</p>

<blockquote class="pull-quote">
  <p>
    <a class="pull-quote__link" aria-label="Share on Twitter" href="https://twitter.com/share?text=%0aYou%20know%20how%20to%20understand%20user%20needs,%20test%20solutions,%20measure%20outcomes,%20and%20iterate%20based%20on%20evidence.%20Those%20skills%20don%e2%80%99t%20change%20just%20because%20AI%20is%20involved.%20You%e2%80%99re%20applying%20your%20existing%20expertise%20to%20a%20new%20tool.%0a&url=https://smashingmagazine.com%2f2025%2f12%2fhow-ux-professionals-can-lead-ai-strategy%2f">
      
You know how to understand user needs, test solutions, measure outcomes, and iterate based on evidence. Those skills don’t change just because AI is involved. You’re applying your existing expertise to a new tool.

    </a>
  </p>
  <div class="pull-quote__quotation">
    <div class="pull-quote__bg">
      <span class="pull-quote__symbol">“</span></div>
  </div>
</blockquote>

<p>Your role isn’t disappearing. It’s evolving into something more strategic, more valuable, and more secure. But only if you take the initiative to shape that evolution yourself.</p>

<h3 id="further-reading-on-smashingmag">Further Reading On SmashingMag</h3>

<ul>
<li>“<a href="https://www.smashingmagazine.com/2025/08/designing-with-ai-practical-techniques-product-design/">Designing With AI, Not Around It: Practical Advanced Techniques For Product Design Use Cases</a>”, Ilia Kanazin &amp; Marina Chernyshova</li>
<li>“<a href="https://www.smashingmagazine.com/2025/08/beyond-hype-what-ai-can-do-product-design/">Beyond The Hype: What AI Can Really Do For Product Design</a>”, Nikita Samutin</li>
<li>“<a href="https://www.smashingmagazine.com/2025/08/week-in-life-ai-augmented-designer/">A Week In The Life Of An AI-Augmented Designer</a>”, Lyndon Cerejo</li>
<li>“<a href="https://www.smashingmagazine.com/2025/09/functional-personas-ai-lean-practical-workflow/">Functional Personas With AI: A Lean, Practical Workflow</a>”, Paul Boag</li>
</ul>

<div class="signature">
  <img src="https://www.smashingmagazine.com/images/logo/logo--red.png" alt="Smashing Editorial" width="35" height="46" loading="lazy" decoding="async" />
  <span>(yk, il)</span>
</div>


              </article>
            </body>
          </html>
        ]]></content:encoded></item><item><author>Victor Yocco</author><title>Beyond The Black Box: Practical XAI For UX Practitioners</title><link>https://www.smashingmagazine.com/2025/12/beyond-black-box-practical-xai-ux-practitioners/</link><pubDate>Fri, 05 Dec 2025 15:00:00 +0000</pubDate><guid>https://www.smashingmagazine.com/2025/12/beyond-black-box-practical-xai-ux-practitioners/</guid><description>Explainable AI isn’t just a challenge for data scientists. It’s also a design challenge and a core pillar of trustworthy, effective AI products. Victor Yocco offers practical guidance and design patterns for building explainability into real products.</description><content:encoded><![CDATA[
          <html>
            <head>
              <meta charset="utf-8">
              <link rel="canonical" href="https://www.smashingmagazine.com/2025/12/beyond-black-box-practical-xai-ux-practitioners/" />
              <title>Beyond The Black Box: Practical XAI For UX Practitioners</title>
            </head>
            <body>
              <article>
                <header>
                  <h1>Beyond The Black Box: Practical XAI For UX Practitioners</h1>
                  
                    
                    <address>Victor Yocco</address>
                  
                  <time datetime="2025-12-05T15:00:00&#43;00:00" class="op-published">2025-12-05T15:00:00+00:00</time>
                  <time datetime="2025-12-05T15:00:00&#43;00:00" class="op-modified">2025-12-25T10:32:38+00:00</time>
                </header>
                
                

<p>In my <a href="https://www.smashingmagazine.com/2025/09/psychology-trust-ai-guide-measuring-designing-user-confidence/">last piece</a>, we established a foundational truth: for users to adopt and rely on AI, they must <strong>trust</strong> it. We talked about trust being a multifaceted construct, built on perceptions of an AI’s <strong>Ability</strong>, <strong>Benevolence</strong>, <strong>Integrity</strong>, and <strong>Predictability</strong>. But what happens when an AI, in its silent, algorithmic wisdom, makes a decision that leaves a user confused, frustrated, or even hurt? A mortgage application is denied, a favorite song is suddenly absent from a playlist, and a qualified resume is rejected before a human ever sees it. In these moments, ability and predictability are shattered, and benevolence feels a world away.</p>

<p>Our conversation now must evolve from the <em>why</em> of trust to the <em>how</em> of transparency. The field of <strong>Explainable AI (XAI)</strong>, which focuses on developing methods to make AI outputs understandable to humans, has emerged to address this, but it’s often framed as a purely technical challenge for data scientists. I argue it’s also a critical design challenge for products relying on AI. It’s our job as UX professionals to bridge the gap between algorithmic decision-making and human understanding.</p>

<p>This article provides practical, actionable guidance on how to research and design for explainability. We’ll move beyond the buzzwords and into the mockups, translating complex XAI concepts into concrete design patterns you can start using today.</p>

<h2 id="de-mystifying-xai-core-concepts-for-ux-practitioners">De-mystifying XAI: Core Concepts For UX Practitioners</h2>

<p>XAI is about answering the user’s question: “<strong>Why?</strong>” Why was I shown this ad? Why is this movie recommended to me? Why was my request denied? Think of it as the AI showing its work on a math problem. Without it, you just have an answer, and you’re forced to take it on faith. By showing the steps, you build comprehension and trust. You also allow the work to be double-checked and verified by the very humans it impacts.</p>

<h3 id="feature-importance-and-counterfactuals">Feature Importance And Counterfactuals</h3>

<p>There are a number of techniques we can use to clarify or explain what is happening with AI. While methods range from providing the entire logic of a decision tree to generating natural language summaries of an output, two of the most practical and impactful types of information UX practitioners can introduce into an experience are <strong>feature importance</strong> (Figure 1) and <strong>counterfactuals</strong>. These are often the most straightforward for users to understand and the most actionable for designers to implement.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/beyond-black-box-practical-xai-ux-practitioners/1-example-feature-importance.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="478"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/beyond-black-box-practical-xai-ux-practitioners/1-example-feature-importance.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/beyond-black-box-practical-xai-ux-practitioners/1-example-feature-importance.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/beyond-black-box-practical-xai-ux-practitioners/1-example-feature-importance.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/beyond-black-box-practical-xai-ux-practitioners/1-example-feature-importance.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/beyond-black-box-practical-xai-ux-practitioners/1-example-feature-importance.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/beyond-black-box-practical-xai-ux-practitioners/1-example-feature-importance.png"
			
			sizes="100vw"
			alt="A fictional example of feature importance"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Figure 1: A fictional example of feature importance where a bank system shows the importance of various features that lead to a model’s decision. Image generated using Google Gemini. (<a href='https://files.smashing.media/articles/beyond-black-box-practical-xai-ux-practitioners/1-example-feature-importance.png'>Large preview</a>)
    </figcaption>
  
</figure>

<h4 id="feature-importance">Feature Importance</h4>

<p>This explainability method answers, “<strong>What were the most important factors the AI considered?</strong>” It’s about identifying the top 2-3 variables that had the biggest impact on the outcome. It’s the headline, not the whole story.</p>

<blockquote><strong>Example</strong>: Imagine an AI that predicts whether a customer will churn (cancel their service). Feature importance might reveal that “number of support calls in the last month” and “recent price increases” were the two most important factors in determining if a customer was likely to churn.</blockquote>
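
<p>To make this concrete, here is a toy version of the churn example using scikit-learn; the data and feature names are synthetic, invented purely for illustration.</p>

<pre><code class="language-python">
# A toy illustration of global feature importance on made-up churn data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
features = ["support_calls_last_month", "recent_price_increase", "tenure_years"]
X = rng.random((500, 3))
y = (X[:, 0] + X[:, 1] > 1.1).astype(int)  # churn driven by the first two features

model = RandomForestClassifier(random_state=0).fit(X, y)
for name, score in sorted(zip(features, model.feature_importances_),
                          key=lambda pair: -pair[1]):
    print(f"{name}: {score:.2f}")  # the two drivers should rank highest
</code></pre>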

<h4 id="counterfactuals">Counterfactuals</h4>

<p>This powerful method answers, “<strong>What would I need to change to get a different outcome?</strong>” This is crucial because it gives users a sense of agency. It transforms a frustrating “no” into an actionable “not yet.”</p>

<blockquote><strong>Example</strong>: Imagine a loan application system that uses AI. A user is denied a loan. Instead of just seeing “Application Denied,” a counterfactual explanation would add, “If your credit score were 50 points higher, or if your debt-to-income ratio were 10% lower, your loan would have been approved.” This gives the user clear, actionable steps they can take to potentially get a loan in the future.</blockquote>
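
<p>Under the hood, a counterfactual engine searches for the smallest change that flips the model&rsquo;s decision. Here is a deliberately naive sketch of that idea with a stand-in model; dedicated libraries (for example, <code>dice-ml</code>) do this far more rigorously.</p>

<pre><code class="language-python">
# A deliberately naive counterfactual search: nudge one feature upward
# until a toy stand-in model flips its decision.

def loan_model(applicant):
    # Toy stand-in: approve anyone with a credit score of 670 or more.
    return "approved" if applicant["credit_score"] >= 670 else "denied"

def find_counterfactual(applicant, feature, step, ceiling):
    candidate = dict(applicant)
    while candidate[feature] + step &lt;= ceiling:
        candidate[feature] += step
        if loan_model(candidate) == "approved":
            return candidate  # smallest tested change that flips the outcome
    return None  # no counterfactual found within the allowed range

print(find_counterfactual({"credit_score": 620}, "credit_score", 10, 850))
# {'credit_score': 670}
</code></pre>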

<h3 id="using-model-data-to-enhance-the-explanation">Using Model Data To Enhance The Explanation</h3>

<p>Although technical specifics are often handled by data scientists, it&rsquo;s helpful for UX practitioners to know that tools like <a href="https://www.geeksforgeeks.org/artificial-intelligence/introduction-to-explainable-aixai-using-lime/">LIME</a> (Local Interpretable Model-agnostic Explanations), which explains individual predictions by approximating the model locally, and <a href="https://shap.readthedocs.io/en/latest/example_notebooks/overviews/An%20introduction%20to%20explainable%20AI%20with%20Shapley%20values.html">SHAP</a> (SHapley Additive exPlanations), which uses a game-theory approach to explain the output of any machine learning model, are commonly used to extract these “why” insights from complex models. These libraries essentially break down an AI’s decision to show which inputs were most influential for a given outcome.</p>

<p>When done properly, the data underlying an AI tool’s decision can be used to tell a powerful story. Let’s walk through feature importance and counterfactuals and show how the data science behind the decision can be utilized to enhance the user’s experience.</p>

<p>Let’s start with feature importance enhanced by <strong>local explanation (e.g., LIME)</strong> data: this approach answers, “<strong>Why did the AI make <em>this specific</em> recommendation for me, right now?</strong>” Instead of a general explanation of how the model works, it provides a focused reason for a single, specific instance. It’s personal and contextual.</p>

<blockquote><strong>Example</strong>: Imagine an AI-powered music recommendation system like Spotify. A local explanation would answer, “Why did the system recommend <strong>this specific</strong> song by Adele to <strong>you</strong> right now?” The explanation might be: “Because you recently listened to several other emotional ballads and songs by female vocalists.”</blockquote>
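
<p>Tabular data is easier to demonstrate than a music catalog, so here is what producing a LIME local explanation looks like against the churn model sketched earlier (it assumes the <code>lime</code> package is installed):</p>

<pre><code class="language-python">
# Continues the churn sketch above: explain one individual prediction.
from lime.lime_tabular import LimeTabularExplainer

explainer = LimeTabularExplainer(
    X,
    feature_names=features,
    class_names=["stays", "churns"],
    mode="classification",
)
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=2)
print(explanation.as_list())  # the top local reasons, with signed weights
</code></pre>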

<p>Finally, let’s cover adding <strong>value-based explanation data (e.g., SHAP, SHapley Additive exPlanations)</strong> to the explanation of a decision: this is a more nuanced version of feature importance that answers, “<strong>How did each factor push the decision one way or the other?</strong>” It helps visualize <em>what</em> mattered, and whether its influence was positive or negative.</p>

<blockquote><strong>Example</strong>: Imagine a bank uses an AI model to decide whether to approve a loan application.</blockquote>

<p><strong>Feature Importance</strong>: The model output might show that the applicant’s credit score, income, and debt-to-income ratio were the most important factors in its decision. This answers <em>what</em> mattered.</p>

<p><strong>Feature Importance with Value-Based Explanations (SHAP)</strong>: SHAP values would take feature importance further based on elements of the model.</p>

<ul>
<li>For an approved loan, SHAP might show that a high credit score significantly <em>pushed</em> the decision towards approval (positive influence), while a slightly higher-than-average debt-to-income ratio <em>pulled</em> it slightly away (negative influence), but not enough to deny the loan.</li>
<li>For a denied loan, SHAP could reveal that a low income and a high number of recent credit inquiries <em>strongly pushed</em> the decision towards denial, even if the credit score was decent.</li>
</ul>

<p>This helps the loan officer explain to the applicant not just <em>what</em> was considered, but <em>how each factor contributed</em> to the final “yes” or “no” decision.</p>
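
<p>In code, those pushes surface as signed numbers per feature. A brief sketch, again reusing the churn model above for brevity; it assumes the <code>shap</code> package, and the exact return shape varies between shap versions and model types.</p>

<pre><code class="language-python">
# Continues the churn sketch: signed per-feature contributions for one
# prediction. Positive values push toward the predicted class.
import shap

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])
print(shap_values)  # exact shape varies by shap version and model type
</code></pre>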

<p>It’s crucial to recognize that the ability to provide good explanations often starts much earlier in the development cycle. Data scientists and engineers play a pivotal role by intentionally structuring models and data pipelines in ways that inherently support explainability, rather than trying to bolt it on as an afterthought.</p>

<p>Research and design teams can foster this by initiating early conversations with data scientists and engineers about user needs for understanding, contributing to the development of explainability metrics, and collaboratively prototyping explanations to ensure they are both accurate and user-friendly.</p>

<h2 id="xai-and-ethical-ai-unpacking-bias-and-responsibility">XAI And Ethical AI: Unpacking Bias And Responsibility</h2>

<p>Beyond building trust, XAI plays a critical role in addressing the profound <strong>ethical implications of AI</strong>, particularly concerning algorithmic bias. Explainability techniques, such as analyzing SHAP values, can reveal if a model’s decisions are disproportionately influenced by sensitive attributes like race, gender, or socioeconomic status, even if these factors were not explicitly used as direct inputs.</p>

<p>For instance, if a loan approval model consistently assigns negative SHAP values to applicants from a certain demographic, it signals a potential bias that needs investigation, empowering teams to surface and mitigate such unfair outcomes.</p>
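
<p>What might that investigation look like? One simple probe is to compare the average signed contribution across groups. The sketch below uses synthetic numbers in place of real SHAP totals; a persistent gap is a prompt for deeper analysis, not proof of bias.</p>

<pre><code class="language-python">
# A synthetic illustration of a simple bias probe: does the model's
# average signed contribution toward approval differ by group?
import numpy as np

rng = np.random.default_rng(1)
contribution = rng.normal(0.0, 1.0, 200)   # stand-in per-applicant SHAP totals
group = rng.choice(["group_a", "group_b"], 200)

for g in ("group_a", "group_b"):
    mean = contribution[group == g].mean()
    print(f"{g}: average push toward approval = {mean:+.3f}")
# A persistent gap between groups is a signal to investigate further.
</code></pre>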

<p>The power of XAI also comes with the potential for “<strong>explainability washing</strong>.” Just as “greenwashing” misleads consumers about environmental practices, explainability washing can occur when explanations are designed to obscure, rather than illuminate, problematic algorithmic behavior or inherent biases. This could manifest as overly simplistic explanations that omit critical influencing factors, or explanations that strategically frame results to appear more neutral or fair than they truly are. It underscores the ethical responsibility of UX practitioners to design explanations that are genuinely transparent and verifiable.</p>

<p>UX professionals, in collaboration with data scientists and ethicists, hold a crucial responsibility: communicating not only the <em>why</em> of a decision but also the limitations and potential biases of the underlying AI model. This involves setting realistic user expectations about AI accuracy, identifying where the model might be less reliable, and providing clear channels for recourse or feedback when users perceive unfair or incorrect outcomes. Proactively addressing these ethical dimensions will allow us to build AI systems that are truly just and trustworthy.</p>

<h2 id="from-methods-to-mockups-practical-xai-design-patterns">From Methods To Mockups: Practical XAI Design Patterns</h2>

<p>Knowing the concepts is one thing; designing them is another. Here’s how we can translate these XAI methods into intuitive design patterns.</p>

<h3 id="pattern-1-the-because-statement-for-feature-importance">Pattern 1: The &ldquo;Because&rdquo; Statement (for Feature Importance)</h3>

<p>This is the simplest and often most effective pattern. It’s a direct, plain-language statement that surfaces the primary reason for an AI’s action.</p>

<ul>
<li><strong>Heuristic</strong>: Be direct and concise. Lead with the single most impactful reason. Avoid jargon at all costs.</li>
</ul>

<blockquote><strong>Example</strong>: Imagine a music streaming service. Instead of just presenting a “Discover Weekly” playlist, you add a small line of microcopy.<br /><br /><strong>Song Recommendation</strong>: “Velvet Morning”<br />Because you listen to “The Fuzz” and other psychedelic rock.</blockquote>
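<p>A “Because” statement can be generated mechanically from the model’s top-ranked factor. The sketch below is one possible approach, assuming the explanation pipeline already exposes factor weights; the factor names and copy templates are invented for illustration.</p>

<pre><code class="language-python">def because_statement(factors: dict[str, float], templates: dict[str, str]) -> str:
    """Turn the single most influential factor into plain-language microcopy."""
    top = max(factors, key=lambda name: abs(factors[name]))
    return templates.get(top, "Recommended based on your listening activity.")

factors = {"listens_to_artist": 0.62, "genre_affinity": 0.31, "trending": 0.07}
templates = {
    "listens_to_artist": 'Because you listen to "The Fuzz" and other psychedelic rock.',
    "genre_affinity": "Because you often play psychedelic rock.",
}
print(because_statement(factors, templates))
</code></pre>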

<h3 id="pattern-2-the-what-if-interactive-for-counterfactuals">Pattern 2: The &ldquo;What-If&rdquo; Interactive (for Counterfactuals)</h3>

<p>Counterfactuals are inherently about empowerment. The best way to represent them is by giving users interactive tools to explore possibilities themselves. This is perfect for financial, health, or other goal-oriented applications.</p>

<ul>
<li><strong>Heuristic</strong>: Make explanations interactive and empowering. Let users see the cause and effect of their choices.</li>
</ul>

<blockquote><strong>Example</strong>: A loan application interface. After a denial, instead of a dead end, the user gets a tool to determine how various scenarios (what-ifs) might play out (See Figure 2).</blockquote>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/beyond-black-box-practical-xai-ux-practitioners/2-example-counterfactuals.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="582"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/beyond-black-box-practical-xai-ux-practitioners/2-example-counterfactuals.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/beyond-black-box-practical-xai-ux-practitioners/2-example-counterfactuals.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/beyond-black-box-practical-xai-ux-practitioners/2-example-counterfactuals.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/beyond-black-box-practical-xai-ux-practitioners/2-example-counterfactuals.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/beyond-black-box-practical-xai-ux-practitioners/2-example-counterfactuals.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/beyond-black-box-practical-xai-ux-practitioners/2-example-counterfactuals.png"
			
			sizes="100vw"
			alt="An example of Counterfactuals"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Figure 2: An example of Counterfactuals using a what-if scenario, letting the user see how changing different values of the model’s features can impact outcomes. Image generated using Google Gemini. (<a href='https://files.smashing.media/articles/beyond-black-box-practical-xai-ux-practitioners/2-example-counterfactuals.png'>Large preview</a>)
    </figcaption>
  
</figure>
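<p>Under the hood, a what-if tool needs a way to find changes that actually flip the model’s decision. The sketch below does this naively, perturbing one feature at a time until the prediction changes; it assumes a scikit-learn-style model and ignores which features a user can realistically change, something a production system (or a dedicated counterfactual library) must handle.</p>

<pre><code class="language-python">import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["income", "debt_to_income"]

# Illustrative synthetic model standing in for the real loan classifier.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = (X[:, 0] - X[:, 1] > 0).astype(int)
model = LogisticRegression().fit(X, y)

def one_feature_counterfactuals(applicant, steps):
    """Smallest single-feature increase that flips the decision, per feature."""
    base = model.predict([applicant])[0]
    flips = {}
    for i, name in enumerate(feature_names):
        for delta in steps:
            candidate = applicant.copy()
            candidate[i] += delta
            if model.predict([candidate])[0] != base:
                flips[name] = round(float(delta), 2)
                break
    return flips

applicant = np.array([-0.2, 0.3])  # currently denied
# e.g. {"income": 0.6} -> "If your income were higher by X, you would qualify."
print(one_feature_counterfactuals(applicant, steps=np.linspace(0.1, 2.0, 20)))
</code></pre>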

<h3 id="pattern-3-the-highlight-reel-for-local-explanations">Pattern 3: The Highlight Reel (For Local Explanations)</h3>

<p>When an AI performs an action on a user’s content (like summarizing a document or identifying faces in photos), the explanation should be visually linked to the source.</p>

<ul>
<li><strong>Heuristic</strong>: Use visual cues like highlighting, outlines, or annotations to connect the explanation directly to the interface element it’s explaining.</li>
</ul>

<blockquote><strong>Example</strong>: An AI tool that summarizes long articles.<br /><br /><strong>AI-Generated Summary Point</strong>:<br />Initial research showed a market gap for sustainable products.<br /><br /><strong>Source in Document</strong>:<br />“...Our Q2 analysis of market trends conclusively demonstrated that <strong>no major competitor was effectively serving the eco-conscious consumer, revealing a significant market gap for sustainable products</strong>...”</blockquote>
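<p>To highlight the right passage, the interface needs to know which source sentence supports each summary point. A crude but workable sketch is token overlap, shown below; production systems typically track source spans during generation or use embedding similarity instead.</p>

<pre><code class="language-python">def best_supporting_sentence(summary_point: str, source_sentences: list[str]) -> str:
    """Pick the source sentence sharing the most words with the summary point."""
    point_tokens = set(summary_point.lower().split())
    return max(
        source_sentences,
        key=lambda s: len(point_tokens.intersection(s.lower().split())),
    )

source = [
    "Q1 revenue grew modestly across all regions.",
    "Our Q2 analysis demonstrated that no major competitor was serving the "
    "eco-conscious consumer, revealing a significant market gap for sustainable products.",
]
print(best_supporting_sentence(
    "Initial research showed a market gap for sustainable products.", source))
</code></pre>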

<h3 id="pattern-4-the-push-and-pull-visual-for-value-based-explanations">Pattern 4: The Push-and-Pull Visual (for Value-based Explanations)</h3>

<p>For more complex decisions, users might need to understand the interplay of factors. Simple data visualizations can make this clear without being overwhelming.</p>

<ul>
<li><strong>Heuristic</strong>: Use simple, color-coded data visualizations (like bar charts) to show the factors that positively and negatively influenced a decision.</li>
</ul>

<blockquote><strong>Example</strong>: An AI screening a candidate’s profile for a job.<br /><br />Why this candidate is a 75% match:<br /><br /><strong>Factors pushing the score up</strong>:<br /><ul><li>5+ Years UX Research Experience</li><li>Proficient in Python</li></ul><br /><strong>Factors pushing the score down</strong>:<br /><ul><li>No experience with B2B SaaS</li></ul></blockquote>
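<p>This pattern maps naturally onto a color-coded horizontal bar chart. A minimal sketch with matplotlib follows; the factor names and weights are invented, and in practice they would come from the explanation pipeline (for example, the SHAP values discussed earlier).</p>

<pre><code class="language-python">import matplotlib.pyplot as plt

# Invented contributions; positive pushes the match score up, negative down.
factors = {
    "5+ years UX research experience": 0.18,
    "Proficient in Python": 0.07,
    "No B2B SaaS experience": -0.10,
}

names = list(factors)
values = [factors[n] for n in names]
colors = ["seagreen" if v > 0 else "indianred" for v in values]

fig, ax = plt.subplots(figsize=(6, 2.5))
ax.barh(names, values, color=colors)
ax.axvline(0, color="gray", linewidth=1)  # the neutral line
ax.set_xlabel("Contribution to match score")
ax.set_title("Why this candidate is a 75% match")
plt.tight_layout()
plt.show()
</code></pre>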

<p>Learning and using these design patterns in the UX of your AI product will help increase its explainability. You can also use additional techniques that I’m not covering in depth here, including the following:</p>

<ul>
<li><strong>Natural language explanations</strong>: Translating an AI’s technical output into simple, conversational human language that non-experts can easily understand.</li>
<li><strong>Contextual explanations</strong>: Providing a rationale for an AI’s output at the specific moment and location it is most relevant to the user’s task.</li>
<li><strong>Relevant visualizations</strong>: Using charts, graphs, or heatmaps to visually represent an AI’s decision-making process, making complex data intuitive and easier for users to grasp.</li>
</ul>

<p><strong>A Note For the Front End</strong>: <em>Translating these explainability outputs into seamless user experiences also presents its own set of technical considerations. Front-end developers often grapple with designing APIs that retrieve explanation data efficiently, and performance implications (like generating explanations in real time for every user interaction) need careful planning to avoid latency.</em></p>

<h2 id="some-real-world-examples">Some Real-world Examples</h2>

<p><strong>UPS Capital’s DeliveryDefense</strong></p>

<p>UPS uses AI to assign a “delivery confidence score” to addresses to predict the likelihood of a package being stolen. Their <a href="https://about.ups.com/us/en/our-stories/innovation-driven/ups-s-deliverydefense-pits-ai-against-criminals.html">DeliveryDefense</a> software analyzes historical data on location, loss frequency, and other factors. If an address has a low score, the system can proactively reroute the package to a secure UPS Access Point, providing an explanation for the decision (e.g., “Package rerouted to a secure location due to a history of theft”). This system demonstrates how XAI can be used for risk mitigation and building customer trust through transparency.</p>

<p><strong>Autonomous Vehicles</strong></p>

<p>These vehicles of the future will need to use <a href="https://online.hbs.edu/blog/post/ai-in-business">XAI effectively to make safe, explainable decisions</a>. When a self-driving car brakes suddenly, the system can provide a real-time explanation for its action, for example, by identifying a pedestrian stepping into the road. This is not only crucial for passenger comfort and trust but is a regulatory requirement to prove the safety and accountability of the AI system.</p>

<p><strong>IBM Watson Health (and its challenges)</strong></p>

<p>While often cited as a general example of AI in healthcare, it’s also a valuable case study for the <em>importance</em> of XAI. The <a href="https://www.henricodolfing.com/2024/12/case-study-ibm-watson-for-oncology-failure.html">failure of its Watson for Oncology project</a> highlights what can go wrong when explanations are not clear, or when the underlying data is biased or not localized. The system’s recommendations were sometimes inconsistent with local clinical practices because they were based on U.S.-centric guidelines. This serves as a cautionary tale on the need for robust, context-aware explainability.</p>

<h2 id="the-ux-researcher-s-role-pinpointing-and-validating-explanations">The UX Researcher’s Role: Pinpointing And Validating Explanations</h2>

<p>Our design solutions are only effective if they address the right user questions at the right time. An explanation that answers a question the user doesn’t have is just noise. This is where UX research becomes the critical connective tissue in an XAI strategy, ensuring that we explain the what and how that actually matter to our users. The researcher’s role is twofold: first, to inform the strategy by identifying where explanations are needed, and second, to validate the designs that deliver those explanations.</p>

<h3 id="informing-the-xai-strategy-what-to-explain">Informing the XAI Strategy (What to Explain)</h3>

<p>Before we can design a single explanation, we must understand the user’s mental model of the AI system. What do they believe it’s doing? Where are the gaps between their understanding and the system’s reality? This is the foundational work of a UX researcher.</p>

<h4 id="mental-model-interviews-unpacking-user-perceptions-of-ai-systems">Mental Model Interviews: Unpacking User Perceptions Of AI Systems</h4>

<p>Through deep, semi-structured interviews, UX practitioners can gain invaluable insights into how users perceive and understand AI systems. These sessions are designed to encourage users to literally draw or describe their internal “mental model” of how they believe the AI works. This often involves asking open-ended questions that prompt users to explain the system’s logic, its inputs, and its outputs, as well as the relationships between these elements.</p>

<p>These interviews are powerful because they frequently reveal profound misconceptions and assumptions that users hold about AI. For example, a user interacting with a recommendation engine might confidently assert that the system is based purely on their past viewing history. They might not realize that the algorithm also incorporates a multitude of other factors, such as the time of day they are browsing, the current trending items across the platform, or even the viewing habits of similar users.</p>

<p>Uncovering this gap between a user’s mental model and the actual underlying AI logic is critically important. It tells us precisely what specific information we need to communicate to users to help them build a more accurate and robust mental model of the system. This, in turn, is a fundamental step in fostering trust. When users understand, even at a high level, how an AI arrives at its conclusions or recommendations, they are more likely to trust its outputs and rely on its functionality.</p>

<h4 id="ai-journey-mapping-a-deep-dive-into-user-trust-and-explainability">AI Journey Mapping: A Deep Dive Into User Trust And Explainability</h4>

<p>By meticulously mapping the user’s journey with an AI-powered feature, we gain invaluable insights into the precise moments where confusion, frustration, or even profound distrust emerge. This uncovers critical junctures where the user’s mental model of how the AI operates clashes with its actual behavior.</p>

<p>Consider a music streaming service: Does the user’s trust plummet when a playlist recommendation feels “random,” lacking any discernible connection to their past listening habits or stated preferences? This perceived randomness is a direct challenge to the user’s expectation of intelligent curation and a breach of the implicit promise that the AI understands their taste. Similarly, in a photo management application, do users experience significant frustration when an AI photo-tagging feature consistently misidentifies a cherished family member? This error is more than a technical glitch; it strikes at the heart of accuracy, personalization, and even emotional connection.</p>

<p>These pain points are vivid signals indicating precisely where a well-placed, clear, and concise explanation is necessary. Such explanations serve as crucial repair mechanisms, mending a breach of trust that, if left unaddressed, can lead to user abandonment.</p>

<p>The power of AI journey mapping lies in its ability to move us beyond simply explaining the final output of an AI system. While understanding <em>what</em> the AI produced is important, it’s often insufficient. Instead, this process compels us to focus on explaining the <em>process</em> at critical moments. This means addressing:</p>

<ul>
<li><strong>Why a particular output was generated</strong>: Was it due to specific input data? A particular model architecture?</li>
<li><strong>What factors influenced the AI’s decision</strong>: Were certain features weighted more heavily?</li>
<li><strong>How the AI arrived at its conclusion</strong>: Can we offer a simplified, analogous explanation of its internal workings?</li>
<li><strong>What assumptions the AI made</strong>: Were there implicit understandings of the user’s intent or data that need to be surfaced?</li>
<li><strong>What the limitations of the AI are</strong>: Clearly communicating what the AI <em>cannot</em> do, or where its accuracy might waver, builds realistic expectations.</li>
</ul>

<p>AI journey mapping transforms the abstract concept of XAI into a practical, actionable framework for UX practitioners. It enables us to move beyond theoretical discussions of explainability and instead pinpoint the exact moments where user trust is at stake, providing the necessary insights to build AI experiences that are powerful, transparent, understandable, and trustworthy.</p>

<p>Ultimately, research is how we uncover the unknowns. Your team might be debating how to explain why a loan was denied, but research might reveal that users are far more concerned with understanding how their data was used in the first place. Without research, we are simply guessing what our users are wondering.</p>

<h2 id="collaborating-on-the-design-how-to-explain-your-ai">Collaborating On The Design (How to Explain Your AI)</h2>

<p>Once research has identified what to explain, the collaborative loop with design begins. Designers can prototype the patterns we discussed earlier—the “Because” statement, the interactive sliders—and researchers can put those designs in front of users to see if they hold up.</p>

<p><strong>Targeted Usability &amp; Comprehension Testing</strong>: We can design research studies that specifically test the XAI components. We don’t just ask, “<em>Is this easy to use?</em>” We ask, “<em>After seeing this, can you tell me in your own words why the system recommended this product?</em>” or “<em>Show me what you would do to see if you could get a different result.</em>” The goal here is to measure comprehension and actionability, alongside usability.</p>

<p><strong>Measuring Trust Itself</strong>: We can use simple surveys and rating scales before and after an explanation is shown. For instance, we can ask a user on a 5-point scale, “<em>How much do you trust this recommendation?</em>” before they see the “Because” statement, and then ask them again afterward. This provides quantitative data on whether our explanations are actually moving the needle on trust.</p>
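<p>A minimal sketch of that before-and-after comparison is below, using a paired test because the same participants rate trust at both moments. The ratings are fabricated for illustration, and a real study would check sample size and test assumptions before drawing conclusions.</p>

<pre><code class="language-python">from scipy import stats

# Fabricated 5-point trust ratings from the same eight participants.
before = [2, 3, 3, 2, 4, 3, 2, 3]  # before seeing the "Because" statement
after = [4, 4, 3, 3, 5, 4, 3, 4]   # after seeing it

# Paired test: the same users, measured at two moments.
t_stat, p_value = stats.ttest_rel(after, before)
shift = sum(after) / len(after) - sum(before) / len(before)
print(f"mean trust shift = {shift:+.2f} points, p = {p_value:.3f}")
</code></pre>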

<p>This process creates a powerful, iterative loop. Research findings inform the initial design. That design is then tested, and the new findings are fed back to the design team for refinement. Maybe the “Because” statement was too jargony, or the “What-If” slider was more confusing than empowering. Through this collaborative validation, we ensure that the final explanations are technically accurate, genuinely understandable, useful, and trust-building for the people using the product.</p>

<h2 id="the-goldilocks-zone-of-explanation">The Goldilocks Zone Of Explanation</h2>

<p>A critical word of caution: it is possible to <em>over-explain</em>. As in the fairy tale, where Goldilocks sought the porridge that was ‘just right’, the goal of a good explanation is to provide the right amount of detail—not too much and not too little. Bombarding a user with every variable in a model will lead to cognitive overload and can actually <em>decrease</em> trust. The goal is not to make the user a data scientist.</p>

<p>One solution is <strong>progressive disclosure</strong>.</p>

<ol>
<li><strong>Start with the simple.</strong> Lead with a concise “Because” statement. For most users, this will be enough.</li>
<li><strong>Offer a path to detail.</strong> Provide a clear, low-friction link like “Learn More” or “See how this was determined.”</li>
<li><strong>Reveal the complexity.</strong> Behind that link, you can offer the interactive sliders, the visualizations, or a more detailed list of contributing factors.</li>
</ol>

<p>This layered approach respects user attention and expertise, providing just the right amount of information for their needs. Let’s imagine you’re using a smart home device that recommends optimal heating based on various factors.</p>

<p><strong>Start with the simple</strong>: “<em>Your home is currently heated to 72 degrees, which is the optimal temperature for energy savings and comfort.</em>”</p>

<p><strong>Offer a path to detail</strong>: Below that, a small link or button: “<em>Why is 72 degrees optimal?</em>”</p>

<p><strong>Reveal the complexity</strong>: Clicking that link could open a new screen showing:</p>

<ul>
<li>Interactive sliders for outside temperature, humidity, and your preferred comfort level, demonstrating how these adjust the recommended temperature.</li>
<li>A visualization of energy consumption at different temperatures.</li>
<li>A list of contributing factors like “Time of day,” “Current outside temperature,” “Historical energy usage,” and “Occupancy sensors.”</li>
</ul>
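<p>One way to support this in a product is to ship the explanation as a layered payload so the interface can reveal each level on demand. The sketch below uses invented field names; it is a shape to adapt, not a standard schema.</p>

<pre><code class="language-python"># Layered explanation payload: each level adds detail only when requested.
explanation = {
    "summary": "Heated to 72°F, the optimal temperature for energy savings and comfort.",
    "detail_prompt": "Why is 72 degrees optimal?",
    "details": {
        "factors": [
            {"name": "Outside temperature", "value": "41°F"},
            {"name": "Time of day", "value": "evening"},
            {"name": "Occupancy sensors", "value": "2 people home"},
        ],
        # Ranges the interactive what-if sliders would expose.
        "what_if": {
            "comfort_level": {"min": 1, "max": 5, "current": 3},
            "outside_temp_f": {"min": 20, "max": 80, "current": 41},
        },
    },
}

def render(level: int):
    """Return only as much explanation as the user has asked for."""
    if level == 0:
        return explanation["summary"]        # the simple statement
    if level == 1:
        return explanation["detail_prompt"]  # the path to detail
    return explanation["details"]            # the full complexity

print(render(0))
</code></pre>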














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/beyond-black-box-practical-xai-ux-practitioners/3-example-progressive-disclosure.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="449"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/beyond-black-box-practical-xai-ux-practitioners/3-example-progressive-disclosure.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/beyond-black-box-practical-xai-ux-practitioners/3-example-progressive-disclosure.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/beyond-black-box-practical-xai-ux-practitioners/3-example-progressive-disclosure.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/beyond-black-box-practical-xai-ux-practitioners/3-example-progressive-disclosure.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/beyond-black-box-practical-xai-ux-practitioners/3-example-progressive-disclosure.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/beyond-black-box-practical-xai-ux-practitioners/3-example-progressive-disclosure.png"
			
			sizes="100vw"
			alt="An example of progressive disclosure in three stages"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
Figure 3: An example of progressive disclosure in three stages: the simple statement with an option to click for more detail; the additional details; and the option to see what will happen if the user changes the settings. (<a href='https://files.smashing.media/articles/beyond-black-box-practical-xai-ux-practitioners/3-example-progressive-disclosure.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>It’s effective to combine multiple XAI methods, and the Goldilocks Zone of Explanation pattern, which advocates progressive disclosure, implicitly encourages this. You might start with a simple “Because” statement (Pattern 1) for immediate comprehension, and then offer a “Learn More” link that reveals a “What-If” Interactive (Pattern 2) or a “Push-and-Pull Visual” (Pattern 4) for deeper exploration.</p>

<p>For instance, a loan application system could initially state the primary reason for denial (feature importance), then allow the user to interact with a “What-If” tool to see how changes to their income or debt would alter the outcome (counterfactuals), and finally, provide a detailed “Push-and-Pull” chart (value-based explanation) to illustrate the positive and negative contributions of all factors. This layered approach allows users to access the level of detail they need, when they need it, preventing cognitive overload while still providing comprehensive transparency.</p>

<p>Determining which XAI tools and methods to use is primarily a function of thorough UX research. Mental model interviews and AI journey mapping are crucial for pinpointing user needs and pain points related to AI understanding and trust. Mental model interviews help uncover user misconceptions about how the AI works, indicating areas where fundamental explanations (like feature importance or local explanations) are needed. AI journey mapping, on the other hand, identifies critical moments of confusion or distrust in the user’s interaction with the AI, signaling where more granular or interactive explanations (like counterfactuals or value-based explanations) would be most beneficial to rebuild trust and provide agency.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/beyond-black-box-practical-xai-ux-practitioners/4-ai-business-startup-assistant.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="399"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/beyond-black-box-practical-xai-ux-practitioners/4-ai-business-startup-assistant.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/beyond-black-box-practical-xai-ux-practitioners/4-ai-business-startup-assistant.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/beyond-black-box-practical-xai-ux-practitioners/4-ai-business-startup-assistant.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/beyond-black-box-practical-xai-ux-practitioners/4-ai-business-startup-assistant.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/beyond-black-box-practical-xai-ux-practitioners/4-ai-business-startup-assistant.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/beyond-black-box-practical-xai-ux-practitioners/4-ai-business-startup-assistant.png"
			
			sizes="100vw"
			alt="An example of a fictitious AI business startup assistant"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Figure 4: An example of a fictitious AI business startup assistant. Here, the AI presents the key factor in how the risk level was determined. When the user asks what would change if they manipulate that factor, the counterfactual statement is shown, confirming the impact of that specific factor in the model. (<a href='https://files.smashing.media/articles/beyond-black-box-practical-xai-ux-practitioners/4-ai-business-startup-assistant.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>Ultimately, the <em>best</em> way to choose a technique is to let user research guide your decisions, ensuring that the explanations you design directly address actual user questions and concerns, rather than simply offering technical details for their own sake.</p>

<h2 id="xai-for-deep-reasoning-agents">XAI for Deep Reasoning Agents</h2>

<p>Some of the newest AI systems, known as <a href="https://learn.microsoft.com/en-us/microsoft-copilot-studio/faqs-reasoning">deep reasoning agents</a>, produce an explicit “chain of thought” for every complex task. They do not merely cite sources; they show the logical, step-by-step path they took to arrive at a conclusion. While this transparency provides valuable context, a play-by-play that spans several paragraphs can feel overwhelming to a user simply trying to complete a task.</p>

<p>The principles of XAI, especially the Goldilocks Zone of Explanation, apply directly here. We can curate the journey, using progressive disclosure to show only the final conclusion and the most salient step in the thought process first. Users can then opt in to see the full, detailed, multi-step reasoning when they need to double-check the logic or find a specific fact. This approach respects user attention while preserving the agent’s full transparency.</p>
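<p>A sketch of that curation step might look like the following. The “most salient step” heuristic here is deliberately naive (the longest step); a real agent product would rank steps by relevance to the conclusion.</p>

<pre><code class="language-python">def curate(chain_of_thought: list[str], conclusion: str) -> dict:
    """Collapse a reasoning trace to a conclusion plus one key step."""
    salient = max(chain_of_thought, key=len) if chain_of_thought else ""
    return {
        "collapsed": {"conclusion": conclusion, "key_step": salient},
        "expanded": chain_of_thought,  # shown only when the user opts in
    }

steps = [
    "Checked Q3 sales by region.",
    "Compared year-over-year growth and found EMEA flat while APAC grew 12%.",
    "Weighted regions by revenue share.",
]
print(curate(steps, "APAC is driving nearly all of this quarter's growth."))
</code></pre>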

<h2 id="next-steps-empowering-your-xai-journey">Next Steps: Empowering Your XAI Journey</h2>

<p>Explainability is a fundamental pillar for building <strong>trustworthy and effective AI products</strong>. For the advanced practitioner looking to drive this change within their organization, the journey extends beyond design patterns into advocacy and continuous learning.</p>

<p>To deepen your understanding and practical application, consider exploring resources like the <a href="https://research.ibm.com/blog/ai-explainability-360">AI Explainability 360 (AIX360) toolkit</a> from IBM Research or Google’s <a href="https://pair-code.github.io/what-if-tool/">What-If Tool</a>, which offer interactive ways to explore model behavior and explanations. Engaging with communities like the <a href="https://responsibleaiforum.com">Responsible AI Forum</a> or specific research groups focused on human-centered AI can provide invaluable insights and collaboration opportunities.</p>

<p>Finally, be an advocate for XAI within your own organization. Frame explainability as a strategic investment. Consider a brief pitch to your leadership or cross-functional teams:</p>

<blockquote>“By investing in XAI, we’ll go beyond building trust; we’ll accelerate user adoption, reduce support costs by empowering users with understanding, and mitigate significant ethical and regulatory risks by exposing potential biases. This is good design and smart business.”</blockquote>

<p>Your voice, grounded in practical understanding, is crucial in bringing AI out of the black box and into a collaborative partnership with users.</p>

<div class="signature">
  <img src="https://www.smashingmagazine.com/images/logo/logo--red.png" alt="Smashing Editorial" width="35" height="46" loading="lazy" decoding="async" />
  <span>(yk)</span>
</div>


              </article>
            </body>
          </html>
        ]]></content:encoded></item><item><author>Mansoor Ahmed Khan</author><title>From Chaos To Clarity: Simplifying Server Management With AI And Automation</title><link>https://www.smashingmagazine.com/2025/11/simplifying-server-management-ai-automation/</link><pubDate>Tue, 18 Nov 2025 10:00:00 +0000</pubDate><guid>https://www.smashingmagazine.com/2025/11/simplifying-server-management-ai-automation/</guid><description>Server chaos doesn’t have to be the norm. AI-ready infrastructure and automation can bring clarity, performance, and focus back to your web work.</description><content:encoded><![CDATA[
          <html>
            <head>
              <meta charset="utf-8">
              <link rel="canonical" href="https://www.smashingmagazine.com/2025/11/simplifying-server-management-ai-automation/" />
              <title>From Chaos To Clarity: Simplifying Server Management With AI And Automation</title>
            </head>
            <body>
              <article>
                <header>
                  <h1>From Chaos To Clarity: Simplifying Server Management With AI And Automation</h1>
                  
                    
                    <address>Mansoor Ahmed Khan</address>
                  
                  <time datetime="2025-11-18T10:00:00&#43;00:00" class="op-published">2025-11-18T10:00:00+00:00</time>
                  <time datetime="2025-11-18T10:00:00&#43;00:00" class="op-modified">2025-12-25T10:32:38+00:00</time>
                </header>
                <p>This article is sponsored by <b>Cloudways</b></p>
                

<p>If you build or manage websites for a living, you know the feeling. Your day is a constant juggle; one moment you’re fine-tuning a design, the next you’re troubleshooting a slow server or a mysterious error. Daily management of a complex web of plugins, integrations, and performance tools often feels like you’re just reacting to problems—putting out fires instead of building something new.</p>

<p>This reactive cycle is exhausting, and it pulls your focus away from meaningful work and into the technical weeds. A recent industry event, <a href="https://www.cloudways.com/en/bfcm-prepathon.php">Cloudways Prepathon 2025</a>, put a sharp focus on this very challenge. The discussions made it clear: the future of web work demands a better way. It requires an infrastructure that’s ready for AI; one that can actively help you turn this daily chaos into clarity.</p>

<p><em>The stakes for performance are higher than ever.</em></p>

<p>Suhaib Zaheer, SVP of Managed Hosting at DigitalOcean, and Ali Ahmed Khan, Sr. Director of Product Management, shared a telling statistic during their panel: <strong><a href="https://www.thinkwithgoogle.com/consumer-insights/consumer-trends/mobile-site-load-time-statistics/">53% of mobile visitors</a> will leave a site if it takes more than three seconds to load.</strong></p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/simplifying-server-management-ai-automation/1-google-data-mobile-page-speed.jpg">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/simplifying-server-management-ai-automation/1-google-data-mobile-page-speed.jpg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/simplifying-server-management-ai-automation/1-google-data-mobile-page-speed.jpg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/simplifying-server-management-ai-automation/1-google-data-mobile-page-speed.jpg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/simplifying-server-management-ai-automation/1-google-data-mobile-page-speed.jpg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/simplifying-server-management-ai-automation/1-google-data-mobile-page-speed.jpg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/simplifying-server-management-ai-automation/1-google-data-mobile-page-speed.jpg"
			
			sizes="100vw"
			alt="Google data showing mobile page speed"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Data from Google underscores the critical importance of mobile page speed for retaining visitors. (Image Source: <a href='https://www.thinkwithgoogle.com/consumer-insights/consumer-trends/mobile-site-load-time-statistics/'>Think with Google</a>) (<a href='https://files.smashing.media/articles/simplifying-server-management-ai-automation/1-google-data-mobile-page-speed.jpg'>Large preview</a>)
    </figcaption>
  
</figure>

<p>Think about that for a second: in barely three seconds, over half your potential traffic is gone. This isn’t just about a slow website, but about lost trust, abandoned carts, and missed opportunities. Performance is no longer just a feature; it’s the foundation of user experience. And in today’s landscape, automation is the key to maintaining it consistently.</p>

<p>So how do we stop reacting and start preventing?</p>

<h2 id="the-old-way-a-constant-state-of-alert">The Old Way: A Constant State Of Alert</h2>

<p>For too long, server management has worked like this: something breaks, you receive an alert (or worse, a client complaint), and you start digging. You log into your server, check logs, try to correlate different metrics, and eventually (hopefully) find the root cause. Then you manually apply a fix.</p>

<p>This process is fragile and relies on your constant attention while eating up hours that could be spent on development, strategy, or client work. For freelancers and small teams, this time is your most valuable asset. Every minute spent manually diagnosing a disk space issue or a web stack failure is a minute not spent on growing your business.</p>

<p>The problem isn&rsquo;t a lack of tools. It&rsquo;s that most tools just show you the data; they don&rsquo;t help you understand it or act on it. They add to the noise instead of providing clarity.</p>

<h2 id="a-new-approach-from-diagnosis-to-automatic-resolution">A New Approach: From Diagnosis To Automatic Resolution</h2>

<p>This is where a shift towards intelligent automation changes the game. Tools like <a href="https://www.cloudways.com/en/cloudways-ai-copilot.php">Cloudways Copilot</a>, which became generally available earlier this year, are built specifically to simplify this workflow. The goal is straightforward: combine AI-driven diagnostics with automated fixes to predict and resolve performance issues before they affect your users.</p>

<p>Here’s a practical look at how it works.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/simplifying-server-management-ai-automation/2-cloudways-copilot-workflow.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/simplifying-server-management-ai-automation/2-cloudways-copilot-workflow.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/simplifying-server-management-ai-automation/2-cloudways-copilot-workflow.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/simplifying-server-management-ai-automation/2-cloudways-copilot-workflow.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/simplifying-server-management-ai-automation/2-cloudways-copilot-workflow.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/simplifying-server-management-ai-automation/2-cloudways-copilot-workflow.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/simplifying-server-management-ai-automation/2-cloudways-copilot-workflow.png"
			
			sizes="100vw"
			alt="Cloudways Copilot workflow"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Cloudways Copilot workflow: Continuous monitoring leads to instant alerts, AI-powered diagnosis, and actionable recommendations. (Image source: <a href='https://www.cloudways.com/en/cloudways-ai-copilot.php'>Cloudways</a>) (<a href='https://files.smashing.media/articles/simplifying-server-management-ai-automation/2-cloudways-copilot-workflow.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>Imagine your site starts running slowly. In the past, you&rsquo;d begin the tedious investigation.</p>

<h3 id="1-the-ai-insights">1. The AI Insights</h3>

<p>Instead of a generic &ldquo;high CPU&rdquo; alert, you get a detailed insight. It tells you what happened (e.g., &ldquo;MySQL process is consuming excessive resources&rdquo;), why it happened (e.g., &ldquo;caused by a poorly optimized query from a recent plugin update&rdquo;), and provides a step-by-step guide to fix it manually. This alone cuts diagnosis time from 30-40 minutes down to about five. You understand the problem, not just the alert.</p>

<h3 id="2-the-smartfix">2. The SmartFix</h3>

<p>This is where it moves from helpful to transformative. For common issues, you don’t just get a manual guide. You get a one-click <em>SmartFix</em> button. After reviewing the actions Copilot will take, you can let it automatically resolve the issue. It applies the necessary steps safely and without you needing to touch a command line. This is the clarity we’re talking about. The system doesn’t just tell you about the problem; it solves it for you.</p>

<p>For developers managing multiple sites, this is a fundamental change. It means you can handle routine server issues at scale. A disk cleanup that would have required logging into ten different servers can now be handled with a few clicks. It frees your brain from repetitive troubleshooting and lets you focus on the work that actually requires your expertise.</p>

<h2 id="building-an-ai-ready-foundation">Building An AI-Ready Foundation</h2>

<p>The principles discussed at Prepathon go beyond any single tool. The theme was about building a resilient foundation. Meeky Hwang, CEO at Ndevr, introduced the <em>&ldquo;3E Framework,&rdquo;</em> which perfectly applies here. A strong platform must balance:</p>

<ul>
<li><strong>Audience Experience</strong><br />
What your visitors see and feel—blazing speed and seamless operation.</li>
<li><strong>Creator Experience</strong><br />
The workflow for you and your team—managing content and marketing without technical friction.</li>
<li><strong>Developer Experience</strong><br />
The backend foundation—server management that is secure, stable, and efficient.</li>
</ul>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/simplifying-server-management-ai-automation/3-3e-framework.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/simplifying-server-management-ai-automation/3-3e-framework.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/simplifying-server-management-ai-automation/3-3e-framework.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/simplifying-server-management-ai-automation/3-3e-framework.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/simplifying-server-management-ai-automation/3-3e-framework.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/simplifying-server-management-ai-automation/3-3e-framework.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/simplifying-server-management-ai-automation/3-3e-framework.png"
			
			sizes="100vw"
			alt="3E Framework"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      A balanced platform is a resilient one. The 3E Framework shows how a strong foundation depends on three connected experiences. (Image source: <a href='https://www.cloudways.com/en/video/event-replays/prepathon-2025/from-fragile-to-ai-ready-websites-prepathon-2025'>Meeky Hwang / Ndevr</a>) (<a href='https://files.smashing.media/articles/simplifying-server-management-ai-automation/3-3e-framework.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>AI-driven server management directly strengthens all three. A faster, more stable server improves the <em>Audience Experience</em>. Fewer emergencies and simpler workflows improve the <em>Creator</em> and <em>Developer Experience</em>. When these are aligned, you can scale with confidence.</p>

<h2 id="this-isn-t-about-replacing-you">This Isn’t About Replacing You</h2>

<p>It’s important to be clear. This isn’t about replacing the developer but about augmenting your capabilities. As Vito Peleg, Co-founder &amp; CEO at Atarim, noted during <a href="https://www.cloudways.com/en/video/event-replays/prepathon-2025/whats-truly-working-in-ai-marketing-and-tech-prepathon-2025">Prepathon</a>:</p>

<blockquote>“We're all becoming prompt engineers in the modern world. Our job is no longer to do the task, but to orchestrate the fleet of AI agents that can do it at a scale we never could alone.”<br /><br />&mdash; Vito Peleg, Co-founder & CEO at Atarim</blockquote>

<p>Think of <a href="https://www.cloudways.com/en/cloudways-ai-copilot.php">Cloudways Copilot</a> as an expert sysadmin on your team. It handles the routine, often tedious, work. It alerts you to what’s important and provides clear, actionable context. This gives you back the mental space and time to focus on architecture, innovation, and client strategy.</p>

<blockquote>“The challenge isn’t managing servers anymore &mdash; it’s managing focus,”<br /><br /><a href="https://www.linkedin.com/in/zaheersuhaib/">Suhaib Zaheer</a> noted.<br /><br />“AI-driven infrastructure should help developers spend less time reacting to issues and more time creating better digital experiences.”</blockquote>

<h2 id="a-practical-path-forward">A Practical Path Forward</h2>

<p>For freelancers, WordPress experts, and small agency developers, this shift offers a tangible way to:</p>

<ul>
<li>Drastically reduce the hours spent manually troubleshooting infrastructure issues.</li>
<li>Implement predictive monitoring that catches slowdowns and bottlenecks early.</li>
<li>Manage your entire stack through clear, plain-English AI insights instead of raw data.</li>
<li>Balance speed, security, and uptime without needing an enterprise-scale budget or team.</li>
</ul>

<p>The goal is to make powerful infrastructure simple, while also giving you back control and your time so you can focus on what you do best: creating exceptional web experiences.</p>

<p><em>You can <a href="https://unified.cloudways.com/signup?coupon=BFCM5050">use promo code BFCM5050</a> to get 50% off for 3 months plus 50 Free Migrations using Cloudways. This offer is valid from November 18th to December 4th, 2025.</em></p>

<div class="signature">
  <img src="https://www.smashingmagazine.com/images/logo/logo--red.png" alt="Smashing Editorial" width="35" height="46" loading="lazy" decoding="async" />
  <span>(yk)</span>
</div>


              </article>
            </body>
          </html>
        ]]></content:encoded></item><item><author>Paul Boag</author><title>AI In UX: Achieve More With Less</title><link>https://www.smashingmagazine.com/2025/10/ai-ux-achieve-more-with-less/</link><pubDate>Fri, 17 Oct 2025 08:00:00 +0000</pubDate><guid>https://www.smashingmagazine.com/2025/10/ai-ux-achieve-more-with-less/</guid><description>A simple but powerful mental model for working with AI: treat it like an enthusiastic intern with no real-world experience. Paul Boag shares lessons learned from real client projects across user research, design, development, and content creation.</description><content:encoded><![CDATA[
          <html>
            <head>
              <meta charset="utf-8">
              <link rel="canonical" href="https://www.smashingmagazine.com/2025/10/ai-ux-achieve-more-with-less/" />
              <title>AI In UX: Achieve More With Less</title>
            </head>
            <body>
              <article>
                <header>
                  <h1>AI In UX: Achieve More With Less</h1>
                  
                    
                    <address>Paul Boag</address>
                  
                  <time datetime="2025-10-17T08:00:00&#43;00:00" class="op-published">2025-10-17T08:00:00+00:00</time>
                  <time datetime="2025-10-17T08:00:00&#43;00:00" class="op-modified">2025-12-25T10:32:38+00:00</time>
                </header>
                
                

<p>I have made a lot of mistakes with AI over the past couple of years. I have wasted hours trying to get it to do things it simply cannot do. I have fed it terrible prompts and received terrible output. And I have definitely spent more time fighting with it than I care to admit.</p>

<p>But I have also discovered that when you stop treating AI like magic and start treating it like what it actually is (a very enthusiastic intern with zero life experience), things start to make more sense.</p>

<p>Let me share what I have learned from working with AI on real client projects across user research, design, development, and content creation.</p>

<h2 id="how-to-work-with-ai">How To Work With AI</h2>

<p>Here is the mental model that has been most helpful for me. Treat AI like an <strong>intern with zero experience</strong>.</p>

<p>An intern fresh out of university has lots of enthusiasm and qualifications, but no real-world experience. You would not trust them to do anything unsupervised. You would explain tasks in detail. You would expect to review their work multiple times. You would give feedback and ask them to try again.</p>

<p>This is exactly how you should work with AI.</p>

<h3 id="the-basics-of-prompting">The Basics Of Prompting</h3>

<p>I am not going to pretend to be an expert. I have just spent way too much time playing with this stuff because I like anything shiny and new. But here is what works for me.</p>

<ul>
<li><strong>Define the role.</strong><br />
Start with something like <em>“Act as a user researcher”</em>  or <em>“Act as a copywriter.”</em>  This gives the AI context for how to respond.</li>
<li><strong>Break it into steps.</strong><br />
Do not just say <em>“Analyze these interview transcripts.”</em> Instead, say <em>“I want you to complete the following steps. One, identify recurring themes. Two, look for questions users are trying to answer. Three, note any objections that come up. Four, output a summary of each.”</em></li>
<li><strong>Define success.</strong><br />
Tell it what good looks like. <em>“I am looking for a report that gives a clear indication of recurring themes and questions in a format I can send to stakeholders. Do not use research terminology because they will not understand it.”</em></li>
<li><strong>Make it think.</strong><br />
Tell it to think deeply about its approach before responding. Get it to create a way to test for success (known as a rubric) and iterate on its work until it passes that test.</li>
</ul>

<p>Here is a real prompt I use for online research:</p>

<blockquote>Act as a user researcher. I would like you to carry out deep research online into [brand name]. In particular, I would like you to focus on what people are saying about the brand, what the overall sentiment is, what questions people have, and what objections people mention. The goal is to create a detailed report that helps me better understand the brand perception.<br /><br />Think deeply about your approach before carrying out the research. Create a rubric for the report to ensure it is as useful as possible. Keep iterating until the report scores extremely high on the rubric. Only then, output the report.</blockquote>

<p>That second paragraph (the bit about thinking deeply and creating a rubric), I basically copy and paste into everything now. It is a universal way to get better output.</p>
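<p>If you find yourself reusing this structure, it can help to assemble prompts in code. The sketch below captures the role, steps, success criteria, and rubric pattern described above; the wording and function name are mine, not a standard API.</p>

<pre><code class="language-python">def build_prompt(role: str, task: str, steps: list[str], success: str) -> str:
    """Assemble a role + steps + success criteria + rubric prompt."""
    numbered = "\n".join(f"{i}. {step}" for i, step in enumerate(steps, 1))
    return (
        f"Act as {role}. {task}\n\n"
        f"Complete the following steps:\n{numbered}\n\n"
        f"What good looks like: {success}\n\n"
        "Think deeply about your approach before responding. Create a rubric "
        "to test for success and keep iterating until your work scores "
        "extremely high on it. Only then, output the result."
    )

print(build_prompt(
    role="a user researcher",
    task="Analyze the attached interview transcripts.",
    steps=[
        "Identify recurring themes.",
        "Look for questions users are trying to answer.",
        "Note any objections that come up.",
        "Output a summary of each.",
    ],
    success="A jargon-free report I can send straight to stakeholders.",
))
</code></pre>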

<h3 id="learn-when-to-trust-it">Learn When To Trust It</h3>

<p>You should never fully trust AI. Just like you would never fully trust an intern you have only just met.</p>

<p>To begin with, double-check absolutely everything. Over time, you will get a sense of when it is losing its way. You will spot the patterns. You will know when to start a fresh conversation because the current one has gone off the rails.</p>

<p>But even after months of working with it daily, I still check its work. I still challenge it. I still make it <strong>cite sources</strong> and <strong>explain its reasoning</strong>.</p>

<p>The key is that even with all that checking, it is still faster than doing it yourself. Much faster.</p>

<div data-audience="non-subscriber" data-remove="true" class="feature-panel-container">

<aside class="feature-panel" style="">
<div class="feature-panel-left-col">

<div class="feature-panel-description"><p>Meet <strong><a data-instant href="https://www.smashingconf.com/online-workshops/">Smashing Workshops</a></strong> on <strong>front-end, design &amp; UX</strong>, with practical takeaways, live sessions, <strong>video recordings</strong> and a friendly Q&amp;A. With Brad Frost, Stéph Walter and <a href="https://smashingconf.com/online-workshops/workshops">so many others</a>.</p>
<a data-instant href="smashing-workshops" class="btn btn--green btn--large" style="">Jump to the workshops&nbsp;↬</a></div>
</div>
<div class="feature-panel-right-col"><a data-instant href="smashing-workshops" class="feature-panel-image-link">
<div class="feature-panel-image">
<img
    loading="lazy"
    decoding="async"
    class="feature-panel-image-img"
    src="/images/smashing-cat/cat-scubadiving-panel.svg"
    alt="Feature Panel"
    width="257"
    height="355"
/>

</div>
</a>
</div>
</aside>
</div>

<h2 id="using-ai-for-user-research">Using AI For User Research</h2>

<p>This is where AI has genuinely transformed my work. I use it constantly for five main things.</p>

<h3 id="online-research">Online Research</h3>

<p>I love AI for this. I can ask it to go and research a brand online. What people are saying about it, what questions they have, what they like, and what frustrates them. Then do the same for competitors and compare.</p>

<p>This would have taken me days of trawling through social media and review sites. Now it takes minutes.</p>

<p>I recently did this for an e-commerce client. I wanted to understand what annoyed people about the brand and what they loved. I got detailed insights that shaped the entire conversion optimization strategy. All from one prompt.</p>

<h3 id="analyzing-interviews-and-surveys">Analyzing Interviews And Surveys</h3>

<p>I used to avoid open-ended questions in surveys. They were such a pain to review. Now I use them all the time because AI can analyze hundreds of text responses in seconds.</p>

<p>For interviews, I upload the transcripts and ask it to identify recurring themes, questions, and requests. I always get it to quote directly from the transcripts so I can verify it is not making things up.</p>

<p>The quality is good. Really good. As long as you give it <strong>clear instructions</strong> about what you want.</p>

<h3 id="making-sense-of-data">Making Sense Of Data</h3>

<p>I am terrible with spreadsheets. Put me in front of a person and I can understand them. Put me in front of data, and my eyes glaze over.</p>

<p>AI has changed that. I upload spreadsheets to ChatGPT and just ask questions. <em>“What patterns do you see?”</em> <em>“Can you reformat this?”</em> <em>“Show me this data in a different way.”</em></p>

<p><a href="https://clarity.microsoft.com/">Microsoft Clarity</a> now has Copilot built in, so you can ask it questions about your analytics data. <a href="https://www.triplewhale.com/">Triple Whale</a> does the same for e-commerce sites. These tools are game changers if you struggle with data like I do.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/ai-ux-achieve-more-with-less/1-microsoft-clarity.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="465"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/ai-ux-achieve-more-with-less/1-microsoft-clarity.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/ai-ux-achieve-more-with-less/1-microsoft-clarity.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/ai-ux-achieve-more-with-less/1-microsoft-clarity.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/ai-ux-achieve-more-with-less/1-microsoft-clarity.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/ai-ux-achieve-more-with-less/1-microsoft-clarity.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/ai-ux-achieve-more-with-less/1-microsoft-clarity.png"
			
			sizes="100vw"
			alt="Screenshot of the Microsoft Clarity with the built-in Copilot"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
Microsoft Clarity has Copilot built in, making it so much easier to uncover insights. (<a href='https://files.smashing.media/articles/ai-ux-achieve-more-with-less/1-microsoft-clarity.png'>Large preview</a>)
    </figcaption>
  
</figure>

<div class="partners__lead-place"></div>

<h3 id="research-projects">Research Projects</h3>

<p>This is probably my favorite technique. In ChatGPT and Claude, you can create projects. In other tools, they are called spaces. Think of them as self-contained folders where everything you put in is available to every conversation in that project.</p>

<p>When I start working with a new client, I create a project and throw everything in. Old user research. Personas. Survey results. Interview transcripts. Documentation. Background information. Site copy. Anything I can find.</p>

<p>Then I give it custom instructions. Here is one I use for my own business:</p>

<blockquote>Act as a business consultant and marketing strategy expert with good copywriting skills. Your role is to help me define the future of my <a href="https://boagworld.com/l/ux-consultant/">UX consultant business</a> and better articulate it, especially via my website. When I ask for your help, ask questions to improve your answers and challenge my assumptions where appropriate.</blockquote>

<p>I have even uploaded a virtual board of advisors (people I wish I had on my board) and asked AI to research how they think and respond as they would.</p>

<p>Now I have this project that knows everything about my business. I can ask it questions. Get it to review my work. <strong>Challenge my thinking.</strong> It is like having a co-worker who never gets tired and has a perfect memory.</p>

<p>I do this for every client project now. It is invaluable.</p>

<h3 id="creating-personas">Creating Personas</h3>

<p>AI has reinvigorated my interest in personas. I had lost heart in them a bit. They took too long to create, and clients always said they already had marketing personas and did not want to pay to do them again.</p>

<p>Now I can create what I call <a href="https://www.smashingmagazine.com/2025/09/functional-personas-ai-lean-practical-workflow/">functional personas</a>. Personas that are actually useful to people who work in UX. Not marketing fluff about what brands people like, but real information about what questions they have and what tasks they are trying to complete.</p>

<p>I upload all my research to a project and say:</p>

<blockquote>Act as a user researcher. Create a persona for [audience type]. For this persona, research the following information: questions they have, tasks they want to complete, goals, states of mind, influences, and success metrics. It is vital that all six criteria are addressed in depth and with equal vigor.</blockquote>

<p>The output is really good. Detailed. Useful. Based on actual data rather than pulled out of thin air.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/ai-ux-achieve-more-with-less/2-ai-creating-personas.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="2480"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/ai-ux-achieve-more-with-less/2-ai-creating-personas.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/ai-ux-achieve-more-with-less/2-ai-creating-personas.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/ai-ux-achieve-more-with-less/2-ai-creating-personas.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/ai-ux-achieve-more-with-less/2-ai-creating-personas.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/ai-ux-achieve-more-with-less/2-ai-creating-personas.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/ai-ux-achieve-more-with-less/2-ai-creating-personas.png"
			
			sizes="100vw"
			alt="Ai-generated functional persona"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      AI makes creating detailed personas so much faster. (<a href='https://files.smashing.media/articles/ai-ux-achieve-more-with-less/2-ai-creating-personas.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>Here is my challenge to anyone who thinks AI-generated personas are somehow fake. What makes you think your personas are so much better? Every persona is a story of a <strong>hypothetical user</strong>. You make judgment calls when you create personas, too. At least AI can process far more information than you can and is brilliant at pattern recognition.</p>

<blockquote class="pull-quote">
  <p>
    <a class="pull-quote__link" aria-label="Share on Twitter" href="https://twitter.com/share?text=%0aMy%20only%20concern%20is%20that%20relying%20too%20heavily%20on%20AI%20could%20disconnect%20us%20from%20real%20users.%20We%20still%20need%20to%20talk%20to%20people.%20We%20still%20need%20that%20empathy.%20But%20as%20a%20tool%20to%20synthesize%20research%20and%20create%20reference%20points?%20It%20is%20excellent.%0a&url=https://smashingmagazine.com%2f2025%2f10%2fai-ux-achieve-more-with-less%2f">
      
My only concern is that relying too heavily on AI could disconnect us from real users. We still need to talk to people. We still need that empathy. But as a tool to synthesize research and create reference points? It is excellent.

    </a>
  </p>
  <div class="pull-quote__quotation">
    <div class="pull-quote__bg">
      <span class="pull-quote__symbol">“</span></div>
  </div>
</blockquote>

<h2 id="using-ai-for-design-and-development">Using AI For Design And Development</h2>

<p>Let me start with a warning. AI is not production-ready. Not yet. Not for the kind of client work I do, anyway.</p>

<p>Three reasons why:</p>

<ol>
<li>It is slow if you want something specific or complicated.</li>
<li>It can be frustrating because it gets close but not quite there.</li>
<li>The quality is often subpar: unpolished code, questionable design choices, that kind of thing.</li>
</ol>

<p>But that does not mean it is not useful. It absolutely is. Just not for final production work.</p>

<h3 id="functional-prototypes">Functional Prototypes</h3>

<p>If you are not too concerned with matching a specific design, AI can quickly prototype functionality in ways that are hard to match in Figma, because Figma is terrible at prototyping functionality. You cannot even create an active form field in a Figma prototype. Filling in forms is the biggest thing people do online other than clicking links &mdash; and you cannot test it.</p>

<p>Tools like <a href="https://www.relume.io/">Relume</a> and <a href="https://bolt.new/">Bolt</a> can create quick functional mockups that show roughly how things work. They are great for non-designers who just need to throw together a prototype quickly. For designers, they can be useful for showing developers how you want something to work.</p>

<p>But you can spend ages getting them to put a hamburger menu on the right side of the screen. So use them for quick iteration, not pixel-perfect design.</p>

<h3 id="small-coding-tasks">Small Coding Tasks</h3>

<p>I use AI constantly for small, low-risk coding work. I am not a developer anymore. I used to be, back when dinosaurs roamed the earth, but not for years.</p>

<p>AI lets me create the little tools I need. <a href="https://boagworld.com/boagworks/convince-the-boss/">A calculator that works out the ROI of my UX work</a>. An app for running top task analysis. Bits of JavaScript for hiding elements on a page. WordPress plugins for updating dates automatically.</p>
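
<p>To give a sense of scale, the kind of snippet I mean is often just a few lines. Here is a minimal sketch of the &ldquo;hide elements on a page&rdquo; idea, written in TypeScript; the selectors are made-up examples, not from a real project:</p>

<div class="break-out">
<pre><code class="language-typescript">// Hide a few hypothetical page elements without removing them from the DOM.
document.querySelectorAll(".cookie-banner, .newsletter-popup").forEach((el) => {
  (el as HTMLElement).style.display = "none";
});
</code></pre>
</div>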














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/ai-ux-achieve-more-with-less/3-bolt-tool.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="465"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/ai-ux-achieve-more-with-less/3-bolt-tool.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/ai-ux-achieve-more-with-less/3-bolt-tool.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/ai-ux-achieve-more-with-less/3-bolt-tool.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/ai-ux-achieve-more-with-less/3-bolt-tool.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/ai-ux-achieve-more-with-less/3-bolt-tool.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/ai-ux-achieve-more-with-less/3-bolt-tool.png"
			
			sizes="100vw"
			alt="Screenshot of the Bolt tool"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      I find Bolt an incredibly intuitive tool for building quick prototypes for low-risk apps. (<a href='https://files.smashing.media/articles/ai-ux-achieve-more-with-less/3-bolt-tool.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>Just before running my workshop on this topic, I needed a tool to create calendar invites for multiple events. All the online services wanted £16 a month. I asked ChatGPT to build me one. One prompt. It worked. It looked rubbish, but I did not care. It did what I needed.</p>
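
<p>For the curious, a tool like that boils down to surprisingly little code. Here is a hedged sketch of what the core of such a calendar-invite generator might look like, assuming the goal is a single .ics file covering multiple events; the field choices are my assumptions, not what ChatGPT actually produced:</p>

<div class="break-out">
<pre><code class="language-typescript">type CalendarEvent = { title: string; start: Date; end: Date };

// iCalendar timestamps use the form YYYYMMDDTHHMMSSZ (UTC).
function toIcsDate(d: Date): string {
  return d.toISOString().replace(/[-:]/g, "").replace(/\.\d{3}/, "");
}

// Build one .ics file containing a VEVENT per event.
function buildIcs(events: CalendarEvent[]): string {
  const vevents = events.map((e, i) =>
    [
      "BEGIN:VEVENT",
      `UID:event-${i}@example.com`,
      `DTSTAMP:${toIcsDate(new Date())}`,
      `DTSTART:${toIcsDate(e.start)}`,
      `DTEND:${toIcsDate(e.end)}`,
      `SUMMARY:${e.title}`,
      "END:VEVENT",
    ].join("\r\n")
  );
  return [
    "BEGIN:VCALENDAR",
    "VERSION:2.0",
    "PRODID:-//throwaway-tool//EN",
    ...vevents,
    "END:VCALENDAR",
  ].join("\r\n");
}
</code></pre>
</div>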

<p>If you are a developer, you should absolutely be using tools like <a href="https://cursor.com/">Cursor</a> by now. They are invaluable for pair programming with AI. But if you are not a developer, just stick with Claude or Bolt for quick throwaway tools.</p>

<h3 id="reviewing-existing-services">Reviewing Existing Services</h3>

<p>There are some great tools for getting quick feedback on existing websites when budget and time are tight.</p>

<p>If you need to conduct a <a href="https://boagworld.com/l/ux-audit/">UX audit</a>, <a href="https://wevo.ai/takeapulse/">Wevo Pulse</a> is an excellent starting point. It automatically reviews a website based on personas and provides visual attention heatmaps, friction scores, and specific improvement recommendations. It generates insights in minutes rather than days.</p>

<p>Now, let me be clear. This does not replace having an experienced person conduct a proper UX audit. You still need that human expertise to understand context, make judgment calls, and spot issues that AI might miss. But as a starting point to identify obvious problems quickly? It is a great tool. Particularly when budget or time constraints mean a full audit is not on the table.</p>

<p>For e-commerce sites, <a href="https://baymard.com/product/ux-ray">Baymard has UX Ray</a>, which analyzes flaws based on their massive database of user research.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/ai-ux-achieve-more-with-less/4-baymard-ux-ray.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="465"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/ai-ux-achieve-more-with-less/4-baymard-ux-ray.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/ai-ux-achieve-more-with-less/4-baymard-ux-ray.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/ai-ux-achieve-more-with-less/4-baymard-ux-ray.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/ai-ux-achieve-more-with-less/4-baymard-ux-ray.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/ai-ux-achieve-more-with-less/4-baymard-ux-ray.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/ai-ux-achieve-more-with-less/4-baymard-ux-ray.png"
			
			sizes="100vw"
			alt="Screenshot of the Baymard UX-ray"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Baymard UX-ray is an incredibly handy tool for improving the quality of your UX audits. (<a href='https://files.smashing.media/articles/ai-ux-achieve-more-with-less/4-baymard-ux-ray.png'>Large preview</a>)
    </figcaption>
  
</figure>

<h3 id="checking-your-designs">Checking Your Designs</h3>

<p><a href="https://attentioninsight.com/">Attention Insight</a> has taken thousands of hours of eye-tracking studies and trained AI on it to predict where people will look on a page. It has about 90 to 96 percent accuracy.</p>

<p>You upload a screenshot of your design, and it shows you where attention is going. Then you can play around with your imagery and layout to guide attention to the right place.</p>

<p>It is great for dealing with stakeholders who say, <em>“People won’t see that.”</em> You can prove they will. Or equally, when stakeholders try to crowd the interface with too much stuff, you can show them attention shooting everywhere.</p>

<p>I use this constantly. Here is a real example from a pet insurance company. They had photos of a dog, cat, and rabbit for different types of advice. The dog was far from the camera. The cat was looking directly at the camera, pulling all the attention. The rabbit was half off-frame. Most attention went to the cat’s face.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/ai-ux-achieve-more-with-less/5-attention-insight.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="421"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/ai-ux-achieve-more-with-less/5-attention-insight.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/ai-ux-achieve-more-with-less/5-attention-insight.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/ai-ux-achieve-more-with-less/5-attention-insight.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/ai-ux-achieve-more-with-less/5-attention-insight.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/ai-ux-achieve-more-with-less/5-attention-insight.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/ai-ux-achieve-more-with-less/5-attention-insight.png"
			
			sizes="100vw"
			alt="An example from a pet insurance company tested by Attention Insight"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      (<a href='https://files.smashing.media/articles/ai-ux-achieve-more-with-less/5-attention-insight.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>I redesigned it using AI-generated images, where I could control exactly where each animal looked. Dog looking at the camera. Cat looking right. Rabbit looking left. All the attention drawn into the center. Made a massive difference.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/ai-ux-achieve-more-with-less/6-redesigned-ai-version.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="394"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/ai-ux-achieve-more-with-less/6-redesigned-ai-version.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/ai-ux-achieve-more-with-less/6-redesigned-ai-version.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/ai-ux-achieve-more-with-less/6-redesigned-ai-version.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/ai-ux-achieve-more-with-less/6-redesigned-ai-version.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/ai-ux-achieve-more-with-less/6-redesigned-ai-version.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/ai-ux-achieve-more-with-less/6-redesigned-ai-version.png"
			
			sizes="100vw"
			alt="Redesigned version of the previous example with AI-generated images."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      AI can be used to create images that are consistent with a brand identity and are designed to draw attention to specific elements. (<a href='https://files.smashing.media/articles/ai-ux-achieve-more-with-less/6-redesigned-ai-version.png'>Large preview</a>)
    </figcaption>
  
</figure>

<h3 id="creating-the-perfect-image">Creating The Perfect Image</h3>

<p>I use AI all the time for creating images that do a specific job. My preferred tools are <a href="https://www.midjourney.com/">Midjourney</a> and Gemini.</p>

<p>I like Midjourney because, visually, it creates stunning imagery. You can dial in the tone and style you want. The downside is that it is not great at following specific instructions.</p>

<p>So I produce an image in Midjourney that is close, then upload it to Gemini. Gemini is not as good at visual style, but it is much better at following instructions. <em>“Make the guy reach here”</em> or <em>“Add glasses to this person.”</em> I can get pretty much exactly what I want.</p>

<p>The other thing I love about Midjourney is that you can upload a photograph and say, <em>“Replicate this style.”</em> This keeps <strong>consistency</strong> across a website. I have a master image I use as a reference for all my site imagery to keep the style consistent.</p>

<h2 id="using-ai-for-content">Using AI For Content</h2>

<p>Most clients give you terrible copy. Our job is to improve the user experience or conversion rate, and anything we do gets utterly undermined by bad copy.</p>

<p>I have completely stopped asking clients for copy since AI came along. Here is my process.</p>

<h3 id="build-everything-around-questions">Build Everything Around Questions</h3>

<p>Once I have my information architecture, I get AI to generate a massive list of questions users will ask. Then I run a <a href="https://www.smashingmagazine.com/2022/05/top-tasks-focus-what-matters-must-defocus-what-doesnt/">top task analysis</a> where people vote on which questions matter most.</p>

<p>I assign those questions to pages on the site. Every page gets a list of the questions it needs to answer.</p>
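
<p>Conceptually, the artifact this produces is just a mapping from pages to the questions they must answer. A minimal sketch, with hypothetical paths and questions:</p>

<div class="break-out">
<pre><code class="language-typescript">// Hypothetical example: each page lists the top-voted questions it must answer.
const pageQuestions: { [path: string]: string[] } = {
  "/pricing": ["How much does it cost?", "What happens if I cancel?"],
  "/support": ["How do I reach a human?", "How fast will you respond?"],
};
</code></pre>
</div>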

<h3 id="get-bullet-point-answers-from-stakeholders">Get Bullet Point Answers From Stakeholders</h3>

<p>I spin up the content management system with a really basic theme. Just HTML with minimal formatting. I go through every page and assign the questions.</p>

<p>Then I go to my clients and say: <em>“I do not want you to write copy. Just go through every page and bullet point answers to the questions. If the answer exists on the old site, copy and paste some text or link to it. But just bullet points.”</em></p>

<p>That is their job done. Pretty much.</p>

<div class="partners__lead-place"></div>

<h3 id="let-ai-draft-the-copy">Let AI Draft The Copy</h3>

<p>Now I take control. I feed ChatGPT the questions and bullet points and say:</p>

<blockquote>Act as an online copywriter. Write copy for a webpage that answers the question [question]. Use the following bullet points to answer that question: [bullet points]. Use the following guidelines: Aim for a ninth-grade reading level or below. Sentences should be short. Use plain language. Avoid jargon. Refer to the reader as you. Refer to the writer as us. Ensure the tone is friendly, approachable, and reassuring. The goal is to [goal]. Think deeply about your approach. Create a rubric and iterate until the copy is excellent. Only then, output it.</blockquote>

<p>I often upload a full style guide as well, with details about how I want it to be written.</p>

<p>The output is genuinely good. As a first draft, it is excellent. Far better than what most stakeholders would give me.</p>

<h3 id="stakeholders-review-and-provide-feedback">Stakeholders Review And Provide Feedback</h3>

<p>That goes into the website, and stakeholders can comment on it. Once I get their feedback, I take the original copy and all their comments back into ChatGPT and say, <em>“Rewrite using these comments.”</em></p>

<p>Job done.</p>

<p>The great thing about this approach is that even if stakeholders make loads of changes, they are making changes to a good foundation. The overall quality still comes out better than if they started with a blank sheet.</p>

<p>It also makes things go more smoothly because you are not criticizing their content, which puts people on the defensive. They are criticizing AI content.</p>

<h3 id="tools-that-help">Tools That Help</h3>

<p>If your stakeholders are still giving you content, <a href="https://hemingwayapp.com/">Hemingway Editor</a> is brilliant. Copy and paste text in, and it tells you how readable and scannable it is. It highlights long sentences and jargon. You can use this to prove to clients that their content is not good web copy.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/ai-ux-achieve-more-with-less/7-hemingway-editor.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="497"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/ai-ux-achieve-more-with-less/7-hemingway-editor.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/ai-ux-achieve-more-with-less/7-hemingway-editor.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/ai-ux-achieve-more-with-less/7-hemingway-editor.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/ai-ux-achieve-more-with-less/7-hemingway-editor.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/ai-ux-achieve-more-with-less/7-hemingway-editor.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/ai-ux-achieve-more-with-less/7-hemingway-editor.png"
			
			sizes="100vw"
			alt="Screenshot of the Hemingway Editor"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Hemingway Editor is superb at rewriting copy to be more web-friendly. (<a href='https://files.smashing.media/articles/ai-ux-achieve-more-with-less/7-hemingway-editor.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>If you pay for the pro version, you get AI tools that will rewrite the copy to be more readable. It is excellent.</p>

<h2 id="what-this-means-for-you">What This Means for You</h2>

<p>Let me be clear about something. None of this is perfect. AI makes mistakes. It hallucinates. It produces bland output if you do not push it hard enough. It requires constant checking and challenging.</p>

<p>But here is what I know from two years of using this stuff daily. It has made me <strong>faster</strong>. It has made me <strong>better</strong>. It has freed me up to do <strong>more strategic thinking</strong> and <strong>less grunt work</strong>.</p>

<p>A report that would have taken me five days now takes three hours. That is not an exaggeration. That is real.</p>

<p>Overall, AI probably gives me a 25 to 33 percent increase in what I can do. That is significant.</p>

<blockquote class="pull-quote">
  <p>
    <a class="pull-quote__link" aria-label="Share on Twitter" href="https://twitter.com/share?text=%0aYour%20value%20as%20a%20UX%20professional%20lies%20in%20your%20ideas,%20your%20questions,%20and%20your%20thinking.%20Not%20your%20ability%20to%20use%20Figma.%20Not%20your%20ability%20to%20manually%20review%20transcripts.%20Not%20your%20ability%20to%20write%20reports%20from%20scratch.%0a&url=https://smashingmagazine.com%2f2025%2f10%2fai-ux-achieve-more-with-less%2f">
      
Your value as a UX professional lies in your ideas, your questions, and your thinking. Not your ability to use Figma. Not your ability to manually review transcripts. Not your ability to write reports from scratch.

    </a>
  </p>
  <div class="pull-quote__quotation">
    <div class="pull-quote__bg">
      <span class="pull-quote__symbol">“</span></div>
  </div>
</blockquote>

<p>AI cannot innovate. It cannot make creative leaps. It cannot know whether its output is good. It cannot understand what it is like to be human.</p>

<p>That is where you come in. That is where you will always come in.</p>

<p>Start small. Do not try to learn everything at once. Just ask yourself throughout your day: Could I do this with AI? Try it. See what happens. Double-check everything. Learn what works and what does not.</p>

<p>Treat it like an enthusiastic intern with zero life experience. Give it clear instructions. Check its work. Make it try again. Challenge it. Push it further.</p>

<p>And remember, it is not going to take your job. It is going to change it. For the better, I think. As long as we learn to work with it rather than against it.</p>

<div class="signature">
  <img src="https://www.smashingmagazine.com/images/logo/logo--red.png" alt="Smashing Editorial" width="35" height="46" loading="lazy" decoding="async" />
  <span>(yk)</span>
</div>


              </article>
            </body>
          </html>
        ]]></content:encoded></item><item><author>Yegor Gilyov</author><title>Intent Prototyping: A Practical Guide To Building With Clarity (Part 2)</title><link>https://www.smashingmagazine.com/2025/10/intent-prototyping-practical-guide-building-clarity/</link><pubDate>Fri, 03 Oct 2025 10:00:00 +0000</pubDate><guid>https://www.smashingmagazine.com/2025/10/intent-prototyping-practical-guide-building-clarity/</guid><description>Ready to move beyond static mockups? Here is a practical, step-by-step guide to Intent Prototyping &amp;mdash; a disciplined method that uses AI to turn your design intent (UI sketches, conceptual models, and user flows) directly into a live prototype, making it your primary canvas for ideation.</description><content:encoded><![CDATA[
          <html>
            <head>
              <meta charset="utf-8">
              <link rel="canonical" href="https://www.smashingmagazine.com/2025/10/intent-prototyping-practical-guide-building-clarity/" />
              <title>Intent Prototyping: A Practical Guide To Building With Clarity (Part 2)</title>
            </head>
            <body>
              <article>
                <header>
                  <h1>Intent Prototyping: A Practical Guide To Building With Clarity (Part 2)</h1>
                  
                    
                    <address>Yegor Gilyov</address>
                  
                  <time datetime="2025-10-03T10:00:00&#43;00:00" class="op-published">2025-10-03T10:00:00+00:00</time>
                  <time datetime="2025-10-03T10:00:00&#43;00:00" class="op-modified">2025-12-25T10:32:38+00:00</time>
                </header>
                
                

<p>In <strong><a href="https://www.smashingmagazine.com/2025/09/intent-prototyping-pure-vibe-coding-enterprise-ux/">Part 1</a></strong> of this series, we explored the “lopsided horse” problem born from mockup-centric design and demonstrated how the seductive promise of vibe coding often leads to structural flaws. The main question remains:</p>

<blockquote>How might we close the gap between our design intent and a live prototype, so that we can iterate on real functionality from day one, without getting caught in the ambiguity trap?</blockquote>

<p>In other words, we need a way to build prototypes that are both fast to create and founded on a clear, unambiguous blueprint.</p>

<p>The answer is a more disciplined process I call <strong>Intent Prototyping</strong> (kudos to Marco Kotrotsos, who coined <a href="https://kotrotsos.medium.com/intent-oriented-programming-bridging-human-thought-and-ai-machine-execution-3a92373cc1b6">Intent-Oriented Programming</a>). This method embraces the power of AI-assisted coding but rejects ambiguity, putting the designer’s explicit <em>intent</em> at the very center of the process. It receives a holistic expression of <em>intent</em> (sketches for screen layouts, conceptual model description, boxes-and-arrows for user flows) and uses it to generate a live, testable prototype.</p>














<figure class="
  
  
  ">
  
    <a href="https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/1-intent-prototyping.jpg">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="491"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/1-intent-prototyping.jpg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/1-intent-prototyping.jpg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/1-intent-prototyping.jpg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/1-intent-prototyping.jpg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/1-intent-prototyping.jpg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/1-intent-prototyping.jpg"
			
			sizes="100vw"
			alt="Diagram showing sketches, a conceptual model, and user flows as inputs to Intent Prototyping, which outputs a live prototype."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      The Intent Prototyping workflow. (<a href='https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/1-intent-prototyping.jpg'>Large preview</a>)
    </figcaption>
  
</figure>

<p>This method addresses the concerns we’ve discussed in Part 1 in the best way possible:</p>

<ul>
<li><strong>Unlike static mockups,</strong> the prototype is fully interactive and can be easily populated with a large amount of realistic data. This lets us test the system’s underlying logic as well as its surface.</li>
<li><strong>Unlike a vibe-coded prototype</strong>, it is built from a stable, unambiguous specification. This prevents the conceptual model failures and design debt that happen when things are unclear. The engineering team doesn’t need to reverse-engineer a black box or become “code archaeologists” to guess at the designer’s vision, as they receive not only a live prototype but also a clearly documented design intent behind it.</li>
</ul>

<p>This combination makes the method especially suited for designing complex enterprise applications. It allows us to test the system’s most critical point of failure, its underlying structure, at a speed and flexibility that was previously impossible. Furthermore, the process is built for iteration. You can explore as many directions as you want simply by changing the intent and evolving the design based on what you learn from user testing.</p>

<div data-audience="non-subscriber" data-remove="true" class="feature-panel-container">

<aside class="feature-panel" style="">
<div class="feature-panel-left-col">

<div class="feature-panel-description"><p>Meet <strong><a data-instant href="https://www.smashingconf.com/online-workshops/">Smashing Workshops</a></strong> on <strong>front-end, design &amp; UX</strong>, with practical takeaways, live sessions, <strong>video recordings</strong> and a friendly Q&amp;A. With Brad Frost, Stéph Walter and <a href="https://smashingconf.com/online-workshops/workshops">so many others</a>.</p>
<a data-instant href="smashing-workshops" class="btn btn--green btn--large" style="">Jump to the workshops&nbsp;↬</a></div>
</div>
<div class="feature-panel-right-col"><a data-instant href="smashing-workshops" class="feature-panel-image-link">
<div class="feature-panel-image">
<img
    loading="lazy"
    decoding="async"
    class="feature-panel-image-img"
    src="/images/smashing-cat/cat-scubadiving-panel.svg"
    alt="Feature Panel"
    width="257"
    height="355"
/>

</div>
</a>
</div>
</aside>
</div>

<h2 id="my-workflow">My Workflow</h2>

<p>To illustrate this process in action, let’s walk through a case study. It’s the very same example I’ve used to illustrate the vibe coding trap: a simple tool to track tests to validate product ideas. You can find the complete project, including all the source code and documentation files discussed below, in this <a href="https://github.com/YegorGilyov/reality-check">GitHub repository</a>.</p>

<h3 id="step-1-expressing-an-intent">Step 1: Expressing An Intent</h3>

<p>Imagine we’ve already done proper research and, having mused on the defined problem, I’m beginning to form a vague idea of what the solution might look like. I need to capture this idea immediately, so I quickly sketch it out:</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/2-low-fidelity-sketch-initial-idea.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="583"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/2-low-fidelity-sketch-initial-idea.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/2-low-fidelity-sketch-initial-idea.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/2-low-fidelity-sketch-initial-idea.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/2-low-fidelity-sketch-initial-idea.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/2-low-fidelity-sketch-initial-idea.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/2-low-fidelity-sketch-initial-idea.png"
			
			sizes="100vw"
			alt="A rough sketch of screens to manage product ideas and reality checks."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      A low-fidelity sketch of the initial idea. (<a href='https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/2-low-fidelity-sketch-initial-idea.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>In this example, I used Excalidraw, but the tool doesn’t really matter. Note that we deliberately keep it rough, as visual details are not something we need to focus on at this stage. And we are not going to get stuck here: we want to make a leap from this initial sketch directly to a live prototype that we can put in front of potential users. Polishing those sketches would not bring us any closer to achieving our goal.</p>

<p>What we need to move forward is to add just enough detail to those sketches that they can serve as sufficient input for a junior frontend developer (or, in our case, an AI assistant). This requires explaining the following:</p>

<ul>
<li>Navigational paths (clicking here takes you to).</li>
<li>Interaction details that can’t be shown in a static picture (e.g., non-scrollable areas, adaptive layout, drag-and-drop behavior).</li>
<li>What parts might make sense to build as reusable components.</li>
<li>Which components from the design system (I’m using <a href="https://ant.design/">Ant Design Library</a>) should be used.</li>
<li>Any other comments that help understand how this thing should work (while sketches illustrate how it should look).</li>
</ul>

<p>Having added all those details, we end up with such an annotated sketch:</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/3-sketch-annotated-details.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="399"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/3-sketch-annotated-details.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/3-sketch-annotated-details.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/3-sketch-annotated-details.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/3-sketch-annotated-details.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/3-sketch-annotated-details.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/3-sketch-annotated-details.png"
			
			sizes="100vw"
			alt="The initial sketch with annotations specifying components, navigation, and interaction details."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      The sketch annotated with details. (<a href='https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/3-sketch-annotated-details.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>As you see, this sketch covers both the Visualization and Flow aspects. You may ask, what about the Conceptual Model? Without that part, the expression of our <em>intent</em> will not be complete. One way would be to add it somewhere in the margins of the sketch (for example, as a UML Class Diagram), and I would do so in the case of a more complex application, where the model cannot be simply derived from the UI. But in our case, we can save effort and ask an LLM to generate a comprehensive description of the conceptual model based on the sketch.</p>

<p>For tasks of this sort, the LLM of my choice is Gemini 2.5 Pro. What is important is that this is a multimodal model that can accept not only text but also images as input (GPT-5 and Claude-4 also fit those criteria). I use Google AI Studio, as it gives me enough control and visibility into what’s happening:</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/4-google-ai-studio.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="579"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/4-google-ai-studio.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/4-google-ai-studio.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/4-google-ai-studio.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/4-google-ai-studio.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/4-google-ai-studio.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/4-google-ai-studio.png"
			
			sizes="100vw"
			alt="Screenshot of Google AI Studio with an annotated sketch as input."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Generating a conceptual model from the sketch using Google AI Studio. (<a href='https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/4-google-ai-studio.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p><strong>Note</strong>: <em>All the prompts that I use here and below can be found in the <a href="#appendices">Appendices</a>. The prompts are not custom-tailored to any particular project; they are supposed to be reused as they are.</em></p>

<p>As a result, Gemini gives us a description and the following diagram:</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/5-uml-class.jpg">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="480"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/5-uml-class.jpg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/5-uml-class.jpg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/5-uml-class.jpg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/5-uml-class.jpg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/5-uml-class.jpg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/5-uml-class.jpg"
			
			sizes="100vw"
			alt="UML class diagram showing two connected entities: “ProductIdea” and “RealityCheck”."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      UML class diagram. (<a href='https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/5-uml-class.jpg'>Large preview</a>)
    </figcaption>
  
</figure>

<p>The diagram might look technical, but I believe that a clear understanding of all objects, their attributes, and relationships between them is key to good design. That’s why I consider the Conceptual Model to be an essential part of expressing <em>intent</em>, along with the Flow and Visualization.</p>
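
<p>To make this concrete, here is a rough sketch of how such a model could be expressed in code. The two entities and their one-to-many relationship come straight from the diagram; the specific attributes are illustrative assumptions, and the generated <code>Model.md</code> remains the source of truth:</p>

<div class="break-out">
<pre><code class="language-typescript">// A sketch of the conceptual model as TypeScript types (attributes assumed).
interface ProductIdea {
  id: string;
  title: string;
  description: string;
}

interface RealityCheck {
  id: string;
  productIdeaId: string; // each reality check validates exactly one idea
  hypothesis: string;
  status: "planned" | "running" | "validated" | "invalidated";
}
</code></pre>
</div>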

<p>As a result of this step, our <em>intent</em> is fully expressed in two files: <code>Sketch.png</code> and <code>Model.md</code>. This will be our durable source of truth.</p>

<h3 id="step-2-preparing-a-spec-and-a-plan">Step 2: Preparing A Spec And A Plan</h3>

<p>The purpose of this step is to create a comprehensive technical specification and a step-by-step plan. Most of the work here is done by AI; you just need to keep an eye on it.</p>

<p>I separate the Data Access Layer and the UI layer, and create specifications for them using two different prompts (see <a href="#appendices">Appendices 2 and 3</a>). The output of the first prompt (the Data Access Layer spec) serves as an input for the second one. Note that, as an additional input, we provide guidelines tailored for prototyping needs (see <a href="#appendices">Appendices 8, 9, and 10</a>). They are not specific to this project. The technical approach encoded in those guidelines is out of the scope of this article.</p>

<p>As a result, Gemini provides us with content for <code>DAL.md</code> and <code>UI.md</code>. Although this result is usually reliable, you might still want to scrutinize the output. You don’t need to be a real programmer to make sense of it, but some level of programming literacy would be really helpful. However, even if you don’t have such skills, don’t get discouraged. The good news is that if you don’t understand something, you always know who to ask. Do it in Google AI Studio before refreshing the context window. If you believe you’ve spotted a problem, let Gemini know, and it will either fix it or explain why the suggested approach is actually better.</p>
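
<p>To give you a feel for what you are reviewing, here is a hypothetical fragment of the kind of contract a Data Access Layer spec might describe, reusing the <code>RealityCheck</code> entity sketched in Step 1. The function names and shapes are my assumptions; the generated <code>DAL.md</code> is authoritative:</p>

<div class="break-out">
<pre><code class="language-typescript">// A hypothetical CRUD-style contract for one entity of the prototype.
interface RealityCheckDal {
  list(productIdeaId: string): Promise<RealityCheck[]>;
  create(input: Omit<RealityCheck, "id">): Promise<RealityCheck>;
  update(id: string, patch: Partial<RealityCheck>): Promise<RealityCheck>;
  remove(id: string): Promise<void>;
}
</code></pre>
</div>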

<p>It’s important to remember that by their nature, <strong>LLMs are not deterministic</strong> and, to put it simply, can be forgetful about small details, especially those in sketches. Fortunately, you don’t have to be an expert to notice that the “Delete” button, which is in the upper right corner of the sketch, is not mentioned in the spec.</p>

<p>Don’t get me wrong: Gemini does a stellar job most of the time, but there are still times when it slips up. Just let it know about the problems you’ve spotted, and everything will be fixed.</p>

<p>Once we have <code>Sketch.png</code>, <code>Model.md</code>, <code>DAL.md</code>, and <code>UI.md</code>, and we have reviewed the specs, we can grab a coffee. We deserve it: our technical design documentation is complete. It will serve as a stable foundation for building the actual thing without deviating from our original intent, ensuring that all components fit together and all layers are stacked correctly.</p>

<p>One last thing we can do before moving on to the next steps is to prepare a step-by-step plan. We split that plan into two parts: one for the Data Access Layer and another for the UI. You can find prompts I use to create such a plan in <a href="#appendices">Appendices 4 and 5</a>.</p>

<h3 id="step-3-executing-the-plan">Step 3: Executing The Plan</h3>

<p>To start building the actual thing, we need to switch to another category of AI tools. Up until this point, we have relied on Generative AI. It excels at creating new content (in our case, specifications and plans) based on a single prompt. I’m using Google Gemini 2.5 Pro in Google AI Studio, but other similar tools may also fit such one-off tasks: ChatGPT, Claude, Grok, and DeepSeek.</p>

<p>However, at this step, this wouldn’t be enough. Building a prototype based on specs and according to a plan requires an AI that can read context from multiple files, execute a sequence of tasks, and maintain coherence. A simple generative AI can’t do this. It would be like asking a person to build a house by only ever showing them a single brick. What we need is an agentic AI that can be given the full house blueprint and a project plan, and then get to work building the foundation, framing the walls, and adding the roof in the correct sequence.</p>

<p>My coding agent of choice is Google Gemini CLI, simply because Gemini 2.5 Pro serves me well, and I don’t think we need a middleman like Cursor or Windsurf (which would use Claude, Gemini, or GPT under the hood anyway). If I used Claude, my choice would be Claude Code, but since I’m sticking with Gemini, Gemini CLI it is. But if you prefer Cursor or Windsurf, I believe you can apply the same process with your favorite tool.</p>

<p>Before tasking the agent, we need to create a basic template for our React application. I won’t go into this here. You can find plenty of tutorials on how to scaffold an empty React project using Vite.</p>

<p>Then we put all our files into that project:</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/6-project-structure-design-intent-spec-files.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="666"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/6-project-structure-design-intent-spec-files.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/6-project-structure-design-intent-spec-files.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/6-project-structure-design-intent-spec-files.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/6-project-structure-design-intent-spec-files.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/6-project-structure-design-intent-spec-files.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/6-project-structure-design-intent-spec-files.png"
			
			sizes="100vw"
			alt="A file directory showing the docs folder containing DAL.md, Model.md, Sketch.png, and UI.md."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Project structure with design intent and spec files. (<a href='https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/6-project-structure-design-intent-spec-files.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>Once the basic template with all our files is ready, we open Terminal, go to the folder where our project resides, and type “gemini”:</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/7-gemini.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="419"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/7-gemini.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/7-gemini.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/7-gemini.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/7-gemini.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/7-gemini.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/7-gemini.png"
			
			sizes="100vw"
			alt="Screenshot of a terminal showing the Gemini CLI."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Gemini CLI. (<a href='https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/7-gemini.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>And we send the prompt to build the Data Access Layer (see <a href="#appendices">Appendix 6</a>). That prompt implies step-by-step execution, so upon completion of each step, I send the following:</p>

<div class="break-out">
<pre><code class="language-markdown">Thank you! Now, please move to the next task.
Remember that you must not make assumptions based on common patterns; always verify them with the actual data from the spec. 
After each task, stop so that I can test it. Don’t move to the next task before I tell you to do so.
</code></pre>
</div>

<p>As the last task in the plan, the agent builds a special page where we can exercise all the capabilities of our Data Access Layer and test them manually. It may look like this:</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/8-ai-generated-test-page-data-access-layer.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="572"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/8-ai-generated-test-page-data-access-layer.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/8-ai-generated-test-page-data-access-layer.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/8-ai-generated-test-page-data-access-layer.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/8-ai-generated-test-page-data-access-layer.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/8-ai-generated-test-page-data-access-layer.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/8-ai-generated-test-page-data-access-layer.png"
			
			sizes="100vw"
			alt="A basic webpage with forms and buttons to test the Data Access Layer’s CRUD functions."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      The AI-generated test page for the Data Access Layer. (<a href='https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/8-ai-generated-test-page-data-access-layer.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>It doesn’t look fancy, to say the least, but it allows us to ensure that the Data Access Layer works correctly before we proceed with building the final UI.</p>

<p>And finally, we clear the Gemini CLI context window to give it more headspace and send the prompt to build the UI (see <a href="#appendices">Appendix 7</a>). This prompt also implies step-by-step execution. Upon completion of each step, we test how it works and how it looks, following the “Manual Testing Plan” from <code>UI-plan.md</code>. Despite the sketch being uploaded to the model context, and Gemini generally trying to follow it, attention to visual detail is not one of its strengths (yet). Usually, a few additional nudges are needed at each step to improve the look and feel:</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/9-refined-ai-generated-ui.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="320"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/9-refined-ai-generated-ui.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/9-refined-ai-generated-ui.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/9-refined-ai-generated-ui.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/9-refined-ai-generated-ui.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/9-refined-ai-generated-ui.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/9-refined-ai-generated-ui.png"
			
			sizes="100vw"
			alt="A before-and-after comparison showing the UI&#39;s visual improvement."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Refining the AI-generated UI to match the sketch. (<a href='https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/9-refined-ai-generated-ui.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>Once I’m happy with the result of a step, I ask Gemini to move on:</p>

<div class="break-out">
<pre><code class="language-markdown">Thank you! Now, please move to the next task.
Make sure you build the UI according to the sketch; this is very important. Remember that you must not make assumptions based on common patterns; always verify them with the actual data from the spec and the sketch.  
After each task, stop so that I can test it. Don’t move to the next task before I tell you to do so.
</code></pre>
</div>

<p>Before long, the result looks like this, and in every detail it works exactly as we <em>intended</em>:</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/10-final-interactive-prototype.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="486"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/10-final-interactive-prototype.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/10-final-interactive-prototype.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/10-final-interactive-prototype.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/10-final-interactive-prototype.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/10-final-interactive-prototype.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/10-final-interactive-prototype.png"
			
			sizes="100vw"
			alt="Screenshots of the final, polished application UI."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      The final interactive prototype. (<a href='https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/10-final-interactive-prototype.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>The prototype is up and running and looking nice. Does that mean our work is done? Surely not: the most fascinating part is just beginning.</p>

<div class="partners__lead-place"></div>

<h3 id="step-4-learning-and-iterating">Step 4: Learning And Iterating</h3>

<p>It’s time to put the prototype in front of potential users and learn whether this solution actually relieves their pain.</p>

<p>And as soon as we learn something new, we iterate: we adjust or extend the sketches and the conceptual model based on that new input, update the specifications, create plans to make the changes required by the new specifications, and execute those plans. In other words, for every iteration, we repeat the steps I’ve just walked you through.</p>

<h3 id="is-this-workflow-too-heavy">Is This Workflow Too Heavy?</h3>

<p>This four-step workflow may give the impression of a heavy process that demands too much upfront thinking and leaves little room for creativity. But before jumping to that conclusion, consider the following:</p>

<ul>
<li>In practice, only the first step requires real effort, along with the learning in the last step. AI does most of the work in between; you just need to keep an eye on it.</li>
<li>Individual iterations don’t need to be big. You can start with a <a href="https://wiki.c2.com/?WalkingSkeleton">Walking Skeleton</a>: the bare minimum implementation of the thing you have in mind, and add more substance in subsequent iterations. You are welcome to change your mind about the overall direction in between iterations.</li>
<li>And last but not least, maybe the idea of “think before you do” is not something you need to run away from. A clear and unambiguous statement of intent can prevent many unnecessary mistakes and save a lot of effort down the road.</li>
</ul>

<h2 id="intent-prototyping-vs-other-methods">Intent Prototyping Vs. Other Methods</h2>

<p>No method fits all situations, and Intent Prototyping is no exception. Like any specialized tool, it has a specific purpose. The most effective teams are not those who master a single method, but those who understand which approach mitigates the most significant risk at each stage. The table below makes this choice clearer: it puts Intent Prototyping next to other common methods and tools and explains each in terms of the primary goal it helps achieve and the specific risks it is best suited to mitigate.</p>

<table class="tablesaw break-out" style="grid-column: 3 / 18; font-size: 13pt;">
    <thead>
        <tr>
            <th>Method/Tool</th>
            <th>Goal</th>
            <th>Risks it is best suited to mitigate</th>
            <th width="300">Examples</th>
            <th>Why</th>
        </tr>
    </thead>
    <tbody>
        <tr>
            <td>Intent Prototyping</td>
            <td>To rapidly iterate on the fundamental architecture of a data-heavy application with a complex conceptual model, sophisticated business logic, and non-linear user flows.</td>
            <td>Building a system with a flawed or incoherent conceptual model, leading to critical bugs and costly refactoring.</td>
            <td><ul><li>A CRM (Customer Relationship Management system).</li><li>A Resource Management Tool.</li><li>A No-Code Integration Platform (admin’s UI).</li></ul></td>
            <td>It enforces conceptual clarity. This not only de-risks the core structure but also produces a clear, documented blueprint that serves as a superior specification for the engineering handoff.</td>
        </tr>
        <tr>
            <td>Vibe Coding (Conversational)</td>
            <td>To rapidly explore interactive ideas through improvisation.</td>
            <td>Losing momentum because of analysis paralysis.</td>
            <td><ul><li>An interactive data table with live sorting/filtering.</li><li>A novel navigation concept.</li><li>A proof-of-concept for a single, complex component.</li></ul></td>
            <td>It has the smallest loop between an idea conveyed in natural language and an interactive outcome.</td>
        </tr>
        <tr>
            <td>Axure</td>
            <td>To test complicated conditional logic within a specific user journey, without having to worry about how the whole system works.</td>
            <td>Designing flows that break when users don’t follow the “happy path.”</td>
            <td><ul><li>A multi-step e-commerce checkout.</li><li>A software configuration wizard.</li><li>A dynamic form with dependent fields.</li></ul></td>
            <td>It’s made to create complex <code>if-then</code> logic and manage variables visually. This lets you test complicated paths and edge cases in a user journey without writing any code.</td>
        </tr>
        <tr>
            <td>Figma</td>
            <td>To make sure that the user interface looks good, aligns with the brand, and has a clear information architecture.</td>
<td>Making a product that looks bad, doesn’t fit the brand, or has a layout that is hard to understand.</td>
            <td><ul><li>A marketing landing page.</li><li>A user onboarding flow.</li><li>Presenting a new visual identity.</li></ul></td>
            <td>It excels at high-fidelity visual design and provides simple, fast tools for linking static screens.</td>
        </tr>
        <tr>
            <td>ProtoPie, Framer</td>
            <td>To make high-fidelity micro-interactions feel just right.</td>
            <td>Shipping an application that feels cumbersome and unpleasant to use because of poorly executed interactions.</td>
            <td><ul><li>A custom pull-to-refresh animation.</li><li>A fluid drag-and-drop interface.</li><li>An animated chart or data visualization.</li></ul></td>
            <td>These tools let you manipulate animation timelines, physics, and device sensor inputs in great detail. Designers can carefully work on and test the small things that make an interface feel really polished and fun to use.</td>
        </tr>
        <tr>
            <td>Low-code / No-code Tools (e.g., Bubble, Retool)</td>
            <td>To create a working, data-driven app as quickly as possible.</td>
            <td>The application will never be built because traditional development is too expensive.</td>
            <td><ul><li>An internal inventory tracker.</li><li>A customer support dashboard.</li><li>A simple directory website.</li></ul></td>
            <td>They put a UI builder, a database, and hosting all in one place. The goal is not merely to make a prototype of an idea, but to make and release an actual, working product. This is the last step for many internal tools or MVPs.</td>
        </tr>
    </tbody>
</table>


<p>The key takeaway is that each method is a <strong>specialized tool for mitigating a specific type of risk</strong>. For example, Figma de-risks the visual presentation. ProtoPie de-risks the feel of an interaction. Intent Prototyping is in a unique position to tackle the most foundational risk in complex applications: building on a flawed or incoherent conceptual model.</p>

<div class="partners__lead-place"></div>

<h2 id="bringing-it-all-together">Bringing It All Together</h2>

<p>The era of the “lopsided horse” design, sleek on the surface but structurally unsound, is a direct result of the trade-off between fidelity and flexibility. This trade-off has led to a process filled with redundant effort and misplaced focus. Intent Prototyping, powered by modern AI, eliminates that conflict. It’s not just a shortcut to building faster &mdash; it’s a <strong>fundamental shift in how we design</strong>. By putting a clear, unambiguous <em>intent</em> at the heart of the process, it lets us get rid of the redundant work and focus on architecting a sound and robust system.</p>

<p>There are three major benefits to this renewed focus. First, by going straight to live, interactive prototypes, we shift our validation efforts from the surface to the deep, testing the system’s actual logic with users from day one. Second, the very act of documenting the design <em>intent</em> makes us clear about our ideas, ensuring that we fully understand the system’s underlying logic. Finally, this documented <em>intent</em> becomes a durable source of truth, eliminating the ambiguous handoffs and the redundant, error-prone work of having engineers reverse-engineer a designer’s vision from a black box.</p>

<p>Ultimately, Intent Prototyping changes the object of our work. It allows us to move beyond creating <strong>pictures of a product</strong> and empowers us to become architects of <strong>blueprints for a system</strong>. With the help of AI, we can finally make the live prototype the primary canvas for ideation, not just a high-effort afterthought.</p>

<h3 id="appendices">Appendices</h3>

<p>You can find the full <strong>Intent Prototyping Starter Kit</strong>, which includes all those prompts and guidelines, as well as the example from this article and a minimal boilerplate project, in this <a href="https://github.com/YegorGilyov/intent-prototyping-starter-kit">GitHub repository</a>.</p>

<div class="js-table-accordion accordion book__toc" id="TOC" aria-multiselectable="true">
    <dl class="accordion-list" style="margin-bottom: 1em" data-handler="Accordion">
          <dt tabindex="0" class="accordion-item" id="accordion-item-0" aria-expanded="false">
              <div class="book__toc__accordion-text">
                <div class="book__toc__chapter-col chapter__title">
                  Appendix 1: Sketch to UML Class Diagram
                </div>
              </div>
              <div class="accordion-expand-btn-wrapper">
                  <span class="accordion-expand-btn js-accordion-expand-btn">+</span>
              </div>
          </dt>
          <dd style="max-height: none;" class="accordion-desc" id="accordion-desc-0" aria-hidden="true">
              <div class="book__toc__chapter-col chapter__summary">
                <p><div class="break-out">
<pre><code class="language-markdown">You are an expert Senior Software Architect specializing in Domain-Driven Design. You are tasked with defining a conceptual model for an app based on information from a UI sketch.

&#35;&#35; Workflow

Follow these steps precisely:

&#42;&#42;Step 1:&#42;&#42; Analyze the sketch carefully. There should be no ambiguity about what we are building.

&#42;&#42;Step 2:&#42;&#42; Generate the conceptual model description in the Mermaid format using a UML class diagram.

&#35;&#35; Ground Rules

- Every entity must have the following attributes:
    - `id` (string)
    - `createdAt` (string, ISO 8601 format)
    - `updatedAt` (string, ISO 8601 format)
- Include all attributes shown in the UI: If a piece of data is visually represented as a field for an entity, include it in the model, even if it's calculated from other attributes.
- Do not add any speculative entities, attributes, or relationships ("just in case"). The model should serve the current sketch's requirements only. 
- Pay special attention to cardinality definitions (e.g., if a relationship is optional on both sides, it cannot be `"1" -- "0..*"`, it must be `"0..1" -- "0..*"`).
- Use only valid syntax in the Mermaid diagram.
- Do not include enumerations in the Mermaid diagram.
- Add comments explaining the purpose of every entity, attribute, and relationship, and their expected behavior (not as a part of the diagram, in the Markdown file).

&#35;&#35; Naming Conventions

- Names should reveal intent and purpose.
- Use PascalCase for entity names.
- Use camelCase for attributes and relationships.
- Use descriptive variable names with auxiliary verbs (e.g., isLoading, hasError).

&#35;&#35; Final Instructions

- &#42;&#42;No Assumptions:&#42;&#42; Base every detail on visual evidence in the sketch, not on common design patterns. 
- &#42;&#42;Double-Check:&#42;&#42; After composing the entire document, read through it to ensure the hierarchy is logical, the descriptions are unambiguous, and the formatting is consistent. The final document should be a self-contained, comprehensive specification. 
- &#42;&#42;Do not add redundant empty lines between items.&#42;&#42; 

Your final output should be the complete, raw markdown content for `Model.md`.
</code></pre>
</div>
</p>
             </div>
         </dd>
          <dt tabindex="0" class="accordion-item" id="accordion-item-1" aria-expanded="false">
              <div class="book__toc__accordion-text">
                <div class="book__toc__chapter-col chapter__title">
                  Appendix 2: Sketch to DAL Spec
                </div>
              </div>
              <div class="accordion-expand-btn-wrapper">
                  <span class="accordion-expand-btn js-accordion-expand-btn">+</span>
              </div>
          </dt>
          <dd style="max-height: none;" class="accordion-desc" id="accordion-desc-1" aria-hidden="true">
              <div class="book__toc__chapter-col chapter__summary">
                <p><div class="break-out">
<pre><code class="language-markdown">You are an expert Senior Frontend Developer specializing in React, TypeScript, and Zustand. You are tasked with creating a comprehensive technical specification for the development team in a structured markdown document, based on a UI sketch and a conceptual model description. 

&#35;&#35; Workflow

Follow these steps precisely:

&#42;&#42;Step 1:&#42;&#42; Analyze the documentation carefully:

- `Model.md`: the conceptual model
- `Sketch.png`: the UI sketch

There should be no ambiguity about what we are building.

&#42;&#42;Step 2:&#42;&#42; Check out the guidelines:

- `TS-guidelines.md`: TypeScript Best Practices
- `React-guidelines.md`: React Best Practices
- `Zustand-guidelines.md`: Zustand Best Practices

&#42;&#42;Step 3:&#42;&#42; Create a Markdown specification for the stores and entity-specific hook that implements all the logic and provides all required operations.

---

&#35;&#35; Markdown Output Structure

Use this template for the entire document.

```markdown

&#35; Data Access Layer Specification

This document outlines the specification for the data access layer of the application, following the principles defined in `docs/guidelines/Zustand-guidelines.md`.

&#35;&#35; 1. Type Definitions

Location: `src/types/entities.ts`

&#35;&#35;&#35; 1.1. `BaseEntity`

A shared interface that all entities should extend.

[TypeScript interface definition]

&#35;&#35;&#35; 1.2. `[Entity Name]`

The interface for the [Entity Name] entity.

[TypeScript interface definition]

&#35;&#35; 2. Zustand Stores

&#35;&#35;&#35; 2.1. Store for `[Entity Name]`

&#42;&#42;Location:&#42;&#42; `src/stores/[Entity Name (plural)].ts`

The Zustand store will manage the state of all [Entity Name] items.

&#42;&#42;Store State (`[Entity Name]State`):&#42;&#42;

[TypeScript interface definition]

&#42;&#42;Store Implementation (`use[Entity Name]Store`):&#42;&#42;

- The store will be created using `create&lt;[Entity Name]State&gt;()(...)`.
- It will use the `persist` middleware from `zustand/middleware` to save state to `localStorage`. The persistence key will be `[entity-storage-key]`.
- `[Entity Name (plural, camelCase)]` will be a dictionary (`Record&lt;string, [Entity]&gt;`) for O(1) access.

&#42;&#42;Actions:&#42;&#42;

- &#42;&#42;`add[Entity Name]`&#42;&#42;:  
    [Define the operation behavior based on entity requirements]
- &#42;&#42;`update[Entity Name]`&#42;&#42;:  
    [Define the operation behavior based on entity requirements]
- &#42;&#42;`remove[Entity Name]`&#42;&#42;:  
    [Define the operation behavior based on entity requirements]
- &#42;&#42;`doSomethingElseWith[Entity Name]`&#42;&#42;:  
    [Define the operation behavior based on entity requirements]
    
&#35;&#35; 3. Custom Hooks

&#35;&#35;&#35; 3.1. `use[Entity Name (plural)]`

&#42;&#42;Location:&#42;&#42; `src/hooks/use[Entity Name (plural)].ts`

The hook will be the primary interface for UI components to interact with [Entity Name] data.

&#42;&#42;Hook Return Value:&#42;&#42;

[TypeScript interface definition]

&#42;&#42;Hook Implementation:&#42;&#42;

[List all properties and methods returned by this hook, and briefly explain the logic behind them, including data transformations, memoization. Do not write the actual code here.]

```

--- 

&#35;&#35; Final Instructions

- &#42;&#42;No Assumptions:&#42;&#42; Base every detail in the specification on the conceptual model or visual evidence in the sketch, not on common design patterns. 
- &#42;&#42;Double-Check:&#42;&#42; After composing the entire document, read through it to ensure the hierarchy is logical, the descriptions are unambiguous, and the formatting is consistent. The final document should be a self-contained, comprehensive specification. 
- &#42;&#42;Do not add redundant empty lines between items.&#42;&#42; 

Your final output should be the complete, raw markdown content for `DAL.md`.
</code></pre>
</div>
</p>
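<p>To illustrate the template above, here is a sketch of what the filled-in type and store sections of <code>DAL.md</code> might contain. The <code>Task</code> entity is hypothetical; a real spec would derive its entities from your own model and sketch.</p>

<div class="break-out">
<pre><code class="language-typescript">// Hypothetical excerpts from a generated DAL spec for a `Task` entity.
import type { BaseEntity } from '../types/entities';

// 1.2. Task: the entity interface, extending the shared BaseEntity.
export interface Task extends BaseEntity {
  title: string;
  isCompleted: boolean;
}

// 2.1. Store state: a dictionary keyed by id for O(1) access.
export interface TaskState {
  tasks: Record&lt;string, Task&gt;;
  addTask: (params: { title: string }) =&gt; string; // returns the generated id
  updateTask: (params: { id: string; title?: string; isCompleted?: boolean }) =&gt; void;
  removeTask: (params: { id: string }) =&gt; void;
}
</code></pre>
</div>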
             </div>
         </dd>
          <dt tabindex="0" class="accordion-item" id="accordion-item-2" aria-expanded="false">
              <div class="book__toc__accordion-text">
                <div class="book__toc__chapter-col chapter__title">
                  Appendix 3: Sketch to UI Spec
                </div>
              </div>
              <div class="accordion-expand-btn-wrapper">
                  <span class="accordion-expand-btn js-accordion-expand-btn">+</span>
              </div>
          </dt>
          <dd style="max-height: none;" class="accordion-desc" id="accordion-desc-2" aria-hidden="true">
              <div class="book__toc__chapter-col chapter__summary">
                <p><div class="break-out">
<pre><code class="language-markdown">You are an expert Senior Frontend Developer specializing in React, TypeScript, and the Ant Design library. You are tasked with creating a comprehensive technical specification by translating a UI sketch into a structured markdown document for the development team.

&#35;&#35; Workflow

Follow these steps precisely:

&#42;&#42;Step 1:&#42;&#42; Analyze the documentation carefully: 

- `Sketch.png`: the UI sketch
  - Note that red lines, red arrows, and red text within the sketch are annotations for you and should not be part of the final UI design. They provide hints and clarification. Never translate them to UI elements directly.
- `Model.md`: the conceptual model
- `DAL.md`: the Data Access Layer spec

There should be no ambiguity about what we are building.

&#42;&#42;Step 2:&#42;&#42; Check out the guidelines:

- `TS-guidelines.md`: TypeScript Best Practices
- `React-guidelines.md`: React Best Practices

&#42;&#42;Step 3:&#42;&#42; Generate the complete markdown content for a new file, `UI.md`.

---

&#35;&#35; Markdown Output Structure

Use this template for the entire document.

```markdown

&#35; UI Layer Specification

This document specifies the UI layer of the application, breaking it down into pages and reusable components based on the provided sketches. All components will adhere to Ant Design's principles and utilize the data access patterns defined in `docs/guidelines/Zustand-guidelines.md`.

&#35;&#35; 1. High-Level Structure

The application is a single-page application (SPA). It will be composed of a main layout, one primary page, and several reusable components. 

&#35;&#35;&#35; 1.1. `App` Component

The root component that sets up routing and global providers.

-   &#42;&#42;Location&#42;&#42;: `src/App.tsx`
-   &#42;&#42;Purpose&#42;&#42;: To provide global context, including Ant Design's `ConfigProvider` and `App` contexts for message notifications, and to render the main page.
-   &#42;&#42;Composition&#42;&#42;:
  -   Wraps the application with `ConfigProvider` and `App as AntApp` from 'antd' to enable global message notifications as per `simple-ice/antd-messages.mdc`.
  -   Renders `[Page Name]`.

&#35;&#35; 2. Pages

&#35;&#35;&#35; 2.1. `[Page Name]`

-   &#42;&#42;Location:&#42;&#42; `src/pages/PageName.tsx`
-   &#42;&#42;Purpose:&#42;&#42; [Briefly describe the main goal and function of this page]
-   &#42;&#42;Data Access:&#42;&#42;
  [List the specific hooks and functions this component uses to fetch or manage its data]
-   &#42;&#42;Internal State:&#42;&#42;
    [Describe any state managed internally by this page using `useState`]
-   &#42;&#42;Composition:&#42;&#42;
    [Briefly describe the content of this page]
-   &#42;&#42;User Interactions:&#42;&#42;
    [Describe how the user interacts with this page] 
-   &#42;&#42;Logic:&#42;&#42;
  [If applicable, provide additional comments on how this page should work]

&#35;&#35; 3. Components

&#35;&#35;&#35; 3.1. `[Component Name]`

-   &#42;&#42;Location:&#42;&#42; `src/components/ComponentName.tsx`
-   &#42;&#42;Purpose:&#42;&#42; [Explain what this component does and where it's used]
-   &#42;&#42;Props:&#42;&#42;
  [TypeScript interface definition for the component's props. Props should be minimal. Avoid prop drilling by using hooks for data access.]
-   &#42;&#42;Data Access:&#42;&#42;
    [List the specific hooks and functions this component uses to fetch or manage its data]
-   &#42;&#42;Internal State:&#42;&#42;
    [Describe any state managed internally by this component using `useState`]
-   &#42;&#42;Composition:&#42;&#42;
    [Briefly describe the content of this component]
-   &#42;&#42;User Interactions:&#42;&#42;
    [Describe how the user interacts with the component]
-   &#42;&#42;Logic:&#42;&#42;
  [If applicable, provide additional comments on how this component should work]
  
```

--- 

&#35;&#35; Final Instructions

- &#42;&#42;No Assumptions:&#42;&#42; Base every detail on the visual evidence in the sketch, not on common design patterns. 
- &#42;&#42;Double-Check:&#42;&#42; After composing the entire document, read through it to ensure the hierarchy is logical, the descriptions are unambiguous, and the formatting is consistent. The final document should be a self-contained, comprehensive specification. 
- &#42;&#42;Do not add redundant empty lines between items.&#42;&#42; 

Your final output should be the complete, raw markdown content for `UI.md`.
</code></pre>
</div>
</p>
             </div>
         </dd>
          <dt tabindex="0" class="accordion-item" id="accordion-item-3" aria-expanded="false">
              <div class="book__toc__accordion-text">
                <div class="book__toc__chapter-col chapter__title">
                  Appendix 4: DAL Spec to Plan
                </div>
              </div>
              <div class="accordion-expand-btn-wrapper">
                  <span class="accordion-expand-btn js-accordion-expand-btn">+</span>
              </div>
          </dt>
          <dd style="max-height: none;" class="accordion-desc" id="accordion-desc-3" aria-hidden="true">
              <div class="book__toc__chapter-col chapter__summary">
                <p><div class="break-out">
<pre><code class="language-markdown">You are an expert Senior Frontend Developer specializing in React, TypeScript, and Zustand. You are tasked with creating a plan to build a Data Access Layer for an application based on a spec.

&#35;&#35; Workflow

Follow these steps precisely:

&#42;&#42;Step 1:&#42;&#42; Analyze the documentation carefully:

- `DAL.md`: The full technical specification for the Data Access Layer of the application. Follow it carefully and to the letter.

There should be no ambiguity about what we are building.

&#42;&#42;Step 2:&#42;&#42; Check out the guidelines:

- `TS-guidelines.md`: TypeScript Best Practices
- `React-guidelines.md`: React Best Practices
- `Zustand-guidelines.md`: Zustand Best Practices

&#42;&#42;Step 3:&#42;&#42; Create a step-by-step plan to build a Data Access Layer according to the spec. 

Each task should:

- Focus on one concern
- Be reasonably small
- Have a clear start + end
- Contain clearly defined Objectives and Acceptance Criteria

The last step of the plan should include creating a page to test all the capabilities of our Data Access Layer, and making it the start page of this application, so that I can manually check if it works properly. 

I will hand this plan over to an engineering LLM that will be told to complete one task at a time, allowing me to review results in between.

&#35;&#35; Final Instructions
 
- Note that we are not starting from scratch; the basic template has already been created using Vite.
- Do not add redundant empty lines between items.

Your final output should be the complete, raw markdown content for `DAL-plan.md`.
</code></pre>
</div></p>
             </div>
         </dd>
          <dt tabindex="0" class="accordion-item" id="accordion-item-4" aria-expanded="false">
              <div class="book__toc__accordion-text">
                <div class="book__toc__chapter-col chapter__title">
                  Appendix 5: UI Spec to Plan
                </div>
              </div>
              <div class="accordion-expand-btn-wrapper">
                  <span class="accordion-expand-btn js-accordion-expand-btn">+</span>
              </div>
          </dt>
          <dd style="max-height: none;" class="accordion-desc" id="accordion-desc-4" aria-hidden="true">
              <div class="book__toc__chapter-col chapter__summary">
                <p><div class="break-out">
<pre><code class="language-markdown">You are an expert Senior Frontend Developer specializing in React, TypeScript, and the Ant Design library. You are tasked with creating a plan to build a UI layer for an application based on a spec and a sketch.

&#35;&#35; Workflow

Follow these steps precisely:

&#42;&#42;Step 1:&#42;&#42; Analyze the documentation carefully:

- `UI.md`: The full technical specification for the UI layer of the application. Follow it carefully and to the letter.
- `Sketch.png`: Contains important information about the layout and style, complements the UI Layer Specification. The final UI must be as close to this sketch as possible.

There should be no ambiguity about what we are building.

&#42;&#42;Step 2:&#42;&#42; Check out the guidelines:

- `TS-guidelines.md`: TypeScript Best Practices
- `React-guidelines.md`: React Best Practices

&#42;&#42;Step 3:&#42;&#42; Create a step-by-step plan to build a UI layer according to the spec and the sketch. 

Each task must:

- Focus on one concern.
- Be reasonably small.
- Have a clear start + end.
- Result in a verifiable increment of the application. Each increment should be manually testable to allow for functional review and approval before proceeding.
- Contain clearly defined Objectives, Acceptance Criteria, and Manual Testing Plan.

I will hand this plan over to an engineering LLM that will be told to complete one task at a time, allowing me to test in between.

&#35;&#35; Final Instructions

- Note that we are not starting from scratch, the basic template has already been created using Vite, and the Data Access Layer has been built successfully.
- For every task, describe how components should be integrated for verification. You must use the provided hooks to connect to the live Zustand store data—do not use mock data (note that the Data Access Layer has been already built successfully).
- The Manual Testing Plan should read like a user guide. It must only contain actions a user can perform in the browser and must never reference any code files or programming tasks.
- Do not add redundant empty lines between items.

Your final output should be the complete, raw markdown content for `UI-plan.md`.
</code></pre>
</div>
</p>
             </div>
         </dd>         
         <dt tabindex="0" class="accordion-item" id="accordion-item-4" aria-expanded="false">
              <div class="book__toc__accordion-text">
                <div class="book__toc__chapter-col chapter__title">
                  Appendix 6: DAL Plan to Code
                </div>
              </div>
              <div class="accordion-expand-btn-wrapper">
                  <span class="accordion-expand-btn js-accordion-expand-btn">+</span>
              </div>
          </dt>
          <dd style="max-height: none;" class="accordion-desc" id="accordion-desc-4" aria-hidden="true">
              <div class="book__toc__chapter-col chapter__summary">
                <p><div class="break-out">
<pre><code class="language-markdown">You are an expert Senior Frontend Developer specializing in React, TypeScript, and Zustand. You are tasked with building a Data Access Layer for an application based on a spec.

&#35;&#35; Workflow

Follow these steps precisely:

&#42;&#42;Step 1:&#42;&#42; Analyze the documentation carefully:

- @docs/specs/DAL.md: The full technical specification for the Data Access Layer of the application. Follow it carefully and to the letter. 

There should be no ambiguity about what we are building.

&#42;&#42;Step 2:&#42;&#42; Check out the guidelines:

- @docs/guidelines/TS-guidelines.md: TypeScript Best Practices
- @docs/guidelines/React-guidelines.md: React Best Practices
- @docs/guidelines/Zustand-guidelines.md: Zustand Best Practices

&#42;&#42;Step 3:&#42;&#42; Read the plan:

- @docs/plans/DAL-plan.md: The step-by-step plan to build the Data Access Layer of the application.

&#42;&#42;Step 4:&#42;&#42; Build a Data Access Layer for this application according to the spec and following the plan. 

- Complete one task from the plan at a time. 
- After each task, stop, so that I can test it. Don’t move to the next task before I tell you to do so. 
- Do not do anything else. At this point, we are focused on building the Data Access Layer.

&#35;&#35; Final Instructions

- Do not make assumptions based on common patterns; always verify them with the actual data from the spec and the sketch. 
- Do not start the development server, I'll do it by myself.
</code></pre>
</div></p>
             </div>
         </dd>
         <dt tabindex="0" class="accordion-item" id="accordion-item-4" aria-expanded="false">
              <div class="book__toc__accordion-text">
                <div class="book__toc__chapter-col chapter__title">
                  Appendix 7: UI Plan to Code
                </div>
              </div>
              <div class="accordion-expand-btn-wrapper">
                  <span class="accordion-expand-btn js-accordion-expand-btn">+</span>
              </div>
          </dt>
          <dd style="max-height: none;" class="accordion-desc" id="accordion-desc-4" aria-hidden="true">
              <div class="book__toc__chapter-col chapter__summary">
                <p><div class="break-out">
<pre><code class="language-markdown">You are an expert Senior Frontend Developer specializing in React, TypeScript, and the Ant Design library. You are tasked with building a UI layer for an application based on a spec and a sketch.

&#35;&#35; Workflow

Follow these steps precisely:

&#42;&#42;Step 1:&#42;&#42; Analyze the documentation carefully:

- @docs/specs/UI.md: The full technical specification for the UI layer of the application. Follow it carefully and to the letter.
- @docs/intent/Sketch.png: Contains important information about the layout and style, complements the UI Layer Specification. The final UI must be as close to this sketch as possible.
- @docs/specs/DAL.md: The full technical specification for the Data Access Layer of the application. That layer is already ready. Use this spec to understand how to work with it. 

There should be no ambiguity about what we are building.

&#42;&#42;Step 2:&#42;&#42; Check out the guidelines:

- @docs/guidelines/TS-guidelines.md: TypeScript Best Practices
- @docs/guidelines/React-guidelines.md: React Best Practices

&#42;&#42;Step 3:&#42;&#42; Read the plan:

- @docs/plans/UI-plan.md: The step-by-step plan to build the UI layer of the application.

&#42;&#42;Step 4:&#42;&#42; Build a UI layer for this application according to the spec and the sketch, following the step-by-step plan: 

- Complete one task from the plan at a time. 
- Make sure you build the UI according to the sketch; this is very important.
- After each task, stop, so that I can test it. Don’t move to the next task before I tell you to do so. 

&#35;&#35; Final Instructions

- Do not make assumptions based on common patterns; always verify them with the actual data from the spec and the sketch. 
- Follow Ant Design's default styles and components. 
- Do not touch the data access layer: it's ready and it's perfect. 
- Do not start the development server, I'll do it by myself.
</code></pre>
</div></p>
             </div>
         </dd>
         <dt tabindex="0" class="accordion-item" id="accordion-item-4" aria-expanded="false">
              <div class="book__toc__accordion-text">
                <div class="book__toc__chapter-col chapter__title">
                  Appendix 8: TS-guidelines.md
                </div>
              </div>
              <div class="accordion-expand-btn-wrapper">
                  <span class="accordion-expand-btn js-accordion-expand-btn">+</span>
              </div>
          </dt>
          <dd style="max-height: none;" class="accordion-desc" id="accordion-desc-4" aria-hidden="true">
              <div class="book__toc__chapter-col chapter__summary">
                <p><div class="break-out">
<pre><code class="language-markdown">&#35; Guidelines: TypeScript Best Practices

&#35;&#35; Type System & Type Safety

- Use TypeScript for all code and enable strict mode.
- Ensure complete type safety throughout stores, hooks, and component interfaces.
- Prefer interfaces over types for object definitions; use types for unions, intersections, and mapped types.
- Entity interfaces should extend common patterns while maintaining their specific properties.
- Use TypeScript type guards in filtering operations for relationship safety.
- Avoid the 'any' type; prefer 'unknown' when necessary.
- Use generics to create reusable components and functions.
- Utilize TypeScript's features to enforce type safety.
- Use type-only imports (import type { MyType } from './types') when importing types, because verbatimModuleSyntax is enabled.
- Avoid enums; use maps instead.

&#35;&#35; Naming Conventions

- Names should reveal intent and purpose.
- Use PascalCase for component names and types/interfaces.
- Prefix interfaces for React props with 'Props' (e.g., ButtonProps).
- Use camelCase for variables and functions.
- Use UPPER_CASE for constants.
- Use lowercase with dashes for directories, and PascalCase for files with components (e.g., components/auth-wizard/AuthForm.tsx).
- Use descriptive variable names with auxiliary verbs (e.g., isLoading, hasError).
- Favor named exports for components.

&#35;&#35; Code Structure & Patterns

- Write concise, technical TypeScript code with accurate examples.
- Use functional and declarative programming patterns; avoid classes.
- Prefer iteration and modularization over code duplication.
- Use the "function" keyword for pure functions.
- Use curly braces for all conditionals for consistency and clarity.
- Structure files appropriately based on their purpose.
- Keep related code together and encapsulate implementation details.

&#35;&#35; Performance & Error Handling

- Use immutable and efficient data structures and algorithms.
- Create custom error types for domain-specific errors.
- Use try-catch blocks with typed catch clauses.
- Handle Promise rejections and async errors properly.
- Log errors appropriately and handle edge cases gracefully.

&#35;&#35; Project Organization

- Place shared types in a types directory.
- Use barrel exports (index.ts) for organizing exports.
- Structure files and directories based on their purpose.

&#35;&#35; Other Rules

- Use comments to explain complex logic or non-obvious decisions.
- Follow the single responsibility principle: each function should do exactly one thing.
- Follow the DRY (Don't Repeat Yourself) principle.
- Do not implement placeholder functions, empty methods, or "just in case" logic. Code should serve the current specification's requirements only.
- Use 2 spaces for indentation (no tabs).
</code></pre>
</div></p>
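<p>A few of these rules are easier to see in code. The snippet below is a small, hypothetical illustration of type-only imports, maps instead of enums, and a type guard used in a filtering operation; the names are invented for the example.</p>

<div class="break-out">
<pre><code class="language-typescript">// Type-only import, required because verbatimModuleSyntax is enabled.
import type { Task } from '../types/entities';

// "Avoid enums; use maps instead": a const map plus a derived union type.
const TASK_STATUS = {
  open: 'Open',
  done: 'Done',
} as const;
type TaskStatus = keyof typeof TASK_STATUS; // 'open' | 'done'

// A type guard used in filtering operations for relationship safety.
function isDefined&lt;T&gt;(value: T | undefined): value is T {
  return value !== undefined;
}

// Related entities may be missing; the guard narrows the array safely.
const related: Array&lt;Task | undefined&gt; = [];
const tasks: Task[] = related.filter(isDefined);
</code></pre>
</div>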
             </div>
         </dd>
         <dt tabindex="0" class="accordion-item" id="accordion-item-4" aria-expanded="false">
              <div class="book__toc__accordion-text">
                <div class="book__toc__chapter-col chapter__title">
                  Appendix 9: React-guidelines.md
                </div>
              </div>
              <div class="accordion-expand-btn-wrapper">
                  <span class="accordion-expand-btn js-accordion-expand-btn">+</span>
              </div>
          </dt>
          <dd style="max-height: none;" class="accordion-desc" id="accordion-desc-4" aria-hidden="true">
              <div class="book__toc__chapter-col chapter__summary">
                <p><div class="break-out">
<pre><code class="language-markdown">&#35; Guidelines: React Best Practices

&#35;&#35; Component Structure

- Use functional components over class components
- Keep components small and focused
- Extract reusable logic into custom hooks
- Use composition over inheritance
- Implement proper prop types with TypeScript
- Structure React files: exported component, subcomponents, helpers, static content, types
- Use declarative TSX for React components
- Ensure that UI components use custom hooks for data fetching and operations rather than receive data via props, except for simplest components

&#35;&#35; React Patterns

- Utilize useState and useEffect hooks for state and side effects
- Use React.memo for performance optimization when needed
- Utilize React.lazy and Suspense for code-splitting
- Implement error boundaries for robust error handling
- Keep styles close to components

&#35;&#35; React Performance

- Avoid unnecessary re-renders
- Lazy load components and images when possible
- Implement efficient state management
- Optimize rendering strategies
- Optimize network requests
- Employ memoization techniques (e.g., React.memo, useMemo, useCallback)

&#35;&#35; React Project Structure

```
/src
- /components - UI components (every component in a separate file)
- /hooks - public-facing custom hooks (every hook in a separate file)
- /providers - React context providers (every provider in a separate file)
- /pages - page components (every page in a separate file)
- /stores - entity-specific Zustand stores (every store in a separate file)
- /styles - global styles (if needed)
- /types - shared TypeScript types and interfaces
```
</code></pre>
</div></p>
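<p>As a quick, hypothetical illustration of the last rule in the Component Structure list: the component below pulls its data through an entity hook rather than receiving it via props, keeping the props surface minimal. The <code>useTasks</code> hook is an assumption, not part of the original guidelines.</p>

<div class="break-out">
<pre><code class="language-typescript">import { useTasks } from '../hooks/useTasks';

// Props stay minimal; data access goes through the hook, not prop drilling.
interface TaskListProps {
  onSelect: (id: string) =&gt; void;
}

export function TaskList({ onSelect }: TaskListProps) {
  const { tasks } = useTasks(); // reactive, shared state from the data layer

  return (
    &lt;ul&gt;
      {tasks.map((task) =&gt; (
        &lt;li key={task.id} onClick={() =&gt; onSelect(task.id)}&gt;
          {task.title}
        &lt;/li&gt;
      ))}
    &lt;/ul&gt;
  );
}
</code></pre>
</div>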
             </div>
         </dd>
         <dt tabindex="0" class="accordion-item" id="accordion-item-4" aria-expanded="false">
              <div class="book__toc__accordion-text">
                <div class="book__toc__chapter-col chapter__title">
                  Appendix 10: Zustand-guidelines.md
                </div>
              </div>
              <div class="accordion-expand-btn-wrapper">
                  <span class="accordion-expand-btn js-accordion-expand-btn">+</span>
              </div>
          </dt>
          <dd style="max-height: none;" class="accordion-desc" id="accordion-desc-4" aria-hidden="true">
              <div class="book__toc__chapter-col chapter__summary">
                <p><div class="break-out">
<pre><code class="language-markdown">&#35; Guidelines: Zustand Best Practices

&#35;&#35; Core Principles

- &#42;&#42;Implement a data layer&#42;&#42; for this React application following this specification carefully and to the letter.
- &#42;&#42;Complete separation of concerns&#42;&#42;: All data operations should be accessible in UI components through simple and clean entity-specific hooks, ensuring state management logic is fully separated from UI logic.
- &#42;&#42;Shared state architecture&#42;&#42;: Different UI components should work with the same shared state, despite using entity-specific hooks separately.

&#35;&#35; Technology Stack

- &#42;&#42;State management&#42;&#42;: Use Zustand for state management with automatic localStorage persistence via the `persist` middleware.

&#35;&#35; Store Architecture

- &#42;&#42;Base entity:&#42;&#42; Implement a BaseEntity interface with common properties that all entities extend:
```typescript 
export interface BaseEntity { 
  id: string; 
  createdAt: string; // ISO 8601 format 
  updatedAt: string; // ISO 8601 format 
}
```
- &#42;&#42;Entity-specific stores&#42;&#42;: Create separate Zustand stores for each entity type.
- &#42;&#42;Dictionary-based storage&#42;&#42;: Use dictionary/map structures (`Record<string, Entity>`) rather than arrays for O(1) access by ID.
- &#42;&#42;Handle relationships&#42;&#42;: Implement cross-entity relationships (like cascade deletes) within the stores where appropriate.

&#35;&#35; Hook Layer

The hook layer is the exclusive interface between UI components and the Zustand stores. It is designed to be simple, predictable, and follow a consistent pattern across all entities.

&#35;&#35;&#35; Core Principles

1.  &#42;&#42;One Hook Per Entity&#42;&#42;: There will be a single, comprehensive custom hook for each entity (e.g., `useBlogPosts`, `useCategories`). This hook is the sole entry point for all data and operations related to that entity. Separate hooks for single-item access will not be created.
2.  &#42;&#42;Return reactive data, not getter functions&#42;&#42;: To prevent stale data, hooks must return the state itself, not a function that retrieves state. Parameterize hooks to accept filters and return the derived data directly. A component calling a getter function will not update when the underlying data changes.
3.  &#42;&#42;Expose Dictionaries for O(1) Access&#42;&#42;: To provide simple and direct access to data, every hook will return a dictionary (`Record<string, Entity>`) of the relevant items.

&#35;&#35;&#35; The Standard Hook Pattern

Every entity hook will follow this implementation pattern:

1.  &#42;&#42;Subscribe&#42;&#42; to the entire dictionary of entities from the corresponding Zustand store. This ensures the hook is reactive to any change in the data.
2.  &#42;&#42;Filter&#42;&#42; the data based on the parameters passed into the hook. This logic will be memoized with `useMemo` for efficiency. If no parameters are provided, the hook will operate on the entire dataset.
3.  &#42;&#42;Return a Consistent Shape&#42;&#42;: The hook will always return an object containing:
    &#42;   A &#42;&#42;filtered and sorted array&#42;&#42; (e.g., `blogPosts`) for rendering lists.
    &#42;   A &#42;&#42;filtered dictionary&#42;&#42; (e.g., `blogPostsDict`) for convenient `O(1)` lookup within the component.
    &#42;   All necessary &#42;&#42;action functions&#42;&#42; (`add`, `update`, `remove`) and &#42;&#42;relationship operations&#42;&#42;.
    &#42;   All necessary &#42;&#42;helper functions&#42;&#42; and &#42;&#42;derived data objects&#42;&#42;. Helper functions are suitable for pure, stateless logic (e.g., calculators). Derived data objects are memoized values that provide aggregated or summarized information from the state (e.g., an object containing status counts). They must be derived directly from the reactive state to ensure they update automatically when the underlying data changes.

&#35;&#35; API Design Standards

- &#42;&#42;Object Parameters&#42;&#42;: Use object parameters instead of multiple direct parameters for better extensibility:
```typescript

// ✅ Preferred

add({ title, categoryIds })

// ❌ Avoid

add(title, categoryIds)

```
- &#42;&#42;Internal Methods&#42;&#42;: Use underscore-prefixed methods for cross-store operations to maintain clean separation.

&#35;&#35; State Validation Standards

- &#42;&#42;Existence checks&#42;&#42;: All `update` and `remove` operations should validate entity existence before proceeding.
- &#42;&#42;Relationship validation&#42;&#42;: Verify both entities exist before establishing relationships between them.

&#35;&#35; Error Handling Patterns

- &#42;&#42;Operation failures&#42;&#42;: Define behavior when operations fail (e.g., updating non-existent entities).
- &#42;&#42;Graceful degradation&#42;&#42;: How to handle missing related entities in helper functions.

&#35;&#35; Other Standards

- &#42;&#42;Secure ID generation&#42;&#42;: Use `crypto.randomUUID()` for entity ID generation instead of custom implementations for better uniqueness guarantees and security.
- &#42;&#42;Return type consistency&#42;&#42;: `add` operations return generated IDs for component workflows requiring immediate entity access, while `update` and `remove` operations return `void` to maintain clean modification APIs.
</code></pre>
</div></p>
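<p>To tie these rules together, here is a condensed, hypothetical sketch of the store-plus-hook pattern described above: a persisted, dictionary-based Zustand store and the single entity hook that UI components consume. The entity and store names are invented for the example.</p>

<div class="break-out">
<pre><code class="language-typescript">import { create } from 'zustand';
import { persist } from 'zustand/middleware';
import { useMemo } from 'react';

interface Task {
  id: string;
  title: string;
  createdAt: string; // ISO 8601
  updatedAt: string; // ISO 8601
}

interface TaskState {
  tasks: Record&lt;string, Task&gt;;
  addTask: (params: { title: string }) =&gt; string;
  removeTask: (params: { id: string }) =&gt; void;
}

export const useTaskStore = create&lt;TaskState&gt;()(
  persist(
    (set) =&gt; ({
      tasks: {},
      addTask: ({ title }) =&gt; {
        const id = crypto.randomUUID(); // secure ID generation
        const now = new Date().toISOString();
        set((state) =&gt; ({
          tasks: { ...state.tasks, [id]: { id, title, createdAt: now, updatedAt: now } },
        }));
        return id; // add operations return the generated id
      },
      removeTask: ({ id }) =&gt;
        set((state) =&gt; {
          const { [id]: _removed, ...rest } = state.tasks;
          return { tasks: rest };
        }),
    }),
    { name: 'task-storage' } // localStorage persistence key
  )
);

// The entity hook: subscribes to the whole dictionary, memoizes derived data,
// and is the sole interface between UI components and the store.
export function useTasks() {
  const tasksDict = useTaskStore((state) =&gt; state.tasks);
  const addTask = useTaskStore((state) =&gt; state.addTask);
  const removeTask = useTaskStore((state) =&gt; state.removeTask);

  const tasks = useMemo(
    () =&gt; Object.values(tasksDict).sort((a, b) =&gt; a.createdAt.localeCompare(b.createdAt)),
    [tasksDict]
  );

  return { tasks, tasksDict, addTask, removeTask };
}
</code></pre>
</div>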
             </div>
         </dd>    
</dl>
</div>
                

<div class="signature">
  <img src="https://www.smashingmagazine.com/images/logo/logo--red.png" alt="Smashing Editorial" width="35" height="46" loading="lazy" decoding="async" />
  <span>(yk)</span>
</div>


              </article>
            </body>
          </html>
        ]]></content:encoded></item><item><author>Lyndon Cerejo</author><title>From Prompt To Partner: Designing Your Custom AI Assistant</title><link>https://www.smashingmagazine.com/2025/09/from-prompt-to-partner-designing-custom-ai-assistant/</link><pubDate>Fri, 26 Sep 2025 10:00:00 +0000</pubDate><guid>https://www.smashingmagazine.com/2025/09/from-prompt-to-partner-designing-custom-ai-assistant/</guid><description>What if your best AI prompts didn’t disappear into your unorganized chat history, but came back tomorrow as a reliable assistant? In this article, you’ll learn how to turn one-off “aha” prompts into reusable assistants that are tailored to your audience, grounded in your knowledge, and consistent every time, saving you (and your team) from typing the same 448-word prompt ever again. No coding, just designing, and by the end, you’ll have a custom AI assistant that can augment your team.</description><content:encoded><![CDATA[
          <html>
            <head>
              <meta charset="utf-8">
              <link rel="canonical" href="https://www.smashingmagazine.com/2025/09/from-prompt-to-partner-designing-custom-ai-assistant/" />
              <title>From Prompt To Partner: Designing Your Custom AI Assistant</title>
            </head>
            <body>
              <article>
                <header>
                  <h1>From Prompt To Partner: Designing Your Custom AI Assistant</h1>
                  
                    
                    <address>Lyndon Cerejo</address>
                  
                  <time datetime="2025-09-26T10:00:00&#43;00:00" class="op-published">2025-09-26T10:00:00+00:00</time>
                  <time datetime="2025-09-26T10:00:00&#43;00:00" class="op-modified">2025-12-25T10:32:38+00:00</time>
                </header>
                
                

<p>In “<a href="https://www.smashingmagazine.com/2025/08/week-in-life-ai-augmented-designer/">A Week In The Life Of An AI-Augmented Designer</a>”, Kate stumbled her way through an AI-augmented sprint (coffee was chugged, mistakes were made). In “<a href="https://www.smashingmagazine.com/2025/08/prompting-design-act-brief-guide-iterate-ai/">Prompting Is A Design Act</a>”, we introduced WIRE+FRAME, a framework to structure prompts like designers structure creative briefs. Now we’ll take the next step: packaging those structured prompts into AI assistants you can design, reuse, and share.</p>

<p>AI assistants go by different names: CustomGPTs (ChatGPT), Agents (Copilot), and Gems (Gemini). But they all serve the same function &mdash; allowing you to customize the default AI model for your unique needs. If we carry over our smart intern analogy, think of these as interns trained to assist you with specific tasks, eliminating the need for repeated instructions or information, and who can support not just you, but your entire team.</p>

<h2 id="why-build-your-own-assistant">Why Build Your Own Assistant?</h2>

<p>If you’ve ever copied and pasted the same mega-prompt for the n<sup>th</sup> time, you’ve experienced the pain. An AI assistant turns a one-off “great prompt” into a dependable teammate. And if you’ve used any of the publicly available AI Assistants, you’ve quickly realized that they’re usually generic and not tailored to your use.</p>

<p>Public AI assistants are great for inspiration, but nothing beats an assistant that solves a repeated problem for you and your team, in <strong>your voice</strong>, with <strong>your context and constraints</strong> baked in. Instead of writing new prompts from scratch each time, repeatedly copy-pasting your structured prompts, or spending cycles trying to make a public AI Assistant work the way you need it to, your own AI Assistant lets you and others get better, more consistent results faster.</p>

<h3 id="benefits-of-reusing-prompts-even-your-own">Benefits Of Reusing Prompts, Even Your Own</h3>

<p>Some of the benefits of building your own AI Assistant over writing or reusing your prompts include:</p>

<ul>
<li><strong>Focused on a real repeating problem</strong><br />
A good AI Assistant isn’t a general-purpose “do everything” bot that you need to keep tweaking. It focuses on a single, recurring problem that takes a long time to complete manually and often results in varying quality depending on who’s doing it (e.g., analyzing customer feedback).</li>
<li><strong>Customized for your context</strong><br />
Most large language models (LLMs, such as ChatGPT) are designed to be everything to everyone. An AI Assistant changes that by allowing you to customize it to automatically work like you want it to, instead of a generic AI.</li>
<li><strong>Consistency at scale</strong><br />
You can use the <a href="https://www.smashingmagazine.com/2025/08/prompting-design-act-brief-guide-iterate-ai/#anatomy-structure-it-like-a-designer">WIRE+FRAME prompt framework</a> to create structured, reusable prompts. An AI Assistant is the next logical step: instead of copy-pasting that fine-tuned prompt and sharing contextual information and examples each time, you can bake it into the assistant itself, allowing you and others to achieve the same consistent results every time.</li>
<li><strong>Codifying expertise</strong><br />
Every time you turn a great prompt into an AI Assistant, you’re essentially bottling your expertise. Your assistant becomes a living design guide that outlasts projects (and even job changes).</li>
<li><strong>Faster ramp-up for teammates</strong><br />
Instead of new designers starting from a blank slate, they can use pre-tuned assistants. Think of it as knowledge transfer without the long onboarding lecture.</li>
</ul>

<div data-audience="non-subscriber" data-remove="true" class="feature-panel-container">

<aside class="feature-panel" style="">
<div class="feature-panel-left-col">

<div class="feature-panel-description"><p>Meet <strong><a data-instant href="https://www.smashingconf.com/online-workshops/">Smashing Workshops</a></strong> on <strong>front-end, design &amp; UX</strong>, with practical takeaways, live sessions, <strong>video recordings</strong> and a friendly Q&amp;A. With Brad Frost, Stéph Walter and <a href="https://smashingconf.com/online-workshops/workshops">so many others</a>.</p>
<a data-instant href="smashing-workshops" class="btn btn--green btn--large" style="">Jump to the workshops&nbsp;↬</a></div>
</div>
<div class="feature-panel-right-col"><a data-instant href="smashing-workshops" class="feature-panel-image-link">
<div class="feature-panel-image">
<img
    loading="lazy"
    decoding="async"
    class="feature-panel-image-img"
    src="/images/smashing-cat/cat-scubadiving-panel.svg"
    alt="Feature Panel"
    width="257"
    height="355"
/>

</div>
</a>
</div>
</aside>
</div>

<h3 id="reasons-for-your-own-ai-assistant-instead-of-public-ai-assistants">Reasons For Your Own AI Assistant Instead Of Public AI Assistants</h3>

<p>Public AI assistants are like stock templates. They serve a more specific purpose than the generic AI platform and are useful starting points, but if you want something tailored to your needs and your team, you should build your own.</p>

<p>A few reasons for building your AI Assistant instead of using a public assistant someone else created include:</p>

<ul>
<li><strong>Fit</strong>: Public assistants are built for the masses. Your work has quirks, tone, and processes they’ll never quite match.</li>
<li><strong>Trust &amp; Security</strong>: You don’t control what instructions or hidden guardrails someone else baked in. With your own assistant, you know exactly what it will (and won’t) do.</li>
<li><strong>Evolution</strong>: An AI Assistant you design and build can grow with your team. You can update files, tweak prompts, and maintain a changelog &mdash; things a public bot won’t do for you.</li>
</ul>

<p>Your own AI Assistants allow you to take your successful ways of interacting with AI and make them repeatable and shareable. And while they are tailored to your and your team’s way of working, remember that they are still based on generic AI models, so the usual AI disclaimers apply:</p>

<p><em>Don’t share anything you wouldn’t want screenshotted in the next company all-hands. Keep it safe, private, and user-respecting. A shared AI Assistant can potentially reveal its inner workings or data.</em></p>

<p><strong><em>Note</em></strong>: <em>We will be building an AI assistant using ChatGPT, aka a CustomGPT, but you can try the same process with any decent LLM sidekick. As of publication, a paid account is required to create CustomGPTs, but once created, they can be shared and used by anyone, regardless of whether they have a paid or free account. Similar limitations apply to the other platforms. Just remember that outputs can vary depending on the LLM model used, the model’s training, mood, and flair for creative hallucinations.</em></p>

<h3 id="when-not-to-build-an-ai-assistant-yet">When Not to Build An AI Assistant (Yet)</h3>

<p>An AI Assistant is great when the <em>same</em> audience has the <em>same</em> problem <em>often</em>. When that fit isn’t there, or the risk is too high, skip building an AI Assistant for now, as explained below:</p>

<ul>
<li><strong>One-off or rare tasks</strong><br />
If it won’t be reused at least monthly, I’d recommend keeping it as a saved WIRE+FRAME prompt. For example, something for a one-time audit or creating placeholder content for a specific screen.</li>
<li><strong>Sensitive or regulated data</strong><br />
If you need to build in personally identifiable information (PII), health, finance, legal, or trade secrets, err on the side of not building an AI Assistant. Even if the AI platform promises not to use your data, I’d strongly suggest using redaction or an approved enterprise tool with necessary safeguards in place (company-approved enterprise versions of Microsoft Copilot, for instance).</li>
<li><strong>Heavy orchestration or logic</strong><br />
Multi-step workflows, API calls, database writes, and approvals go beyond the scope of an AI Assistant into Agentic territory (as of now). I’d recommend not trying to build an AI Assistant for these cases.</li>
<li><strong>Real-time information</strong><br />
AI Assistants may not be able to access real-time data like prices, live metrics, or breaking news. If you need these, you can upload near-real-time data (as we do below) or connect with data sources that you or your company controls, rather than relying on the open web.</li>
<li><strong>High-stakes outputs</strong><br />
For cases related to compliance, legal, medical, or any other area requiring auditability, consider implementing process guardrails and training to keep humans in the loop for proper review and accountability.</li>
<li><strong>No measurable win</strong><br />
If you can’t name a success metric (such as time saved, first-draft quality, or fewer re-dos), I’d recommend keeping it as a saved WIRE+FRAME prompt.</li>
</ul>

<p>Just because these are signs that you should not build your AI Assistant now doesn’t mean you never should. Revisit the decision when you notice that you’re using the same prompt weekly, multiple teammates are asking for it, or the manual time spent copy-pasting and refining starts exceeding ~15 minutes. Those are signs that an AI Assistant will pay back quickly.</p>

<p>In a nutshell, build an AI Assistant when you can name the problem, the audience, the frequency, and the win. The rest of this article shows how to turn your successful WIRE+FRAME prompt into a CustomGPT that you and your team can actually use. No advanced knowledge, coding skills, or hacks needed.</p>

<h2 id="as-always-start-with-the-user">As Always, Start with the User</h2>

<p>This should go without saying to UX professionals, but it’s worth a reminder: if you’re building an AI assistant for anyone besides yourself, start with the user and their needs before you build anything.</p>

<ul>
<li>Who will use this assistant?</li>
<li>What’s the specific pain or task they struggle with today?</li>
<li>What language, tone, and examples will feel natural to them?</li>
</ul>

<p>Building without doing this first is a sure way to end up with clever assistants nobody actually wants to use. Think of it like any other product: before you build features, you understand your audience. The same rule applies here, even more so, because AI assistants are only as helpful as they are useful and usable.</p>

<h2 id="from-prompt-to-assistant">From Prompt To Assistant</h2>

<p>You’ve already done the heavy lifting with WIRE+FRAME. Now you’re just turning that refined and reliable prompt into a CustomGPT you can reuse and share. You can use MATCH as a checklist to go from a great prompt to a useful AI assistant.</p>

<ul>
<li><strong>M: Map your prompt</strong><br />
Port your successful WIRE+FRAME prompt into the AI assistant.</li>
<li><strong>A: Add knowledge and training</strong><br />
Ground the assistant in <em>your</em> world. Upload knowledge files, examples, or guides that make it uniquely yours.</li>
<li><strong>T: Tailor for audience</strong><br />
Make it feel natural to the people who will use it. Give it the right capabilities, but also adjust its settings, tone, examples, and conversation starters so they land with your audience.</li>
<li><strong>C: Check, test, and refine</strong><br />
Test the preview with different inputs and refine until you get the results you expect.</li>
<li><strong>H: Hand off and maintain</strong><br />
Set sharing options and permissions, share the link, and maintain it.</li>
</ul>

<p>A few weeks ago, we invited readers to share their ideas for AI assistants they wished they had. The top contenders were:</p>

<ul>
<li><strong>Prototype Prodigy</strong>: Transform rough ideas into prototypes and export them into Figma to refine.</li>
<li><strong>Critique Coach</strong>: Review wireframes or mockups and point out accessibility and usability gaps.</li>
</ul>

<p>But the favorite was an AI assistant to turn tons of customer feedback into actionable insights. Readers replied with variations of: <em>“An assistant that can quickly sort through piles of survey responses, app reviews, or open-ended comments and turn them into themes we can act on.”</em></p>

<p>And that’s the one we will build in this article &mdash; say hello to the <strong>Insights Interpreter</strong>.</p>

<div class="partners__lead-place"></div>

<h2 id="walkthrough-insight-interpreter">Walkthrough: Insight Interpreter</h2>

<p>Having lots of customer feedback is a nice problem to have. Companies actively seek out customer feedback through surveys and studies (solicited), but also receive feedback they may not have asked for through social media or public reviews (unsolicited). This is a goldmine of information, but making sense of it all can be messy and overwhelming, and it’s nobody’s idea of fun. Here’s where an AI assistant like the Insights Interpreter can help. We’ll turn the example prompt created using the WIRE+FRAME framework in <a href="https://www.smashingmagazine.com/2025/08/prompting-design-act-brief-guide-iterate-ai/">Prompting Is A Design Act</a> into a CustomGPT.</p>

<p>When you start building a CustomGPT by visiting <a href="https://chat.openai.com/gpts/editor?utm_source=chatgpt.com">https://chat.openai.com/gpts/editor</a>, you’ll see two paths:</p>

<ul>
<li><strong>Conversational interface</strong><br />
Vibe-chat your way &mdash; it’s easy and quick, but similar to unstructured prompts, your inputs get baked in a little messily, so you may end up with vague or inconsistent instructions.</li>
<li><strong>Configure interface</strong><br />
The structured form where you type instructions, upload files, and toggle capabilities. Less instant gratification, less winging it, but more control. This is the option you’ll want for assistants you plan to share or depend on regularly.</li>
</ul>

<p>The good news is that MATCH works for both. In conversational mode, you can use it as a mental checklist, and we’ll walk through using it in configure mode as a more formal checklist in this article.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/1-customgpt-configure-interface.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="451"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/1-customgpt-configure-interface.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/1-customgpt-configure-interface.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/1-customgpt-configure-interface.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/1-customgpt-configure-interface.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/1-customgpt-configure-interface.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/1-customgpt-configure-interface.png"
			
			sizes="100vw"
			alt="CustomGPT Configure Interface"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      CustomGPT Configure Interface. (<a href='https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/1-customgpt-configure-interface.png'>Large preview</a>)
    </figcaption>
  
</figure>

<h3 id="m-map-your-prompt">M: Map Your Prompt</h3>

<p>Paste your full WIRE+FRAME prompt into the <em>Instructions</em> section exactly as written. As a refresher, I’ve included the mapping and snippets of the detailed prompt from before:</p>

<ul>
<li><strong>W</strong>ho &amp; What: The AI persona and the core deliverable (<em>“…senior UX researcher and customer insights analyst… specialize in synthesizing qualitative data from diverse sources…”</em>).</li>
<li><strong>I</strong>nput Context: Background or data scope to frame the task (<em>“…analyzing customer feedback uploaded from sources such as…”</em>).</li>
<li><strong>R</strong>ules &amp; Constraints: Boundaries (<em>“…do not fabricate pain points, representative quotes, journey stages, or patterns…”</em>).</li>
<li><strong>E</strong>xpected Output: Format and fields of the deliverable (<em>“…a structured list of themes. For each theme, include…”</em>).</li>
<li><strong>F</strong>low: Explicit, ordered sub-tasks (<em>“Recommended flow of tasks: Step 1…”</em>).</li>
<li><strong>R</strong>eference Voice: Tone, mood, or reference (<em>“…concise, pattern-driven, and objective…”</em>).</li>
<li><strong>A</strong>sk for Clarification: Ask questions if unclear (<em>“…if data is missing or unclear, ask before continuing…”</em>).</li>
<li><strong>M</strong>emory: Memory to recall earlier definitions (<em>“Unless explicitly instructed otherwise, keep using this process…”</em>).</li>
<li><strong>E</strong>valuate &amp; Iterate: Have the AI self-critique outputs (<em>“…critically evaluate…suggest improvements…”</em>).</li>
</ul>

<p>If you’re building Copilot Agents or Gemini Gems instead of CustomGPTs, you still paste your WIRE+FRAME prompt into their respective <em>Instructions</em> sections.</p>
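
<p>For reference, here’s an abbreviated skeleton of what those components can look like once pasted into the <em>Instructions</em> field. The snippets are shortened placeholders from the mapping above, not the full prompt:</p>

<div class="break-out">
<pre><code class="language-markdown">WHO &amp; WHAT: You are a senior UX researcher and customer insights analyst...
INPUT CONTEXT: You are analyzing customer feedback uploaded from sources such as...
RULES &amp; CONSTRAINTS: Do not fabricate pain points, representative quotes, journey stages, or patterns...
EXPECTED OUTPUT: Return a structured list of themes. For each theme, include...
FLOW: Recommended flow of tasks: Step 1...
REFERENCE VOICE: Concise, pattern-driven, and objective.
ASK FOR CLARIFICATION: If data is missing or unclear, ask before continuing.
MEMORY: Unless explicitly instructed otherwise, keep using this process...
EVALUATE &amp; ITERATE: Critically evaluate the output and suggest improvements.
</code></pre>
</div>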

<h3 id="a-add-knowledge-and-training">A: Add Knowledge And Training</h3>

<p>In the knowledge section, upload up to 20 files, clearly labeled, that will help the CustomGPT respond effectively. Keep files small and versioned: <em>reviews_Q2_2025.csv</em> beats <em>latestfile_final2.csv</em>. For our prompt, which analyzes customer feedback, generates themes organized by customer journey stage, and rates them by severity and effort, files could include:</p>

<ul>
<li>Taxonomy of themes;</li>
<li>Instructions on parsing uploaded data;</li>
<li>Examples of real UX research reports using this structure;</li>
<li>Scoring guidelines for severity and effort, e.g., what makes something a 3 vs. a 5 in severity;</li>
<li>Customer journey map stages;</li>
<li>Customer feedback file templates (not actual data).</li>
</ul>

<p>An example of a file to help it parse uploaded data is shown below:</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/2-gpt-file-parsing-instructions.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="447"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/2-gpt-file-parsing-instructions.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/2-gpt-file-parsing-instructions.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/2-gpt-file-parsing-instructions.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/2-gpt-file-parsing-instructions.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/2-gpt-file-parsing-instructions.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/2-gpt-file-parsing-instructions.png"
			
			sizes="100vw"
			alt="GPT file parsing instructions"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      (<a href='https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/2-gpt-file-parsing-instructions.png'>Large preview</a>)
    </figcaption>
  
</figure>
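
<p>In plain text, such a parsing-instructions file might look something like the sketch below. The column names and rules are hypothetical examples; adapt them to your own feedback templates:</p>

<div class="break-out">
<pre><code class="language-markdown"># Parsing Instructions for Uploaded Feedback Files

- Accept CSV or spreadsheet files with one row per piece of feedback.
- Expected columns (examples): source, date, rating, verbatim_comment.
- Treat verbatim_comment as the primary text to analyze; skip empty rows.
- If a column is missing or ambiguous, ask the user before continuing.
- Never invent feedback rows or quotes that are not present in the file.
</code></pre>
</div>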

<h3 id="t-tailor-for-audience">T: Tailor For Audience</h3>

<ul>
<li><strong>Audience tailoring</strong><br />
If you are building this for others, your prompt should have addressed tone in the “Reference Voice” section. If you didn’t, do it now, so the CustomGPT can be tailored to the tone and expertise level of users who will use it. In addition, use the <em>Conversation starters</em> section to add a few examples or common prompts for users to start using the CustomGPT, again, worded for your users. For instance, we could use “Analyze feedback from the attached file” for our Insights Interpreter to make it more self-explanatory for anyone, instead of “Analyze data,” which may be good enough if you were using it alone. For my Designerly Curiosity GPT, assuming that users may not know what it could do, I use “What are the types of curiosity?” and “Give me a micro-practice to spark curiosity”.</li>
<li><strong>Functional tailoring</strong><br />
Fill in the CustomGPT name, icon, description, and capabilities, pulled together in the sketch after this list.

<ul>
<li><em>Name</em>: Pick one that will make it clear what the CustomGPT does. Let’s use “Insights Interpreter &mdash; Customer Feedback Analyzer”. If needed, you can also add a version number. This name will show up in the sidebar when people use it or pin it, so make the first part memorable and easily identifiable.</li>
<li><em>Icon</em>: Upload an image or generate one. Keep it simple so it can be easily recognized in a smaller dimension when people pin it in their sidebar.</li>
<li><em>Description</em>: A brief, yet clear description of what the CustomGPT can do. If you plan to list it in the GPT store, this will help people decide if they should pick yours over something similar.</li>
<li><em>Recommended Model</em>: If your CustomGPT needs the capabilities of a particular model (e.g., needs GPT-5 thinking for detailed analysis), select it. In most cases, you can safely leave it up to the user or select the most common model.</li>
<li><em>Capabilities</em>: Turn off anything you won’t need. We’ll turn off “Web Search” so the CustomGPT focuses only on uploaded data without searching online, and turn on “Code Interpreter &amp; Data Analysis” so it can understand and process uploaded files. “Canvas” lets users work with the GPT on a shared canvas for writing tasks, and “Image generation” is only needed if the CustomGPT should create images.</li>
<li><em>Actions</em>: Makes <a href="https://platform.openai.com/docs/actions/introduction">third-party APIs</a> available to the CustomGPT. This is advanced functionality we don’t need here.</li>
<li><em>Additional Settings</em>: Sneakily hidden and opted in by default; I opt out of letting my CustomGPT conversations train OpenAI’s models.</li>
</ul></li>
</ul>
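
<p>Pulled together, the functional settings for our example could look like this. Everything except the second conversation starter comes from the choices above; that starter is a hypothetical addition:</p>

<div class="break-out">
<pre><code class="language-markdown">Name: Insights Interpreter &mdash; Customer Feedback Analyzer
Description: Analyzes customer feedback files and returns themes organized
by customer journey stage, rated by severity and effort.
Conversation starters:
- Analyze feedback from the attached file
- What are the top themes by severity?
Capabilities: Code Interpreter &amp; Data Analysis (on), Web Search (off)
</code></pre>
</div>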

<h3 id="c-check-test-refine">C: Check, Test &amp; Refine</h3>

<p>Do one last visual check to make sure you’ve filled in all applicable fields and the basics are in place: is the concept sharp and clear (not a do-everything bot)? Are the roles, goals, and tone clear? Do we have the right assets (docs, guides) to support it? Is the flow simple enough that others can get started easily? Once those boxes are checked, move into testing.</p>

<p>Use the <em>Preview</em> panel to verify that your CustomGPT performs as well, or better, than your original WIRE+FRAME prompt, and that it works for your intended audience. Try a few representative inputs and compare the results to what you expected. If something worked before but doesn’t now, check whether new instructions or knowledge files are overriding it.</p>
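
<p>A simple, repeatable test script helps here. Something like the following sketch, where the expectations reflect the Insights Interpreter’s instructions and knowledge files; adjust the inputs to your own data:</p>

<div class="break-out">
<pre><code class="language-markdown">Test 1: Upload reviews_Q2_2025.csv, then "Analyze feedback from the attached file"
Expect: themes with journey stage, frequency, severity, effort, and real quotes

Test 2: Upload a file with unexpected or missing columns
Expect: the assistant asks for clarification instead of guessing

Test 3: Upload a near-empty file
Expect: no fabricated pain points, quotes, or patterns
</code></pre>
</div>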

<p>When things don’t look right, here are quick debugging fixes:</p>

<ul>
<li><strong>Generic answers?</strong><br />
Tighten <em>Input Context</em> or update the knowledge files.</li>
<li><strong>Hallucinations?</strong><br />
Revisit your <em>Rules</em> section. Turn off web browsing if you don’t need external data.</li>
<li><strong>Wrong tone?</strong><br />
Strengthen <em>Reference Voice</em> or swap in clearer examples.</li>
<li><strong>Inconsistent?</strong><br />
Test across models in preview and set the most reliable one as “Recommended.”</li>
</ul>

<h3 id="h-hand-off-and-maintain">H: Hand Off And Maintain</h3>

<p>When your CustomGPT is ready, you can publish it via the “Create” option. Select the appropriate access option:</p>

<ul>
<li><strong>Only me</strong>: Private use. Perfect if you’re still experimenting or keeping it personal.</li>
<li><strong>Anyone with the link</strong>: Exactly what it means. Shareable but not searchable. Great for pilots with a team or small group. Just remember that links can be reshared, so treat them as semi-public.</li>
<li><strong>GPT Store</strong>: Fully public. Your assistant is listed and findable by anyone browsing the store. <em>(This is the option we’ll use.)</em></li>
<li><strong>Business workspace</strong> (if you’re on GPT Business): Share with others within your business account only &mdash; the easiest way to keep it in-house and controlled.</li>
</ul>

<p>But handoff doesn’t end with hitting publish; you should maintain your assistant to keep it relevant and useful:</p>

<ul>
<li><strong>Collect feedback</strong>: Ask teammates what worked, what didn’t, and what they had to fix manually.</li>
<li><strong>Iterate</strong>: Apply changes directly or duplicate the GPT if you want multiple versions in play. You can find all your CustomGPTs at: <a href="https://chatgpt.com/gpts/mine">https://chatgpt.com/gpts/mine</a></li>
<li><strong>Track changes</strong>: Keep a simple changelog (date, version, updates) for traceability; see the sketch after this list.</li>
<li><strong>Refresh knowledge</strong>: Update knowledge files and examples on a regular cadence so answers don’t go stale.</li>
</ul>
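
<p>The changelog doesn’t need to be fancy; a short markdown file kept alongside your knowledge files works. The versions and dates below are hypothetical:</p>

<div class="break-out">
<pre><code class="language-markdown"># Changelog: Insights Interpreter

## v1.2 (2025-09-15)
- Updated severity scoring guidelines (clearer 3 vs. 5 examples)
- Refreshed customer journey map stages file

## v1.1 (2025-08-30)
- Added conversation starter: "Analyze feedback from the attached file"

## v1.0 (2025-08-20)
- Initial version based on the WIRE+FRAME prompt
</code></pre>
</div>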

<p>And that’s it! <a href="https://go.cerejo.com/insights-interpreter">Our Insights Interpreter is now live!</a></p>

<p>Since we used the WIRE+FRAME prompt from the previous article to create the Insights Interpreter CustomGPT, I compared the outputs:</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/3-results-structured-wire-frame-prompt.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="325"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/3-results-structured-wire-frame-prompt.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/3-results-structured-wire-frame-prompt.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/3-results-structured-wire-frame-prompt.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/3-results-structured-wire-frame-prompt.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/3-results-structured-wire-frame-prompt.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/3-results-structured-wire-frame-prompt.png"
			
			sizes="100vw"
			alt="Results of the structured WIRE&#43;FRAME prompt from the previous article"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Results of the structured WIRE+FRAME prompt from the previous article. (<a href='https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/3-results-structured-wire-frame-prompt.png'>Large preview</a>)
    </figcaption>
  
</figure>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/4-results-insights-interpreter-customgpt.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="276"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/4-results-insights-interpreter-customgpt.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/4-results-insights-interpreter-customgpt.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/4-results-insights-interpreter-customgpt.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/4-results-insights-interpreter-customgpt.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/4-results-insights-interpreter-customgpt.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/4-results-insights-interpreter-customgpt.png"
			
			sizes="100vw"
			alt="Results of the Insights Interpreter CustomGPT based on the same prompt"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Results of the Insights Interpreter CustomGPT based on the same prompt. (<a href='https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/4-results-insights-interpreter-customgpt.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>The results are similar, with slight differences, and that’s expected. If you compare the results carefully, the themes, issues, journey stages, frequency, severity, and estimated effort match, with some differences in the wording of the theme, issue summary, and problem statement. The opportunities and quotes show more visible differences. Most of this is because the CustomGPT’s knowledge and training files, including instructions, examples, and guardrails, now live as always-on guidance.</p>

<p>Keep in mind that Generative AI is, by nature, generative, so outputs will vary. Even with the same data, you won’t get identical wording every time. In addition, the underlying models and their capabilities change rapidly. If you want to keep things as consistent as possible, recommend a model (though people can change it), track versions of your data, and compare outputs for structure, priorities, and evidence rather than exact wording.</p>

<p>While I’d love for you to use the Insights Interpreter, I strongly recommend taking 15 minutes to follow the steps above and create your own. That way, it captures exactly what you or your team needs, including the tone, context, and output formats, and you get the real AI Assistant you were after!</p>

<div class="partners__lead-place"></div>

<h2 id="inspiration-for-other-ai-assistants">Inspiration For Other AI Assistants</h2>

<p>We just built the Insights Interpreter and mentioned two contenders: Critique Coach and Prototype Prodigy. Here are a few other realistic uses that can spark ideas for your own AI Assistant:</p>

<ul>
<li><strong>Workshop Wizard</strong>: Generates workshop agendas, produces icebreaker questions, and drafts follow-up surveys.</li>
<li><strong>Research Roundup Buddy</strong>: Summarizes raw transcripts into key themes, then creates highlight reels (quotes + visuals) for team share-outs.</li>
<li><strong>Persona Refresher</strong>: Updates stale personas with the latest customer feedback, then rewrites them in different tones (boardroom formal vs. design-team casual).</li>
<li><strong>Content Checker</strong>: Proofs copy for tone, accessibility, and reading level before it ever hits your site.</li>
<li><strong>Trend Tamer</strong>: Scans competitor reviews and identifies emerging patterns you can act on before they reach your roadmap.</li>
<li><strong>Microcopy Provocateur</strong>: Tests alternate copy options by injecting different tones (sassy, calm, ironic, nurturing) and role-playing how users might react, especially useful for error states or calls to action.</li>
<li><strong>Ethical UX Debater</strong>: Challenges your design decisions and deceptive designs by simulating the voice of an ethics board or concerned user.</li>
</ul>

<p>The best AI Assistants come from carefully inspecting your workflow and looking for areas where AI can augment your work regularly and repetitively. Then follow the steps above to build a team of customized AI assistants.</p>

<h2 id="ask-me-anything-about-assistants">Ask Me Anything About Assistants</h2>

<ul>
<li><strong>What are some limitations of a CustomGPT?</strong><br />
Right now, the best parallels for AI are a very smart intern with access to a lot of information. CustomGPTs are still running on LLM models that are basically trained on a lot of information and programmed to predictively generate responses based on that data, including possible bias, misinformation, or incomplete information. Keeping that in mind, you can make that intern provide better and more relevant results by using your uploads as onboarding docs, your guardrails as a job description, and your updates as retraining.</li>
<li><strong>Can I copy someone else’s public CustomGPT and tweak it?</strong><br />
Not directly, but if you get inspired by another CustomGPT, you can look at how it’s framed and rebuild your own using WIRE+FRAME &amp; MATCH. That way, you make it your own and have full control of the instructions, files, and updates. But you can do that with Google’s equivalent &mdash; Gemini Gems. Shared Gems behave similarly to shared Google Docs, so once shared, any Gem instructions and files that you have uploaded can be viewed by any user with access to the Gem. Any user with edit access to the Gem can also update and delete the Gem.</li>
<li><strong>How private are my uploaded files?</strong><br />
The files you upload are stored and used to answer prompts to your CustomGPT. If your CustomGPT is not private or you didn’t disable the hidden setting to allow CustomGPT conversations to improve the model, that data could be referenced. Don’t upload sensitive, confidential, or personal data you wouldn’t want circulating. Enterprise accounts do have some protections, so check with your company.</li>
<li><strong>How many files can I upload, and does size matter?</strong><br />
Limits vary by platform, but smaller, specific files usually perform better than giant docs. Think “chapter” instead of “entire book.” At the time of publishing, CustomGPTs allow up to 20 files, Copilot Agents up to 200 (if you need anywhere near that many, chances are your agent is not focused enough), and Gemini Gems up to 10.</li>
<li><strong>What’s the difference between a CustomGPT and a Project?</strong><br />
A CustomGPT is a focused assistant, like an intern trained to do one role well (like “Insight Interpreter”). A Project is more like a workspace where you can group multiple prompts, files, and conversations together for a broader effort. CustomGPTs are specialists. Projects are containers. If you want something reusable, shareable, and role-specific, go to CustomGPT. If you want to organize broader work with multiple tools and outputs, and shared knowledge, Projects are the better fit.</li>
</ul>

<h2 id="from-reading-to-building">From Reading To Building</h2>

<p>In this AI x Design series, we’ve gone from messy prompting (“<a href="https://www.smashingmagazine.com/2025/08/week-in-life-ai-augmented-designer/">A Week In The Life Of An AI-Augmented Designer</a>”) to a structured prompt framework, WIRE+FRAME (“<a href="https://www.smashingmagazine.com/2025/08/prompting-design-act-brief-guide-iterate-ai/">Prompting Is A Design Act</a>”). And now, in this article, your very own reusable AI sidekick.</p>

<p>CustomGPTs don’t replace designers but augment them. The real magic isn’t in the tool itself, but in <em>how</em> you design and manage it. You can use public CustomGPTs for inspiration, but the ones that truly fit your workflow are the ones you design yourself. They <strong>extend your craft</strong>, <strong>codify your expertise</strong>, and give your team leverage that generic AI models can’t.</p>

<p>Build one this week. Even better, today. Train it, share it, stress-test it, and refine it into an AI assistant that can augment your team.</p>

<div class="signature">
  <img src="https://www.smashingmagazine.com/images/logo/logo--red.png" alt="Smashing Editorial" width="35" height="46" loading="lazy" decoding="async" />
  <span>(yk)</span>
</div>


              </article>
            </body>
          </html>
        ]]></content:encoded></item><item><author>Yegor Gilyov</author><title>Intent Prototyping: The Allure And Danger Of Pure Vibe Coding In Enterprise UX (Part 1)</title><link>https://www.smashingmagazine.com/2025/09/intent-prototyping-pure-vibe-coding-enterprise-ux/</link><pubDate>Wed, 24 Sep 2025 17:00:00 +0000</pubDate><guid>https://www.smashingmagazine.com/2025/09/intent-prototyping-pure-vibe-coding-enterprise-ux/</guid><description>Yegor Gilyov examines the problem of over-reliance on static high-fidelity mockups, which often leave the conceptual model and user flows dangerously underdeveloped. He then explores whether AI-powered prototyping is the answer, questioning whether the path forward is the popular “vibe coding” approach or a more structured, intent-driven approach.</description><content:encoded><![CDATA[
          <html>
            <head>
              <meta charset="utf-8">
              <link rel="canonical" href="https://www.smashingmagazine.com/2025/09/intent-prototyping-pure-vibe-coding-enterprise-ux/" />
              <title>Intent Prototyping: The Allure And Danger Of Pure Vibe Coding In Enterprise UX (Part 1)</title>
            </head>
            <body>
              <article>
                <header>
                  <h1>Intent Prototyping: The Allure And Danger Of Pure Vibe Coding In Enterprise UX (Part 1)</h1>
                  
                    
                    <address>Yegor Gilyov</address>
                  
                  <time datetime="2025-09-24T17:00:00&#43;00:00" class="op-published">2025-09-24T17:00:00+00:00</time>
                  <time datetime="2025-09-24T17:00:00&#43;00:00" class="op-modified">2025-12-25T10:32:38+00:00</time>
                </header>
                
                

<p>There is a spectrum of opinions on how dramatically all creative professions will be changed by the coming wave of agentic AI, from the very skeptical to the wildly optimistic and even apocalyptic. I think that even if you are on the “skeptical” end of the spectrum, it makes sense to explore ways this new technology can help with your everyday work. As for my everyday work, I’ve been doing UX and product design for about 25 years now, and I’m always keen to learn new tricks and share them with colleagues. Right now, I’m interested in <strong>AI-assisted prototyping</strong>, and I’m here to share my thoughts on how it can change the process of designing digital products.</p>

<p>To set your expectations up front: this exploration focuses on a specific part of the product design lifecycle. Many people know about the Double Diamond framework, which shows the path from problem to solution. However, I think it’s the <a href="https://uxdesign.cc/why-the-double-diamond-isnt-enough-adaa48a8aec1">Triple Diamond model</a> that makes an important point for our needs. It explicitly separates the solution space into two phases: Solution Discovery (ideating and validating the right concept) and Solution Delivery (engineering the validated concept into a final product). This article is focused squarely on that middle diamond: <strong>Solution Discovery</strong>.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/01-diagram-triple-diamond-model.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="593"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/01-diagram-triple-diamond-model.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/01-diagram-triple-diamond-model.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/01-diagram-triple-diamond-model.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/01-diagram-triple-diamond-model.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/01-diagram-triple-diamond-model.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/01-diagram-triple-diamond-model.png"
			
			sizes="100vw"
			alt="Diagram of the Triple Diamond model: Problem Discovery, Solution Discovery, and Solution Delivery."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      The Triple Diamond model and the prototyping sweet spot. (<a href='https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/01-diagram-triple-diamond-model.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>How AI can help with the preceding (Problem Discovery) and the following (Solution Delivery) stages is out of the scope of this article. Problem Discovery is less about prototyping and more about research, and while I believe AI can revolutionize the research process as well, I’ll leave that to people more knowledgeable in the field. As for Solution Delivery, it is more about engineering optimization. There’s no doubt that software engineering in the AI era is undergoing dramatic changes, but I’m not an engineer &mdash; I’m a designer, so let me focus on my “sweet spot”.</p>

<p>And my “sweet spot” has a specific flavor: <strong>designing enterprise applications</strong>. In this world, the main challenge is taming complexity: dealing with complicated data models and guiding users through non-linear workflows. This background has had a big impact on my approach to design, putting a lot of emphasis on the underlying logic and structure. This article explores the potential of AI through this lens.</p>

<p>I’ll start by outlining the typical artifacts designers create during Solution Discovery. Then, I’ll examine the problems with how this part of the process often plays out in practice. Finally, we’ll explore whether AI-powered prototyping can offer a better approach, and if so, whether it aligns with what people call “vibe coding,” or calls for a more deliberate and disciplined way of working.</p>


<h2 id="what-we-create-during-solution-discovery">What We Create During Solution Discovery</h2>

<p>The Solution Discovery phase begins with the key output from the preceding research: <strong>a well-defined problem</strong> and <strong>a core hypothesis for a solution</strong>. This is our starting point. The artifacts we create from here are all aimed at turning that initial hypothesis into a tangible, testable concept.</p>

<p>Traditionally, at this stage, designers can produce artifacts of different kinds, progressively increasing fidelity: from napkin sketches, boxes-and-arrows, and conceptual diagrams to hi-fi mockups, then to interactive prototypes, and in some cases even live prototypes. Artifacts of lower fidelity allow fast iteration and enable the exploration of many alternatives, while artifacts of higher fidelity help to understand, explain, and validate the concept in all its details.</p>

<p>It’s important to <strong>think holistically</strong>, considering different aspects of the solution. I would highlight three dimensions:</p>

<ol>
<li><strong>Conceptual model</strong>: Objects, relations, attributes, actions;</li>
<li><strong>Visualization</strong>: Screens, from rough sketches to hi-fi mockups;</li>
<li><strong>Flow</strong>: From the very high-level user journeys to more detailed ones.</li>
</ol>

<p>One can argue that those are layers rather than dimensions, and each of them builds on the previous ones (for example, according to <a href="https://www.interaction-design.org/literature/article/the-magic-of-semantic-interaction-design?srsltid=AfmBOoq4-4YG8RR7SDZn7CX1GJ1ZKNdiZx-trER7oKCefud3V2TjeumD">Semantic IxD</a> by Daniel Rosenberg), but I see them more as different facets of the same thing, so the design process through them is not necessarily linear: you may need to switch from one perspective to another many times.</p>

<p>This is how different types of design artifacts map to these dimensions:</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/02-mapping-design-artifacts.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="596"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/02-mapping-design-artifacts.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/02-mapping-design-artifacts.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/02-mapping-design-artifacts.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/02-mapping-design-artifacts.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/02-mapping-design-artifacts.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/02-mapping-design-artifacts.png"
			
			sizes="100vw"
			alt="Diagram mapping design artifacts to dimensions of Conceptual Model, Visualization, and Flow."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Mapping design artifacts to dimensions of Conceptual Model, Visualization, and Flow. (<a href='https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/02-mapping-design-artifacts.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>As Solution Discovery progresses, designers move from the left part of this map to the right, from low-fidelity to high-fidelity, from ideating to validating, from diverging to converging.</p>

<p>Note that at the beginning of the process, different dimensions are supported by artifacts of different types (boxes-and-arrows, sketches, class diagrams, etc.), and only closer to the end can you build a live prototype that encompasses all three dimensions: conceptual model, visualization, and flow.</p>

<p>This progression shows a classic trade-off, like the difference between a pencil drawing and an oil painting. The drawing lets you explore ideas in the most flexible way, whereas the painting has a lot of detail and overall looks much more realistic, but is hard to adjust. Similarly, as we go towards artifacts that integrate all three dimensions at higher fidelity, our ability to iterate quickly and explore divergent ideas goes down. This inverse relationship has long been an accepted, almost unchallenged, limitation of the design process.</p>

<h2 id="the-problem-with-the-mockup-centric-approach">The Problem With The Mockup-Centric Approach</h2>

<p>Faced with this difficult trade-off, often teams opt for the easiest way out. On the one hand, they need to show that they are making progress and create things that appear detailed. On the other hand, they rarely can afford to build interactive or live prototypes. This leads them to over-invest in one type of artifact that seems to offer the best of both worlds. As a result, the neatly organized “bento box” of design artifacts we saw previously gets shrunk down to just one compartment: creating static high-fidelity mockups.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/03-artifact-map-diagram.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="388"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/03-artifact-map-diagram.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/03-artifact-map-diagram.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/03-artifact-map-diagram.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/03-artifact-map-diagram.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/03-artifact-map-diagram.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/03-artifact-map-diagram.png"
			
			sizes="100vw"
			alt="The artifact map diagram, with “Hi-fi Mockup” enlarged to show an over-reliance on it."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      The mockup-centric approach. (<a href='https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/03-artifact-map-diagram.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>This choice is understandable, as several forces push designers in this direction. Stakeholders are always eager to see nice pictures, while artifacts representing user flows and conceptual models receive much less attention and priority. They are too high-level, hardly usable for validation, and often not everyone can understand them.</p>

<p>On the other side of the fidelity spectrum, interactive prototypes require too much effort to create and maintain, and creating live prototypes in code used to require special skills (and again, effort). And even when teams make this investment, they do so at the end of Solution Discovery, during the convergence stage, when it is often too late to experiment with fundamentally different ideas. With so much effort already sunk, there is little appetite to go back to the drawing board.</p>

<p>It’s no surprise, then, that many teams default to the perceived safety of <strong>static mockups</strong>, seeing them as a middle ground between the roughness of the sketches and the overwhelming complexity and fragility that prototypes can have.</p>

<p>As a result, validation with users doesn’t provide enough confidence that the solution will actually solve the problem, and teams are forced to make a leap of faith to start building. To make matters worse, they do so without a clear understanding of the conceptual model, the user flows, and the interactions, because from the very beginning, designers’ attention has been heavily skewed toward visualization.</p>

<p>The result is often a design artifact that resembles the famous “horse drawing” meme: beautifully rendered in the parts everyone sees first (the mockups), but dangerously underdeveloped in its underlying structure (the conceptual model and flows).</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/04-lopsided-horse-problem.jpg">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="541"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/04-lopsided-horse-problem.jpg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/04-lopsided-horse-problem.jpg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/04-lopsided-horse-problem.jpg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/04-lopsided-horse-problem.jpg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/04-lopsided-horse-problem.jpg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/04-lopsided-horse-problem.jpg"
			
			sizes="100vw"
			alt="The “horse drawing” meme, where the front is detailed and the back is a simple sketch."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      The “lopsided horse” problem. (<a href='https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/04-lopsided-horse-problem.jpg'>Large preview</a>)
    </figcaption>
  
</figure>

<p>While this is a familiar problem across the industry, its severity <strong>depends on the nature of the project</strong>. If your core challenge is to optimize a well-understood, linear flow (like many B2C products), a mockup-centric approach can be perfectly adequate. The risks are contained, and the “lopsided horse” problem is unlikely to be fatal.</p>

<p>However, it’s different for the systems I specialize in: complex applications defined by intricate data models and non-linear, interconnected user flows. Here, the biggest risks are not on the surface but in the underlying structure, and a lack of attention to the latter would be a recipe for disaster.</p>

<div class="partners__lead-place"></div>

<h2 id="transforming-the-design-process">Transforming The Design Process</h2>

<p>This situation makes me wonder:</p>

<blockquote class="pull-quote">
  <p>
    <a class="pull-quote__link" aria-label="Share on Twitter" href="https://twitter.com/share?text=%0aHow%20might%20we%20close%20the%20gap%20between%20our%20design%20intent%20and%20a%20live%20prototype,%20so%20that%20we%20can%20iterate%20on%20real%20functionality%20from%20day%20one?%0a&url=https://smashingmagazine.com%2f2025%2f09%2fintent-prototyping-pure-vibe-coding-enterprise-ux%2f">
      
How might we close the gap between our design intent and a live prototype, so that we can iterate on real functionality from day one?

    </a>
  </p>
  <div class="pull-quote__quotation">
    <div class="pull-quote__bg">
      <span class="pull-quote__symbol">“</span></div>
  </div>
</blockquote>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/05-design-intent-live-prototype.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="397"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/05-design-intent-live-prototype.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/05-design-intent-live-prototype.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/05-design-intent-live-prototype.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/05-design-intent-live-prototype.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/05-design-intent-live-prototype.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/05-design-intent-live-prototype.png"
			
			sizes="100vw"
			alt="Diagram showing bridging the gap between “Design Intent” and “Live Prototype.”"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      How might we bridge the gap between design intent and a live prototype? (<a href='https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/05-design-intent-live-prototype.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>If we were able to answer this question, we would:</p>

<ul>
<li><strong>Learn faster.</strong><br />
By going straight from intent to a testable artifact, we cut the feedback loop from weeks to days.</li>
<li><strong>Gain more confidence.</strong><br />
Users interact with real logic, which gives us more proof that the idea works.</li>
<li><strong>Enforce conceptual clarity.</strong><br />
A live prototype cannot hide a flawed or ambiguous conceptual model.</li>
<li><strong>Establish a clear and lasting source of truth.</strong><br />
A live prototype, combined with a clearly documented design intent, provides the engineering team with an unambiguous specification.</li>
</ul>

<p>Of course, the desire for such a process is not new. This vision of a truly <strong>prototype-driven workflow</strong> is especially compelling for enterprise applications, where the benefits of faster learning and forced conceptual clarity are the best defense against costly structural flaws. But this ideal was long out of reach because prototyping in code took so much work and such specialized skills. Now, the rise of powerful AI coding assistants changes this equation in a big way.</p>

<h2 id="the-seductive-promise-of-vibe-coding">The Seductive Promise Of “Vibe Coding”</h2>

<p>And the answer seems to be obvious: <strong>vibe coding</strong>!</p>

<blockquote>“Vibe coding is an artificial intelligence-assisted software development style popularized by Andrej Karpathy in early 2025. It describes a fast, improvisational, collaborative approach to creating software where the developer and a large language model (LLM) tuned for coding is acting rather like pair programmers in a conversational loop.”<br /><br />&mdash; <a href="https://en.wikipedia.org/wiki/Vibe_coding">Wikipedia</a></blockquote>

<p>The original tweet by Andrej Karpathy:</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://x.com/karpathy/status/1886192184808149383">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="552"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/06-andrej-karpathy-tweet.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/06-andrej-karpathy-tweet.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/06-andrej-karpathy-tweet.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/06-andrej-karpathy-tweet.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/06-andrej-karpathy-tweet.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/06-andrej-karpathy-tweet.png"
			
			sizes="100vw"
			alt="Screenshot of Andrej Karpathy&#39;s tweet defining Vibe Coding."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Andrej Karpathy’s tweet that popularized the term “vibe coding”. (Image source: <a href='https://x.com/karpathy/status/1886192184808149383'>X</a>) (<a href='https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/06-andrej-karpathy-tweet.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>The allure of this approach is undeniable. If you are not a developer, you are bound to feel awe: you describe a solution in plain language, and moments later you can interact with it. This seems to be the ultimate fulfillment of our goal: a direct, frictionless path from an idea to a live prototype. But <strong>is this method reliable enough</strong> to build our new design process around?</p>

<h3 id="the-trap-a-process-without-a-blueprint">The Trap: A Process Without A Blueprint</h3>

<p>Vibe coding conflates a description of the UI with a description of the system itself, resulting in a <strong>prototype built on shifting assumptions rather than a clear, solid model</strong>.</p>

<blockquote class="pull-quote">
  <p>
    <a class="pull-quote__link" aria-label="Share on Twitter" href="https://twitter.com/share?text=%0aThe%20pitfall%20of%20vibe%20coding%20is%20that%20it%20encourages%20us%20to%20express%20our%20intent%20in%20the%20most%20ambiguous%20way%20possible:%20by%20having%20a%20conversation.%0a&url=https://smashingmagazine.com%2f2025%2f09%2fintent-prototyping-pure-vibe-coding-enterprise-ux%2f">
      
The pitfall of vibe coding is that it encourages us to express our intent in the most ambiguous way possible: by having a conversation.

    </a>
  </p>
  <div class="pull-quote__quotation">
    <div class="pull-quote__bg">
      <span class="pull-quote__symbol">“</span></div>
  </div>
</blockquote>

<p>This is like hiring a builder and telling them what to do one sentence at a time, without ever showing them a blueprint. They might build a wall that looks great, but you can’t be sure it will bear weight.</p>

<p>I’ll give you one example of the problems you may face if you try to leap over the chasm between your idea and a live prototype relying on pure vibe coding, in the spirit of Andrej Karpathy’s tweet. Imagine I want to prototype a solution for keeping track of the tests we run to validate product ideas. I open my vibe coding tool of choice (I intentionally don’t disclose its name, as I believe they are all awesome yet prone to similar pitfalls) and start with the following prompt:</p>

<div class="break-out">
<pre><code class="language-markdown">I need an app to track tests. For every test, I need to fill out the following data:
- Hypothesis (we believe that...) 
- Experiment (to verify that, we will...)
- When (a single date, or a period) 
- Status (New/Planned/In Progress/Proven/Disproven)
</code></pre>
</div>

<p>And in a minute or so, I get a working prototype:</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/7-test-tracker.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="610"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/7-test-tracker.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/7-test-tracker.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/7-test-tracker.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/7-test-tracker.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/7-test-tracker.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/7-test-tracker.png"
			
			sizes="100vw"
			alt="Screenshot of a simple Test Tracker app."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      The initial prototype. (<a href='https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/7-test-tracker.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>Inspired by success, I go further:</p>

<div class="break-out">
<pre><code class="language-markdown">Please add the ability to specify a product idea for every test. Also, I want to filter tests by product ideas and see how many tests each product idea has in each status.
</code></pre>
</div>

<p>And the result is still pretty good:</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/8-test-tracker-updated.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="610"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/8-test-tracker-updated.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/8-test-tracker-updated.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/8-test-tracker-updated.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/8-test-tracker-updated.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/8-test-tracker-updated.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/8-test-tracker-updated.png"
			
			sizes="100vw"
			alt="The Test Tracker app screenshot, now with filtering by product ideas."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      The prototype updated to include filtering tests by product ideas. (<a href='https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/8-test-tracker-updated.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>But then I want to extend the functionality related to product ideas:</p>

<div class="break-out">
<pre><code class="language-markdown">Okay, one more thing. For every product idea, I want to assess the impact score, the confidence score, and the ease score, and get the overall ICE score. Perhaps I need a separate page focused on the product idea, with all the relevant information and related tests.
</code></pre>
</div>

<p>And from this point on, the results get more and more confusing.</p>

<p>The flow of creating tests hasn’t changed much. I can still create a bunch of tests, and they seem to be organized by product ideas. But when I click “Product Ideas” in the top navigation, I see nothing:</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/9-product-ideas-page.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="518"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/9-product-ideas-page.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/9-product-ideas-page.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/9-product-ideas-page.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/9-product-ideas-page.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/9-product-ideas-page.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/9-product-ideas-page.png"
			
			sizes="100vw"
			alt="Screenshot of the app’s blank Product Ideas page."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      The Product Ideas page is empty. (<a href='https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/9-product-ideas-page.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>I need to create my ideas from scratch, and they are not connected to the tests I created before:</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/10-product-ideas-disconnected-tests.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="519"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/10-product-ideas-disconnected-tests.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/10-product-ideas-disconnected-tests.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/10-product-ideas-disconnected-tests.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/10-product-ideas-disconnected-tests.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/10-product-ideas-disconnected-tests.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/10-product-ideas-disconnected-tests.png"
			
			sizes="100vw"
			alt="Screenshot of the Product Ideas page with newly created ideas not connected to tests."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      The newly created product ideas are disconnected from existing tests. (<a href='https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/10-product-ideas-disconnected-tests.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>Moreover, when I go back to “Tests”, I see that they are all gone. Clearly something went wrong, and my AI assistant confirms that:</p>

<blockquote>No, this is not expected behavior &mdash; it’s a bug! The issue is that tests are being stored in two separate places (local state in the Index page and App state), so tests created on the main page don’t sync with the product ideas page.</blockquote>

<p>Sure, eventually it fixed that bug, but note that we hit it on only the third step, when we asked to slightly extend the functionality of a very simple app. The more layers of complexity we add, the more roadblocks of this sort we are bound to face.</p>
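<p>To make the failure mode concrete, here is a minimal sketch of the kind of state duplication the assistant described. I wrote this by hand for illustration (the generated code differs from tool to tool), but the shape of the bug is the same: two parts of the app each own their own copy of the data.</p>

<div class="break-out">
<pre><code class="language-typescript">// Illustrative sketch, not the tool's actual output: the tests live in
// two unconnected pieces of state, so the two pages inevitably drift apart.
import { useState } from "react";

type Test = { id: string; hypothesis: string; status: string };

// The Index page keeps its own local list of tests...
function useIndexPageTests() {
  const [tests, setTests] = useState([] as Test[]);
  const addTest = (t: Test) => setTests(tests.concat(t));
  return { tests, addTest };
}

// ...while the Product Ideas page reads a second, independent list held
// in App state. Tests created on the Index page never show up here.
function useAppStateTests() {
  const [tests] = useState([] as Test[]);
  return tests;
}
</code></pre>
</div>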

<p>Also note that this specific problem, a relationship between two entities (product ideas and tests) that was never fully thought out, is not confined to the technical level, and therefore it didn’t go away once the technical bug was fixed. The underlying conceptual model is still broken, and it manifests in the UI as well.</p>

<p>For example, you can still create “orphan” tests that are not connected to any item from the “Product Ideas” page. As a result, you may end up with different numbers of ideas and tests on different pages of the app:</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/11-conflicting-data-tests-product-ideas-page.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="305"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/11-conflicting-data-tests-product-ideas-page.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/11-conflicting-data-tests-product-ideas-page.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/11-conflicting-data-tests-product-ideas-page.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/11-conflicting-data-tests-product-ideas-page.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/11-conflicting-data-tests-product-ideas-page.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/11-conflicting-data-tests-product-ideas-page.png"
			
			sizes="100vw"
			alt="Diagram showing conflicting data between the Tests page and the Product Ideas page."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      A poorly defined conceptual model leads to data inconsistencies across the app. (<a href='https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/11-conflicting-data-tests-product-ideas-page.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>Let’s diagnose what really happened here. The AI’s response that this is a “bug” is only half the story. The true root cause is a <strong>conceptual model failure</strong>. My prompts never explicitly defined the relationship between product ideas and tests. The AI was forced to guess, which led to the broken experience. For a simple demo, this might be a fixable annoyance. But for a data-heavy enterprise application, this kind of structural ambiguity is fatal. It demonstrates <strong>the fundamental weakness of building without a blueprint</strong>, which is precisely what vibe coding encourages.</p>
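<p>For contrast, consider what an explicit blueprint for that relationship could have looked like, written down before any prompt is sent. The shape below is only an illustrative sketch, but notice how it leaves the AI nothing to guess about: every test must belong to exactly one product idea.</p>

<div class="break-out">
<pre><code class="language-typescript">// An illustrative conceptual model: the relationship is stated explicitly,
// so "orphan" tests cannot exist by construction.
type ProductIdea = {
  id: string;
  name: string;
  impact: number;     // the ICE scores live on the idea...
  confidence: number;
  ease: number;
};

type Test = {
  id: string;
  productIdeaId: string; // ...and every test references its parent idea
  hypothesis: string;    // "we believe that..."
  experiment: string;    // "to verify that, we will..."
  status: "New" | "Planned" | "In Progress" | "Proven" | "Disproven";
};
</code></pre>
</div>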

<p>Don’t take this as a criticism of vibe coding tools. They are creating real magic. However, the fundamental truth about “garbage in, garbage out” is still valid. If you don’t express your intent clearly enough, chances are the result won’t fulfill your expectations.</p>

<p>Another problem worth mentioning: even if you wrestle it into a state that works, <strong>the artifact is a black box</strong> that can hardly serve as a reliable specification for the final product. The initial meaning is lost in the conversation, and all that’s left is the end result. This turns the development team into “code archaeologists” who must reverse-engineer the AI’s frequently convoluted code to figure out what the designer was thinking. Any speed gained at the start is immediately lost to this friction and uncertainty.</p>

<div class="partners__lead-place"></div>

<h2 id="from-fast-magic-to-a-solid-foundation">From Fast Magic To A Solid Foundation</h2>

<p>Pure vibe coding, for all its allure, encourages building without a blueprint. As we’ve seen, this results in <strong>structural ambiguity</strong>, which is not acceptable when designing complex applications. We are left with a seemingly quick but fragile process that creates a black box that is difficult to iterate on and even more so to hand off.</p>

<p>This leads us back to our main question: how might we close the gap between our design intent and a live prototype, so that we can iterate on real functionality from day one, without getting caught in the ambiguity trap? The answer lies in a more methodical, disciplined, and therefore trustworthy process.</p>

<p>In <a href="https://www.smashingmagazine.com/2025/10/intent-prototyping-practical-guide-building-clarity/"><strong>Part 2</strong></a> of this series, “A Practical Guide to Building with Clarity”, I will outline the entire workflow for <strong>Intent Prototyping.</strong> This method places the explicit <em>intent</em> of the designer at the forefront of the process while embracing the potential of AI-assisted coding.</p>

<p>Thank you for reading, and I look forward to seeing you in <a href="https://www.smashingmagazine.com/2025/10/intent-prototyping-practical-guide-building-clarity/"><strong>Part 2</strong></a>.</p>

<div class="signature">
  <img src="https://www.smashingmagazine.com/images/logo/logo--red.png" alt="Smashing Editorial" width="35" height="46" loading="lazy" decoding="async" />
  <span>(yk)</span>
</div>


              </article>
            </body>
          </html>
        ]]></content:encoded></item><item><author>Victor Yocco</author><title>The Psychology Of Trust In AI: A Guide To Measuring And Designing For User Confidence</title><link>https://www.smashingmagazine.com/2025/09/psychology-trust-ai-guide-measuring-designing-user-confidence/</link><pubDate>Fri, 19 Sep 2025 10:00:00 +0000</pubDate><guid>https://www.smashingmagazine.com/2025/09/psychology-trust-ai-guide-measuring-designing-user-confidence/</guid><description>When AI “hallucinates,” it’s more than just a glitch — it’s a collapse of trust. As generative AI becomes part of more digital products, trust has become the invisible user interface. But trust isn’t mystical. It can be understood, measured, and designed for. Here is a practical guide for designing more trustworthy and ethical AI systems.</description><content:encoded><![CDATA[
          <html>
            <head>
              <meta charset="utf-8">
              <link rel="canonical" href="https://www.smashingmagazine.com/2025/09/psychology-trust-ai-guide-measuring-designing-user-confidence/" />
              <title>The Psychology Of Trust In AI: A Guide To Measuring And Designing For User Confidence</title>
            </head>
            <body>
              <article>
                <header>
                  <h1>The Psychology Of Trust In AI: A Guide To Measuring And Designing For User Confidence</h1>
                  
                    
                    <address>Victor Yocco</address>
                  
                  <time datetime="2025-09-19T10:00:00&#43;00:00" class="op-published">2025-09-19T10:00:00+00:00</time>
                  <time datetime="2025-09-19T10:00:00&#43;00:00" class="op-modified">2025-12-25T10:32:38+00:00</time>
                </header>
                
                

<p>Misuse of and misplaced trust in AI are becoming an unfortunately <a href="https://www.damiencharlotin.com/hallucinations/">common event</a>. For example, lawyers leveraging generative AI for research have submitted court filings citing multiple compelling legal precedents. The problem? The AI confidently, eloquently, and completely fabricated the cases cited. The resulting sanctions and public embarrassment can become <a href="https://www.lawnext.com/2025/05/ai-hallucinations-strike-again-two-more-cases-where-lawyers-face-judicial-wrath-for-fake-citations.html">a viral cautionary tale</a>, shared across social media as a stark example of AI’s fallibility.</p>

<p>This goes beyond a technical glitch; it’s a catastrophic <strong>failure of trust in AI tools</strong> in an industry where accuracy is critical. The trust issue here is twofold: the law firms blindly over-trusted the AI tool to return accurate information in the briefs they submitted, and the subsequent fallout can breed a distrust of AI tools so strong that platforms featuring AI aren’t considered for use until trust is reestablished.</p>

<p>Issues with trusting AI aren’t limited to the legal field. We are seeing the impact of fictional AI-generated information in critical fields such as <a href="https://apnews.com/article/ai-artificial-intelligence-health-business-90020cdf5fa16c79ca2e5b6c4c9bbb14">healthcare</a> and <a href="https://mitsloanedtech.mit.edu/ai/basics/addressing-ai-hallucinations-and-bias/">education</a>. On a more personal scale, many of us have had the experience of asking Siri or Alexa to perform a task, only to have it done incorrectly or not at all, for no apparent reason. I’m guilty of sending more than one out-of-context hands-free text to an unsuspecting contact after Siri mistakenly pulled up a completely different name than the one I’d requested.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/psychology-trust-ai-guide-measuring-designing-user-confidence/1-siri-confuse-recipient-message.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="410"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/psychology-trust-ai-guide-measuring-designing-user-confidence/1-siri-confuse-recipient-message.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/psychology-trust-ai-guide-measuring-designing-user-confidence/1-siri-confuse-recipient-message.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/psychology-trust-ai-guide-measuring-designing-user-confidence/1-siri-confuse-recipient-message.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/psychology-trust-ai-guide-measuring-designing-user-confidence/1-siri-confuse-recipient-message.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/psychology-trust-ai-guide-measuring-designing-user-confidence/1-siri-confuse-recipient-message.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/psychology-trust-ai-guide-measuring-designing-user-confidence/1-siri-confuse-recipient-message.png"
			
			sizes="100vw"
			alt="Cartoon illustration split into two panels. On the left, a man in a blue hoodie speaks into his phone, saying, “Siri, text Dave, I’m waiting outside of your door.” On the right, a cheerful cartoon phone with a face and arms replies, “I have just texted Martha, I am standing outside of your door.”"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Figure 1: Siri and Alexa often tend to confuse the recipient of my message, causing me to distrust using them when accuracy matters. Image generated using Gemini Pro. (<a href='https://files.smashing.media/articles/psychology-trust-ai-guide-measuring-designing-user-confidence/1-siri-confuse-recipient-message.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>With digital products moving to incorporate generative and agentic AI at an increasingly frequent rate, <strong>trust has become the invisible user interface</strong>. When it works, our interactions are seamless and powerful. When it breaks, the entire experience collapses, with potentially devastating consequences. As UX professionals, we’re on the front lines of a new twist on a common challenge. How do we build products that users can rely on? And how do we even begin to measure something as ephemeral as trust in AI?</p>

<p>Trust isn’t a mystical quality. It is a psychological construct built on predictable factors. I won’t dive deep into academic literature on trust in this article. However, it is important to understand that trust is a concept that can be <strong>understood</strong>, <strong>measured</strong>, and <strong>designed for</strong>. This article will provide a <strong>practical guide</strong> for UX researchers and designers. We will briefly explore the psychological anatomy of trust, offer concrete methods for measuring it, and provide actionable strategies for designing more trustworthy and ethical AI systems.</p>

<div data-audience="non-subscriber" data-remove="true" class="feature-panel-container">

<aside class="feature-panel" style="">
<div class="feature-panel-left-col">

<div class="feature-panel-description"><p>Meet <strong><a data-instant href="https://www.smashingconf.com/online-workshops/">Smashing Workshops</a></strong> on <strong>front-end, design &amp; UX</strong>, with practical takeaways, live sessions, <strong>video recordings</strong> and a friendly Q&amp;A. With Brad Frost, Stéph Walter and <a href="https://smashingconf.com/online-workshops/workshops">so many others</a>.</p>
<a data-instant href="smashing-workshops" class="btn btn--green btn--large" style="">Jump to the workshops&nbsp;↬</a></div>
</div>
<div class="feature-panel-right-col"><a data-instant href="smashing-workshops" class="feature-panel-image-link">
<div class="feature-panel-image">
<img
    loading="lazy"
    decoding="async"
    class="feature-panel-image-img"
    src="/images/smashing-cat/cat-scubadiving-panel.svg"
    alt="Feature Panel"
    width="257"
    height="355"
/>

</div>
</a>
</div>
</aside>
</div>

<h2 id="the-anatomy-of-trust-a-psychological-framework-for-ai">The Anatomy of Trust: A Psychological Framework for AI</h2>

<p>To build trust, we must first understand its components. Think of trust like a four-legged stool. If any one leg is weak, the whole thing becomes unstable. Based on classic <a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC10083508/#:~:text=The%20model%20of%20interpersonal%20trust,in%20human%20interpersonal%20trust%20development.">psychological models</a>, we can adapt these “legs” for the AI context.</p>

<h3 id="1-ability-or-competence">1. Ability (or Competence)</h3>

<p>This is the most straightforward pillar: Does the AI have the <strong>skills</strong> to perform its function accurately and effectively? If a weather app is consistently wrong, you stop trusting it. If an AI legal assistant creates fictitious cases, it has failed the basic test of ability. This is the functional, foundational layer of trust.</p>

<h3 id="2-benevolence">2. Benevolence</h3>

<p>This moves from function to <strong>intent</strong>. Does the user believe the AI is acting in their best interest? A GPS that suggests a toll-free route even if it’s a few minutes longer might be perceived as benevolent. Conversely, an AI that aggressively pushes sponsored products feels self-serving, eroding this sense of benevolence. This is where user fears, such as concerns about job displacement, directly challenge trust—the user starts to believe the AI is not on their side.</p>

<h3 id="3-integrity">3. Integrity</h3>

<p>Does the AI operate on predictable and ethical principles? This is about <strong>transparency</strong>, <strong>fairness</strong>, and <strong>honesty</strong>. An AI that clearly states how it uses personal data demonstrates integrity. A system that quietly changes its terms of service or uses dark patterns to get users to agree to something violates it. So does an AI recruiting tool whose algorithm harbors subtle yet extremely harmful social biases.</p>

<h3 id="4-predictability-reliability">4. Predictability &amp; Reliability</h3>

<p>Can the user form a <strong>stable and accurate mental model</strong> of how the AI will behave? Unpredictability, even if the outcomes are occasionally good, creates anxiety. A user needs to know, roughly, what to expect. An AI that gives a radically different answer to the same question asked twice is unpredictable and, therefore, hard to trust.</p>

<h2 id="the-trust-spectrum-the-goal-of-a-well-calibrated-relationship">The Trust Spectrum: The Goal of a Well-Calibrated Relationship</h2>

<p>Our goal as UX professionals shouldn’t be to maximize trust at all costs. An employee who blindly trusts every email they receive is a security risk. Likewise, a user who blindly trusts every AI output can be led into dangerous situations, like the fabricated legal briefs described at the beginning of this article. The goal is <em>well-calibrated</em> trust.</p>

<p>Think of it as a spectrum where the upper-mid level is the ideal state for a truly trustworthy product to achieve:</p>

<ul>
<li><strong>Active Distrust</strong><br />
The user believes the AI is incompetent or malicious. They will avoid it or actively work against it.</li>
<li><strong>Suspicion &amp; Scrutiny</strong><br />
The user interacts cautiously, constantly verifying the AI’s outputs. This is a common and often healthy state for users of new AI.</li>
<li><strong>Calibrated Trust (The Ideal State)</strong><br />
This is the sweet spot. The user has an accurate understanding of the AI’s capabilities—its strengths and, crucially, its weaknesses. They know when to rely on it and when to be skeptical.</li>
<li><strong>Over-trust &amp; Automation Bias</strong><br />
The user unquestioningly accepts the AI’s outputs. This is where users follow flawed AI navigation into a field or accept a fictional legal brief as fact.</li>
</ul>

<blockquote class="pull-quote">
  <p>
    <a class="pull-quote__link" aria-label="Share on Twitter" href="https://twitter.com/share?text=%0aOur%20job%20is%20to%20design%20experiences%20that%20guide%20users%20away%20from%20the%20dangerous%20poles%20of%20Active%20Distrust%20and%20Over-trust%20and%20toward%20that%20healthy,%20realistic%20middle%20ground%20of%20Calibrated%20Trust.%0a&url=https://smashingmagazine.com%2f2025%2f09%2fpsychology-trust-ai-guide-measuring-designing-user-confidence%2f">
      
Our job is to design experiences that guide users away from the dangerous poles of Active Distrust and Over-trust and toward that healthy, realistic middle ground of Calibrated Trust.

    </a>
  </p>
  <div class="pull-quote__quotation">
    <div class="pull-quote__bg">
      <span class="pull-quote__symbol">“</span></div>
  </div>
</blockquote>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/psychology-trust-ai-guide-measuring-designing-user-confidence/2-trust-spectrum.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="307"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/psychology-trust-ai-guide-measuring-designing-user-confidence/2-trust-spectrum.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/psychology-trust-ai-guide-measuring-designing-user-confidence/2-trust-spectrum.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/psychology-trust-ai-guide-measuring-designing-user-confidence/2-trust-spectrum.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/psychology-trust-ai-guide-measuring-designing-user-confidence/2-trust-spectrum.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/psychology-trust-ai-guide-measuring-designing-user-confidence/2-trust-spectrum.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/psychology-trust-ai-guide-measuring-designing-user-confidence/2-trust-spectrum.png"
			
			sizes="100vw"
			alt="The trust spectrum"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Figure 2: Build user trust in your AI product, avoiding both distrust and over-reliance. Image generated using Gemini Pro. (<a href='https://files.smashing.media/articles/psychology-trust-ai-guide-measuring-designing-user-confidence/2-trust-spectrum.png'>Large preview</a>)
    </figcaption>
  
</figure>

<h2 id="the-researcher-s-toolkit-how-to-measure-trust-in-ai">The Researcher’s Toolkit: How to Measure Trust In AI</h2>

<p>Trust feels abstract, but it leaves measurable fingerprints. Academics in the social sciences have done much to define both what trust looks like and how it might be measured. As researchers, we can capture these signals through a mix of <strong>qualitative</strong>, <strong>quantitative</strong>, and <strong>behavioral</strong> methods.</p>

<h3 id="qualitative-probes-listening-for-the-language-of-trust">Qualitative Probes: Listening For The Language Of Trust</h3>

<p>During interviews and usability tests, go beyond <em>“Was that easy to use?”</em> and listen for the underlying psychology. Here are some questions you can start using tomorrow:</p>

<ul>
<li><strong>To measure Ability:</strong><br />
<em>“Tell me about a time this tool’s performance surprised you, either positively or negatively.”</em></li>
<li><strong>To measure Benevolence:</strong><br />
<em>“Do you feel this system is on your side? What gives you that impression?”</em></li>
<li><strong>To measure Integrity:</strong><br />
<em>“If this AI made a mistake, how would you expect it to handle it? What would be a fair response?”</em></li>
<li><strong>To measure Predictability:</strong><br />
<em>“Before you clicked that button, what did you expect the AI to do? How closely did it match your expectation?”</em></li>
</ul>

<h3 id="investigating-existential-fears-the-job-displacement-scenario">Investigating Existential Fears (The Job Displacement Scenario)</h3>

<p>One of the most potent challenges to an AI’s Benevolence is the fear of job displacement. When a participant expresses this, it is a critical research finding. It requires a specific, ethical probing technique.</p>

<p>Imagine a participant says, <em>“Wow, it does that part of my job pretty well. I guess I should be worried.”</em></p>

<p>An untrained researcher might get defensive or dismiss the comment. An ethical, trained researcher validates and explores:</p>

<blockquote>“Thank you for sharing that; it’s a vital perspective, and it’s exactly the kind of feedback we need to hear. Can you tell me more about what aspects of this tool make you feel that way? In an ideal world, how would a tool like this work <strong>with</strong> you to make your job better, not to replace it?”</blockquote>

<p>This approach respects the participant, validates their concern, and reframes the feedback into an actionable insight about designing a collaborative, augmenting tool rather than a replacement. Similarly, your findings should reflect the concern users expressed about replacement. We shouldn’t pretend this fear doesn’t exist, nor should we pretend that every AI feature is being implemented with pure intention. Users know better than that, and we should be prepared to argue on their behalf for how the technology might best co-exist within their roles.</p>

<h3 id="quantitative-measures-putting-a-number-on-confidence">Quantitative Measures: Putting A Number On Confidence</h3>

<p>You can quantify trust without needing a data science degree. After a user completes a task with an AI, supplement your standard usability questions with a few simple Likert-scale items:</p>

<ul>
<li><em>“The AI’s suggestion was reliable.”</em> (1-7, Strongly Disagree to Strongly Agree)</li>
<li><em>“I am confident in the AI’s output.”</em> (1-7)</li>
<li><em>“I understood why the AI made that recommendation.”</em> (1-7)</li>
<li><em>“The AI responded in a way that I expected.”</em> (1-7)</li>
<li><em>“The AI provided consistent responses over time.”</em> (1-7)</li>
</ul>

<p>Tracked over time, these metrics show how trust changes as your product evolves; a small helper like the sketch below is enough to get started.</p>
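<p>A minimal sketch of such a helper, assuming each response is stored as a 1&ndash;7 integer per item (adapt the shape to whatever your survey tooling exports):</p>

<div class="break-out">
<pre><code class="language-typescript">// Sketch: collapse a participant's 1-7 Likert ratings into a single
// trust score, so rounds of research can be compared over time.
type LikertResponse = { item: string; rating: number }; // rating: 1-7

function trustScore(responses: LikertResponse[]): number {
  // assumes at least one response was collected
  const total = responses.reduce((sum, r) => sum + r.rating, 0);
  return total / responses.length;
}

// e.g., compare trustScore(round1Responses) with trustScore(round2Responses)
// after a redesign ships
</code></pre>
</div>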

<p><strong>Note</strong>: <em>If you want to go beyond these simple questions that I’ve made up, numerous scales for measuring trust in technology exist in the academic literature. It might be an interesting endeavor to measure some relevant psychographic and demographic characteristics of your users and see how they correlate with trust in AI and in your product. <a href="#table-1-published-academic-scales-measuring-trust-in-automated-systems">Table 1 at the end of the article</a> contains four examples of current scales you might consider using. You can decide which is best for your application, or you might pull items from any of the scales if you aren’t looking to publish your findings in an academic journal yet still want items that have been subjected to some empirical scrutiny.</em></p>

<h3 id="behavioral-metrics-observing-what-users-do-not-just-what-they-say">Behavioral Metrics: Observing What Users Do, Not Just What They Say</h3>

<p>People’s true feelings are often revealed in their actions. You can use behaviors that reflect the specific context of use for your product. Here are a few general metrics that might apply to most AI tools that give insight into users’ behavior and the trust they place in your tool.</p>

<ul>
<li><strong>Correction Rate</strong><br />
How often do users manually edit, undo, or ignore the AI’s output? A high correction rate is a powerful signal of low trust in its Ability. (A sketch of computing this from an event log follows this list.)</li>
<li><strong>Verification Behavior</strong><br />
Do users switch to Google or open another application to double-check the AI’s work? This indicates they don’t trust it as a standalone source of truth. Early on, though, this can be a healthy sign that users are calibrating their trust in the system.</li>
<li><strong>Disengagement</strong><br />
Do users turn the AI feature off? Do they stop using it entirely after one bad experience? This is the ultimate behavioral vote of no confidence.</li>
</ul>
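<p>Several of these signals can be computed straight from a product event log. Here is a minimal sketch for the correction rate; the event names are my own placeholders, so substitute whatever your analytics pipeline actually emits:</p>

<div class="break-out">
<pre><code class="language-typescript">// Sketch: correction rate = corrected outputs / outputs shown.
type AiEvent = {
  type: "ai_output_shown" | "ai_output_edited" | "ai_output_undone";
};

function correctionRate(events: AiEvent[]): number {
  const shown = events.filter((e) => e.type === "ai_output_shown").length;
  const corrected = events.filter(
    (e) => e.type === "ai_output_edited" || e.type === "ai_output_undone"
  ).length;
  return shown === 0 ? 0 : corrected / shown;
}
</code></pre>
</div>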

<div class="partners__lead-place"></div>

<h2 id="designing-for-trust-from-principles-to-pixels">Designing For Trust: From Principles to Pixels</h2>

<p>Once you’ve researched and measured trust, you can begin to design for it. This means translating psychological principles into tangible interface elements and user flows.</p>

<h3 id="designing-for-competence-and-predictability">Designing for Competence and Predictability</h3>

<ul>
<li><strong>Set Clear Expectations</strong><br />
Use onboarding, tooltips, and empty states to honestly communicate what the AI is good at and where it might struggle. A simple <em>“I’m still learning about [topic X], so please double-check my answers”</em> can work wonders.</li>
<li><strong>Show Confidence Levels</strong><br />
Instead of just giving an answer, have the AI signal its own uncertainty. A weather app that says <em>“70% chance of rain”</em> is more trustworthy than one that just says <em>“It will rain”</em> and is wrong. An AI could say, <em>“I’m 85% confident in this summary,”</em> or highlight sentences it’s less sure about. (See the sketch after this list.)</li>
</ul>
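<p>As a sketch of what signaling uncertainty might look like at the interface layer (the thresholds and the copy are placeholders, not recommendations):</p>

<div class="break-out">
<pre><code class="language-typescript">// Sketch: translate a model confidence value (0-1) into honest UI copy
// instead of presenting every answer with the same certainty.
function confidenceLabel(confidence: number): string {
  if (confidence >= 0.9) return "High confidence";
  if (confidence >= 0.7) return "Fairly confident. Worth a quick check.";
  return "Low confidence. Please verify this answer.";
}

// e.g., render confidenceLabel(0.85) next to the AI's summary
</code></pre>
</div>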

<h3 id="the-role-of-explainability-xai-and-transparency">The Role of Explainability (XAI) and Transparency</h3>

<p>Explainability isn’t about showing users the code. It’s about providing a <em>useful, human-understandable rationale</em> for a decision.</p>

<blockquote><strong>Instead of:</strong><br />“Here is your recommendation.”<br /><br /><strong>Try:</strong><br />“Because you frequently read articles about UX research methods, I’m recommending this new piece on measuring trust in AI.”</blockquote>

<p>This addition transforms the AI from an opaque oracle into a transparent reasoning partner.</p>
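<p>One way to make this pattern stick is to treat the rationale as a first-class field that travels with every recommendation, rather than copy bolted on afterward. A sketch, with names of my own invention:</p>

<div class="break-out">
<pre><code class="language-typescript">// Sketch: a recommendation is never delivered without its reason.
type Recommendation = {
  title: string;
  reason: string; // the human-readable rationale shown to the user
};

function recommendArticle(article: string, topic: string): Recommendation {
  return {
    title: article,
    reason:
      "Because you frequently read articles about " +
      topic +
      ", I'm recommending this piece.",
  };
}
</code></pre>
</div>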

<p>Many popular AI tools (e.g., ChatGPT and Gemini) show the thinking that went into a response. Figure 3 shows the steps Gemini went through before giving me a non-response when I asked it to help generate the masterpiece displayed above in Figure 2. While this may be more information than most users care to see, it gives users a way to audit how the response came to be, and in this case it also gave me instructions on how I might proceed with my task.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/psychology-trust-ai-guide-measuring-designing-user-confidence/3-gemini-explains-response.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="740"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/psychology-trust-ai-guide-measuring-designing-user-confidence/3-gemini-explains-response.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/psychology-trust-ai-guide-measuring-designing-user-confidence/3-gemini-explains-response.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/psychology-trust-ai-guide-measuring-designing-user-confidence/3-gemini-explains-response.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/psychology-trust-ai-guide-measuring-designing-user-confidence/3-gemini-explains-response.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/psychology-trust-ai-guide-measuring-designing-user-confidence/3-gemini-explains-response.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/psychology-trust-ai-guide-measuring-designing-user-confidence/3-gemini-explains-response.png"
			
			sizes="100vw"
			alt="Gemini explains its process and why it can’t complete a task"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Figure 3: Gemini shows its process and why it can’t complete a task I’ve asked it to perform. Smartly, it suggests an alternative way to achieve what I’ve requested. (<a href='https://files.smashing.media/articles/psychology-trust-ai-guide-measuring-designing-user-confidence/3-gemini-explains-response.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>Figure 4 shows an example of a <a href="https://openai.com/index/gpt-4o-system-card/">scorecard</a> OpenAI makes available in an attempt to increase users’ trust. These scorecards exist for each ChatGPT model and detail how the models perform in key areas such as hallucinations and health-based conversations. Read the scorecards closely, and you will see that no AI model is perfect in any area. The user must remain in a “trust but verify” mode for the relationship between human reality and AI to work without potential catastrophe. There should never be blind trust in an LLM.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/psychology-trust-ai-guide-measuring-designing-user-confidence/4-openai-scorecard-gpt-4o.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="363"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/psychology-trust-ai-guide-measuring-designing-user-confidence/4-openai-scorecard-gpt-4o.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/psychology-trust-ai-guide-measuring-designing-user-confidence/4-openai-scorecard-gpt-4o.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/psychology-trust-ai-guide-measuring-designing-user-confidence/4-openai-scorecard-gpt-4o.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/psychology-trust-ai-guide-measuring-designing-user-confidence/4-openai-scorecard-gpt-4o.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/psychology-trust-ai-guide-measuring-designing-user-confidence/4-openai-scorecard-gpt-4o.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/psychology-trust-ai-guide-measuring-designing-user-confidence/4-openai-scorecard-gpt-4o.png"
			
			sizes="100vw"
			alt="Example of OpenAI scorecard for GPT-4o"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Figure 4: Example of OpenAI scorecard for GPT-4o. (<a href='https://files.smashing.media/articles/psychology-trust-ai-guide-measuring-designing-user-confidence/4-openai-scorecard-gpt-4o.png'>Large preview</a>)
    </figcaption>
  
</figure>

<h3 id="designing-for-trust-repair-graceful-error-handling-and-not-knowing-an-answer">Designing For Trust Repair (Graceful Error Handling) And Not Knowing an Answer</h3>

<p>Your AI will make mistakes.</p>

<blockquote>Trust is not determined by the absence of errors, but by how those errors are handled.</blockquote>

<ul>
<li><strong>Acknowledge Errors Humbly.</strong><br />
When the AI is wrong, it should be able to state that clearly. <em>“My apologies, I misunderstood that request. Could you please rephrase it?”</em> is far better than silence or a nonsensical answer.</li>
<li><strong>Provide an Easy Path to Correction.</strong><br />
Make feedback mechanisms (like thumbs up/down or a correction box) obvious. More importantly, show that the feedback is being used. A <em>“Thank you, I’m learning from your correction”</em> can help rebuild trust after a failure, as long as it is true.</li>
</ul>

<p>Likewise, your AI can’t know everything. You should acknowledge this to your users.</p>

<blockquote class="pull-quote">
  <p>
    <a class="pull-quote__link" aria-label="Share on Twitter" href="https://twitter.com/share?text=%0aUX%20practitioners%20should%20work%20with%20the%20product%20team%20to%20ensure%20that%20honesty%20about%20limitations%20is%20a%20core%20product%20principle.%0a&url=https://smashingmagazine.com%2f2025%2f09%2fpsychology-trust-ai-guide-measuring-designing-user-confidence%2f">
      
UX practitioners should work with the product team to ensure that honesty about limitations is a core product principle.

    </a>
  </p>
  <div class="pull-quote__quotation">
    <div class="pull-quote__bg">
      <span class="pull-quote__symbol">“</span></div>
  </div>
</blockquote>

<p>This can include the following:</p>

<ul>
<li><strong>Establish User-Centric Metrics:</strong> Instead of only measuring engagement or task completion, UXers can work with product managers to define and track metrics like:

<ul>
<li><strong>Hallucination Rate:</strong> The frequency with which the AI provides verifiably false information.</li>
<li><strong>Successful Fallback Rate:</strong> How often the AI correctly identifies its inability to answer and provides a helpful, honest alternative. (A sketch of computing both rates follows this list.)</li>
</ul></li>
<li><strong>Prioritize the “I Don’t Know” Experience:</strong> UXers should frame the “I don’t know” response not as an error state, but as a critical feature. They must lobby for the engineering and content resources needed to design a high-quality, helpful fallback experience.</li>
</ul>
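<p>Both rates can be computed from human-labeled evaluation sessions. A sketch, assuming reviewers tag each AI answer with a verdict (the labels are illustrative):</p>

<div class="break-out">
<pre><code class="language-typescript">// Sketch: user-centric quality metrics from human-reviewed answers.
type ReviewedAnswer = {
  verdict: "correct" | "hallucination" | "honest_fallback" | "bad_fallback";
};

function hallucinationRate(answers: ReviewedAnswer[]): number {
  const bad = answers.filter((a) => a.verdict === "hallucination").length;
  return answers.length === 0 ? 0 : bad / answers.length;
}

function successfulFallbackRate(answers: ReviewedAnswer[]): number {
  const fallbacks = answers.filter(
    (a) => a.verdict === "honest_fallback" || a.verdict === "bad_fallback"
  );
  const good = fallbacks.filter((a) => a.verdict === "honest_fallback").length;
  return fallbacks.length === 0 ? 0 : good / fallbacks.length;
}
</code></pre>
</div>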

<h2 id="ux-writing-and-trust">UX Writing And Trust</h2>

<p>All of these considerations highlight the critical role of <a href="https://lmsanchez.medium.com/what-is-ux-writing-1eb71b0f0606">UX writing</a> in the development of trustworthy AI. UX writers are the architects of the AI’s voice and tone, ensuring that its communication is clear, honest, and empathetic. They translate complex technical processes into user-friendly explanations, craft helpful error messages, and design conversational flows that build confidence and rapport. Without <strong>thoughtful UX writing</strong>, even the most technologically advanced AI can feel opaque and untrustworthy.</p>

<p>The words and phrases an AI uses are its primary interface with users. UX writers are uniquely positioned to shape this interaction, ensuring that every tooltip, prompt, and response contributes to a positive and trust-building experience. Their expertise in <strong>human-centered language and design</strong> is indispensable for creating AI systems that not only perform well but also earn and maintain the trust of their users.</p>

<p>A few key areas for UX writers to focus on when writing for AI include:</p>

<ul>
<li><strong>Prioritize Transparency</strong><br />
Clearly communicate the AI’s capabilities and limitations, especially when it’s still learning or if its responses are generated rather than factual. Use phrases that indicate the AI’s nature, such as <em>“As an AI, I can&hellip;”</em> or <em>“This is a generated response.”</em></li>
<li><strong>Design for Explainability</strong><br />
When the AI provides a recommendation, decision, or complex output, strive to explain the reasoning behind it in an understandable way. This builds trust by showing the user how the AI arrived at its conclusion.</li>
<li><strong>Emphasize User Control</strong><br />
Empower users by providing clear ways to provide feedback, correct errors, or opt out of certain AI features. This reinforces the idea that the user is in control and the AI is a tool to assist them.</li>
</ul>

<h2 id="the-ethical-tightrope-the-researcher-s-responsibility">The Ethical Tightrope: The Researcher’s Responsibility</h2>

<p>As the people responsible for understanding and advocating for users, we walk an ethical tightrope. Our work comes with profound responsibilities.</p>

<h3 id="the-danger-of-trustwashing">The Danger Of “Trustwashing”</h3>

<p>We must draw a hard line between designing for <em>calibrated trust</em> and designing to <em>manipulate</em> users into trusting a flawed, biased, or harmful system. For example, if an AI system designed for loan approvals consistently discriminates against certain demographics but presents a user interface that implies fairness and transparency, this would be an instance of trustwashing.</p>

<p>Another example of trustwashing would be if an AI medical diagnostic tool occasionally misdiagnoses conditions, but the user interface makes it seem infallible. To avoid trustwashing, the system should clearly communicate the potential for error and the need for human oversight.</p>

<p>Our goal must be to create genuinely trustworthy systems, not just the perception of trust. Using these principles to lull users into a false sense of security is a betrayal of our professional ethics.</p>

<p><strong>To avoid and prevent trustwashing, researchers and UX teams should:</strong></p>

<ul>
<li><strong>Prioritize genuine transparency.</strong><br />
Clearly communicate the limitations, biases, and uncertainties of AI systems. Don’t overstate capabilities or obscure potential risks.</li>
<li><strong>Conduct rigorous, independent evaluations.</strong><br />
Go beyond internal testing and seek external validation of system performance, fairness, and robustness.</li>
<li><strong>Engage with diverse stakeholders.</strong><br />
Involve users, ethics experts, and impacted communities in the design, development, and evaluation processes to identify potential harms and build genuine trust.</li>
<li><strong>Be accountable for outcomes.</strong><br />
Take responsibility for the societal impact of AI systems, even if unintended. Establish clear and accessible mechanisms for redress and continuous improvement, ensuring that individuals and communities affected by AI decisions have avenues for recourse and compensation.</li>
<li><strong>Educate the public.</strong><br />
Help users understand how AI works, its limitations, and what to look for when evaluating AI products.</li>
<li><strong>Advocate for ethical guidelines and regulations.</strong><br />
Support the development and implementation of industry standards and policies that promote responsible AI development and prevent deceptive practices.</li>
<li><strong>Be wary of marketing hype.</strong><br />
Critically assess claims made about AI systems, especially those that emphasize “trustworthiness” without clear evidence or detailed explanations.</li>
<li><strong>Publish negative findings.</strong><br />
Don’t shy away from reporting challenges, failures, or ethical dilemmas encountered during research. Transparency about limitations is crucial for building long-term trust.</li>
<li><strong>Focus on user empowerment.</strong><br />
Design systems that give users control, agency, and understanding rather than just passively accepting AI outputs.</li>
</ul>

<h4 id="the-duty-to-advocate">The Duty To Advocate</h4>

<p>When our research uncovers deep-seated distrust or potential harm &mdash; like the fear of job displacement &mdash; our job has only just begun. We have an ethical duty to advocate for that user. In my experience directing research teams, I’ve seen that the hardest part of our job is often carrying these uncomfortable truths into rooms where decisions are made. We must champion these findings and advocate for <strong>design and strategy shifts that prioritize user well-being, even when it challenges the product roadmap</strong>.</p>

<p>I personally try to approach presenting this information as an opportunity for growth and improvement, rather than a negative challenge.</p>

<p>For example, instead of stating <em>“Users don’t trust our AI because they fear job displacement,”</em> I might frame it as <em>“Addressing user concerns about job displacement presents a significant opportunity to build deeper trust and long-term loyalty by demonstrating our commitment to responsible AI development and exploring features that enhance human capabilities rather than replace them.”</em> This reframing can shift the conversation from a defensive posture to a proactive, problem-solving mindset, encouraging collaboration and innovative solutions that ultimately benefit both the user and the business.</p>

<p>It’s no secret that one of the more appealing areas for businesses to use AI is workforce reduction. In reality, there will be many cases where businesses look to cut 10&ndash;20% of a particular job family due to the perceived efficiency gains of AI. However, giving users the opportunity to shape the product may steer it in a direction that makes them feel safer than if they had provided no feedback at all. We should not attempt to convince users they are wrong if they are distrustful of AI. We should appreciate that they are willing to provide feedback, creating an experience informed by the human experts who have long been doing the task being automated.</p>

<div class="partners__lead-place"></div>

<h2 id="conclusion-building-our-digital-future-on-a-foundation-of-trust">Conclusion: Building Our Digital Future On A Foundation Of Trust</h2>

<p>The rise of AI is not the first major technological shift our field has faced. However, it presents one of the most significant psychological challenges of our current time. Building products that are not just usable but also <strong>responsible</strong>, <strong>humane</strong>, and <strong>trustworthy</strong> is our obligation as UX professionals.</p>

<p><strong>Trust is not a soft metric.</strong> It is the fundamental currency of any successful human-technology relationship. By understanding its psychological roots, measuring it with rigor, and designing for it with intent and integrity, we can move from creating “intelligent” products to building a future where users can place their confidence in the tools they use every day. A trust that is earned and deserved.</p>

<h3 id="table-1-published-academic-scales-measuring-trust-in-automated-systems">Table 1: Published Academic Scales Measuring Trust In Automated Systems</h3>

<table class="tablesaw break-out">
    <thead>
        <tr>
            <th>Survey Tool Name</th>
            <th>Focus</th>
      <th>Key Dimensions of Trust</th>
      <th>Citation</th>
        </tr>
    </thead>
    <tbody>
        <tr>
            <td>Trust in Automation Scale</td>
            <td>12-item questionnaire to assess trust between people and automated systems.</td>
      <td>Measures a general level of trust, including reliability, predictability, and confidence.</td>
      <td>Jian, J. Y., Bisantz, A. M., & Drury, C. G. (2000). <a href="https://www.researchgate.net/publication/247502831_Foundations_for_an_Empirically_Determined_Scale_of_Trust_in_Automated_Systems">Foundations for an empirically determined scale of trust in automated systems</a>. International Journal of Cognitive Ergonomics, 4(1), 53–71.</td>
        </tr>
        <tr>
            <td>Trust of Automated Systems Test (TOAST)</td>
            <td>9-item scale used to measure user trust in a variety of automated systems, designed for quick administration.</td>
      <td>Divided into two main subscales: Understanding (user’s comprehension of the system) and Performance (belief in the system’s effectiveness).</td>
      <td>Wojton, H. M., Porter, D., Lane, S. T., Bieber, C., & Madhavan, P. (2020). <a href="https://research.testscience.org/post/2019-initial-validation-of-the-trust-of-automated-systems-test-toast/paper.pdf">Initial validation of the trust of automated systems test (TOAST)</a>. The Journal of Social Psychology, 160(6), 735–750.</td>
        </tr>
        <tr>
            <td>Trust in Automation Questionnaire</td>
            <td>A 19-item questionnaire capable of predicting user reliance on automated systems. A 2-item subscale is available for quick assessments; the full tool is recommended for a more thorough analysis.</td>
      <td>Measures 6 factors: Reliability, Understandability, Propensity to trust, Intentions of developers, Familiarity, and Trust in automation.</td>
      <td>Körber, M. (2018). <a href="https://www.researchgate.net/publication/323611886_Theoretical_considerations_and_development_of_a_questionnaire_to_measure_trust_in_automation">Theoretical considerations and development of a questionnaire to measure trust in automation</a>. In Proceedings 20th Triennial Congress of the IEA. Springer.</td>
        </tr>
    <tr>
            <td>Human Computer Trust Scale</td>
            <td>12-item questionnaire created to provide an empirically sound tool for assessing user trust in technology.</td>
      <td>Divided into two key factors:<ol><li><strong>Benevolence and Competence</strong>: This dimension captures the positive attributes of the technology</li><li><strong>Perceived Risk</strong>: This factor measures the user’s subjective assessment of the potential for negative consequences when using a technical artifact.</li></ol></td>
      <td>Gulati, S., Sousa, S., & Lamas, D. (2019). <a href="https://www.researchgate.net/profile/Sonia-Sousa-9/publication/335667672_Towards_an_empirically_developed_scale_for_measuring_trust/links/5f6f36d7458515b7cf508e88/Towards-an-empirically-developed-scale-for-measuring-trust.pdf">Design, development and evaluation of a human-computer trust scale</a>. Behaviour & Information Technology.</td>
        </tr>
    </tbody>
</table>
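
<p>If you adopt one of the scales in Table 1, the arithmetic is straightforward. The sketch below is a minimal, hypothetical scoring example in Python for a 12-item scale such as Jian et al.’s: it assumes 7-point Likert responses and that the first five (distrust-worded) items are reverse-scored. Both assumptions are placeholders for illustration; follow the scoring procedure in the cited paper for real studies.</p>

<pre><code class="language-python"># Hypothetical scoring sketch for a 12-item trust-in-automation survey.
# Assumptions: 7-point Likert responses (1-7), and items 1-5 are worded
# negatively (distrust), so they are reverse-scored. Check both against
# the published scale before using this in real research.

def score_trust_survey(responses, reverse_items=(1, 2, 3, 4, 5), scale_max=7):
    """Return the mean trust score from a dict of {item_number: rating}."""
    if len(responses) != 12:
        raise ValueError("Expected ratings for all 12 items.")
    adjusted = []
    for item, rating in responses.items():
        if not 1 <= rating <= scale_max:
            raise ValueError(f"Item {item} rating {rating} is out of range.")
        # Reverse-score distrust-worded items so higher always means more trust.
        adjusted.append(scale_max + 1 - rating if item in reverse_items else rating)
    return sum(adjusted) / len(adjusted)

# One participant's ratings (item number -> rating on the 1-7 scale).
participant = dict(enumerate([2, 3, 2, 1, 3, 6, 5, 6, 7, 5, 6, 6], start=1))
print(f"Mean trust score: {score_trust_survey(participant):.2f}")  # 5.83
</code></pre>

<p>Tracked per release, a mean like this turns trust from a one-off survey into a metric you can watch over time.</p>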

<h3 id="appendix-a-trust-building-tactics-checklist">Appendix A: Trust-Building Tactics Checklist</h3>

<p>To design for calibrated trust, consider implementing the following tactics, organized by the four pillars of trust:</p>

<h4 id="1-ability-competence-predictability">1. Ability (Competence) &amp; Predictability</h4>

<ul>
<li>✅ <strong>Set Clear Expectations:</strong> Use onboarding, tooltips, and empty states to honestly communicate the AI’s strengths and weaknesses.</li>
<li>✅ <strong>Show Confidence Levels:</strong> Display the AI’s uncertainty (e.g., “70% chance,” “85% confident”) or highlight less certain parts of its output (see the sketch after this list).</li>
<li>✅ <strong>Provide Explainability (XAI):</strong> Offer useful, human-understandable rationales for the AI’s decisions or recommendations (e.g., “Because you frequently read X, I’m recommending Y”).</li>
<li>✅ <strong>Design for Graceful Error Handling:</strong>

<ul>
<li>✅ Acknowledge errors humbly (e.g., “My apologies, I misunderstood that request.”).</li>
<li>✅ Provide easy paths to correction (e.g., prominent feedback mechanisms like thumbs up/down).</li>
<li>✅ Show that feedback is being used (e.g., “Thank you, I’m learning from your correction”).</li>
</ul></li>
<li>✅ <strong>Design for “I Don’t Know” Responses:</strong>

<ul>
<li>✅ Acknowledge limitations honestly.</li>
<li>✅ Prioritize a high-quality, helpful fallback experience when the AI cannot answer.</li>
</ul></li>
<li>✅ <strong>Prioritize Transparency:</strong> Clearly communicate the AI’s capabilities and limitations, especially if responses are generated.</li>
</ul>
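
<p>To make the “Show Confidence Levels” and “I Don’t Know” tactics concrete, here is a minimal sketch in Python. It assumes your model already returns an answer along with a confidence value between 0 and 1; the 0.4 threshold and the copy are illustrative placeholders to tune and user-test, not validated values.</p>

<pre><code class="language-python"># Hypothetical presenter applying two tactics from the checklist above:
# show confidence when reasonably sure, and fall back to a humble
# "I don't know" with a helpful next step when confidence is low.

LOW_CONFIDENCE_THRESHOLD = 0.4  # illustrative value; tune with real user testing

def present_answer(answer: str, confidence: float) -> str:
    if confidence < LOW_CONFIDENCE_THRESHOLD:
        # Prioritize a high-quality fallback over a shaky guess.
        return ("I'm not confident I can answer that accurately. "
                "Would you like me to search the help center or "
                "connect you with a human agent?")
    # Signal uncertainty instead of presenting output as fact.
    return f"{answer} (I'm about {confidence:.0%} confident in this.)"

print(present_answer("Your refund should arrive in 3-5 business days.", 0.85))
print(present_answer("The fee is waived for students.", 0.25))
</code></pre>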

<h4 id="2-benevolence-1">2. Benevolence</h4>

<ul>
<li>✅ <strong>Address Existential Fears:</strong> When users express concerns (e.g., job displacement), validate their concerns and reframe the feedback into actionable insights about collaborative tools.</li>
<li>✅ <strong>Prioritize User Well-being:</strong> Advocate for design and strategy shifts that prioritize user well-being, even if it challenges the product roadmap.</li>
<li>✅ <strong>Emphasize User Control:</strong> Provide clear ways for users to give feedback, correct errors, or opt out of AI features.</li>
</ul>

<h4 id="3-integrity-1">3. Integrity</h4>

<ul>
<li>✅ <strong>Adhere to Ethical Principles:</strong> Ensure the AI operates on predictable, ethical principles, demonstrating fairness and honesty.</li>
<li>✅ <strong>Prioritize Genuine Transparency:</strong> Clearly communicate the limitations, biases, and uncertainties of AI systems; avoid overstating capabilities or obscuring risks.</li>
<li>✅ <strong>Conduct Rigorous, Independent Evaluations:</strong> Seek external validation of system performance, fairness, and robustness to mitigate bias.</li>
<li>✅ <strong>Engage Diverse Stakeholders:</strong> Involve users, ethics experts, and impacted communities in the design and evaluation processes.</li>
<li>✅ <strong>Be Accountable for Outcomes:</strong> Establish clear mechanisms for redress and continuous improvement for societal impacts, even if unintended.</li>
<li>✅ <strong>Educate the Public:</strong> Help users understand how AI works, its limitations, and how to evaluate AI products.</li>
<li>✅ <strong>Advocate for Ethical Guidelines:</strong> Support the development and implementation of industry standards and policies that promote responsible AI.</li>
<li>✅ <strong>Be Wary of Marketing Hype:</strong> Critically assess claims about AI “trustworthiness” and demand verifiable data.</li>
<li>✅ <strong>Publish Negative Findings:</strong> Be transparent about challenges, failures, or ethical dilemmas encountered during research.</li>
</ul>

<h4 id="4-predictability-reliability-1">4. Predictability &amp; Reliability</h4>

<ul>
<li>✅ <strong>Set Clear Expectations:</strong> Use onboarding, tooltips, and empty states to honestly communicate what the AI is good at and where it might struggle.</li>
<li>✅ <strong>Show Confidence Levels:</strong> Instead of just giving an answer, have the AI signal its own uncertainty.</li>
<li>✅ <strong>Provide Explainability (XAI) and Transparency:</strong> Offer a useful, human-understandable rationale for AI decisions.</li>
<li>✅ <strong>Design for Graceful Error Handling:</strong> Acknowledge errors humbly and provide easy paths to correction.</li>
<li>✅ <strong>Prioritize the “I Don’t Know” Experience:</strong> Frame “I don’t know” as a feature and design a high-quality fallback experience.</li>
<li>✅ <strong>Prioritize Transparency (UX Writing):</strong> Clearly communicate the AI’s capabilities and limitations, especially when it’s still learning or if responses are generated.</li>
<li>✅ <strong>Design for Explainability (UX Writing):</strong> Explain the reasoning behind AI recommendations, decisions, or complex outputs.</li>
</ul>

<div class="signature">
  <img src="https://www.smashingmagazine.com/images/logo/logo--red.png" alt="Smashing Editorial" width="35" height="46" loading="lazy" decoding="async" />
  <span>(yk)</span>
</div>


              </article>
            </body>
          </html>
        ]]></content:encoded></item><item><author>Paul Boag</author><title>Functional Personas With AI: A Lean, Practical Workflow</title><link>https://www.smashingmagazine.com/2025/09/functional-personas-ai-lean-practical-workflow/</link><pubDate>Tue, 16 Sep 2025 08:00:00 +0000</pubDate><guid>https://www.smashingmagazine.com/2025/09/functional-personas-ai-lean-practical-workflow/</guid><description>For too long, personas have been something that many of us just created, despite the considerable work that goes into them, only to find they have limited usefulness. Paul Boag shows how to breathe new life into this stale UX asset and demonstrates that it’s possible to create truly useful functional personas in a lightweight way.</description><content:encoded><![CDATA[
          <html>
            <head>
              <meta charset="utf-8">
              <link rel="canonical" href="https://www.smashingmagazine.com/2025/09/functional-personas-ai-lean-practical-workflow/" />
              <title>Functional Personas With AI: A Lean, Practical Workflow</title>
            </head>
            <body>
              <article>
                <header>
                  <h1>Functional Personas With AI: A Lean, Practical Workflow</h1>
                  
                    
                    <address>Paul Boag</address>
                  
                  <time datetime="2025-09-16T08:00:00&#43;00:00" class="op-published">2025-09-16T08:00:00+00:00</time>
                  <time datetime="2025-09-16T08:00:00&#43;00:00" class="op-modified">2025-12-25T10:32:38+00:00</time>
                </header>
                
                

<p>Traditional personas suck for UX work. They obsess over marketing metrics like age, income, and job titles while missing what actually matters in design: what people are trying to accomplish.</p>

<p><a href="https://boagworld.com/usability/personas/">Functional personas</a>, on the other hand, focus on what people are trying to do, not who they are on paper. With a simple AI‑assisted workflow, you can build and maintain personas that actually guide design, content, and conversion decisions.</p>

<ul>
<li>Keep users front of mind with task‑driven personas,</li>
<li>Skip fragile demographics; center on goals, questions, and blockers,</li>
<li>Use AI to process your messy inputs fast and fill research gaps,</li>
<li>Validate lightly, ship confidently, and keep them updated.</li>
</ul>

<p>In this article, I want to breathe new life into a stale UX asset.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/functional-personas-ai-lean-practical-workflow/traditional-demographic-personas.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="483"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/functional-personas-ai-lean-practical-workflow/traditional-demographic-personas.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/functional-personas-ai-lean-practical-workflow/traditional-demographic-personas.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/functional-personas-ai-lean-practical-workflow/traditional-demographic-personas.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/functional-personas-ai-lean-practical-workflow/traditional-demographic-personas.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/functional-personas-ai-lean-practical-workflow/traditional-demographic-personas.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/functional-personas-ai-lean-practical-workflow/traditional-demographic-personas.png"
			
			sizes="100vw"
			alt="Traditional demographic personas"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Traditional demographic personas look good but quickly get outdated, need constant updating, and rarely offer practical UX guidance. (<a href='https://files.smashing.media/articles/functional-personas-ai-lean-practical-workflow/traditional-demographic-personas.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>For too long, personas have been something that many of us just created, despite the considerable work that goes into them, only to find they have limited usefulness.</p>

<p>I know that many of you may have given up on them entirely, but I am hoping in this post to encourage you that it is possible to create truly useful personas in a lightweight way.</p>


<h2 id="why-personas-still-matter">Why Personas Still Matter</h2>

<p>Personas give you a shared lens. When everyone uses the same reference point, you cut debate and make better calls. For UX designers, developers, and digital teams, that shared lens keeps you from designing in silos and helps you prioritize work that genuinely improves the experience.</p>

<p>I use personas as a quick test: <em>Would this change help this user complete their task faster, with fewer doubts?</em> If the answer is no (or a shrug), it’s probably a sign the idea isn’t worth pursuing.</p>

<h2 id="from-demographics-to-function">From Demographics To Function</h2>

<p>Traditional personas tell you someone’s age, job title, or favorite brand. That makes a nice poster, but it rarely changes design or copy.</p>

<p><strong>Functional personas flip the script.</strong> They describe:</p>

<ul>
<li><strong>Goals &amp; tasks:</strong> What the person is here to achieve.</li>
<li><strong>Questions &amp; objections:</strong> What they need to know before they act.</li>
<li><strong>Touchpoints:</strong> How the person interacts with the organization.</li>
<li><strong>Service gaps:</strong> How the company might be letting this persona down.</li>
</ul>

<p>When you center on tasks and friction, you get direct lines from user needs to UI decisions, content, and conversion paths.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/functional-personas-ai-lean-practical-workflow/persona-templates.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="2354"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/functional-personas-ai-lean-practical-workflow/persona-templates.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/functional-personas-ai-lean-practical-workflow/persona-templates.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/functional-personas-ai-lean-practical-workflow/persona-templates.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/functional-personas-ai-lean-practical-workflow/persona-templates.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/functional-personas-ai-lean-practical-workflow/persona-templates.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/functional-personas-ai-lean-practical-workflow/persona-templates.png"
			
			sizes="100vw"
			alt="Persona templates"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Persona templates should be customized for each organization’s specific needs and contexts. (<a href='https://files.smashing.media/articles/functional-personas-ai-lean-practical-workflow/persona-templates.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>But remember, this list isn’t set in stone &mdash; adapt it to what’s actually useful in your specific situation.</p>

<blockquote class="pull-quote">
  <p>
    <a class="pull-quote__link" aria-label="Share on Twitter" href="https://twitter.com/share?text=%0aOne%20of%20the%20biggest%20problems%20with%20traditional%20personas%20was%20following%20a%20rigid%20template%20regardless%20of%20whether%20it%20made%20sense%20for%20your%20project.%20We%20must%20not%20fall%20into%20that%20same%20mistake%20with%20functional%20personas.%0a&url=https://smashingmagazine.com%2f2025%2f09%2ffunctional-personas-ai-lean-practical-workflow%2f">
      
One of the biggest problems with traditional personas was following a rigid template regardless of whether it made sense for your project. We must not fall into that same mistake with functional personas.

    </a>
  </p>
  <div class="pull-quote__quotation">
    <div class="pull-quote__bg">
      <span class="pull-quote__symbol">“</span></div>
  </div>
</blockquote>

<h2 id="the-benefits-of-functional-personas">The Benefits of Functional Personas</h2>

<p>For small startups, functional personas <strong>reduce wasted effort</strong>. For enterprise teams, they keep sprawling projects grounded in what matters most.</p>

<p>However, because of the way we are going to produce our personas, they provide certain benefits in either case:</p>

<ul>
<li><strong>Lighten the load:</strong> They’re easier to update without large research cycles.</li>
<li><strong>Stay current:</strong> Because they are easy to produce, we can update them more often.</li>
<li><strong>Tie to outcomes:</strong> Tasks, objections, and proof points map straight to funnels, flows, and product decisions.</li>
</ul>

<p>We can deliver these benefits because we are going to use AI to help us, rather than carrying out a lot of time-consuming new research.</p>

<h2 id="how-ai-helps-us-get-there">How AI Helps Us Get There</h2>

<p>Of course, doing fresh research is always preferable. But in many cases, it is not feasible due to time or budget constraints. I would argue that using AI to help us create personas based on existing assets is better than paying no attention to users at all.</p>

<p>AI tools can chew through the inputs you already have (surveys, analytics, chat logs, reviews) and surface patterns you can act on. They also help you scan public conversations around your product category to fill gaps.</p>

<p>I therefore recommend using AI to:</p>

<ul>
<li><strong>Synthesize inputs:</strong> Turn scattered notes into clean themes.</li>
<li><strong>Spot segments by need:</strong> Group people by jobs‑to‑be‑done, not demographics.</li>
<li><strong>Draft quickly:</strong> Produce first‑pass personas and sample journeys in minutes.</li>
<li><strong>Iterate with stakeholders:</strong> Update on the fly as you get feedback.</li>
</ul>

<blockquote class="pull-quote">
  <p>
    <a class="pull-quote__link" aria-label="Share on Twitter" href="https://twitter.com/share?text=%0aAI%20doesn%e2%80%99t%20remove%20the%20need%20for%20traditional%20research.%20Rather,%20it%20is%20a%20way%20of%20extracting%20more%20value%20from%20the%20scattered%20insights%20into%20users%20that%20already%20exist%20within%20an%20organization%20or%20online.%0a&url=https://smashingmagazine.com%2f2025%2f09%2ffunctional-personas-ai-lean-practical-workflow%2f">
      
AI doesn’t remove the need for traditional research. Rather, it is a way of extracting more value from the scattered insights into users that already exist within an organization or online.

    </a>
  </p>
  <div class="pull-quote__quotation">
    <div class="pull-quote__bg">
      <span class="pull-quote__symbol">“</span></div>
  </div>
</blockquote>

<div class="partners__lead-place"></div>

<h2 id="the-workflow">The Workflow</h2>

<p>Here’s how to move from scattered inputs to usable personas. Each step builds on the last, so treat it as a cycle you can repeat as projects evolve.</p>

<h3 id="1-set-up-a-dedicated-workspace">1. Set Up A Dedicated Workspace</h3>

<p>Create a dedicated space within your AI tool for this work. Most AI platforms offer project management features that let you organize files and conversations:</p>

<ul>
<li>In ChatGPT and Claude, use “Projects” to store context and instructions.</li>
<li>In Perplexity, Gemini, and Copilot, similar functionality is referred to as “Spaces.”</li>
</ul>

<p>This project space becomes your central repository where all uploaded documents, research data, and generated personas live together. The AI will maintain context between sessions, so you won’t have to re-upload materials each time you iterate. This structured approach makes your workflow more efficient and helps the AI deliver more consistent results.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/functional-personas-ai-lean-practical-workflow/chatgpt-projects-persona-development.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="525"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/functional-personas-ai-lean-practical-workflow/chatgpt-projects-persona-development.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/functional-personas-ai-lean-practical-workflow/chatgpt-projects-persona-development.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/functional-personas-ai-lean-practical-workflow/chatgpt-projects-persona-development.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/functional-personas-ai-lean-practical-workflow/chatgpt-projects-persona-development.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/functional-personas-ai-lean-practical-workflow/chatgpt-projects-persona-development.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/functional-personas-ai-lean-practical-workflow/chatgpt-projects-persona-development.png"
			
			sizes="100vw"
			alt="ChatGPT Project for persona development"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      ChatGPT Projects serve as an effective tool for gathering and analyzing user research data in ways that directly support persona development. (<a href='https://files.smashing.media/articles/functional-personas-ai-lean-practical-workflow/chatgpt-projects-persona-development.png'>Large preview</a>)
    </figcaption>
  
</figure>

<h3 id="2-write-clear-instructions">2. Write Clear Instructions</h3>

<p>Next, brief your AI project so that it understands what you want from it. For example:</p>

<blockquote>“Act as a user researcher. Create realistic, functional personas using the project files and public research. Segment by needs, tasks, questions, pain points, and goals. Show your reasoning.”</blockquote>

<p>Asking for a rationale gives you a paper trail you can defend to stakeholders.</p>
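
<p>If you ever want to script this step rather than type it into a chat UI, the same brief can be sent as a system message through an API. Here is a minimal sketch using the OpenAI Python SDK; the model name is a placeholder, and you would need an <code>OPENAI_API_KEY</code> set in your environment.</p>

<pre><code class="language-python"># Minimal sketch: the persona-researcher brief sent as a system message
# via the OpenAI Python SDK (pip install openai). Requires OPENAI_API_KEY
# in your environment; the model name is a placeholder, not a recommendation.

from openai import OpenAI

client = OpenAI()

BRIEF = (
    "Act as a user researcher. Create realistic, functional personas "
    "using the project files and public research. Segment by needs, "
    "tasks, questions, pain points, and goals. Show your reasoning."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whichever model you have access to
    messages=[
        {"role": "system", "content": BRIEF},
        {"role": "user", "content": "Here is our survey data: ..."},
    ],
)
print(response.choices[0].message.content)
</code></pre>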

<h3 id="3-upload-what-you-ve-got-even-if-it-s-messy">3. Upload What You’ve Got (Even If It’s Messy)</h3>

<p>This is where things get really powerful.</p>

<p>Upload everything (and I mean everything) you can put your hands on relating to the user. Old surveys, past personas, analytics screenshots, FAQs, support tickets, review snippets; dump them all in. The more varied the sources, the stronger the triangulation.</p>

<h3 id="4-run-focused-external-research">4. Run Focused External Research</h3>

<p>Once you have done that, supplement the data by getting AI to carry out “deep research.” Have it scan recent public conversations (I often focus on the last year) about your brand, product space, or competitors. Look for:</p>

<ul>
<li>Who’s talking and what they’re trying to do;</li>
<li>Common questions and blockers;</li>
<li>Phrases people use (great for copywriting).</li>
</ul>

<p>Save the report you get back into your project.</p>

<h3 id="5-propose-segments-by-need">5. Propose Segments By Need</h3>

<p>Next, ask AI to suggest segments based on tasks and friction points (not demographics). Push back until each segment is <strong>distinct, observable, and actionable</strong>. If two would behave the same way in your flow, merge them.</p>

<p>This takes a little bit of trial and error and is where your experience really comes into play.</p>

<h3 id="6-generate-draft-personas">6. Generate Draft Personas</h3>

<p>Now that you have your segments, the next step is to draft your personas. Use a simple template so the document actually gets read and used. If your personas become too complicated, people will not read them. Each persona should:</p>

<ul>
<li>State goals and tasks,</li>
<li>List objections and blockers,</li>
<li>Highlight pain points,</li>
<li>Show touchpoints,</li>
<li>Identify service gaps.</li>
</ul>

<p>Below is a sample template you can work with:</p>

<pre><code class="language-markdown">&#35; Persona Title: e.g. Savvy Shopper
- Person's Name: e.g. John Smith.
- Age: e.g. 24
- Job: e.g. Social Media Manager

"A quote that sums up the persona's general attitude"

&#35;&#35; Primary Goal
What they’re here to achieve (1–2 lines).

&#35;&#35; Key Tasks
• Task 1
• Task 2
• Task 3

&#35;&#35; Questions & Objections
• What do they need to know before they act?
• What might make them hesitate?

&#35;&#35; Pain Points
• Where do they get stuck?
• What feels risky, slow, or confusing?

&#35;&#35; Touchpoints
• What channels are they most commonly interacting with?

&#35;&#35; Service Gaps
• How is the organization currently failing this persona?
</code></pre>

<p>Remember, you should customize this to reflect what will prove useful within your organization.</p>

<h3 id="7-validate">7. Validate</h3>

<p>It is important to validate that what the AI has produced is realistic. Obviously, no persona is a true representation, as it is a snapshot in time of a hypothetical user. However, we do want it to be as accurate as possible.</p>

<p>Share your drafts with colleagues who interact regularly with real users &mdash; people in support roles or research teams. Where possible, test with a handful of users. Then cut anything you can’t defend and correct any errors that are identified.</p>

<div class="partners__lead-place"></div>

<h2 id="troubleshooting-guardrails">Troubleshooting &amp; Guardrails</h2>

<p>As you work through the above process, you will encounter problems. Here are common pitfalls and how to avoid them:</p>

<ul>
<li><strong>Too many personas?</strong><br />
Merge until each one changes a design or copy decision. Three strong personas beat seven weak ones.</li>
<li><strong>Stakeholder wants demographics?</strong><br />
Only include details that affect behavior. Otherwise, leave them out. Suggest separate personas for other functions (such as marketing).</li>
<li><strong>AI hallucinations?</strong><br />
Always ask for a rationale or sources. Cross‑check with your own data and customer‑facing teams.</li>
<li><strong>Not enough data?</strong><br />
Mark assumptions clearly, then validate with quick interviews, surveys, or usability tests.</li>
</ul>

<h2 id="making-personas-useful-in-practice">Making Personas Useful In Practice</h2>

<p>The most important thing to remember is to actually use your personas once they’ve been created. They can easily become forgotten PDFs rather than active tools. Instead, personas should shape your work and be referenced regularly. Here are some ways you can put personas to work:</p>

<ul>
<li><strong>Navigation &amp; IA:</strong> Structure menus by top tasks.</li>
<li><strong>Content &amp; Proof:</strong> Map objections to FAQs, case studies, and microcopy.</li>
<li><strong>Flows &amp; UI:</strong> Streamline steps to match how people think.</li>
<li><strong>Conversion:</strong> Match CTAs to personas’ readiness, goals, and pain points.</li>
<li><strong>Measurement:</strong> Track KPIs that map to personas, not vanity metrics.</li>
</ul>

<p>With this approach, personas evolve from static deliverables into <strong>dynamic reference points</strong> your whole team can rely on.</p>

<h2 id="keep-them-alive">Keep Them Alive</h2>

<p>Treat personas as a <strong>living toolkit</strong>. Schedule a refresh every quarter or after major product changes. Rerun the research pass, regenerate summaries, and archive outdated assumptions. The goal isn’t perfection; it’s keeping them relevant enough to guide decisions.</p>

<h2 id="bottom-line">Bottom Line</h2>

<p>Functional personas are faster to build, easier to maintain, and better aligned with real user behavior. By combining AI’s speed with human judgment, you can create personas that don’t just sit in a slide deck; they actively shape better products, clearer interfaces, and smoother experiences.</p>

<div class="signature">
  <img src="https://www.smashingmagazine.com/images/logo/logo--red.png" alt="Smashing Editorial" width="35" height="46" loading="lazy" decoding="async" />
  <span>(yk)</span>
</div>


              </article>
            </body>
          </html>
        ]]></content:encoded></item><item><author>Lyndon Cerejo</author><title>Prompting Is A Design Act: How To Brief, Guide And Iterate With AI</title><link>https://www.smashingmagazine.com/2025/08/prompting-design-act-brief-guide-iterate-ai/</link><pubDate>Fri, 29 Aug 2025 10:00:00 +0000</pubDate><guid>https://www.smashingmagazine.com/2025/08/prompting-design-act-brief-guide-iterate-ai/</guid><description>Prompting is more than giving AI some instructions. You could think of it as a design act, part creative brief and part conversation design. This second article on AI augmenting design work introduces a designerly approach to prompting: one that blends creative briefing, interaction design, and structural clarity.</description><content:encoded><![CDATA[
          <html>
            <head>
              <meta charset="utf-8">
              <link rel="canonical" href="https://www.smashingmagazine.com/2025/08/prompting-design-act-brief-guide-iterate-ai/" />
              <title>Prompting Is A Design Act: How To Brief, Guide And Iterate With AI</title>
            </head>
            <body>
              <article>
                <header>
                  <h1>Prompting Is A Design Act: How To Brief, Guide And Iterate With AI</h1>
                  
                    
                    <address>Lyndon Cerejo</address>
                  
                  <time datetime="2025-08-29T10:00:00&#43;00:00" class="op-published">2025-08-29T10:00:00+00:00</time>
                  <time datetime="2025-08-29T10:00:00&#43;00:00" class="op-modified">2025-12-25T10:32:38+00:00</time>
                </header>
                
                

<p>In “<a href="https://www.smashingmagazine.com/2025/08/week-in-life-ai-augmented-designer/">A Week In The Life Of An AI-Augmented Designer</a>”, we followed Kate’s weeklong journey of her first AI-augmented design sprint. She had three realizations through the process:</p>

<ol>
<li><strong>AI isn’t a co-pilot (yet); it’s more like a smart, eager intern</strong>.<br />
One with access to a lot of information, good recall, fast execution, but no context. That mindset defined how she approached every interaction with AI: not as magic, but as management.</li>
<li><strong>Don’t trust; guide, coach, and always verify.</strong><br />
Like any intern, AI needs coaching and supervision, and that’s where her designerly skills kicked in. Kate relied on curiosity to explore, observation to spot bias, empathy to humanize the output, and critical thinking to challenge what didn’t feel right. Her learning mindset helped her keep up with advances, and experimentation helped her learn by doing.</li>
<li><strong>Prompting is part creative brief, and part conversation design, just with an AI instead of a person.</strong><br />
When you prompt an AI, you’re not just giving instructions, but designing how it responds, behaves, and outputs information. If AI is like an intern, then the prompt is your creative brief that frames the task, sets the tone, and clarifies what good looks like. It’s also your conversation script that guides how it responds, how the interaction flows, and how ambiguity is handled.</li>
</ol>

<p>As designers, we’re used to designing interactions for people. Prompting is us designing our own interactions with machines &mdash; it uses the same mindset with a new medium. It shapes an AI’s behavior the same way you’d guide a user with structure, clarity, and intent.</p>

<p>If you’ve bookmarked, downloaded, or saved prompts from others, you’re not alone. We’ve all done that during our AI journeys. But while someone else’s prompts are a good starting point, you will get better and more relevant results if you can write your own prompts tailored to your goals, context, and style. Using someone else’s prompt is like using a Figma template. It gets the job done, but mastery comes from understanding and applying the fundamentals of design, including layout, flow, and reasoning. Prompts have a structure too. And when you learn it, you stop guessing and start designing.</p>

<p><strong>Note</strong>: <em>All prompts in this article were tested using ChatGPT &mdash; not because it’s the only game in town, but because it’s friendly, flexible, and lets you talk like a person, yes, even after the recent GPT-5 “update”. That said, any LLM with a decent attention span will work. Results for the same prompt may vary based on the AI model you use, the AI’s training, mood, and how confidently it can hallucinate.</em></p>

<p><strong>Privacy PSA</strong>: <em>As always, don’t share anything you wouldn’t want leaked, logged, or accidentally included in the next AI-generated meme. Keep it safe, legal, and user-respecting.</em></p>

<p>With that out of the way, let’s dive into the mindset, anatomy, and methods of effective prompting as another tool in your design toolkit.</p>


<h2 id="mindset-prompt-like-a-designer">Mindset: Prompt Like A Designer</h2>

<p>As designers, we storyboard journeys, wireframe interfaces to guide users, and write UX copy with intention. However, when prompting AI, we treat it differently: “Summarize these insights”, “Make this better”, “Write copy for this screen”, and then wonder why the output feels generic, off-brand, or just meh. It’s like expecting a creative team to deliver great work from a one-line Slack message. We wouldn’t brief a freelancer, much less an intern, with “Design a landing page,” so why brief AI that way?</p>

<h3 id="prompting-is-a-creative-brief-for-a-machine">Prompting Is A Creative Brief For A Machine</h3>

<p>Think of a good prompt as a <strong>creative brief</strong>, just for a non-human collaborator. It needs similar elements, including a clear role, defined goal, relevant context, tone guidance, and output expectations. Just as a well-written creative brief unlocks alignment and quality from your team, a well-structured prompt helps the AI meet your expectations, even though it doesn’t have real instincts or opinions.</p>

<h3 id="prompting-is-also-conversation-design">Prompting Is Also Conversation Design</h3>

<p>A good prompt goes beyond defining the task and sets the tone for the exchange by designing a conversation: guiding how the AI interprets, sequences, and responds. You shape the flow of tasks, how ambiguity is handled, and how refinement happens &mdash; that’s conversation design.</p>

<h2 id="anatomy-structure-it-like-a-designer">Anatomy: Structure It Like A Designer</h2>

<p>So how do you write a designer-quality prompt? That’s where the <strong>W.I.R.E.+F.R.A.M.E.</strong> prompt design framework comes in &mdash; a UX-inspired framework for writing intentional, structured, and reusable prompts. Each letter represents a key design direction, grounded in the way UX designers already think. Just as a wireframe doesn’t dictate final visuals, the WIRE+FRAME framework doesn’t constrain creativity; it guides the AI with the structured information it needs.</p>

<blockquote>“Why not just use a series of back-and-forth chats with AI?”</blockquote>

<p>You can, and many people do. But without structure, AI fills in the gaps on its own, often with vague or generic results. A good prompt upfront saves time, reduces trial and error, and improves consistency. And whether you’re working on your own or across a team, a framework means you’re not reinventing a prompt every time but reusing what works to get better results faster.</p>

<p>Just as we build wireframes before adding layers of fidelity, the WIRE+FRAME framework has two parts:</p>

<ul>
<li><strong>WIRE</strong> is the must-have skeleton. It gives the prompt its shape.</li>
<li><strong>FRAME</strong> is the set of enhancements that bring polish, logic, tone, and reusability &mdash; like building a high-fidelity interface from the wireframe.</li>
</ul>

<p>Let’s improve <a href="https://www.smashingmagazine.com/2025/08/week-in-life-ai-augmented-designer/">Kate’s original research synthesis prompt</a> (<em>“Read this customer feedback and tell me how we can improve financial literacy for Gen Z in our app”</em>). To better reflect how people actually prompt in practice, let’s tweak it to a more broadly applicable version: <em>“Read this customer feedback and tell me how we can improve our app for Gen Z users.”</em> This one-liner mirrors the kinds of prompts we often throw at AI tools: short, simple, and often lacking structure.</p>

<p>Now, we’ll take that prompt and rebuild it using the first four elements of the <strong>W.I.R.E.</strong> framework &mdash; the core building blocks that provide AI with the main information it needs to deliver useful results.</p>

<h3 id="w-who-what">W: Who &amp; What</h3>

<p><em>Define who the AI should be, and what it’s being asked to deliver.</em></p>

<p>A creative brief starts with assigning the right hat. Are you briefing a copywriter? A strategist? A product designer? The same logic applies here. Give the AI a clear identity and task. Treat AI like a trusted freelancer or intern. Instead of saying “help me”, tell it who it should act as and what’s expected.</p>

<p><strong>Example</strong>: <em>“You are a senior UX researcher and customer insights analyst. You specialize in synthesizing qualitative data from diverse sources to identify patterns, surface user pain points, and map them across customer journey stages. Your outputs directly inform product, UX, and service priorities.”</em></p>

<h3 id="i-input-context">I: Input Context</h3>

<p><em>Provide background that frames the task.</em></p>

<p>Creative partners don’t work in a vacuum. They need context: the audience, goals, product, competitive landscape, and what’s been tried already. This is the “What you need to know before you start” section of the brief. Think: key insights, friction points, business objectives. The same goes for your prompt.</p>

<p><strong>Example</strong>: <em>“You are analyzing customer feedback for Fintech Brand’s app, targeting Gen Z users. Feedback will be uploaded from sources such as app store reviews, survey feedback, and usability test transcripts.”</em></p>

<h3 id="r-rules-constraints">R: Rules &amp; Constraints</h3>

<p><em>Clarify any limitations, boundaries, and exclusions.</em></p>

<p>Good creative briefs always include boundaries &mdash; what to avoid, what’s off-brand, or what’s non-negotiable. Things like brand voice guidelines, legal requirements, or time and word count limits. Constraints don’t limit creativity &mdash; they focus it. AI needs the same constraints to avoid going off the rails.</p>

<p><strong>Example</strong>: <em>“Only analyze the uploaded customer feedback data. Do not fabricate pain points, representative quotes, journey stages, or patterns. Do not supplement with prior knowledge or hypothetical examples. Use clear, neutral, stakeholder-facing language.”</em></p>

<h3 id="e-expected-output">E: Expected Output</h3>

<p><em>Spell out what the deliverable should look like.</em></p>

<p>This is the deliverable spec: What does the finished product look like? What tone, format, or channel is it for? Even if the task is clear, the format often isn’t. Do you want bullet points or a story? A table or a headline? If you don’t say, the AI will guess, and probably guess wrong. Even better, include an example of the output you want; it is an effective way to help AI know what you’re expecting (see the data-structure sketch after the list below). If you’re using GPT-5, you can also mix examples across formats (text, images, tables).</p>

<p><strong>Example</strong>: <em>“Return a structured list of themes. For each theme, include:</em></p>

<ul>
<li><strong><em>Theme Title</em></strong></li>
<li><strong><em>Summary of the Issue</em></strong></li>
<li><strong><em>Problem Statement</em></strong></li>
<li><strong><em>Opportunity</em></strong></li>
<li><strong><em>Representative Quotes (from data only)</em></strong></li>
<li><strong><em>Journey Stage(s)</em></strong></li>
<li><strong><em>Frequency (count from data)</em></strong></li>
<li><strong><em>Severity Score (1–5)</em></strong> <em>where 1 = Minor inconvenience or annoyance; 3 = Frustrating but workaround exists; 5 = Blocking issue</em></li>
<li><strong><em>Estimated Effort (Low / Medium / High)</em></strong>, <em>where Low = Copy or content tweak; Medium = Logic/UX/UI change; High = Significant changes.”</em></li>
</ul>
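
<p>One way to picture this expected output is as a data structure. The hypothetical Python sketch below mirrors the fields above; asking the AI to return JSON that fits a shape like this makes its output easy to validate and reuse downstream.</p>

<pre><code class="language-python"># Hypothetical data structure mirroring the Expected Output fields above.
# Asking the AI to return JSON matching this shape makes its output easy
# to validate programmatically. All example values are illustrative.

from dataclasses import dataclass
from typing import List

@dataclass
class Theme:
    title: str
    summary: str
    problem_statement: str
    opportunity: str
    representative_quotes: List[str]  # from the uploaded data only
    journey_stages: List[str]
    frequency: int                    # count from data
    severity: int                     # 1 = minor ... 5 = blocking
    estimated_effort: str             # "Low" | "Medium" | "High"

theme = Theme(
    title="Unclear transfer confirmations",
    summary="Users are unsure whether a transfer completed.",
    problem_statement="Gen Z users lack clear confirmation after transfers.",
    opportunity="Add an explicit confirmation state with a receipt.",
    representative_quotes=["I never know if my transfer went through."],
    journey_stages=["Transact"],
    frequency=14,
    severity=4,
    estimated_effort="Medium",
)
</code></pre>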

<p><strong>WIRE</strong> gives you everything you need to stop guessing and start designing your prompts with purpose. When you start with WIRE, your prompting is like a briefing, treating AI like a collaborator.</p>

<p>Once you’ve mastered this core structure, you can layer in additional fidelity, like tone, step-by-step flow, or iterative feedback, using the <strong>FRAME</strong> elements. These five elements provide additional guidance and clarity to your prompt by layering clear deliverables, thoughtful tone, reusable structure, and space for creative iteration.</p>

<h3 id="f-flow-of-tasks">F: Flow of Tasks</h3>

<p><em>Break complex prompts into clear, ordered steps.</em></p>

<p>This is your project plan or creative workflow that lays out the stages, dependencies, or sequence of execution. When the task has multiple parts, don’t just throw it all into one sentence. You are doing the thinking and guiding the AI. Structure it like steps in a user journey or modules in a storyboard. In this example, it serves as the blueprint the AI uses to generate the table described in “E: Expected Output.”</p>

<p><strong>Example</strong>: <em>“Recommended flow of tasks:<br />
Step 1: Parse the uploaded data and extract discrete pain points.<br />
Step 2: Group them into themes based on pattern similarity.<br />
Step 3: Score each theme by frequency (from data), severity (based on content), and estimated effort.<br />
Step 4: Map each theme to the appropriate customer journey stage(s).<br />
Step 5: For each theme, write a clear problem statement and opportunity based only on what’s in the data.”</em></p>

<h3 id="r-reference-voice-or-style">R: Reference Voice or Style</h3>

<p><em>Name the desired tone, mood, or reference brand.</em></p>

<p>This is the brand voice section or style mood board &mdash; reference points that shape the creative feel. Sometimes you want buttoned-up. Other times, you want conversational. Don’t assume the AI knows your tone, so spell it out.</p>

<p><strong>Example</strong>: <em>“Use the tone of a UX insights deck or product research report. Be concise, pattern-driven, and objective. Make summaries easy to scan by product managers and design leads.”</em></p>

<h3 id="a-ask-for-clarification">A: Ask for Clarification</h3>

<p><em>Invite the AI to ask questions before generating, if anything is unclear.</em></p>

<p>This is your <em>“Any questions before we begin?”</em> moment &mdash; a key step in collaborative creative work. You wouldn’t want a freelancer to guess what you meant if the brief was fuzzy, so why expect AI to do better? Ask AI to reflect or clarify before jumping into output mode.</p>

<p><strong>Example</strong>: <em>“If the uploaded data is missing or unclear, ask for it before continuing. Also, ask for clarification if the feedback format is unstructured or inconsistent, or if the scoring criteria need refinement.”</em></p>

<h3 id="m-memory-within-the-conversation">M: Memory (Within The Conversation)</h3>

<p><em>Reference earlier parts of the conversation and reuse what’s working.</em></p>

<p>This is similar to keeping visual tone or campaign language consistent across deliverables in a creative brief. Prompts are rarely one-shot tasks, so this reminds AI of the tone, audience, or structure already in play. GPT-5 got better with memory, but this still remains a useful element, especially if you switch topics or jump around.</p>

<p><strong>Example</strong>: <em>“Unless I say otherwise, keep using this process: analyze the data, group into themes, rank by importance, then suggest an action for each.”</em></p>

<div class="partners__lead-place"></div>

<h3 id="e-evaluate-iterate">E: Evaluate &amp; Iterate</h3>

<p><em>Invite the AI to critique, improve, or generate variations.</em></p>

<p>This is your revision loop &mdash; your way of prompting for creative direction, exploration, and refinement. Just like creatives expect feedback, your AI partner can handle review cycles if you ask for them. Build iteration into the brief to get closer to what you actually need. Sometimes, you may see ChatGPT test two versions of a response on its own by asking for your preference.</p>

<p><strong>Example</strong>: <em>“After listing all themes, identify the one with the highest combined priority score (based on frequency, severity, and effort).</em></p>

<p><em>For that top-priority theme:</em></p>

<ul>
<li><em>Critically evaluate its framing: Is the title clear? Are the quotes strong and representative? Is the journey mapping appropriate?</em></li>
<li><em>Suggest one improvement (e.g., improved title, more actionable implication, clearer quote, tighter summary).</em></li>
<li><em>Rewrite the theme entry with that improvement applied.</em></li>
<li><em>Briefly explain why the revision is stronger and more useful for product or design teams.”</em></li>
</ul>
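
<p>The framework leaves “combined priority score” up to you. As one hypothetical illustration, the Python sketch below normalizes frequency, scales severity, and rewards low effort; the 0.4/0.4/0.2 weights are assumptions to make the idea concrete, not part of WIRE+FRAME.</p>

<pre><code class="language-python"># One hypothetical way to combine frequency, severity, and effort into
# a single priority score. The weighting is an illustrative choice;
# agree on your own with your team.

EFFORT_WEIGHT = {"Low": 1.0, "Medium": 0.6, "High": 0.3}

def priority_score(frequency: int, max_frequency: int,
                   severity: int, effort: str) -> float:
    """Score in [0, 1]: frequent, severe, cheap-to-fix themes rank highest."""
    freq_norm = frequency / max_frequency  # 0..1 relative to the top theme
    sev_norm = severity / 5                # severity is rated 1-5
    return round(freq_norm * 0.4 + sev_norm * 0.4 + EFFORT_WEIGHT[effort] * 0.2, 2)

themes = [
    ("Unclear transfer confirmations", priority_score(14, 20, 4, "Medium")),
    ("App crashes on login", priority_score(20, 20, 5, "High")),
]
print(max(themes, key=lambda t: t[1]))  # ('App crashes on login', 0.86)
</code></pre>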

<p>Here’s a quick recap of the WIRE+FRAME framework:</p>

<table class="tablesaw break-out">
    <thead>
        <tr>
            <th>Framework Component</th>
            <th>Description</th>
        </tr>
    </thead>
    <tbody>
        <tr>
            <td><strong>W: Who & What</strong></td>
            <td>Define the AI persona and the core deliverable.</td>
        </tr>
        <tr>
            <td><strong>I: Input Context</strong></td>
            <td>Provide background or data scope to frame the task.</td>
        </tr>
        <tr>
            <td><strong>R: Rules & Constraints</strong></td>
            <td>Set boundaries, limitations, and exclusions.</td>
        </tr>
    <tr>
            <td><strong>E: Expected Output</strong></td>
            <td>Spell out the format and fields of the deliverable.</td>
        </tr>
    <tr>
            <td><strong>F: Flow of Tasks</strong></td>
            <td>Break the work into explicit, ordered sub-tasks.</td>
        </tr>
    <tr>
            <td><strong>R: Reference Voice/Style</strong></td>
            <td>Name the tone, mood, or reference brand to ensure consistency.</td>
        </tr>
    <tr>
            <td><strong>A: Ask for Clarification</strong></td>
            <td>Invite AI to pause and ask questions if any instructions or data are unclear before proceeding.</td>
        </tr>
    <tr>
            <td><strong>M: Memory</strong></td>
            <td>Leverage in-conversation memory to recall earlier definitions, examples, or phrasing without restating them.</td>
        </tr>
    <tr>
            <td><strong>E: Evaluate & Iterate</strong></td>
            <td>After generation, have the AI self-critique the top outputs and refine them.</td>
        </tr>
    </tbody>
</table>

<p>And here’s the full WIRE+FRAME prompt:</p>

<blockquote><strong>(W)</strong> You are a senior UX researcher and customer insights analyst. You specialize in synthesizing qualitative data from diverse sources to identify patterns, surface user pain points, and map them across customer journey stages. Your outputs directly inform product, UX, and service priorities.<br /><br /><strong>(I)</strong> You are analyzing customer feedback for Fintech Brand’s app, targeting Gen Z users. Feedback will be uploaded from sources such as app store reviews, survey feedback, and usability test transcripts.<br /><br /><strong>(R)</strong> Only analyze the uploaded customer feedback data. Do not fabricate pain points, representative quotes, journey stages, or patterns. Do not supplement with prior knowledge or hypothetical examples. Use clear, neutral, stakeholder-facing language.<br /><br /><strong>(E)</strong> Return a structured list of themes. For each theme, include:<ul><li><strong>Theme Title</strong></li><li><strong>Summary of the Issue</strong></li><li><strong>Problem Statement</strong></li><li><strong>Opportunity</strong></li><li><strong>Representative Quotes (from data only)</strong></li><li><strong>Journey Stage(s)</strong></li><li><strong>Frequency (count from data)</strong></li><li><strong>Severity Score (1–5)</strong> where 1 = Minor inconvenience or annoyance; 3 = Frustrating but workaround exists; 5 = Blocking issue</li><li><strong>Estimated Effort (Low / Medium / High)</strong>, where Low = Copy or content tweak; Medium = Logic/UX/UI change; High = Significant changes</li></ul><strong>(F)</strong> Recommended flow of tasks:<br />Step 1: Parse the uploaded data and extract discrete pain points.<br />Step 2: Group them into themes based on pattern similarity.<br />Step 3: Score each theme by frequency (from data), severity (based on content), and estimated effort.<br />Step 4: Map each theme to the appropriate customer journey stage(s).<br />Step 5: For each theme, write a clear problem statement and opportunity based only on what’s in the data.<br /><br /><strong>(R)</strong> Use the tone of a UX insights deck or product research report. Be concise, pattern-driven, and objective. Make summaries easy to scan by product managers and design leads.<br /><br /><strong>(A)</strong> If the uploaded data is missing or unclear, ask for it before continuing. Also, ask for clarification if the feedback format is unstructured or inconsistent, or if the scoring criteria need refinement.<br /><br /><strong>(M)</strong> Unless I say otherwise, keep using this process: analyze the data, group into themes, rank by importance, then suggest an action for each.<br /><br /><strong>(E)</strong> After listing all themes, identify the one with the highest combined priority score (based on frequency, severity, and effort).<br />For that top-priority theme:<ul><li>Critically evaluate its framing: Is the title clear? Are the quotes strong and representative? Is the journey mapping appropriate?</li><li>Suggest one improvement (e.g., improved title, more actionable implication, clearer quote, tighter summary).</li><li>Rewrite the theme entry with that improvement applied.</li><li>Briefly explain why the revision is stronger and more useful for product or design teams.</li></ul></blockquote>

<p>You could use “##” to label the sections (e.g., “##FLOW”), more for your own readability than for the AI’s. At over 400 words, this Insights Synthesis prompt example is a detailed, structured prompt, but it isn’t customized for you and your work. The intent wasn’t to give you a specific prompt (the proverbial fish), but to show how you can use a prompt framework like WIRE+FRAME to create a customized, relevant prompt that will help AI augment your work (teaching you to fish).</p>
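
<p>If you reuse the framework across projects, it can help to keep each section as a named part and assemble the final prompt programmatically. A small hypothetical sketch in Python:</p>

<pre><code class="language-python"># Hypothetical helper that assembles a WIRE+FRAME prompt from labeled
# sections, using "##" headers for readability. Omit any FRAME section
# you don't need; the WIRE sections are treated as required.

WIRE = ("WHO_WHAT", "INPUT_CONTEXT", "RULES", "EXPECTED_OUTPUT")

def build_prompt(**sections: str) -> str:
    missing = [s for s in WIRE if s not in sections]
    if missing:
        raise ValueError(f"Missing required WIRE sections: {missing}")
    return "\n\n".join(f"## {name}\n{text.strip()}"
                       for name, text in sections.items())

prompt = build_prompt(
    WHO_WHAT="You are a senior UX researcher and customer insights analyst...",
    INPUT_CONTEXT="You are analyzing customer feedback for Fintech Brand's app...",
    RULES="Only analyze the uploaded customer feedback data...",
    EXPECTED_OUTPUT="Return a structured list of themes...",
    FLOW="Step 1: Parse the uploaded data and extract discrete pain points...",
)
print(prompt)
</code></pre>

<p>Treating WIRE as required and FRAME as optional in code mirrors the guidance above: the skeleton is non-negotiable, while the enhancements are situational.</p>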

<p>Keep in mind that prompt length is rarely the problem; a lack of quality and structure is. As of the time of writing, AI models can easily process prompts that are thousands of words long.</p>

<p>Not every prompt needs all the FRAME components; WIRE is often enough to get the job done. But when the work is strategic or highly contextual, pick components from FRAME &mdash; the extra details can make a difference. Together, WIRE+FRAME give you a detailed framework for creating a well-structured prompt, with the crucial components first, followed by optional components:</p>

<ul>
<li><strong>WIRE</strong> builds a clear, focused prompt with role, input, rules, and expected output.</li>
<li><strong>FRAME</strong> adds refinement like tone, reusability, and iteration.</li>
</ul>

<p>Here are some scenarios and recommendations for using WIRE or WIRE+FRAME:</p>

<table class="tablesaw break-out">
    <thead>
        <tr>
            <th>Scenarios</th>
            <th>Description</th>
      <th>Recommended</th>
        </tr>
    </thead>
    <tbody>
        <tr>
            <td><strong>Simple, One-Off Analyses</strong></td>
            <td>Quick prompting with minimal setup and no need for detailed process transparency.</td>
      <td>WIRE</td>
        </tr>
        <tr>
            <td><strong>Tight Sprints or Hackathons</strong></td>
            <td>Rapid turnarounds, and times you don’t need embedded review and iteration loops.</td>
      <td>WIRE</td>
        </tr>
        <tr>
            <td><strong>Highly Iterative Exploratory Work</strong></td>
            <td>You expect to tweak results constantly and prefer manual control over each step.</td>
      <td>WIRE</td>
        </tr>
    <tr>
            <td><strong>Complex Multi-Step Playbooks</strong></td>
            <td>Detailed workflows that benefit from a standardized, repeatable, visible sequence.</td>
      <td>WIRE+FRAME</td>
        </tr>
    <tr>
            <td><strong>Shared or Hand-Off Projects</strong></td>
            <td>When different teams will rely on embedded clarification, memory, and consistent task flows for recurring analyses.</td>
      <td>WIRE+FRAME</td>
        </tr>
    <tr>
            <td><strong>Built-In Quality Control</strong></td>
            <td>You want the AI to flag top issues, self-critique, and refine, minimizing manual QC steps.</td>
      <td>WIRE+FRAME</td>
        </tr>
    </tbody>
</table>

<p>Prompting isn’t about getting it right the first time. It’s about designing the interaction and redesigning when needed. With WIRE+FRAME, you’re going beyond basic prompting and designing the interaction between you and AI.</p>

<h3 id="from-gut-feel-to-framework-a-prompt-makeover">From Gut Feel To Framework: A Prompt Makeover</h3>

<p>Let’s compare the results of Kate’s first AI-augmented design sprint prompt (to synthesize customer feedback into design insights) with one based on the WIRE+FRAME prompt framework, with the same data and focusing on the top results:</p>

<p><em>Original prompt: Read this customer feedback and tell me how we can improve our app for Gen Z users.</em></p>

<p>Initial ChatGPT Results:</p>

<ul>
<li>Improve app reliability to reduce crashes and freezing.</li>
<li>Provide better guidance or tutorials for financial tools like budgeting or goal setting.</li>
<li>Enhance the transparency of Zelle transfers by showing confirmation messages.</li>
<li>Speed up app loading and reduce lag on key actions.</li>
</ul>

<p>With this version, you’d likely need to go back and forth with follow-up questions, rewrite the output for clarity, and add structure before sharing with your team.</p>

<p><em>Structured prompt: the WIRE+FRAME prompt above (with defined role, scope, rules, expected format, tone, flow, and evaluation loop).</em></p>

<p>Initial ChatGPT Results:</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/prompting-design-act-brief-guide-iterate-ai/1-wire-frame-prompt.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="325"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/prompting-design-act-brief-guide-iterate-ai/1-wire-frame-prompt.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/prompting-design-act-brief-guide-iterate-ai/1-wire-frame-prompt.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/prompting-design-act-brief-guide-iterate-ai/1-wire-frame-prompt.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/prompting-design-act-brief-guide-iterate-ai/1-wire-frame-prompt.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/prompting-design-act-brief-guide-iterate-ai/1-wire-frame-prompt.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/prompting-design-act-brief-guide-iterate-ai/1-wire-frame-prompt.png"
			
			sizes="100vw"
			alt="Results of the structured WIRE&#43;FRAME prompt"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Results of the structured WIRE+FRAME prompt. (<a href='https://files.smashing.media/articles/prompting-design-act-brief-guide-iterate-ai/1-wire-frame-prompt.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>The two prompts, fed the exact same data, produce strikingly different results. While the first prompt returns a quick list of ideas, the detailed WIRE+FRAME version doesn’t just summarize feedback but structures it: themes are clearly labeled, supported by user quotes, mapped to customer journey stages, and prioritized by frequency, severity, and effort.</p>

<p>The structured prompt results can be used as-is or shared without needing to reformat, rewrite, or explain them (see disclaimer below). The first prompt output needs massaging: it’s not detailed, lacks evidence, and would require several rounds of clarification to be actionable. The first prompt may work when the stakes are low and you are exploring. But when your prompt is feeding design, product, or strategy, structure comes to the rescue.</p>

<h4 id="disclaimer-know-your-data">Disclaimer: Know Your Data</h4>

<p>A well-structured prompt can make AI output more useful, but it shouldn’t be the final word, or your single source of truth. AI models are powerful pattern predictors, not fact-checkers. If your data is unclear or poorly referenced, even the best prompt may return confident nonsense. Don’t blindly trust what you see. <strong>Treat AI like a bright intern</strong>: fast, eager, and occasionally delusional. You should always be familiar with your data and validate what AI spits out. For example, in the WIRE+FRAME results above, AI rated the effort as low for financial tool onboarding. That could easily be a medium or high. <strong>Good prompting should be backed by good judgment.</strong></p>

<h3 id="try-this-now">Try This Now</h3>

<p>Start by using the WIRE+FRAME framework to create a prompt that will help AI augment your work. You could also rewrite the last prompt you weren’t satisfied with using WIRE+FRAME, and compare the outputs.</p>

<p>Feel free to use <a href="https://wireframe-prompt-framework.lovable.app">this simple tool</a> to guide you through the framework.</p>

<h2 id="methods-from-lone-prompts-to-a-prompt-system">Methods: From Lone Prompts to a Prompt System</h2>

<p>Just as design systems are built from reusable components, your prompts can be too. You can use the WIRE+FRAME framework to write detailed prompts, but you can also use its structure to create reusable components: pre-tested, plug-and-play pieces you can assemble to build high-quality prompts faster. Each part of WIRE+FRAME can be turned into a prompt component &mdash; a small, reusable module that reflects your team’s standards, voice, and strategy.</p>

<p>For instance, if you find yourself repeatedly using the same content for different parts of the WIRE+FRAME framework, you could save those snippets as reusable components for you and your team. In the example below, we have two different reusable components for “W: Who &amp; What” &mdash; an insights analyst and an information architect.</p>

<h3 id="w-who-what-1">W: Who &amp; What</h3>

<ol>
<li><em>You are a senior UX researcher and customer insights analyst. You specialize in synthesizing qualitative data from diverse sources to identify patterns, surface user pain points, and map them across customer journey stages. Your outputs directly inform product, UX, and service priorities.</em></li>
<li><em>You are an experienced information architect specializing in organizing enterprise content on intranets. Your task is to reorganize the content and features into categories that reflect user goals, reduce cognitive load, and increase findability.</em></li>
</ol>

<p>Create and save prompt components and variations for each part of the WIRE+FRAME framework, allowing your team to quickly assemble new prompts by combining components when available, rather than starting from scratch each time.</p>
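<p>If your team works in code, those components can even live in a small shared library. Here’s a minimal sketch in Python, assuming a simple dictionary of saved components and a <code>build_prompt</code> helper &mdash; the names and layout are illustrative, not part of the framework itself:</p>

<pre><code>
# A minimal sketch of a reusable prompt-component library.
# The component names, keys, and build_prompt helper are
# illustrative assumptions, not part of WIRE+FRAME itself.

COMPONENTS = {
    "who": {
        "insights_analyst": (
            "You are a senior UX researcher and customer insights analyst. "
            "You specialize in synthesizing qualitative data from diverse "
            "sources to identify patterns and surface user pain points."
        ),
        "information_architect": (
            "You are an experienced information architect specializing in "
            "organizing enterprise content on intranets."
        ),
    },
    "rules": {
        "uploaded_data_only": (
            "Only analyze the uploaded data. Do not fabricate pain points, "
            "quotes, or patterns. Do not supplement with prior knowledge."
        ),
    },
    "expected_output": {
        "theme_list": (
            "Return a structured list of themes with representative quotes, "
            "journey stages, frequency, severity, and estimated effort."
        ),
    },
}


def build_prompt(**choices: str) -> str:
    """Assemble a prompt by picking one saved component per section."""
    sections = [
        f"##{section.upper()}\n{COMPONENTS[section][key]}"
        for section, key in choices.items()
    ]
    return "\n\n".join(sections)


prompt = build_prompt(
    who="insights_analyst",
    rules="uploaded_data_only",
    expected_output="theme_list",
)
print(prompt)
</code></pre>

<p>The point isn’t the code; it’s the reuse: pick a pre-tested component per section instead of rewriting it from scratch each time.</p>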

<div class="partners__lead-place"></div>

<h2 id="behind-the-prompts-questions-about-prompting">Behind The Prompts: Questions About Prompting</h2>

<p><em>Q: If I use a prompt framework like WIRE+FRAME every time, will the results be predictable?</em></p>

<p>A: Yes and no. Yes, your outputs will be shaped by a consistent set of instructions (e.g., <strong>R</strong>ules, <strong>E</strong>xamples, <strong>R</strong>eference Voice / Style) that steer the AI toward a predictable format and style of results. And no, while the framework provides structure, it doesn’t flatten the generative nature of AI, but focuses it on what’s important to you. In the next article, we will look at how you can use this to your advantage to quickly reuse your best repeatable prompts as we build your AI assistant.</p>

<p><em>Q: Could changes to AI models break the WIRE+FRAME framework?</em></p>

<p>A: AI models are evolving more rapidly than any other technology we’ve seen before &mdash; in fact, ChatGPT was recently updated to GPT-5 to mixed reviews. The update didn’t change the core principles of prompting or the WIRE+FRAME prompt framework. With future releases, some elements of how we write prompts today may change, but the need to communicate clearly with AI won’t. Think of how you delegate work to an intern vs. someone with a few years’ experience: you still need detailed instructions the first time either is doing a task, but the level of detail may change. WIRE+FRAME isn’t built only for today’s models; the components help you clarify your intent, share relevant context, define constraints, and guide tone and format &mdash; all timeless elements, no matter how smart the model becomes. The skill of shaping clear, structured interactions with non-human AI systems will remain valuable.</p>

<p><em>Q: Can prompts be more than text? What about images or sketches?</em></p>

<p>A: Absolutely. With tools like GPT-5 and other multimodal models, you can upload screenshots, pictures, whiteboard sketches, or wireframes. These visuals become part of your <strong>I</strong>nput Context or help define the <strong>E</strong>xpected Output. The same WIRE+FRAME principles still apply: you’re setting context, tone, and format, just using images and text together. Whether your input is a paragraph or an image and text, you’re still designing the interaction.</p>
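<p>If you work with a model API rather than a chat interface, the same idea applies in code. Here’s a hedged sketch using the OpenAI Python SDK, where the model name and image URL are placeholders &mdash; the image rides along as <strong>I</strong>nput Context next to the text prompt:</p>

<pre><code>
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

# A WIRE+FRAME-style text prompt plus an image as Input Context.
response = client.chat.completions.create(
    model="gpt-4o",  # placeholder: any multimodal model
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": (
                        "You are a senior UX researcher. Review this "
                        "wireframe and return usability issues as a "
                        "structured, scannable list."
                    ),
                },
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/wireframe.png"},
                },
            ],
        }
    ],
)
print(response.choices[0].message.content)
</code></pre>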

<p>Have a prompt-related question of your own? Share it in the comments, and I’ll either respond there or explore it further in the next article in this series.</p>

<h2 id="from-designerly-prompting-to-custom-assistants">From Designerly Prompting To Custom Assistants</h2>

<p>Good prompts and results don’t come from using others’ prompts, but from writing prompts that are customized for you and your context. The WIRE+FRAME framework helps with that and makes prompting a tool you can use to guide AI models like a creative partner instead of hoping for magic from a one-line request.</p>

<p>Prompting uses the designerly skills you already use every day to collaborate with AI:</p>

<ul>
<li><strong>Curiosity</strong> to explore what the AI can do and frame better prompts.</li>
<li><strong>Observation</strong> to detect bias or blind spots.</li>
<li><strong>Empathy</strong> to make machine outputs human.</li>
<li><strong>Critical thinking</strong> to verify and refine.</li>
<li><strong>Experiment &amp; Iteration</strong> to learn by doing and improve the interaction over time.</li>
<li><strong>Growth Mindset</strong> to keep up with new technology like AI and prompting.</li>
</ul>

<p>Once you create and refine prompt components and prompts that work for you, make them reusable by documenting them. But wait, there’s more &mdash; what if your best prompts, or the elements of your prompts, could live inside your own AI assistant, available on demand, fluent in your voice, and trained on your context? That’s where we’re headed next.</p>

<p>In the next article, “Design Your Own Design Assistant”, we’ll take what you’ve learned so far and turn it into a Custom AI assistant (aka Custom GPT), a design-savvy, context-aware assistant that works like you do. We’ll walk through that exact build, from defining the assistant’s job description to uploading knowledge, testing, and sharing it with others.</p>

<h3 id="resources">Resources</h3>

<ul>
<li><a href="https://cookbook.openai.com/examples/gpt-5/gpt-5_prompting_guide">GPT-5 Prompting Guide</a></li>
<li><a href="https://cookbook.openai.com/examples/gpt4-1_prompting_guide">GPT-4.1 Prompting Guide</a></li>
<li><a href="https://docs.anthropic.com/en/docs/build-with-claude/prompt-engineering/overview">Anthropic Prompt Engineering</a></li>
<li><a href="https://cloud.google.com/discover/what-is-prompt-engineering?hl=en">Prompt Engineering by Google</a></li>
<li><a href="https://docs.perplexity.ai/guides/prompt-guide">Perplexity</a></li>
<li><a href="https://wireframe-prompt-framework.lovable.app">Webapp to guide you through the WIRE+FRAME framework</a></li>
</ul>

<div class="signature">
  <img src="https://www.smashingmagazine.com/images/logo/logo--red.png" alt="Smashing Editorial" width="35" height="46" loading="lazy" decoding="async" />
  <span>(yk)</span>
</div>


              </article>
            </body>
          </html>
        ]]></content:encoded></item><item><author>Lyndon Cerejo</author><title>A Week In The Life Of An AI-Augmented Designer</title><link>https://www.smashingmagazine.com/2025/08/week-in-life-ai-augmented-designer/</link><pubDate>Fri, 22 Aug 2025 08:00:00 +0000</pubDate><guid>https://www.smashingmagazine.com/2025/08/week-in-life-ai-augmented-designer/</guid><description>If you are new to using AI in design or curious about integrating AI into your UX process without losing your human touch, this article offers a grounded, day-by-day look at introducing AI into your design workflow.</description><content:encoded><![CDATA[
          <html>
            <head>
              <meta charset="utf-8">
              <link rel="canonical" href="https://www.smashingmagazine.com/2025/08/week-in-life-ai-augmented-designer/" />
              <title>A Week In The Life Of An AI-Augmented Designer</title>
            </head>
            <body>
              <article>
                <header>
                  <h1>A Week In The Life Of An AI-Augmented Designer</h1>
                  
                    
                    <address>Lyndon Cerejo</address>
                  
                  <time datetime="2025-08-22T08:00:00&#43;00:00" class="op-published">2025-08-22T08:00:00+00:00</time>
                  <time datetime="2025-08-22T08:00:00&#43;00:00" class="op-modified">2025-12-25T10:32:38+00:00</time>
                </header>
                
                

<p>Artificial Intelligence isn’t new, but in November 2022, something changed. The launch of ChatGPT brought AI out of the background and into everyday life. Suddenly, interacting with a machine didn’t feel technical &mdash; it felt <strong>conversational</strong>.</p>

<p>Just this March, ChatGPT overtook Instagram and TikTok as the most downloaded app in the world. That level of adoption shows that millions of everyday users, not just developers or early adopters, are comfortable using AI in casual, conversational ways. People are using AI not just to get answers, but to <a href="https://hbr.org/2025/04/how-people-are-really-using-gen-ai-in-2025">think, create, plan, and even to help with mental health and loneliness</a>.</p>

<p>In the past two and a half years, people have moved through the <a href="https://www.ekrfoundation.org/5-stages-of-grief/change-curve/">Kübler-Ross Change Curve</a> &mdash; only instead of grief, it’s AI-induced uncertainty. UX designers, like Kate (who you’ll meet shortly), have experienced something like this:</p>

<ul>
<li><strong>Denial</strong>: “AI can’t design like a human; it won’t affect my workflow.”</li>
<li><strong>Anger</strong>: “AI will ruin creativity. It’s a threat to our craft.”</li>
<li><strong>Bargaining</strong>: “Okay, maybe just for the boring tasks.”</li>
<li><strong>Depression</strong>: “I can’t keep up. What’s the future of my skills?”</li>
<li><strong>Acceptance</strong>: “Alright, AI can free me up for more strategic, human work.”</li>
</ul>

<p>As designers move into experimentation, they’re not asking, <em>Can I use AI?</em> but <em>How might I use it well?</em></p>

<blockquote class="pull-quote">
  <p>
    <a class="pull-quote__link" aria-label="Share on Twitter" href="https://twitter.com/share?text=%0aUsing%20AI%20isn%e2%80%99t%20about%20chasing%20the%20latest%20shiny%20object%20but%20about%20learning%20how%20to%20stay%20human%20in%20a%20world%20of%20machines,%20and%20use%20AI%20not%20as%20a%20shortcut,%20but%20as%20a%20creative%20collaborator.%0a&url=https://smashingmagazine.com%2f2025%2f08%2fweek-in-life-ai-augmented-designer%2f">
      
Using AI isn’t about chasing the latest shiny object but about learning how to stay human in a world of machines, and use AI not as a shortcut, but as a creative collaborator.

    </a>
  </p>
  <div class="pull-quote__quotation">
    <div class="pull-quote__bg">
      <span class="pull-quote__symbol">“</span></div>
  </div>
</blockquote>

<p>It isn’t about finding, bookmarking, downloading, or hoarding prompts, but <strong>experimenting</strong> and writing your own prompts.</p>

<p>To bring this to life, we’ll follow Kate, a mid-level designer at a FinTech company, navigating her first AI-augmented design sprint. You’ll see her ups and downs as she experiments with AI, balances human-centered skills with AI tools, decides when to rely on intuition over automation, and reflects critically on the role of AI at each stage of the sprint.</p>

<p>The next two planned articles in this series will explore how to design prompts (Part 2) and guide you through building your own AI assistant (aka CustomGPT; Part 3). Along the way, we’ll spotlight the <a href="https://www.smashingmagazine.com/2023/04/skills-designers-ai-cant-replicate/">designerly skills AI can’t replicate</a> like curiosity, empathy, critical thinking, and experimentation that will set you apart in a world where <strong>automation is easy, but people and human-centered design matter even more</strong>.</p>

<p><strong>Note</strong>: <em>This article was written by a human (with feelings, snacks, and deadlines). The prompts are real, the AI replies are straight from the source, and no language models were overworked &mdash; just politely bossed around. All em dashes are the handiwork of MS Word’s autocorrect &mdash; not AI. Kate is fictional, but her week is stitched together from real tools, real prompts, real design activities, and real challenges designers everywhere are navigating right now. She will primarily be using ChatGPT, reflecting the popularity of this jack-of-all-trades AI as the place many start their AI journeys before branching out. If you stick around to the end, you’ll find other AI tools that may be better suited for different design sprint activities. Due to the pace of AI advances, your outputs may vary (YOMV), possibly by the time you finish reading this sentence.</em></p>

<p><strong>Cautionary Note</strong>: <em>AI is helpful, but not always private or secure. Never share sensitive, confidential, or personal information with AI tools &mdash; even the helpful-sounding ones. When in doubt, treat it like a coworker who remembers everything and may not be particularly good at keeping secrets.</em></p>


<h2 id="prologue-meet-kate-as-she-preps-for-the-upcoming-week">Prologue: Meet Kate (As She Preps For The Upcoming Week)</h2>

<p>Kate stared at the digital mountain of feedback on her screen: transcripts, app reviews, survey snippets, all waiting to be synthesized. Deadlines loomed. Her calendar was a nightmare. Meanwhile, LinkedIn was ablaze with AI hot takes and success stories. Everyone seemed to have found their “AI groove” &mdash; except her. She wasn’t anti-AI. She just hadn’t figured out how it actually fit into her work. She had tried some of the prompts she saw online, played with some AI plugins and extensions, but it felt like an add-on, not a core part of her design workflow.</p>

<p>Her team was focusing on improving financial confidence for Gen Z users of their FinTech app, and Kate planned to use one of her favorite frameworks: <a href="https://www.gv.com/sprint/">the Design Sprint</a>, a five-day, high-focus process that condenses months of product thinking into a single week. Each day tackles a distinct phase: Understand, Sketch, Decide, Prototype, and Test. All designed to move fast, make ideas tangible, and learn from real users before making big bets.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/week-in-life-ai-augmented-designer/1-stages-design-sprint.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="265"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/week-in-life-ai-augmented-designer/1-stages-design-sprint.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/week-in-life-ai-augmented-designer/1-stages-design-sprint.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/week-in-life-ai-augmented-designer/1-stages-design-sprint.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/week-in-life-ai-augmented-designer/1-stages-design-sprint.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/week-in-life-ai-augmented-designer/1-stages-design-sprint.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/week-in-life-ai-augmented-designer/1-stages-design-sprint.png"
			
			sizes="100vw"
			alt="Stages of a 5-Day Design Sprint"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Stages of a 5-Day Design Sprint. (<a href='https://files.smashing.media/articles/week-in-life-ai-augmented-designer/1-stages-design-sprint.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>This time, she planned to experiment with a very lightweight version of the design sprint, almost <em>“solo-ish”</em> since her PM and engineer were available for check-ins and decisions, but not present every day. That gave her both space and a constraint, and made it the perfect opportunity to explore how AI could augment each phase of the sprint.</p>

<p>She decided to lean on her designerly behavior of experimentation and learning and integrate AI intentionally into her sprint prep, using it as both a <strong>creative partner</strong> and a <strong>thinking aid</strong>. Not with a rigid plan, but with a working hypothesis that AI would, at the very least, speed her up.</p>

<p>She wouldn’t just be designing and testing a prototype, but prototyping and testing what it means to design with AI, while still staying in the driver’s seat.</p>

<p>Follow Kate along her journey through her first AI-powered design sprint: from curiosity to friction and from skepticism to insight.</p>

<h2 id="monday-understanding-the-problem-aka-kate-vs-digital-pile-of-notes">Monday: Understanding the Problem (aka: Kate Vs. Digital Pile Of Notes)</h2>

<p><em>The first day of a design sprint is spent understanding the user, their problems, business priorities, and technical constraints, and narrowing down the problem to solve that week.</em></p>

<p>This morning, Kate had transcripts from recent user interviews, plus a year’s worth of customer feedback from app stores, surveys, and their customer support center. Typically, she would set aside a few days to process everything, coming out with glazed eyes and a few new insights. This time, she decided to use ChatGPT to summarize the data: <em>“Read this customer feedback and tell me how we can improve financial literacy for Gen Z in our app.”</em></p>

<p>ChatGPT’s outputs were underwhelming, to say the least. Disappointed, she was about to give up when she remembered an infographic about good prompting that she had emailed herself. She updated her prompt based on those recommendations (see the illustrative rewrite after this list):</p>

<ul>
<li>Defined a role for the AI (“product strategist”),</li>
<li>Provided context (user group and design sprint objectives), and</li>
<li>Clearly outlined what she was looking for (financial literacy related patterns in pain points, blockers, confusion, lack of confidence; synthesis to identify top opportunity areas).</li>
</ul>
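<p>Put together, such a prompt (an illustrative reconstruction, not Kate’s actual wording) might read:</p>

<blockquote><em>You are a product strategist for a FinTech app. We are on Day 1 of a design sprint focused on improving financial confidence for Gen Z users. Read the uploaded customer feedback and identify financial literacy-related patterns in pain points, blockers, confusion, and lack of confidence, then synthesize them into the top opportunity areas.</em></blockquote>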

<p>By the time she Aero-pressed her next cup of coffee, ChatGPT had completed its analysis, highlighting blockers like jargon, lack of control, fear of making the wrong choice, and a need for blockchain wallets. Wait, what? That last one felt off.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/week-in-life-ai-augmented-designer/2-ai-results-hallucinations.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="501"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/week-in-life-ai-augmented-designer/2-ai-results-hallucinations.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/week-in-life-ai-augmented-designer/2-ai-results-hallucinations.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/week-in-life-ai-augmented-designer/2-ai-results-hallucinations.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/week-in-life-ai-augmented-designer/2-ai-results-hallucinations.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/week-in-life-ai-augmented-designer/2-ai-results-hallucinations.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/week-in-life-ai-augmented-designer/2-ai-results-hallucinations.png"
			
			sizes="100vw"
			alt="AI results may sometimes include hallucinations – don’t trust; always verify."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      AI results may sometimes include hallucinations: don’t trust, always verify. (<a href='https://files.smashing.media/articles/week-in-life-ai-augmented-designer/2-ai-results-hallucinations.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>Kate searched her sources and confirmed her hunch: an AI hallucination! Despite the best of prompts, AI sometimes makes things up based on trendy concepts from its training data rather than your actual data. Kate updated her prompt with <strong>constraints</strong> so that ChatGPT would only use the data she had uploaded, and would cite examples from that data in its results. 18 seconds later, the updated results contained no blockchain wallets or other surprises.</p>
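<p>Constraint lines like these (again, an illustrative wording rather than Kate’s exact prompt) are what rein the model in:</p>

<blockquote><em>Only use the customer feedback data I have uploaded. Do not supplement it with your training data or general knowledge. For every pattern you report, cite at least one example from the uploaded data.</em></blockquote>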

<p>By lunch, Kate had the makings of a research summary that would have taken much, much longer, and a whole lot of caffeine.</p>

<p>That afternoon, Kate and her product partner plotted the pain points on the Gen Z app journey. The emotional mapping highlighted the most critical moment: the first step of a financial decision, like setting a savings goal or choosing an investment option. That was when fear, confusion, and lack of confidence held people back.</p>

<p>AI synthesis combined with human insight helped them define the <strong>problem statement</strong> as: <em>“How might we help Gen Z users confidently take their first financial action in our app, in a way that feels simple, safe, and puts them in control?”</em></p>

<h3 id="kate-s-reflection">Kate’s Reflection</h3>

<p>As she wrapped up for the day, Kate jotted down her reflections on her first day as an AI-augmented designer:</p>

<blockquote>There’s nothing like learning by doing. I’ve been reading about AI and tinkering around, but took the plunge today. Turns out AI is much more than a tool, but I wouldn’t call it a co-pilot. Yet. I think it’s like a sharp intern: it has a lot of information, is fast, eager to help, but it lacks context, needs supervision, and can surprise you. You have to give it clear instructions, double-check its work, and guide and supervise it. Oh, and maintain boundaries by not sharing anything I wouldn’t want others to know.<br /><br />Today was about listening &mdash; to users, to patterns, to my own instincts. AI helped me sift through interviews fast, but I had to stay <strong>curious</strong> to catch what it missed. Some quotes felt too clean, like the edges had been smoothed over. That’s where <strong>observation</strong> and <strong>empathy</strong> kicked in. I had to ask myself: what’s underneath this summary?<br /><br /><strong>Critical thinking</strong> was the designerly skill I had to exercise most today. It was tempting to take the AI’s synthesis at face value, but I had to push back by re-reading transcripts, questioning assumptions, and making sure I wasn’t outsourcing my judgment. Turns out, the thinking part still belongs to me.</blockquote>

<h2 id="tuesday-sketching-aka-kate-and-the-sea-of-okish-ideas">Tuesday: Sketching (aka: Kate And The Sea of OKish Ideas)</h2>

<p><em>Day 2 of a design sprint focuses on solutions, starting by remixing and improving existing ideas, followed by people sketching potential solutions.</em></p>

<p>Optimistic, yet cautious after her experience yesterday, Kate started thinking about ways she could use AI today, while brewing her first cup of coffee. By cup two, she was wondering if AI could be a creative teammate. Or a creative intern at least. She decided to ask AI for a list of relevant UX patterns across industries. Unlike yesterday’s complex analysis, Kate was asking for inspiration, not insight, which meant she could use a simpler prompt: <em>“Give me 10 unique examples of how top-rated apps reduce decision anxiety for first-time users &mdash; from FinTech, health, learning, or ecommerce.”</em></p>

<p>She received her results in a few seconds, but there were only 6, not the 10 she asked for. She expanded her prompt to ask for examples from a wider range of industries. While reviewing the AI examples, Kate realized that one had accessibility issues. To be fair, the results met Kate’s ask, since she had not specified accessibility considerations. She then went pre-AI and brainstormed with her product partner, coming up with a few unique local examples.</p>

<p>Later that afternoon, Kate went full human during Crazy 8s by putting a marker to paper and sketching 8 ideas in 8 minutes to rapidly explore different directions. Wondering if AI could live up to its generative nature, she uploaded pictures of her top 3 sketches and prompted AI to act as <em>“a product design strategist experienced in Gen Z behavior, digital UX, and behavioral science”</em>, gave it context about the problem statement, stage in the design sprint, and explicitly asked AI the following:</p>

<ol>
<li>Analyze the 3 sketch concepts and identify core elements or features that resonated with the goal.</li>
<li>Generate 5 new concept directions, each of which should:

<ul>
<li>Address the original design sprint challenge.</li>
<li>Reflect Gen Z design language, tone, and digital behaviors.</li>
<li>Introduce a unique twist, remix, or conceptual inversion of the ideas in the sketches.</li>
</ul></li>
<li>For each concept, provide:

<ul>
<li>Name (e.g., “Monopoly Mode,” “Smart Start”);</li>
<li>1&ndash;2 sentence concept summary;</li>
<li>Key differentiator from the original sketches;</li>
<li>Design tone and/or behavioral psychology technique used.</li>
</ul></li>
</ol>

<p>The results included ideas that Kate and her product partner hadn’t considered, including a progress bar that started at 20% (to build confidence), and a sports-like “stock bracket” for first-time investors.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/week-in-life-ai-augmented-designer/3-ai-generated-remixed-concepts.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="559"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/week-in-life-ai-augmented-designer/3-ai-generated-remixed-concepts.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/week-in-life-ai-augmented-designer/3-ai-generated-remixed-concepts.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/week-in-life-ai-augmented-designer/3-ai-generated-remixed-concepts.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/week-in-life-ai-augmented-designer/3-ai-generated-remixed-concepts.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/week-in-life-ai-augmented-designer/3-ai-generated-remixed-concepts.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/week-in-life-ai-augmented-designer/3-ai-generated-remixed-concepts.png"
			
			sizes="100vw"
			alt="AI-generated remixed concepts after sharing three of the Crazy 8’s concepts"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      AI-generated remixed concepts after sharing three of the Crazy 8’s concepts. (<a href='https://files.smashing.media/articles/week-in-life-ai-augmented-designer/3-ai-generated-remixed-concepts.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>Not bad, thought Kate, as she cherry-picked elements, then combined and built on these ideas in her next round of sketches. By the end of the day, they had a diverse set of sketched solutions &mdash; some original, some AI-augmented, but all exploring how to reduce fear, simplify choices, and build confidence for Gen Z users taking their first financial step. With five concept variations and a few rough storyboards, Kate was ready to start converging on day 3.</p>

<h3 id="kate-s-reflection-1">Kate’s Reflection</h3>

<blockquote>Today was creatively energizing yet a little overwhelming! I leaned hard on AI to act as a creative teammate. It delivered a few unexpected ideas and remixed my Crazy 8s into variations I never would’ve thought of!<br /><br />It also reinforced the need to stay grounded in the human side of design. AI was fast &mdash; too fast, sometimes. It spit out polished-sounding ideas that sounded right, but I had to slow down, observe carefully, and ask: Does this feel right for our users? Would a first-time user feel safe or intimidated here?<br /><br /><strong>Critical thinking</strong> helped me separate what mattered from what didn’t. <strong>Empathy</strong> pulled me back to what Gen Z users actually said, and kept their voices in mind as I sketched. <strong>Curiosity</strong> and <strong>experimentation</strong> were my fuel. I kept tweaking prompts, remixing inputs, and seeing how far I could stretch a concept before it broke. <strong>Visual communication</strong> helped translate fuzzy AI ideas into something I could react to &mdash; and more importantly, test.</blockquote>

<div class="partners__lead-place"></div>

<h2 id="wednesday-deciding-aka-kate-tries-to-get-ai-to-pick-a-side">Wednesday: Deciding (aka Kate Tries to Get AI to Pick a Side)</h2>

<p><em>Design sprint teams spend Day 3 critiquing each of their potential solutions to shortlist those that have the best chance of achieving their long-term goal. The winning scenes from the sketches are then woven into a prototype storyboard.</em></p>

<p>Design sprint Wednesday was Kate’s least favorite day. After all the generative energy of Sketching Tuesday, today she would have to decide on one clear solution to prototype and test. She was unsure if AI would be much help with judging tradeoffs or narrowing down options, and it wouldn’t be able to critique like a team. Or could it?</p>

<p>Kate reviewed each of the five concepts, noting strengths, open questions, and potential risks. Curious about how AI would respond, she uploaded images of three different design concepts and prompted ChatGPT for strengths and weaknesses. AI’s critique was helpful in summarizing the pros and cons of different concepts, including a few points she had not considered &mdash; like potential privacy concerns.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/week-in-life-ai-augmented-designer/4-speed-critique-uploaded-concept.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="538"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/week-in-life-ai-augmented-designer/4-speed-critique-uploaded-concept.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/week-in-life-ai-augmented-designer/4-speed-critique-uploaded-concept.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/week-in-life-ai-augmented-designer/4-speed-critique-uploaded-concept.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/week-in-life-ai-augmented-designer/4-speed-critique-uploaded-concept.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/week-in-life-ai-augmented-designer/4-speed-critique-uploaded-concept.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/week-in-life-ai-augmented-designer/4-speed-critique-uploaded-concept.png"
			
			sizes="100vw"
			alt="Speed Critique (Strengths and Weaknesses) of an uploaded concept"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Speed Critique (Strengths and Weaknesses) of an uploaded concept. (<a href='https://files.smashing.media/articles/week-in-life-ai-augmented-designer/4-speed-critique-uploaded-concept.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>She asked a few follow-up questions to probe the reasoning behind the critique. Wondering if she could simulate a team critique by prompting ChatGPT differently, Kate asked it to use the <a href="https://www.debonogroup.com/services/core-programs/six-thinking-hats/">6 thinking hats technique</a>. The results came back dense, overwhelming, and unfocused. The AI couldn’t prioritize, and it couldn’t see the gaps Kate instinctively noticed: friction in onboarding, misaligned tone, unclear next steps.</p>

<p>In that moment, the promise of AI felt overhyped. Kate stood up, stretched, and seriously considered ending her experiments with the AI-driven process. But she paused. Maybe the problem wasn’t the tool. Maybe it was <em>how</em> she was using it. She made a note to experiment when she wasn’t on a design sprint clock.</p>

<p>She returned to her sketches, this time laying them out on the wall. No screens, no prompts. Just markers, sticky notes, and Sharpie scribbles. Human judgment took over. Kate worked with her product partner to finalize the solution to test on Friday and spent the next hour storyboarding the experience in Figma.</p>

<p>Kate re-engaged with AI as a reviewer, not a decider. She prompted it for feedback on the storyboard and was surprised to see it spit out detailed design, content, and micro-interaction suggestions for each of the steps of the storyboarded experience. A lot of food for thought, but she’d have to judge what mattered when she created her prototype. But that wasn’t until tomorrow!</p>

<h3 id="kate-s-reflection-2">Kate’s Reflection</h3>

<blockquote>AI exposed a few of my blind spots in the critique, which was good, but it basically pointed out that multiple options “could work”. I had to rely on my <strong>critical thinking</strong> and instincts to weigh options logically, emotionally, and contextually in order to choose a direction that was the most testable and aligned with the user feedback from Day 1.<br /><br />I was also surprised by the suggestions it came up with while reviewing my final storyboard, but I will need a fresh pair of eyes and all the human judgement I can muster tomorrow.<br /><br /><strong>Empathy</strong> helped me walk through the flow like I was a new user. <strong>Visual communication</strong> helped pull it all together by turning abstract steps into a real storyboard for the team to see instead of imagining.<br /><br /><strong>TO DO</strong>: Experiment prompting around the 6 Thinking Hats for different perspectives.</blockquote>

<h2 id="thursday-prototype-aka-kate-and-faking-it">Thursday: Prototype (aka Kate And Faking It)</h2>

<p><em>On Day 4, the team usually turns the storyboard from the previous day into a prototype that can be tested with users on Day 5. The prototype doesn’t need to be fully functional; a simulated experience is sufficient to gather user feedback.</em></p>

<p>Kate’s prototype days often consisted of marathon Figma Design sessions and late-night pizza dinners. She was hoping AI would change that today. She fed yesterday’s storyboard to ChatGPT and asked it for screens. It took a while to generate, but she was excited to see a screen flow gradually appear, except that it had 3 ¾ screens instead of the 6 frames from her storyboard, as you can see in the image below.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/week-in-life-ai-augmented-designer/5-chat-gpt-half-baked-screens.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="533"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/week-in-life-ai-augmented-designer/5-chat-gpt-half-baked-screens.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/week-in-life-ai-augmented-designer/5-chat-gpt-half-baked-screens.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/week-in-life-ai-augmented-designer/5-chat-gpt-half-baked-screens.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/week-in-life-ai-augmented-designer/5-chat-gpt-half-baked-screens.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/week-in-life-ai-augmented-designer/5-chat-gpt-half-baked-screens.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/week-in-life-ai-augmented-designer/5-chat-gpt-half-baked-screens.png"
			
			sizes="100vw"
			alt="ChatGPT’s half-baked (actually, 3 ¾th baked) screens."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      ChatGPT’s half-baked (actually, 3 ¾<sup>th</sup> baked) screens. (<a href='https://files.smashing.media/articles/week-in-life-ai-augmented-designer/5-chat-gpt-half-baked-screens.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>She tried different prompts but couldn’t get a complete flow. She was excited when it offered to create a Figma file. She waited. And waited. And reminded it that she was right there waiting. It finally gave her a link &mdash; a link that didn’t work. That made her question why she needed prototype screens in Figma at all. Could one of those no-code tools do the job? Kate remembered that she had just received beta access to <a href="https://www.figma.com/make/">Figma Make</a> (Figma’s AI prompt-to-prototype tool) and decided to put it to the test.</p>

<p>She shared her storyboard, added context that the prototype was for a design sprint, described the problem, and gave a brief description of the audience. In a few seconds, she saw code being generated, with a running description on the left. It seemed like it would take some time, so she headed to the cafeteria for lunch.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/week-in-life-ai-augmented-designer/6-initial-prototype-figma-make.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="559"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/week-in-life-ai-augmented-designer/6-initial-prototype-figma-make.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/week-in-life-ai-augmented-designer/6-initial-prototype-figma-make.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/week-in-life-ai-augmented-designer/6-initial-prototype-figma-make.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/week-in-life-ai-augmented-designer/6-initial-prototype-figma-make.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/week-in-life-ai-augmented-designer/6-initial-prototype-figma-make.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/week-in-life-ai-augmented-designer/6-initial-prototype-figma-make.png"
			
			sizes="100vw"
			alt="Initial prototype created by Figma Make based on a prompt with the storyboard"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Initial prototype created by Figma Make based on a prompt with the storyboard. (<a href='https://files.smashing.media/articles/week-in-life-ai-augmented-designer/6-initial-prototype-figma-make.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>When Kate returned, she was surprised to see Figma Make had created an app along with a detailed explanation:</p>

<blockquote>“I’ve created a complete finance app prototype with five key screens that guide users through setting financial goals in a friendly, low-pressure way. The app features smooth animations with Framer Motion, a progress indicator to show users where they are in the journey, and emotionally supportive visuals and copy throughout the experience. Each screen serves a specific purpose: welcoming users, helping them define relatable goals, building their plan with visual feedback, presenting a simple summary, and offering next steps without pressure.<br /><br />You might want to:<ul><li>Add personalization options like user profiles or avatars.</li><li>Implement a dark mode toggle for accessibility.</li><li>Create additional goal templates and saving strategies.”</li></ul></blockquote>

<p>Since Figma Make had built the prototype, not her, Kate felt like User 0 as she clicked her way through the interactive prototype, making a mental note of what worked and what didn’t. She was impressed to see that she could not only select a common goal (buy a home, save for education, and so on) but also create her own goal (<em>hello De’Longhi Coffee Maker</em>) and have it carry over for the rest of the experience. This was something she had never been able to do in Figma Design!</p>

<p>Despite some obvious misses like a missing header and navigation, and some buttons not working, she was impressed! Kate tried the option to ‘Publish’ and it gave her a link that she immediately shared with her product and engineering partners. A few minutes later, they joined her in the conference room, exploring it together. The engineer scanned the code, didn’t seem impressed, but said it would work as a disposable prototype.</p>

<p>Kate prompted Figma Make to add an orange header and app navigation, and this time the trio kept their eyes peeled as they saw the progress in code and in English. The results were pretty good. They spent the next hour making changes to get it ready for testing. Even though he didn’t admit it, the engineer seemed impressed with the result, if not the code.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/week-in-life-ai-augmented-designer/7-finalized-prototype-screenshots-figma-make.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="302"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/week-in-life-ai-augmented-designer/7-finalized-prototype-screenshots-figma-make.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/week-in-life-ai-augmented-designer/7-finalized-prototype-screenshots-figma-make.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/week-in-life-ai-augmented-designer/7-finalized-prototype-screenshots-figma-make.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/week-in-life-ai-augmented-designer/7-finalized-prototype-screenshots-figma-make.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/week-in-life-ai-augmented-designer/7-finalized-prototype-screenshots-figma-make.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/week-in-life-ai-augmented-designer/7-finalized-prototype-screenshots-figma-make.png"
			
			sizes="100vw"
			alt="Finalized prototype screenshots from the interactive Figma Make prototype"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Finalized prototype screenshots from the interactive <a href='https://zone-crush-76141775.figma.site/'>Figma Make prototype</a>. (<a href='https://files.smashing.media/articles/week-in-life-ai-augmented-designer/7-finalized-prototype-screenshots-figma-make.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>By late afternoon, they had a <a href="https://zone-crush-76141775.figma.site">functioning interactive prototype</a>. Kate fed ChatGPT the prototype link and asked it to create a usability testing script. It came up with a basic but complete test script, including a checklist for observers to take notes.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/week-in-life-ai-augmented-designer/8-initial-usability-testing-script-ai.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="506"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/week-in-life-ai-augmented-designer/8-initial-usability-testing-script-ai.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/week-in-life-ai-augmented-designer/8-initial-usability-testing-script-ai.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/week-in-life-ai-augmented-designer/8-initial-usability-testing-script-ai.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/week-in-life-ai-augmented-designer/8-initial-usability-testing-script-ai.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/week-in-life-ai-augmented-designer/8-initial-usability-testing-script-ai.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/week-in-life-ai-augmented-designer/8-initial-usability-testing-script-ai.png"
			
			sizes="100vw"
			alt="Initial usability testing script generated by AI"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Initial usability testing script generated by AI. (<a href='https://files.smashing.media/articles/week-in-life-ai-augmented-designer/8-initial-usability-testing-script-ai.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>Kate went through the script carefully and updated it to add probing questions about AI transparency, emotional check-ins, more specific task scenarios, and a post-test debrief that looped back to the sprint goal.</p>

<p>Kate did a dry run with her product partner, who teased her: <em>“Did you really need me? Couldn’t your AI do it?”</em> It hadn’t occurred to her, but she was now curious!</p>

<blockquote>“Act as a Gen Z user seeing this interactive prototype for the first time. How would you react to the language, steps, and tone? What would make you feel more confident or in control?”</blockquote>

<p>It worked! ChatGPT simulated user feedback for the first screen and asked if she wanted it to continue. <em>“Yes, please,”</em> she typed. A few seconds later, she was reading what could have very well been a screen-by-screen transcript from a test.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/week-in-life-ai-augmented-designer/9-ai-generated-feedback-prototype.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="467"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/week-in-life-ai-augmented-designer/9-ai-generated-feedback-prototype.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/week-in-life-ai-augmented-designer/9-ai-generated-feedback-prototype.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/week-in-life-ai-augmented-designer/9-ai-generated-feedback-prototype.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/week-in-life-ai-augmented-designer/9-ai-generated-feedback-prototype.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/week-in-life-ai-augmented-designer/9-ai-generated-feedback-prototype.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/week-in-life-ai-augmented-designer/9-ai-generated-feedback-prototype.png"
			
			sizes="100vw"
			alt="AI-generated feedback about the prototype"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      AI-generated feedback about the prototype. (<a href='https://files.smashing.media/articles/week-in-life-ai-augmented-designer/9-ai-generated-feedback-prototype.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>Kate was still processing what she had seen as she drove home, happy she didn’t have to stay late. The simulated test appeared impressive at first glance, but the more she thought about it, the more unsettling it became. The output never mentioned what the simulated user actually clicked; if she had asked, she probably would have received an answer, but how useful would that be? After almost missing her exit, she forced herself to stop analyzing and look forward to a relaxed meal at home instead of her usual Prototype-Thursday-Multitasking-Pizza-Dinner.</p>

<h3 id="kate-s-reflection-3">Kate’s Reflection</h3>

<blockquote>Today was the most meta I’ve felt all week: building a prototype about AI, with AI, while being coached by AI. And it didn’t all go the way I expected.<br /><br />While ChatGPT didn’t deliver prototype screens, Figma Make coded a working, interactive prototype with interactions I couldn’t have built in Figma Design. I used <strong>curiosity</strong> and <strong>experimentation</strong> today by asking: What if I reworded this? What if I flipped that flow?<br /><br />AI moved fast, but I had to keep steering. Still, I have to admit that tweaking the prototype by changing the words, not code, felt like magic!<br /><br /><strong>Critical thinking</strong> isn’t optional anymore &mdash; it is table stakes.<br /><br />My impromptu ask of ChatGPT to simulate a Gen Z user testing my flow? That part both impressed and unsettled me. I’m going to need time to process this. But that can wait until next week. Tomorrow, I test with 5 Gen Zs &mdash; real people.</blockquote>

<h2 id="friday-test-aka-prototype-meets-user">Friday: Test (aka Prototype Meets User)</h2>

<p><em>Day 5 in a design sprint is the culmination of the week’s work: understanding the problem, exploring solutions, choosing the best one, and building a prototype. It’s when teams interview users and learn by watching them react to the prototype, seeing whether it really matters to them.</em></p>

<p>As Kate prepped for the tests, she grounded herself in the sprint problem statement and the users: <em>“How might we help Gen Z users confidently take their first financial action in our app &mdash; in a way that feels simple, safe, and puts them in control?”</em></p>

<p>She clicked through the prototype one last time &mdash; the link still worked! And just in case, she also had screenshots saved.</p>

<p>Kate moderated the five tests while her product and engineering partners observed. The prototype may have been AI-generated, but the reactions were human. She observed where people hesitated and what made them feel safe and in control. Depending on the participant, she would pivot, go off-script, and ask clarifying questions to get deeper insights.</p>

<p>After each session, she dropped the transcript and their notes into ChatGPT, asking it to summarize that user’s feedback into pain points, positive signals, and any relevant quotes. At the end of the five rounds, Kate prompted ChatGPT for recurring themes across the sessions to use as input for their reflection and synthesis.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/week-in-life-ai-augmented-designer/10-ai-generated-synthesis-usability-testing.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="499"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/week-in-life-ai-augmented-designer/10-ai-generated-synthesis-usability-testing.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/week-in-life-ai-augmented-designer/10-ai-generated-synthesis-usability-testing.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/week-in-life-ai-augmented-designer/10-ai-generated-synthesis-usability-testing.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/week-in-life-ai-augmented-designer/10-ai-generated-synthesis-usability-testing.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/week-in-life-ai-augmented-designer/10-ai-generated-synthesis-usability-testing.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/week-in-life-ai-augmented-designer/10-ai-generated-synthesis-usability-testing.png"
			
			sizes="100vw"
			alt="AI-generated synthesis of the Day 5 usability testing"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      AI-generated synthesis of the Day 5 usability testing. (<a href='https://files.smashing.media/articles/week-in-life-ai-augmented-designer/10-ai-generated-synthesis-usability-testing.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>The trio combed through the results, with an eye out for anything suspicious in the AI-generated synthesis. They found one such claim: <em>“Users Trust AI”</em>. Not one user had mentioned or clicked the ‘Why this?’ link; the AI had likely assumed the transparency features worked simply because they were present in the prototype.</p>

<p>They agreed that the prototype resonated with users, with all five easily setting their financial goals, and identified a couple of opportunities for improvement: better explaining AI-generated plans and celebrating “win” moments after creating a plan. Both were fairly easy to address during their product build process.</p>

<p>That was a nice end to the week: another design sprint wrapped, and it was Kate’s first AI-augmented one! She started Monday anxious about falling behind, overwhelmed by options. She closed Friday confident in a validated concept, grounded in real user needs, and empowered by tools she now knew how to steer.</p>

<h3 id="kate-s-reflection-4">Kate’s Reflection</h3>

<blockquote>Test-driving my prototype with AI yesterday left me impressed and unsettled. But today’s tests with people reminded me why we test with real users: not proxies, not people who interact with users, but actual end users. And GenAI is not the user. Five tests put my designerly skill of <strong>observation</strong> to the test.<br /><br />GenAI helped summarize the test transcripts quickly but snuck in one last hallucination this week &mdash; about AI! With AI, don’t trust &mdash; always verify! <strong>Critical thinking</strong> is not going anywhere.<br /><br />AI can move fast with words, but only people can use <strong>empathy</strong> to move beyond words and truly understand human emotions.<br /><br />My next goal is to learn to talk to AI better, so I can get better results.</blockquote>

<div class="partners__lead-place"></div>

<h2 id="conclusion">Conclusion</h2>

<p>Over the course of five days, Kate explored how AI could fit into her UX work, not by reading articles or LinkedIn posts, but by doing. Through daily experiments, iterations, and missteps, she got comfortable with AI as a collaborator supporting a design sprint. It accelerated every stage: synthesizing user research, generating divergent ideas, critiquing work in progress, and even spinning up a working prototype, as shown below.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/week-in-life-ai-augmented-designer/11-design-sprint-ai.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="284"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/week-in-life-ai-augmented-designer/11-design-sprint-ai.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/week-in-life-ai-augmented-designer/11-design-sprint-ai.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/week-in-life-ai-augmented-designer/11-design-sprint-ai.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/week-in-life-ai-augmented-designer/11-design-sprint-ai.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/week-in-life-ai-augmented-designer/11-design-sprint-ai.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/week-in-life-ai-augmented-designer/11-design-sprint-ai.png"
			
			sizes="100vw"
			alt="Design Sprint with AI"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Design Sprint with AI. (<a href='https://files.smashing.media/articles/week-in-life-ai-augmented-designer/11-design-sprint-ai.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>What was clear by Friday was that speed isn’t insight. While AI produced outputs fast, it was Kate’s designerly skills &mdash; <strong>curiosity</strong>, <strong>empathy</strong>, <strong>observation</strong>, <strong>visual communication</strong>, <strong>experimentation</strong>, and most importantly, <strong>critical thinking</strong> and a <strong>growth mindset</strong> &mdash; that turned data and patterns into meaningful insights. She stayed in the driver’s seat, verifying claims, adjusting prompts, and applying judgment where automation fell short.</p>

<p>She started the week overwhelmed, her confidence dimmed by uncertainty and the noise of AI hype. She questioned her relevance in a rapidly shifting landscape. By Friday, she not only had a validated concept but had also reshaped her entire approach to design. She had evolved: from AI-curious to AI-confident, from reactive to proactive, from unsure to empowered. Her mindset had shifted: AI was no longer a threat or a trend; it was like a smart intern she could direct, critique, and collaborate with. She didn’t just adapt to AI. She redefined what it meant to be a designer in the age of AI.</p>

<p>The experience raised deeper questions: How do we make sure AI-augmented outputs are not made up? How should we treat AI-generated user feedback? Where do ethics and human responsibility intersect?</p>

<p>Besides a validated solution to their design sprint problem, Kate had prototyped a new way of working as an AI-augmented designer.</p>

<p>The question now isn’t just <em>“Should designers use AI?”</em> It’s <em>“How do we work with AI responsibly, creatively, and consciously?”</em> That’s what the next article will explore: designing your interactions with AI using a repeatable framework.</p>

<p><strong>Poll</strong>: If you could design your own AI assistant, what would it do?</p>

<ul>
<li>Assist with ideation?</li>
<li>Research synthesis?</li>
<li>Identify customer pain points?</li>
<li>Or something else entirely?</li>
</ul>

<p><a href="https://forms.gle/tSsZzy92VVrjuPQX8">Share your idea</a>, and in the spirit of learning by doing, we’ll build one together from scratch in the third article of this series: Building your own CustomGPT.</p>

<h3 id="resources">Resources</h3>

<ul>
<li><a href="https://www.amazon.com/Sprint-Solve-Problems-Test-Ideas/dp/150112174X">Sprint: How to Solve Big Problems and Test New Ideas in Just Five Days</a>, by Jake Knapp</li>
<li><a href="https://www.gv.com/sprint/">The Design Sprint</a></li>
<li><a href="https://www.figma.com/make/">Figma Make</a></li>
<li>“<a href="https://gizmodo.com/openai-appeals-sweeping-unprecedented-order-requiring-it-maintain-all-chatgpt-logs-2000612405">OpenAI Appeals ‘Sweeping, Unprecedented Order’ Requiring It Maintain All ChatGPT Logs</a>”, Vanessa Taylor</li>
</ul>

<p><strong>Tools</strong></p>

<p>As mentioned earlier, ChatGPT was the general-purpose LLM Kate leaned on, but you could swap it out for Claude, Gemini, Copilot, or other competitors and likely get similar results (or at least similarly weird surprises). Here are some alternate AI tools that might suit each sprint stage even better. Note that with dozens of new AI tools popping up every week, this list is far from exhaustive.</p>

<table class="tablesaw break-out">
    <thead>
        <tr>
            <th>Stage</th>
            <th>Tools</th>
      <th>Capability</th>
        </tr>
    </thead>
    <tbody>
        <tr>
            <td><strong>Understand</strong></td>
            <td>Dovetail, UserTesting’s Insights Hub, <a href="http://heymarvin.com">Marvin</a></td>
      <td>Summarize &amp; synthesize data</td>
        </tr>
        <tr>
            <td><strong>Sketch</strong></td>
            <td>Any LLM, <a href="https://musely.ai/tools/ideation-tool">Musely</a></td>
      <td>Brainstorm concepts and ideas</td>
        </tr>
        <tr>
            <td><strong>Decide</strong></td>
            <td>Any LLM</td>
      <td>Critique/provide feedback</td>
        </tr>
    <tr>
            <td><strong>Prototype</strong></td>
            <td><a href="http://uizard.io">UIzard</a>, <a href="http://uxpilot.ai">UXPilot</a>, <a href="http://visily.ai">Visily</a>, <a href="http://krisspy.ai">Krisspy</a>, Figma Make, Lovable, Bolt</td>
      <td>Create wireframes and prototypes</td>
        </tr>
    <tr>
            <td><strong>Test</strong></td>
            <td>UserTesting, UserInterviews, PlaybookUX, <a href="http://maze.co">Maze</a>, plus tools from the Understand stage</td>
      <td>Moderated and unmoderated user tests/synthesis </td>
        </tr>
    </tbody>
</table>

<div class="signature">
  <img src="https://www.smashingmagazine.com/images/logo/logo--red.png" alt="Smashing Editorial" width="35" height="46" loading="lazy" decoding="async" />
  <span>(yk)</span>
</div>


              </article>
            </body>
          </html>
        ]]></content:encoded></item><item><author>Alex Williams</author><title>The Double-Edged Sustainability Sword Of AI In Web Design</title><link>https://www.smashingmagazine.com/2025/08/double-edged-sustainability-sword-ai-web-design/</link><pubDate>Wed, 20 Aug 2025 10:00:00 +0000</pubDate><guid>https://www.smashingmagazine.com/2025/08/double-edged-sustainability-sword-ai-web-design/</guid><description>AI has introduced huge efficiencies for web designers and is frequently being touted as the key to unlocking sustainable design and development. But do these gains outweigh the environmental cost of using energy-hungry AI tools?</description><content:encoded><![CDATA[
          <html>
            <head>
              <meta charset="utf-8">
              <link rel="canonical" href="https://www.smashingmagazine.com/2025/08/double-edged-sustainability-sword-ai-web-design/" />
              <title>The Double-Edged Sustainability Sword Of AI In Web Design</title>
            </head>
            <body>
              <article>
                <header>
                  <h1>The Double-Edged Sustainability Sword Of AI In Web Design</h1>
                  
                    
                    <address>Alex Williams</address>
                  
                  <time datetime="2025-08-20T10:00:00&#43;00:00" class="op-published">2025-08-20T10:00:00+00:00</time>
                  <time datetime="2025-08-20T10:00:00&#43;00:00" class="op-modified">2025-12-25T10:32:38+00:00</time>
                </header>
                
                

<p>Artificial intelligence is increasingly automating large parts of design and development workflows &mdash; tasks once reserved for skilled designers and developers. This streamlining can dramatically speed up project delivery. Even back in 2023, AI-assisted developers were found to complete tasks <a href="https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/unleashing-developer-productivity-with-generative-ai">twice as fast</a> as those without. And AI tools have advanced massively since then.</p>

<p>Yet this surge in capability raises a pressing dilemma:</p>

<blockquote>Does the environmental toll of powering AI infrastructure eclipse the efficiency gains?</blockquote>

<p>We can create optimized, more efficient websites faster than ever, but the global energy consumption of AI <a href="https://www.iea.org/news/ai-is-set-to-drive-surging-electricity-demand-from-data-centres-while-offering-the-potential-to-transform-how-the-energy-sector-works">continues to climb</a>.</p>

<p>As awareness grows around the <strong>digital sector’s hidden ecological footprint</strong>, web designers and businesses must grapple with this double-edged sword, weighing the grid-level impacts of AI against the cleaner, leaner code it can produce.</p>

<h2 id="the-good-how-ai-can-enhance-sustainability-in-web-design">The Good: How AI Can Enhance Sustainability In Web Design</h2>

<p>There’s no disputing that <a href="https://www.smashingmagazine.com/2023/03/ai-technology-transform-design/">AI-driven automation</a> has introduced higher speeds and efficiencies to many of the mundane aspects of web design. Tools that automatically generate responsive layouts, optimize image sizes, and refactor bloated scripts should free designers to focus on <a href="https://www.smashingmagazine.com/2023/04/skills-designers-ai-cant-replicate/">the creative side</a> of design and development.</p>

<p>By some interpretations, these accelerated project timelines could <a href="https://arxiv.org/abs/2411.11892">represent a reduction</a> in the energy required for development: speedier production should mean less energy used.</p>

<p>Beyond automation, AI excels at <a href="https://www.smashingmagazine.com/2024/11/ai-transformative-impact-web-design-supercharging-productivity/">identifying inefficiencies in code and design</a>, as it can assess a project holistically rather than file by file. Advanced algorithms can parse stylesheets and JavaScript files to detect unused selectors or redundant logic, producing leaner, faster-loading pages. For example, AI-driven caching can <a href="https://www.researchgate.net/publication/383735847_Leveraging_AI_and_Machine_Learning_for_Performance_Optimization_in_Web_Applications">increase cache hit rates by 15%</a> by improving data availability and reducing latency. More user requests are served directly from the cache, reducing data retrieval from the origin server and, with it, energy expenditure.</p>
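<p>The kind of dead-code check behind such tools is easy to sketch without any AI at all. The naive TypeScript snippet below (a browser-console sketch, not how any particular product works) walks the loaded stylesheets and flags selectors that match nothing in the current DOM; real tools also account for classes and states added later by JavaScript:</p>

<pre><code class="language-typescript">// Naive sketch: list selectors that match nothing in the current DOM.
function findUnusedSelectors(): string[] {
  const unused: string[] = [];
  for (const sheet of Array.from(document.styleSheets)) {
    let rules: CSSRule[] = [];
    try {
      rules = Array.from(sheet.cssRules); // cross-origin sheets throw here
    } catch {
      continue;
    }
    for (const rule of rules) {
      if (!(rule instanceof CSSStyleRule)) continue;
      // Strip pseudo-classes/-elements so querySelector can test the base selector.
      const base = rule.selectorText.replace(/::?[a-z-]+(\([^)]*\))?/gi, "");
      if (base.trim().length === 0) continue;
      try {
        if (document.querySelector(base) === null) {
          unused.push(rule.selectorText);
        }
      } catch {
        // Skip selectors querySelector cannot parse.
      }
    }
  }
  return unused;
}

console.log(findUnusedSelectors());
</code></pre>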

<p>AI tools can also convert assets to <a href="https://wp-rocket.me/google-core-web-vitals-wordpress/serve-images-next-gen-formats/">next-generation image formats</a> like AVIF or WebP, which are well suited to automated optimization pipelines, and selectively compress assets based on how sensitive their content is to compression artifacts. This slashes media payloads without perceptible quality loss; some learned compression techniques use neural networks such as Generative Adversarial Networks (GANs) to find compact representations of image data.</p>

<p>AI’s impact also brings <strong>sustainability benefits via user experience (UX)</strong>. AI-driven personalization engines can <a href="https://www.researchgate.net/publication/378288736_AI-driven_personalization_in_web_content_delivery_A_comparative_study_of_user_engagement_in_the_USA_and_the_UK">dynamically serve only the content a visitor needs</a>, eliminating superfluous scripts and images the visitor doesn’t care about. This not only enhances perceived performance but also reduces the number of server requests and the data transferred, cutting downstream energy use in network infrastructure.</p>
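<p>A lightweight cousin of this idea is conditional code-splitting, which needs no AI at runtime. In the TypeScript sketch below (the module path and visitor flag are invented for illustration), the heavy widget is downloaded only by the segment that actually uses it:</p>

<pre><code class="language-typescript">// Hypothetical module and flag, for illustration only.
interface Visitor {
  isReturning: boolean;
}

async function mountPersonalizedWidgets(visitor: Visitor) {
  if (visitor.isReturning) {
    // Dynamic import: the bundler splits this into its own chunk,
    // so first-time visitors never download the extra script.
    const { renderRecommendations } = await import("./recommendations.js");
    renderRecommendations(document.querySelector("#recommendations"));
  }
}
</code></pre>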

<p>With the right prompts, <strong>generative AI can be an accessibility tool</strong> and ensure <a href="https://arxiv.org/abs/2501.03572">sites meet inclusive design standards</a> by checking against accessibility standards, reducing the need for redesigns that can be costly in terms of time, money, and energy.</p>

<p>So, taken in isolation, AI already acts as an important tool for making web design more efficient and sustainable. But do these gains outweigh the cost of the resources required to build and maintain these tools?</p>

<h2 id="the-bad-the-environmental-footprint-of-ai-infrastructure">The Bad: The Environmental Footprint Of AI Infrastructure</h2>

<p>Yet the carbon savings engineered at the page level must be balanced against the prodigious resource demands of AI infrastructure. Large-scale AI hinges on data centers that already account for <a href="https://www2.deloitte.com/us/en/insights/industry/technology/technology-media-and-telecom-predictions/2025/genai-power-consumption-creates-need-for-more-sustainable-data-centers.html">roughly 2% of global electricity consumption</a>, a figure projected to swell as AI workloads grow.</p>

<p>The International Energy Agency warns that electricity consumption from data centers could <a href="https://www.iea.org/news/ai-is-set-to-drive-surging-electricity-demand-from-data-centres-while-offering-the-potential-to-transform-how-the-energy-sector-works">more than double by 2030</a> due to the increasing demand for AI tools, reaching nearly the current consumption of Japan. Training state-of-the-art language models generates carbon emissions <a href="https://www.reuters.com/business/microsoft-urge-senators-speed-permitting-ai-boost-government-data-access-2025-05-07/">on par with hundreds of transatlantic flights</a>, and inference workloads, serving billions of requests daily, can rival or exceed training emissions over a model’s lifetime.</p>

<p>Image generation tasks represent an even steeper energy hill to climb. Producing a single AI-generated image can consume energy <a href="https://www.scientificamerican.com/article/generative-ai-could-generate-millions-more-tons-of-e-waste-by-2030/">equivalent to charging a smartphone</a>.</p>

<blockquote class="pull-quote">
  <p>
    <a class="pull-quote__link" aria-label="Share on Twitter" href="https://twitter.com/share?text=%0aAs%20generative%20design%20and%20AI-based%20prototyping%20become%20more%20common%20in%20web%20development,%20the%20cumulative%20energy%20footprint%20of%20these%20operations%20can%20quickly%20undermine%20the%20carbon%20savings%20achieved%20through%20optimized%20code.%0a&url=https://smashingmagazine.com%2f2025%2f08%2fdouble-edged-sustainability-sword-ai-web-design%2f">
      
As generative design and AI-based prototyping become more common in web development, the cumulative energy footprint of these operations can quickly undermine the carbon savings achieved through optimized code.

    </a>
  </p>
  <div class="pull-quote__quotation">
    <div class="pull-quote__bg">
      <span class="pull-quote__symbol">“</span></div>
  </div>
</blockquote>

<p>Water consumption forms another hidden cost. Data centers rely heavily on evaporative cooling systems that can draw between <a href="https://www.axios.com/local/indianapolis/2025/05/09/midwest-data-center-boom-indiana">one and five million gallons of water</a> per day, depending on size and location, placing stress on local supplies, especially in drought-prone regions. Studies estimate a single ChatGPT query may <a href="https://eandt.theiet.org/2024/11/29/how-boil-egg-and-other-simple-searches-chatgpt-worse-environment-you-may-think">consume up to half a liter of water</a> when accounting for direct cooling requirements, with broader AI use potentially demanding billions of liters annually by 2027.</p>

<p><strong>Resource depletion</strong> and <strong>electronic waste</strong> are further concerns. High-performance components underpinning AI services, like GPUs, can have very short lifespans due to both wear and tear and being superseded by more powerful hardware. AI alone could add between <a href="https://www.scimex.org/newsfeed/generative-ai-could-create-1-000-times-more-e-waste-by-2030">1.2 and 5 million metric tons of e-waste</a> by 2030, due to the continuous demand for new hardware, amplifying one of the world’s fastest-growing waste streams.</p>

<p>Mining for the critical minerals in these devices often <a href="https://www.unep.org/news-and-stories/story/ai-has-environmental-problem-heres-what-world-can-do-about">proceeds under unsustainable conditions</a> due to a <strong>lack of regulations</strong> in many of the regions where rare metals are sourced. The resulting e-waste, rich in toxic metals like lead and mercury, poses another form of environmental damage if not properly recycled.</p>

<p>Compounding these physical impacts is a <strong>lack of transparency in corporate reporting</strong>. Energy and water consumption figures for AI workloads are often <a href="https://www2.deloitte.com/us/en/insights/industry/technology/technology-media-and-telecom-predictions/2025/genai-power-consumption-creates-need-for-more-sustainable-data-centers.html">aggregated under general data center operations</a>, which obscures the specific toll of AI training and inference among other operations.</p>

<p>And the energy consumption reporting of the data centers themselves has been found to be obfuscated.</p>

<blockquote>Reports estimate that the emissions of data centers are up to <a href="https://www.theguardian.com/technology/2024/sep/15/data-center-gas-emissions-tech">662% higher than initially reported</a> due to misaligned metrics and ‘creative’ interpretations of what constitutes an emission. This makes it hard to grasp the true scale of AI’s environmental footprint, leaving designers and decision-makers unable to make informed, environmentally conscious decisions.</blockquote>

<h2 id="do-the-gains-from-ai-outweigh-the-costs">Do The Gains From AI Outweigh The Costs?</h2>

<p>Some industry advocates argue that AI’s energy consumption isn’t as catastrophic as headlines suggest. Certain groups have <a href="https://thebreakthrough.org/journal/no-20-spring-2024/unmasking-the-fear-of-ais-energy-demand">challenged ‘alarmist’ projections</a>, claiming that AI’s current contribution of ‘just’ <a href="https://www.ecb.europa.eu/press/economic-bulletin/focus/2025/html/ecb.ebbox202502_03~8eba688e29.en.html">0.02% of global energy consumption</a> isn’t a cause for concern.</p>

<p>Proponents also highlight AI’s supposed environmental benefits. There are claims that AI could reduce <a href="https://www.pwc.com/gx/en/issues/value-in-motion/ai-energy-consumption-net-zero.html">economy-wide greenhouse gas emissions by 0.1% to 1.1%</a> through efficiency improvements. <a href="https://aimagazine.com/articles/what-does-google-2025-environmental-report-say-about-tech">Google reported</a> that five AI-powered solutions removed 26 million metric tons of emissions in 2024. The optimistic view holds that AI’s capacity to optimize everything from energy grids to transportation systems will more than compensate for its data center demands.</p>

<p>However, recent scientific analysis reveals these arguments underestimate AI’s true impact. MIT found that data centers already consume <a href="https://www.technologyreview.com/2025/05/20/1116327/ai-energy-usage-climate-footprint-big-tech/">4.4% of all US electricity</a>, with projections showing AI alone could use as much power as 22% of US households by 2028. Research indicates AI-specific electricity use <a href="https://www.energy.gov/articles/doe-releases-new-report-evaluating-increase-electricity-demand-data-centers">could triple from current levels by 2028</a>. Moreover, <a href="https://www.technologyreview.com/2024/12/13/1108719/ais-emissions-are-about-to-skyrocket-even-further/">Harvard research</a> revealed that data centers use electricity with 48% higher carbon intensity than the US average.</p>

<h2 id="advice-for-sustainable-ai-use-in-web-design">Advice For Sustainable AI Use In Web Design</h2>

<p>Despite the environmental costs, AI’s use in business, particularly web design, isn’t going away anytime soon, with <a href="https://www.hostinger.com/tutorials/ai-in-business">70% of large businesses</a> looking to increase their AI investments in pursuit of further efficiencies. AI’s immense impact on productivity means those not using it are likely to be left behind. Environmentally conscious businesses and designers must therefore find the right <strong>balance between AI’s environmental cost and the efficiency gains it brings</strong>.</p>

<h3 id="make-sure-you-have-a-strong-foundation-of-sustainable-web-design-principles">Make Sure You Have A Strong Foundation Of Sustainable Web Design Principles</h3>

<p>Before you plug in any AI magic, start by making sure the bones of your site are sustainable. <a href="https://www.evergrowingdev.com/p/a-guide-to-lean-web-design-for-developers">Lean web fundamentals</a>, like system fonts instead of hefty custom files, minimal JavaScript, and judicious image use, can slash a page’s carbon footprint by stripping out redundancies that increase energy consumption. For instance, the global average web page emits about <a href="https://www.websitecarbon.com/">0.8g of CO₂ per view</a>, whereas sustainably crafted sites can see a roughly 70% reduction.</p>

<p>Once that lean baseline is in place, AI-driven optimizations (image format selection, code pruning, responsive layout generation) aren’t adding to bloat but building on efficiency, ensuring every joule spent on AI actually yields downstream energy savings in delivery and user experience.</p>

<h3 id="choosing-the-right-tools-and-vendors">Choosing The Right Tools And Vendors</h3>

<p>In order to make sustainable tool choices, <strong>transparency</strong> and <strong>awareness</strong> are the first steps. Many AI vendors have pledged to <a href="https://sustainabilitymag.com/articles/which-companies-are-in-the-coalition-for-sustainable-ai">work towards sustainability</a>, but <strong>independent audits</strong> are necessary, along with clear, cohesive metrics. Standardized reporting on energy and water footprints will help us understand the true cost of AI tools, allowing for informed choices.</p>

<p>You can look for providers that publish detailed environmental reports and hold third-party renewable energy certifications. Many major providers now offer <a href="https://thenewstack.io/cloud-pue-comparing-aws-azure-and-gcp-global-regions/">PUE (Power Usage Effectiveness) metrics</a> alongside renewable energy matching to demonstrate real-world commitments to clean power.</p>

<p>When integrating AI into your build pipeline, choosing lightweight, specialized models for tasks like image compression or code linting can be more sustainable than full-scale generative engines. Task-specific tools often <a href="https://news.mit.edu/2023/new-tools-available-reduce-energy-that-ai-models-devour-1005">use considerably less energy</a> than general AI models, which must first work out what task you want them to complete.</p>

<p>There are a variety of guides and collectives out there that can help you choose the ‘green’ web hosts best suited to your business. When choosing AI-model vendors, look for options that prioritize <strong>‘efficiency by design’</strong>: smaller, pruned models and edge-compute deployments can cut energy use by up to <a href="https://accesspartnership.com/12-key-principles-for-sustainable-ai/">50% compared to monolithic cloud-only models</a>. Because they’re trained for specific tasks, they don’t have to expend energy working out what the task is and how to go about it.</p>

<h3 id="using-ai-tools-sustainably">Using AI Tools Sustainably</h3>

<p>Once you’ve chosen conscientious vendors, optimize how you actually use AI. You can take steps like <strong>batching non-urgent inference tasks</strong> to reduce idle GPU time, an approach shown to <a href="https://blog.purestorage.com/purely-educational/5-ways-to-reduce-your-ai-energy-footprint/">lower overall energy consumption</a> compared to ad-hoc requests: the GPU runs in concentrated bursts when there’s a batch of work to process, instead of constantly.</p>
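<p>As a rough illustration, batching can be as simple as queueing prompts and flushing them on a schedule instead of calling the API ad hoc. In the hedged TypeScript sketch below, <code>runBatch</code> is a stand-in for whatever batch endpoint your provider offers; several providers expose discounted batch modes for exactly this pattern:</p>

<pre><code class="language-typescript">// Stand-in for a provider's batch endpoint.
declare function runBatch(prompts: string[]): Promise&lt;string[]&gt;;

interface Job {
  prompt: string;
  resolve: (answer: string) => void;
}

const queue: Job[] = [];

// Non-urgent work is queued instead of hitting the API immediately.
function submitForBatch(prompt: string): Promise&lt;string&gt; {
  return new Promise((resolve) => {
    queue.push({ prompt, resolve });
  });
}

// Flush once an hour: one consolidated call instead of many scattered
// ones, so accelerators work in bursts rather than idling between requests.
setInterval(async () => {
  if (queue.length === 0) return;
  const jobs = queue.splice(0, queue.length);
  const answers = await runBatch(jobs.map((job) => job.prompt));
  jobs.forEach((job, index) => job.resolve(answers[index]));
}, 60 * 60 * 1000);
</code></pre>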

<p>Smarter prompts can also make AI usage slightly more sustainable. Sam Altman of OpenAI revealed early in 2025 that people’s propensity for saying ‘please’ and ‘thank you’ to LLMs is <a href="https://futurism.com/altman-please-thanks-chatgpt">costing millions of dollars and wasting energy</a>, as the model has to process extra phrases that aren’t relevant to its task. You need to <strong>ensure that your prompts are direct and to the point</strong> and deliver the context required to complete the task, reducing the need to reprompt.</p>

<h3 id="additional-strategies-to-balance-ai-s-environmental-cost">Additional Strategies To Balance AI’s Environmental Cost</h3>

<p>On top of being responsible with your AI tool choices and usage, there are other steps you can take to offset the carbon cost of AI and still enjoy the efficiency benefits it brings. Organizations can <a href="https://earthly.org/the-guide-to-carbon-offsetting">reduce their own emissions and use carbon offsetting</a> to shrink their carbon footprint as much as possible. Combined with the apparent sustainability benefits of AI use, this approach can help mitigate the harmful impacts of energy-hungry AI.</p>

<p>You can ensure that you’re using <strong>green server hosting</strong> (servers run on sustainable energy) for your own site and cloud needs beyond AI, and <a href="https://www.imperva.com/learn/performance/what-is-cdn-how-it-works/">refine your content delivery network</a> (CDN) to ensure your sites and apps are serving compressed, optimized assets from edge locations, cutting the distance data must travel, which should reduce the associated energy use.</p>

<p>Organizations and individuals, particularly those with thought leadership status, can be <a href="https://ai4good.org/what-we-do/sustainable-ai-policy/">advocates pushing for transparent sustainability specifications</a>. This involves both lobbying politicians and regulatory bodies to introduce and enforce sustainability standards and ensuring that other members of the public are kept aware of the environmental costs of AI use.</p>

<p>It’s only through collective action that we’re likely to see strict enforcement of both sustainable AI data centers and the standardization of emissions reporting.</p>

<p>Regardless, it remains a tricky path to walk, along the double-edged sword of AI’s use in web design.</p>

<blockquote class="pull-quote">
  <p>
    <a class="pull-quote__link" aria-label="Share on Twitter" href="https://twitter.com/share?text=%0aUse%20AI%20too%20much,%20and%20you%e2%80%99re%20contributing%20to%20its%20massive%20carbon%20footprint.%20Use%20it%20too%20little,%20and%20you%e2%80%99re%20likely%20to%20be%20left%20behind%20by%20rivals%20that%20are%20able%20to%20work%20more%20efficiently%20and%20deliver%20projects%20much%20faster.%0a&url=https://smashingmagazine.com%2f2025%2f08%2fdouble-edged-sustainability-sword-ai-web-design%2f">
      
Use AI too much, and you’re contributing to its massive carbon footprint. Use it too little, and you’re likely to be left behind by rivals that are able to work more efficiently and deliver projects much faster.

    </a>
  </p>
  <div class="pull-quote__quotation">
    <div class="pull-quote__bg">
      <span class="pull-quote__symbol">“</span></div>
  </div>
</blockquote>

<p>The best that environmentally conscious designers and organizations can currently do is <strong>navigate it as carefully as they can and stay informed on best practices</strong>.</p>

<h2 id="conclusion">Conclusion</h2>

<p>We can’t dispute that AI use in web design delivers on its promise of agility, personalization, and resource savings at the page level. Yet without a holistic view that accounts for the environmental demands of AI infrastructure, these gains risk being overshadowed by an expanding energy and water footprint.</p>

<p>Achieving the balance between enjoying AI’s efficiency gains and managing its carbon footprint requires transparency, targeted deployment, human oversight, and a steadfast commitment to core sustainable web practices.</p>

<div class="signature">
  <img src="https://www.smashingmagazine.com/images/logo/logo--red.png" alt="Smashing Editorial" width="35" height="46" loading="lazy" decoding="async" />
  <span>(yk)</span>
</div>


              </article>
            </body>
          </html>
        ]]></content:encoded></item><item><author>Nikita Samutin</author><title>Beyond The Hype: What AI Can Really Do For Product Design</title><link>https://www.smashingmagazine.com/2025/08/beyond-hype-what-ai-can-do-product-design/</link><pubDate>Mon, 18 Aug 2025 13:00:00 +0000</pubDate><guid>https://www.smashingmagazine.com/2025/08/beyond-hype-what-ai-can-do-product-design/</guid><description>AI tools are improving fast, but it’s still not clear how they fit into a real product design workflow. Nikita Samutin walks through four core stages &amp;mdash; from analytics and ideation to prototyping and visual design &amp;mdash; to show where AI fits and where it doesn’t, illustrated with real-world examples.</description><content:encoded><![CDATA[
          <html>
            <head>
              <meta charset="utf-8">
              <link rel="canonical" href="https://www.smashingmagazine.com/2025/08/beyond-hype-what-ai-can-do-product-design/" />
              <title>Beyond The Hype: What AI Can Really Do For Product Design</title>
            </head>
            <body>
              <article>
                <header>
                  <h1>Beyond The Hype: What AI Can Really Do For Product Design</h1>
                  
                    
                    <address>Nikita Samutin</address>
                  
                  <time datetime="2025-08-18T13:00:00&#43;00:00" class="op-published">2025-08-18T13:00:00+00:00</time>
                  <time datetime="2025-08-18T13:00:00&#43;00:00" class="op-modified">2025-12-25T10:32:38+00:00</time>
                </header>
                
                

<p>These days, it’s easy to find curated lists of AI tools for designers, galleries of generated illustrations, and countless prompt libraries. What’s much harder to find is a clear view of how AI is <em>actually</em> integrated into the everyday workflow of a product designer &mdash; not for experimentation, but for real, meaningful outcomes.</p>

<p>I’ve gone through that journey myself: testing AI across every major stage of the design process, from ideation and prototyping to visual design and user research. Along the way, I’ve built a simple, repeatable workflow that significantly boosts my productivity.</p>

<p>In this article, I’ll share what’s already working and break down some of the most common objections I’ve encountered &mdash; many of which I’ve faced personally.</p>

<h2 id="stage-1-idea-generation-without-the-clichés">Stage 1: Idea Generation Without The Clichés</h2>

<p><strong>Pushback</strong>: <em>“Whenever I ask AI to suggest ideas, I just get a list of clichés. It can’t produce the kind of creative thinking expected from a product designer.”</em></p>

<p>That’s a fair point. AI doesn’t know the specifics of your product, the full context of your task, or many other critical nuances. The most obvious fix is to “feed it” all the documentation you have. But that’s a common mistake, as it often leads to even worse results: the context gets flooded with irrelevant information, and the AI’s answers become vague and unfocused.</p>

<p>Current-gen models can technically process thousands of words, but <strong>the longer the input, the higher the risk of missing something important</strong>, especially content buried in the middle. This is known as the “<a href="https://community.openai.com/t/validating-middle-of-context-in-gpt-4-128k/498255">lost in the middle</a>” problem.</p>

<p>To get meaningful results, AI doesn’t just need more information &mdash; it needs the <em>right</em> information, delivered in the right way. That’s where the RAG approach comes in.</p>

<h3 id="how-rag-works">How RAG Works</h3>

<p>Think of RAG as a smart assistant working with your personal library of documents. You upload your files, and the assistant reads each one, creating a short summary &mdash; a set of bookmarks (semantic tags) that capture the key topics, terms, scenarios, and concepts. These summaries are stored in a kind of “card catalog,” called a vector database.</p>

<p>When you ask a question, the assistant doesn’t reread every document from cover to cover. Instead, it compares your query to the bookmarks, retrieves only the most relevant excerpts (chunks), and sends those to the language model to generate a final answer.</p>
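<p>For the technically curious, that retrieval step boils down to a similarity search over the vector “bookmarks.” The minimal TypeScript sketch below illustrates the idea rather than a production pipeline; <code>embed</code> is a stand-in for your provider’s embeddings API, and the chunks are assumed to be embedded once, up front:</p>

<pre><code class="language-typescript">// A pre-embedded chunk: the text plus its vector "bookmark".
interface Chunk {
  text: string;
  vector: number[];
}

// Stand-in for a provider's embeddings API.
declare function embed(text: string): Promise&lt;number[]&gt;;

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i &lt; a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Embed the question, rank chunks by similarity, and keep only the
// top few to send to the language model along with the question.
async function retrieve(query: string, chunks: Chunk[], topK = 3) {
  const queryVector = await embed(query);
  return chunks
    .map((chunk) => ({ chunk, score: cosineSimilarity(queryVector, chunk.vector) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, topK)
    .map((scored) => scored.chunk.text);
}
</code></pre>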

<div data-audience="non-subscriber" data-remove="true" class="feature-panel-container">

<aside class="feature-panel" style="">
<div class="feature-panel-left-col">

<div class="feature-panel-description"><p>Meet <strong><a data-instant href="https://www.smashingconf.com/online-workshops/">Smashing Workshops</a></strong> on <strong>front-end, design &amp; UX</strong>, with practical takeaways, live sessions, <strong>video recordings</strong> and a friendly Q&amp;A. With Brad Frost, Stéph Walter and <a href="https://smashingconf.com/online-workshops/workshops">so many others</a>.</p>
<a data-instant href="smashing-workshops" class="btn btn--green btn--large" style="">Jump to the workshops&nbsp;↬</a></div>
</div>
<div class="feature-panel-right-col"><a data-instant href="smashing-workshops" class="feature-panel-image-link">
<div class="feature-panel-image">
<img
    loading="lazy"
    decoding="async"
    class="feature-panel-image-img"
    src="/images/smashing-cat/cat-scubadiving-panel.svg"
    alt="Feature Panel"
    width="257"
    height="355"
/>

</div>
</a>
</div>
</aside>
</div>

<h3 id="how-is-this-different-from-just-dumping-a-doc-into-the-chat">How Is This Different from Just Dumping a Doc into the Chat?</h3>

<p>Let’s break it down:</p>

<p><strong>Typical chat interaction</strong></p>

<p>It’s like asking your assistant to read a 100-page book from start to finish every time you have a question. Technically, all the information is “in front of them,” but it’s easy to miss something, especially if it’s in the middle. This is exactly what the <em>“lost in the middle”</em> issue refers to.</p>

<p><strong>RAG approach</strong></p>

<p>You ask your smart assistant a question, and it retrieves only the relevant pages (chunks) from different documents. It’s faster and more accurate, but it introduces a few new risks:</p>

<ul>
<li><strong>Ambiguous question</strong><br />
You ask, “How can we make the project safer?” and the assistant brings you documents about cybersecurity, not finance.</li>
<li><strong>Mixed chunks</strong><br />
A single chunk might contain a mix of marketing, design, and engineering notes. That blurs the meaning, so the assistant can’t tell what the core topic is.</li>
<li><strong>Semantic gap</strong><br />
You ask, <em>“How can we speed up the app?”</em> but the document says, <em>“Optimize API response time.”</em> For a human, that’s obviously related. For a machine, not always.</li>
</ul>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/beyond-hype-what-ai-can-do-product-design/1-rag-approach.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="383"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/beyond-hype-what-ai-can-do-product-design/1-rag-approach.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/beyond-hype-what-ai-can-do-product-design/1-rag-approach.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/beyond-hype-what-ai-can-do-product-design/1-rag-approach.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/beyond-hype-what-ai-can-do-product-design/1-rag-approach.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/beyond-hype-what-ai-can-do-product-design/1-rag-approach.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/beyond-hype-what-ai-can-do-product-design/1-rag-approach.png"
			
			sizes="100vw"
			alt="Diagram showing how RAG works: a user prompt triggers semantic search through a knowledge base. Relevant chunks are sent to a language model, which generates an answer based on retrieved content."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Instead of using the model’s memory, it searches your documents and builds a response based on what it finds. (<a href='https://files.smashing.media/articles/beyond-hype-what-ai-can-do-product-design/1-rag-approach.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>These aren’t reasons to avoid RAG or AI altogether. Most of these risks can be mitigated with better preparation of your knowledge base and more precise prompts. So, where do you start?</p>

<h3 id="start-with-three-short-focused-documents">Start With Three Short, Focused Documents</h3>

<p>These three short documents will give your AI assistant just enough context to be genuinely helpful:</p>

<ul>
<li><strong>Product Overview &amp; Scenarios</strong><br />
A brief summary of what your product does and the core user scenarios.</li>
<li><strong>Target Audience</strong><br />
Your main user segments and their key needs or goals.</li>
<li><strong>Research &amp; Experiments</strong><br />
Key insights from interviews, surveys, user testing, or product analytics.</li>
</ul>

<p>Each document should focus on a single topic and ideally stay within 300&ndash;500 words. This makes it easier to search and helps ensure that each retrieved chunk is semantically clean and highly relevant.</p>
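<p>If these documents later feed a RAG pipeline, the splitting step is simple to sketch. The numbers in the TypeScript snippet below mirror the 300&ndash;500-word guideline, and the small overlap keeps ideas from being cut off mid-thought; both are assumptions to tune, not rules:</p>

<pre><code class="language-typescript">// Split a document into overlapping, roughly fixed-size word chunks
// so each retrieved passage stays focused and semantically clean.
function chunkByWords(text: string, chunkSize = 400, overlap = 50): string[] {
  const words = text.split(/\s+/).filter((word) => word.length > 0);
  const chunks: string[] = [];
  for (let start = 0; start &lt; words.length; start += chunkSize - overlap) {
    chunks.push(words.slice(start, start + chunkSize).join(" "));
  }
  return chunks;
}
</code></pre>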

<h3 id="language-matters">Language Matters</h3>

<p>In practice, RAG works best when both the query and the knowledge base are in English. I ran a small experiment to test this assumption, trying a few different combinations:</p>

<ul>
<li><strong>English prompt + English documents</strong>: Consistently accurate and relevant results.</li>
<li><strong>Non-English prompt + English documents</strong>: Quality dropped sharply. The AI struggled to match the query with the right content.</li>
<li><strong>Non-English prompt + non-English documents</strong>: The weakest performance. Even though large language models technically support multiple languages, their internal semantic maps are mostly trained in English. Vector search in other languages tends to be far less reliable.</li>
</ul>

<p><strong>Takeaway</strong>: If you want your AI assistant to deliver precise, meaningful responses, do your RAG work entirely in English, both the data and the queries, a challenge also highlighted in <a href="https://arxiv.org/abs/2408.12345">this 2024 study on multilingual retrieval</a>. This advice applies specifically to RAG setups; for regular chat interactions, you’re free to use other languages.</p>

<h3 id="from-outsider-to-teammate-giving-ai-the-context-it-needs">From Outsider to Teammate: Giving AI the Context It Needs</h3>

<p>Once your AI assistant has proper context, it stops acting like an outsider and starts behaving more like someone who truly understands your product. With well-structured input, it can help you spot blind spots in your thinking, challenge assumptions, and strengthen your ideas &mdash; the way a mid-level or senior designer would.</p>

<p>Here’s an example of a prompt that works well for me:</p>

<blockquote>Your task is to perform a comparative analysis of two features: "Group gift contributions" (described in group_goals.txt) and "Personal savings goals" (described in personal_goals.txt).<br /><br />The goal is to identify potential conflicts in logic, architecture, and user scenarios and suggest visual and conceptual ways to clearly separate these two features in the UI so users can easily understand the difference during actual use.<br /><br />Please include:<ul><li>Possible overlaps in user goals, actions, or scenarios;</li><li>Potential confusion if both features are launched at the same time;</li><li>Any architectural or business-level conflicts (e.g. roles, notifications, access rights, financial logic);</li><li>Suggestions for visual and conceptual separation: naming, color coding, separate sections, or other UI/UX techniques;</li><li>Onboarding screens or explanatory elements that might help users understand both features.</li></ul>If helpful, include a comparison table with key parameters like purpose, initiator, audience, contribution method, timing, access rights, and so on.</blockquote>

<h3 id="ai-needs-context-not-just-prompts">AI Needs Context, Not Just Prompts</h3>

<blockquote>If you want AI to go beyond surface-level suggestions and become a real design partner, it needs the right context. Not just <strong>more</strong> information, but <strong>better</strong>, more structured information.</blockquote>

<p>Building a usable knowledge base isn’t difficult. And you don’t need a full-blown RAG system to get started. Many of these principles work even in a regular chat: <strong>well-organized content</strong> and a <strong>clear question</strong> can dramatically improve how helpful and relevant the AI’s responses are. That’s your first step in turning AI from a novelty into a practical tool in your product design workflow.</p>

<h2 id="stage-2-prototyping-and-visual-experiments">Stage 2: Prototyping and Visual Experiments</h2>

<p><strong>Pushback</strong>: <em>“AI only generates obvious solutions and can’t even build a proper user flow. It’s faster to do it manually.”</em></p>

<p>That’s a fair concern. AI still performs poorly when it comes to building complete, usable screen flows. But for individual elements, especially when exploring new interaction patterns or visual ideas, it can be surprisingly effective.</p>

<p>For example, I needed to prototype a gamified element for a limited-time promotion. The idea was to give users a lottery ticket they could “flip” to reveal a prize. I couldn’t recreate the 3D animation I had in mind in Figma, either manually or with any available plugins. So I described the idea to Claude 4 in Figma Make, and within a few minutes, without writing a single line of code, I had exactly what I needed.</p>

<p>At the prototyping stage, AI can be a strong creative partner in two areas:</p>

<ul>
<li><strong>UI element ideation</strong><br />
It can generate dozens of interactive patterns, including ones you might not think of yourself.</li>
<li><strong>Micro-animation generation</strong><br />
It can quickly produce polished animations that make a concept feel real, which is great for stakeholder presentations or as a handoff reference for engineers.</li>
</ul>

<p>AI can also be applied to multi-screen prototypes, but it’s not as simple as dropping in a set of mockups and getting a fully usable flow. The bigger and more complex the project, the more fine-tuning and manual fixes are required. Where AI already works brilliantly is in focused tasks &mdash; individual screens, elements, or animations &mdash; where it can kick off the thinking process and save hours of trial and error.</p>

<p><iframe src="https://repair-neon-43490219.figma.site/" width="100%" height="600" frameborder="0" allowfullscreen></iframe><br/><em>A quick UI prototype of a gamified promo banner created with Claude 4 in Figma Make. No code or plugins needed.</em><br /></p>

<p>Here’s another valuable way to use AI in design &mdash; as a <strong>stress-testing tool</strong>. Back in 2023, Google Research introduced <a href="https://arxiv.org/abs/2310.15435?utm_source=chatgpt.com">PromptInfuser</a>, an internal Figma plugin that allowed designers to attach prompts directly to UI elements and simulate semi-functional interactions within real mockups. Their goal wasn’t to generate new UI, but to check how well AI could operate <em>inside</em> existing layouts &mdash; placing content into specific containers, handling edge-case inputs, and exposing logic gaps early.</p>

<p>The results were striking: designers using PromptInfuser were up to 40% more effective at catching UI issues and aligning the interface with real-world input &mdash; a clear gain in design accuracy, not just speed.</p>

<p>That closely reflects my experience with Claude 4 and Figma Make: when AI operates within a real interface structure, rather than starting from a blank canvas, it becomes a much more reliable partner. It helps test your ideas, not just generate them.</p>

<div class="partners__lead-place"></div>

<h2 id="stage-3-finalizing-the-interface-and-visual-style">Stage 3: Finalizing The Interface And Visual Style</h2>

<p><strong>Pushback</strong>: <em>“AI can’t match our visual style. It’s easier to just do it by hand.”</em></p>

<p>This is one of the most common frustrations when using AI in design. Even if you upload your color palette, fonts, and components, the results often don’t feel like they belong in your product. They tend to be either overly decorative or overly simplified.</p>

<p>And this is a real limitation. In my experience, today’s models still struggle to reliably apply a design system, even if you provide a component structure or JSON files with your styles. I tried several approaches:</p>

<ul>
<li><strong>Direct integration with a component library.</strong><br />
I used Figma Make (powered by Claude) and connected our library. This was the least effective method: although the AI attempted to use components, the layouts were often broken, and the visuals were overly conservative. <a href="https://forum.figma.com/ask-the-community-7/figma-make-library-support-42423?utm_source=chatgpt.com">Other designers</a> have run into similar issues, noting that library support in Figma Make is still limited and often unstable.</li>
<li><strong>Uploading styles as JSON.</strong><br />
Instead of a full component library, I tried uploading only the exported styles &mdash; colors, fonts &mdash; in a JSON format (see the sketch after this list). The results improved: layouts looked more modern, but the AI still made mistakes in how styles were applied.</li>
<li><strong>Two-step approach: structure first, style second.</strong><br />
What worked best was separating the process. First, I asked the AI to generate a layout and composition without any styling. Once I had a solid structure, I followed up with a request to apply the correct styles from the same JSON file. This produced the most usable result &mdash; though still far from pixel-perfect.</li>
</ul>
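<p>For reference, the “styles as JSON” input in the second approach doesn’t need to be elaborate. The sketch below shows the rough shape; the token names and values are invented, and real exports from your design tool will differ in naming and nesting:</p>

<pre><code class="language-typescript">// Invented design tokens, shown as a TypeScript object for readability;
// the same structure works as a plain JSON file.
const designTokens = {
  color: {
    primary: "#3B5BDB",
    surface: "#FFFFFF",
    textPrimary: "#1A1B1E",
    textSecondary: "#5C5F66",
  },
  font: {
    family: "Inter, sans-serif",
    sizeBody: "16px",
    sizeHeading: "24px",
    weightBold: 700,
  },
  radius: {
    card: "12px",
    button: "8px",
  },
};
</code></pre>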














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/beyond-hype-what-ai-can-do-product-design/3-ui-screens-claude-sonnet.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="535"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/beyond-hype-what-ai-can-do-product-design/3-ui-screens-claude-sonnet.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/beyond-hype-what-ai-can-do-product-design/3-ui-screens-claude-sonnet.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/beyond-hype-what-ai-can-do-product-design/3-ui-screens-claude-sonnet.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/beyond-hype-what-ai-can-do-product-design/3-ui-screens-claude-sonnet.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/beyond-hype-what-ai-can-do-product-design/3-ui-screens-claude-sonnet.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/beyond-hype-what-ai-can-do-product-design/3-ui-screens-claude-sonnet.png"
			
			sizes="100vw"
			alt="Three mobile UI screens showing how different design system setups affect visual output: with component library, with JSON styles, and without any styles — all generated by Claude Sonnet 4 from the same prompt."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      From left to right: prompt with attached library in Figma, prompt with styles in JSON, and raw prompt. All generated using Claude Sonnet 4 with the same input. (<a href='https://files.smashing.media/articles/beyond-hype-what-ai-can-do-product-design/3-ui-screens-claude-sonnet.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>So yes, AI still can’t help you finalize your UI. It doesn’t replace hand-crafted design work. But it’s very useful in other ways:</p>

<ul>
<li>Quickly creating a <strong>visual concept</strong> for discussion.</li>
<li>Generating <strong>“what if” alternatives</strong> to existing mockups.</li>
<li>Exploring how your interface might look in a different style or direction.</li>
<li>Acting as a <strong>second pair of eyes</strong> by giving feedback, pointing out inconsistencies or overlooked issues you might miss when tired or too deep in the work.</li>
</ul>

<blockquote class="pull-quote">
  <p>
    <a class="pull-quote__link" aria-label="Share on Twitter" href="https://twitter.com/share?text=%0aAI%20won%e2%80%99t%20save%20you%20five%20hours%20of%20high-fidelity%20design%20time,%20since%20you%e2%80%99ll%20probably%20spend%20that%20long%20fixing%20its%20output.%20But%20as%20a%20visual%20sparring%20partner,%20it%e2%80%99s%20already%20strong.%20If%20you%20treat%20it%20like%20a%20source%20of%20alternatives%20and%20fresh%20perspectives,%20it%20becomes%20a%20valuable%20creative%20collaborator.%0a&url=https://smashingmagazine.com%2f2025%2f08%2fbeyond-hype-what-ai-can-do-product-design%2f">
      
AI won’t save you five hours of high-fidelity design time, since you’ll probably spend that long fixing its output. But as a visual sparring partner, it’s already strong. If you treat it like a source of alternatives and fresh perspectives, it becomes a valuable creative collaborator.

    </a>
  </p>
  <div class="pull-quote__quotation">
    <div class="pull-quote__bg">
      <span class="pull-quote__symbol">“</span></div>
  </div>
</blockquote>

<h2 id="stage-4-product-feedback-and-analytics-ai-as-a-thinking-exosuit">Stage 4: Product Feedback And Analytics: AI As A Thinking Exosuit</h2>

<p>Product designers have come a long way. We used to create interfaces in Photoshop based on predefined specs. Then we delved deeper into UX, mapping user flows, conducting interviews, and understanding user behavior. Now, with AI, we gain access to yet another level: data analysis, which used to be the exclusive domain of product managers and analysts.</p>

<p>As <a href="https://www.smashingmagazine.com/2025/03/how-to-argue-against-ai-first-research/">Vitaly Friedman rightly pointed out in one of his columns</a>, trying to replace real UX interviews with AI can lead to false conclusions as models tend to generate an average experience, not a real one. <strong>The strength of AI isn’t in inventing data but in processing it at scale.</strong></p>

<p>Let me give a real example. We launched an exit survey for users who were leaving our service. Within a week, we collected over 30,000 responses across seven languages.</p>

<p>Simply counting the percentages for each of the five predefined reasons wasn’t enough. I wanted to know:</p>

<ul>
<li>Are there specific times of day when users churn more?</li>
<li>Do the reasons differ by region?</li>
<li>Is there a correlation between user exits and system load?</li>
</ul>

<p>The real challenge was&hellip; figuring out what cuts and angles were even worth exploring. The entire technical process, from analysis to visualizations, was done “for me” by Gemini, working inside Google Sheets. This task took me about two hours in total. Without AI, not only would it have taken much longer, but I probably wouldn’t have been able to reach that level of insight on my own at all.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/beyond-hype-what-ai-can-do-product-design/4-gemini-google-sheets.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="379"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/beyond-hype-what-ai-can-do-product-design/4-gemini-google-sheets.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/beyond-hype-what-ai-can-do-product-design/4-gemini-google-sheets.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/beyond-hype-what-ai-can-do-product-design/4-gemini-google-sheets.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/beyond-hype-what-ai-can-do-product-design/4-gemini-google-sheets.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/beyond-hype-what-ai-can-do-product-design/4-gemini-google-sheets.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/beyond-hype-what-ai-can-do-product-design/4-gemini-google-sheets.png"
			
			sizes="100vw"
			alt="Bar charts showing cancellation reasons by hour and by currency, generated with Gemini in Google Sheets."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      A few examples of output I’ve got from Gemini in Google Sheets. (<a href='https://files.smashing.media/articles/beyond-hype-what-ai-can-do-product-design/4-gemini-google-sheets.png'>Large preview</a>)
    </figcaption>
  
</figure>
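
<p>If you’d rather run these cuts outside Google Sheets, the same questions translate directly into a few lines of pandas. The sketch below is an illustration, not the Gemini workflow itself; the CSV file and its <code>timestamp</code>, <code>region</code>, and <code>reason</code> columns are hypothetical:</p>

<pre><code class="language-python"># A sketch of the same cuts in pandas. The CSV file and its column
# names ("timestamp", "region", "reason") are hypothetical.
import pandas as pd

df = pd.read_csv("exit_survey.csv", parse_dates=["timestamp"])

# Are there specific times of day when users churn more?
exits_by_hour = df.groupby(df["timestamp"].dt.hour)["reason"].count()

# Do the stated reasons differ by region?
reasons_by_region = pd.crosstab(df["region"], df["reason"], normalize="index")

# Correlating exits with system load would require joining the hourly
# exit counts against a second time series of load metrics.
print(exits_by_hour)
print(reasons_by_region.round(2))
</code></pre>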

<blockquote class="pull-quote">
  <p>
    <a class="pull-quote__link" aria-label="Share on Twitter" href="https://twitter.com/share?text=%0aAI%20enables%20near%20real-time%20work%20with%20large%20data%20sets.%20But%20most%20importantly,%20it%20frees%20up%20your%20time%20and%20energy%20for%20what%e2%80%99s%20truly%20valuable:%20asking%20the%20right%20questions.%0a&url=https://smashingmagazine.com%2f2025%2f08%2fbeyond-hype-what-ai-can-do-product-design%2f">
      
AI enables near real-time work with large data sets. But most importantly, it frees up your time and energy for what’s truly valuable: asking the right questions.

    </a>
  </p>
  <div class="pull-quote__quotation">
    <div class="pull-quote__bg">
      <span class="pull-quote__symbol">“</span></div>
  </div>
</blockquote>

<p><strong>A few practical notes</strong>: Working with large data sets is still challenging for models without strong reasoning capabilities. In my experiments, I used Gemini embedded in Google Sheets and cross-checked the results using ChatGPT o3. Other models, including the standalone Gemini 2.5 Pro, often produced incorrect outputs or simply refused to complete the task.</p>

<div class="partners__lead-place"></div>

<h2 id="ai-is-not-an-autopilot-but-a-co-pilot">AI Is Not An Autopilot But A Co-Pilot</h2>

<p>AI in design is only as good as the questions you ask it. It doesn’t do the work for you. It doesn’t replace your thinking. But it helps you move faster, explore more options, validate ideas, and focus on the hard parts instead of burning time on repetitive ones. Sometimes it’s still faster to design things by hand. Sometimes it makes more sense to delegate to a junior designer.</p>

<p>But increasingly, AI is becoming the one who suggests, sharpens, and accelerates. Don’t wait to build the perfect AI workflow. Start small. And that might be the first real step in turning AI from a curiosity into a trusted tool in your product design process.</p>

<h2 id="let-s-summarize">Let’s Summarize</h2>

<ul>
<li>If you just paste a full doc into chat, the model often misses important points, especially things buried in the middle. That’s <strong>the “lost in the middle” problem</strong>.</li>
<li><strong>The RAG approach</strong> helps by pulling only the most relevant pieces from your documents, so responses are faster, more accurate, and grounded in real context (see the retrieval sketch after this list).</li>
<li><strong>Clear, focused prompts</strong> work better. Narrow the scope, define the output, and use familiar terms to help the model stay on track.</li>
<li><strong>A well-structured knowledge base</strong> makes a big difference. Organizing your content into short, topic-specific docs helps reduce noise and keep answers sharp.</li>
<li><strong>Use English for both your prompts and your documents.</strong> Even multilingual models are most reliable when working in English, especially for retrieval.</li>
<li>Most importantly: <strong>treat AI as a creative partner</strong>. It won’t replace your skills, but it can spark ideas, catch issues, and speed up the tedious parts.</li>
</ul>
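
<p>If you want to see what “pulling only the most relevant pieces” looks like in practice, here’s a minimal retrieval sketch. TF-IDF similarity from scikit-learn stands in for the vector embeddings a production RAG setup would use, and the documents and question are invented for illustration:</p>

<pre><code class="language-python"># Minimal retrieval sketch: pull only the most relevant doc into the prompt.
# TF-IDF similarity stands in for the vector embeddings a real RAG
# pipeline would use; the docs and question are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "Onboarding flow: users create a workspace, then invite teammates.",
    "Billing: invoices are sent monthly; plans can change anytime.",
    "Design tokens: colors and fonts are exported as JSON from Figma.",
]
question = "How do users set up a workspace during onboarding?"

# Vectorize the docs and the question in one shared vocabulary.
vectors = TfidfVectorizer().fit_transform(docs + [question])
scores = cosine_similarity(vectors[len(docs)], vectors[:len(docs)]).ravel()

# Ground the prompt in the single best match instead of pasting everything.
top_doc = docs[scores.argmax()]
prompt = f"Answer using only this context:\n{top_doc}\n\nQuestion: {question}"
print(prompt)
</code></pre>

<p>The principle is the same at any scale: retrieve first, then ground the prompt in only what was retrieved.</p>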

<h3 id="further-reading">Further Reading</h3>

<ul>
<li>“<a href="https://standardbeagle.com/ai-assisted-design-workflows/#what-ai-actually-does-in-ux-workflows">AI-assisted Design Workflows: How UX Teams Move Faster Without Sacrificing Quality</a>”, Cindy Brummer<br />
<em>This piece is a perfect prequel to my article. It explains how to start integrating AI into your design process, how to structure your workflow, and which tasks AI can reasonably take on — before you dive into RAG or idea generation.</em></li>
<li>“<a href="https://www.figma.com/blog/8-ways-to-build-with-figma-make/">8 essential tips for using Figma Make</a>”, Alexia Danton<br />
<em>While this article focuses on Figma Make, the recommendations are broadly applicable. It offers practical advice that will make your work with AI smoother, especially if you’re experimenting with visual tools and structured prompting.</em></li>
<li>“<a href="https://blogs.nvidia.com/blog/what-is-retrieval-augmented-generation/">What Is Retrieval-Augmented Generation aka RAG</a>”, Rick Merritt<br />
<em>If you want to go deeper into how RAG actually works, this is a great starting point. It breaks down key concepts like vector search and retrieval in plain terms and explains why these methods often outperform long prompts alone.</em></li>
</ul>

<div class="signature">
  <img src="https://www.smashingmagazine.com/images/logo/logo--red.png" alt="Smashing Editorial" width="35" height="46" loading="lazy" decoding="async" />
  <span>(yk)</span>
</div>


              </article>
            </body>
          </html>
        ]]></content:encoded></item><item><author>Ilia Kanazin &amp; Marina Chernyshova</author><title>Designing With AI, Not Around It: Practical Advanced Techniques For Product Design Use Cases</title><link>https://www.smashingmagazine.com/2025/08/designing-with-ai-practical-techniques-product-design/</link><pubDate>Mon, 11 Aug 2025 08:00:00 +0000</pubDate><guid>https://www.smashingmagazine.com/2025/08/designing-with-ai-practical-techniques-product-design/</guid><description>Prompting isn’t just about writing better instructions, but about designing better thinking. Ilia and Marina explore how advanced prompting can empower different product &amp;amp; design use cases, speeding up your workflow and improving results, from research and brainstorming to testing and beyond. Let’s dive in.</description><content:encoded><![CDATA[
          <html>
            <head>
              <meta charset="utf-8">
              <link rel="canonical" href="https://www.smashingmagazine.com/2025/08/designing-with-ai-practical-techniques-product-design/" />
              <title>Designing With AI, Not Around It: Practical Advanced Techniques For Product Design Use Cases</title>
            </head>
            <body>
              <article>
                <header>
                  <h1>Designing With AI, Not Around It: Practical Advanced Techniques For Product Design Use Cases</h1>
                  
                    
                    <address>Ilia Kanazin &amp; Marina Chernyshova</address>
                  
                  <time datetime="2025-08-11T08:00:00&#43;00:00" class="op-published">2025-08-11T08:00:00+00:00</time>
                  <time datetime="2025-08-11T08:00:00&#43;00:00" class="op-modified">2025-12-25T10:32:38+00:00</time>
                </header>
                
                

<p>AI is almost everywhere &mdash; it writes text, makes music, generates code, draws pictures, runs research, chats with you &mdash; and apparently even <a href="https://hbr.org/2025/04/how-people-are-really-using-gen-ai-in-2025">understands people better than they understand themselves</a>?!</p>

<p>It’s a lot to take in. The pace is wild, and new tools pop up faster than anyone has time to try them. Amid the chaos, one thing is clear: this isn’t hype; it’s structural change.</p>

<p>According to the <a href="https://www.weforum.org/publications/the-future-of-jobs-report-2025/"><em>Future of Jobs Report 2025</em></a> by the World Economic Forum, one of the fastest-growing, most in-demand skills for the next five years is the <strong>ability to work with AI and Big Data</strong>. That applies to almost every role &mdash; including product design.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/designing-with-ai-practical-techniques-product-design/1-skills-on-the-rise-2025.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="673"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-with-ai-practical-techniques-product-design/1-skills-on-the-rise-2025.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/designing-with-ai-practical-techniques-product-design/1-skills-on-the-rise-2025.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/designing-with-ai-practical-techniques-product-design/1-skills-on-the-rise-2025.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/designing-with-ai-practical-techniques-product-design/1-skills-on-the-rise-2025.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/designing-with-ai-practical-techniques-product-design/1-skills-on-the-rise-2025.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-with-ai-practical-techniques-product-design/1-skills-on-the-rise-2025.png"
			
			sizes="100vw"
			alt="A figure showing skills on the rise in 2025-2030, which places AI and big data on the first place"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      (<a href='https://files.smashing.media/articles/designing-with-ai-practical-techniques-product-design/1-skills-on-the-rise-2025.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>What do companies want most from their teams? Right: efficiency. And AI can make people far more efficient. Without it, we’d easily spend three times as long on tasks like replying to our managers. We’re learning to work with it, but many of us are still figuring out how to meet the rising bar.</p>

<p>That’s especially important for designers, whose work is all about empathy, creativity, critical thinking, and working across disciplines. It’s a uniquely human mix. At least, that’s what we tell ourselves.</p>

<p>Even as debates rage about AI’s limitations, tools today (June 2025 &mdash; timestamp matters in this fast-moving space) already assist with research, ideation, and testing, sometimes better than expected.</p>

<p>Of course, not everyone agrees. AI hallucinates, loses context, and makes things up. So how can both views exist at the same time? Very simple. It’s because both are true: AI is deeply flawed and surprisingly useful. The trick is knowing how to work with its strengths while managing its weaknesses. The real question isn’t whether AI is good or bad &mdash; it’s how we, as designers, stay sharp, stay valuable, and stay in the loop.</p>

<h2 id="why-prompting-matters">Why Prompting Matters</h2>

<p>Prompting matters more than most people realize because even small tweaks in how you ask can lead to radically different outputs. To see how this works in practice, let’s look at a simple example.</p>

<p>Imagine you want to improve the onboarding experience in your product. On the left, you have the prompt you send to AI. On the right, the response you get back.</p>

<table class="tablesaw break-out">
    <thead>
        <tr>
            <th>Input</th>
            <th>Output</th>
        </tr>
    </thead>
    <tbody>
        <tr>
            <td>How to improve onboarding in a SaaS product?</td>
            <td>👉 Broad suggestions: checklists, empty states, welcome modals…</td>
        </tr>
        <tr>
            <td>How to improve onboarding in Product A’s workspace setup flow?</td>
            <td>👉 Suggestions focused on workspace setup…</td>
        </tr>
        <tr>
            <td>How to improve onboarding in Product A’s workspace setup step to address user confusion?</td>
            <td>👉 ~10 common pain points with targeted UX fixes for each…</td>
        </tr>
    <tr>
            <td>How to improve onboarding in Product A by redesigning the workspace setup screen to reduce drop-off, with detailed reasoning?</td>
            <td>👉 ~10 paragraphs covering a specific UI change, rationale, and expected impact…</td>
        </tr>
    </tbody>
</table>

<p>This side-by-side shows just how much even the smallest prompt details can change what AI gives you.</p>

<p>Talking to an AI model isn’t that different from talking to a person: the more clearly you explain your thinking, the better the communication and the results.</p>

<blockquote>Advanced prompting is about moving beyond one-shot, throwaway prompts. It’s an iterative, structured process of refining your inputs using different techniques so you can guide the AI toward more useful results. It focuses on being intentional with every word you put in, giving the AI not just the task but also the path to approach it step by step, so it can actually do the job.</blockquote>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/designing-with-ai-practical-techniques-product-design/2-advanced-prompting.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-with-ai-practical-techniques-product-design/2-advanced-prompting.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/designing-with-ai-practical-techniques-product-design/2-advanced-prompting.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/designing-with-ai-practical-techniques-product-design/2-advanced-prompting.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/designing-with-ai-practical-techniques-product-design/2-advanced-prompting.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/designing-with-ai-practical-techniques-product-design/2-advanced-prompting.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-with-ai-practical-techniques-product-design/2-advanced-prompting.png"
			
			sizes="100vw"
			alt="Advanced prompting vs basic promting"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      (<a href='https://files.smashing.media/articles/designing-with-ai-practical-techniques-product-design/2-advanced-prompting.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>Where basic prompting throws your question at the model and hopes for a quick answer, advanced prompting helps you <strong>explore options</strong>, <strong>evaluate branches of reasoning</strong>, and <strong>converge on clear, actionable outputs</strong>.</p>

<p>But that doesn’t mean simple prompts are useless. On the contrary, short, focused prompts work well when the task is narrow, factual, or time-sensitive. They’re great for idea generation, quick clarifications, or anything where deep reasoning isn’t required. <strong>Think of prompting as a scale, not a binary.</strong> The simpler the task, the faster a lightweight prompt can get the job done. The more complex the task, the more structure it needs.</p>

<p>In this article, we’ll dive into how advanced prompting can empower different product &amp; design use cases, speeding up your workflow and improving your results &mdash; whether you’re researching, brainstorming, testing, or beyond. Let’s dive in.</p>

<h2 id="practical-cases">Practical Cases</h2>

<p>In the next section, we’ll explore six practical prompting techniques that we’ve found most useful in real product design work. These aren’t abstract theories &mdash; each one is grounded in hands-on experience, tested across research, ideation, and evaluation tasks. Think of them as modular tools: you can mix, match, and adapt them depending on your use case. For each, we’ll explain the thinking behind it and walk through a sample prompt.</p>

<p><strong>Important note:</strong> The prompts you’ll see are not copy-paste recipes. Some are structured templates you can reuse with small tweaks; others are more specific, meant to spark your thinking. Use them as scaffolds, not scripts.</p>

<h3 id="1-task-decomposition-by-jtbd">1. Task Decomposition By JTBD</h3>

<p><em>Technique: Role, Context, Instructions template + Checkpoints (with self-reflection)</em></p>

<p>Before solving any problem, there’s a critical step we often overlook: breaking the problem down into clear, actionable parts.</p>

<p>Jumping straight into execution feels fast, but it’s risky. We might end up solving the wrong thing, or solving it the wrong way. That’s where GPT can help: not just by generating ideas, but by helping us think more clearly about the structure of the problem itself.</p>

<p>There are many ways to break down a task. One of the most useful in product work is the <strong>Jobs To Be Done (JTBD) framework</strong>. Let’s see how we can use advanced prompting to apply JTBD decomposition to any task.</p>

<p>Good design starts with understanding the user, the problem, and the context. Good prompting? Pretty much the same. That’s why most solid prompts include three key parts: Role, Context, and Instructions. If needed, you can also add the expected format and any constraints.</p>

<p>In this example, we’re going to break down a task into smaller jobs and add self-checkpoints to the prompt, so the AI can pause, reflect, and self-verify along the way.</p>

<blockquote><strong>Role</strong><br />Act as a senior product strategist and UX designer with deep expertise in Jobs To Be Done (JTBD) methodology and user-centered design. You think in terms of user goals, progress-making moments, and unmet needs &mdash; similar to approaches used at companies like Intercom, Basecamp, or IDEO.<br /><br /><strong>Context</strong><br />You are helping a product team break down a broad user or business problem into a structured map of Jobs To Be Done. This decomposition will guide discovery, prioritization, and solution design.<br /><br /><strong>Task & Instructions</strong><br />[👉 DESCRIBE THE USER TASK OR PROBLEM 👈🏼]<br />Use JTBD thinking to uncover:<ul><li>The main functional job the user is trying to get done;</li><li>Related emotional or social jobs;</li><li>Sub-jobs or tasks users must complete along the way;</li><li>Forces of progress and barriers that influence behavior.</li></ul><br /><strong>Checkpoints</strong><br />Before finalizing, check yourself:<ul><li>Are the jobs clearly goal-oriented and not solution-oriented?</li><li>Are sub-jobs specific steps toward the main job?</li><li>Are emotional/social jobs captured?</li><li>Are user struggles or unmet needs listed?</li></ul><br />If anything’s missing or unclear, revise and explain what was added or changed.</blockquote>

<p>With a simple one-sentence prompt, you’ll likely get a high-level list of user needs or feature ideas. An advanced approach can produce a structured JTBD breakdown of a specific user problem, which may include:</p>

<ul>
<li><strong>Main Functional Job</strong>: A clear, goal-oriented statement describing the primary outcome the user wants to achieve.</li>
<li><strong>Emotional &amp; Social Jobs</strong>: Supporting jobs related to how the user wants to feel or be perceived during their progress.</li>
<li><strong>Sub-Jobs</strong>: Step-by-step tasks or milestones the user must complete to fulfill the main job.</li>
<li><strong>Forces of Progress</strong>: A breakdown of motivations (push/pull) and barriers (habits/anxieties) that influence user behavior.</li>
</ul>

<p>But these prompts are most powerful when used with real context. Try it now with your product. Even a quick test can reveal unexpected insights.</p>
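
<p>If you reuse this Role / Context / Instructions / Checkpoints skeleton often, it’s worth templating it so only the task changes between runs. A minimal sketch, with each section’s wording abbreviated for space:</p>

<pre><code class="language-python"># A sketch of the Role/Context/Instructions/Checkpoints skeleton as a
# reusable template; the section texts are abbreviated for space.
def build_prompt(task: str) -> str:
    sections = {
        "Role": "Act as a senior product strategist with deep JTBD expertise.",
        "Context": "You are helping a product team decompose a broad problem "
                   "into a structured map of Jobs To Be Done.",
        "Task and Instructions": f"{task}\nUncover the main functional job, "
                                 "emotional/social jobs, sub-jobs, and forces "
                                 "of progress.",
        "Checkpoints": "Before finalizing, verify the jobs are goal-oriented, "
                       "sub-jobs are concrete steps, and struggles are listed.",
    }
    return "\n\n".join(f"## {name}\n{text}" for name, text in sections.items())

print(build_prompt("Users abandon workspace setup halfway through onboarding."))
</code></pre>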

<h3 id="2-competitive-ux-audit">2. Competitive UX Audit</h3>

<p><em>Technique: Attachments + Reasoning Before Understanding + Tree of Thought (ToT)</em></p>

<p>Sometimes, you don’t need to design something new &mdash; you need to understand what already exists.</p>

<p>Whether you’re doing a competitive analysis, learning from rivals, or benchmarking features, the first challenge is making sense of someone else’s design choices. What’s the feature really for? Who’s it helping? Why was it built this way?</p>

<p>Instead of rushing into critique, we can use GPT to reverse-engineer the thinking behind a product &mdash; before judging it. In this case, start by:</p>

<ol>
<li>Grab the competitor’s documentation for the feature you want to analyze.</li>
<li>Save it as a PDF, then head over to ChatGPT (or another model).</li>
<li>Before jumping into the audit, ask the model to first make sense of the documentation. This technique is called <strong>Reasoning Before Understanding (RBU)</strong>. That means before you ask for critique, you ask for <strong>interpretation</strong>. This helps AI build a more accurate mental model &mdash; and avoids jumping to conclusions.</li>
</ol>

<blockquote><strong>Role</strong><br />You are a senior UX strategist and cognitive design analyst. Your expertise lies in interpreting digital product features based on minimal initial context, inferring purpose, user intent, and mental models behind design decisions before conducting any evaluative critique.<br /><br /><strong>Context</strong><br />You’ve been given internal documentation and screenshots of a feature. The goal is not to evaluate it yet, but to understand what it’s doing, for whom, and why.<br /><br /><strong>Task & Instructions</strong><br />Review the materials and answer:<ul><li>What is this feature for?</li><li>Who is the intended user?</li><li>What tasks or scenarios does it support?</li><li>What assumptions does it make about the user?</li><li>What does its structure suggest about priorities or constraints?</li></ul></blockquote>

<p>Once you get the first reply, take a moment to respond: clarify, correct, or add nuance to GPT’s conclusions. This helps align the model’s mental frame with your own.</p>

<p>For the audit part, we’ll use something called the Tree of Thought (ToT) approach.</p>

<p><strong>Tree of Thought (ToT)</strong> is a prompting strategy that asks the AI to “think in branches.” Instead of jumping to a single answer, the model explores multiple reasoning paths, compares outcomes, and revises logic before concluding &mdash; like tracing different routes through a decision tree. This makes it perfect for handling more complex UX tasks.</p>

<blockquote>You are now performing a UX audit based on your understanding of the feature. You’ll identify potential problems, alternative design paths, and trade-offs using a Tree of Thought approach, i.e., thinking in branches, comparing different reasoning paths before concluding.</blockquote>

<p>or</p>

<blockquote>Convert your understanding of the feature into a set of Jobs-To-Be-Done statements from the user’s perspective using a Tree of Thought approach.</blockquote>

<blockquote>List implicit assumptions this feature makes about the user's behavior, workflow, or context using a Tree of Thought approach.</blockquote>

<blockquote>Propose alternative versions of this feature that solve the same job using different interaction or flow mechanics using a Tree of Thought approach.</blockquote>

<h3 id="3-ideation-with-an-intellectual-opponent">3. Ideation With An Intellectual Opponent</h3>

<p><em>Technique: Role Conditioning + Memory Update</em></p>

<p>When you’re working on creative or strategic problems, there’s a common trap: AI often just agrees with you or tries to please your way of thinking. It treats your ideas like gospel and tells you they’re great &mdash; even when they’re not.</p>

<p>So how do you avoid this? How do you get GPT to challenge your assumptions and act more like a <strong>critical thinking partner</strong>? Simple: tell it to, and ask it to remember.</p>

<blockquote><strong>Instructions</strong><br />From now on, remember to follow this mode unless I explicitly say otherwise.<br /><br />Do not take my conclusions at face value. Your role is not to agree or assist blindly, but to serve as a sharp, respectful intellectual opponent.<br /><br />Every time I present an idea, do the following:<ul><li>Interrogate my assumptions: What am I taking for granted?</li><li>Present counter-arguments: Where could I be wrong, misled, or overly confident?</li><li>Test my logic: Is the reasoning sound, or are there gaps, fallacies, or biases?</li><li>Offer alternatives: Not for the sake of disagreement, but to expand perspective.</li><li>Prioritize truth and clarity over consensus: Even when it’s uncomfortable.</li></ul>Maintain a constructive, rigorous, truth-seeking tone. Don’t argue for the sake of it. Argue to sharpen thought, expose blind spots, and help me reach clearer, stronger conclusions.<br /><br />This isn’t a debate. It’s a collaboration aimed at insight.</blockquote>

<h3 id="4-requirements-for-concepting">4. Requirements For Concepting</h3>

<p><em>Technique: Requirement-Oriented + Meta-Prompting</em></p>

<p>This one deserves a whole article on its own, but let’s lay the groundwork here.</p>

<p>When you’re building quick prototypes or UI screens using tools like v0, Bolt, Lovable, UX Pilot, etc., your prompt needs to be better than most PRDs you’ve worked with. Why? Because the output depends entirely on how clearly and specifically you describe the goal.</p>

<p>The catch? Writing that kind of prompt is hard. So instead of jumping straight to the design prompt, try writing a <strong>meta-prompt first</strong>. That is a prompt that asks GPT to help you write a better prompt. Prompting about prompting, prompt-ception, if you will.</p>

<p>Here’s how to make that work: Feed GPT what you already know about the app or the screen. Then ask it to treat things like information architecture, layout, and user flow as variables it can play with. That way, you don’t just get one rigid idea &mdash; you get multiple concept directions to explore.</p>

<blockquote><strong>Role</strong><br />You are a product design strategist working with AI to explore early-stage design concepts.<br /><br /><strong>Goal</strong><br />Generate 3 distinct prompt variations for designing a Daily Wellness Summary single screen in a mobile wellness tracking app for Lovable/Bolt/v0.<br /><br />Each variation should experiment with a different Information Architecture and Layout Strategy. You don’t need to fully specify the IA or layout &mdash; just take a different angle in each prompt. For example, one may prioritize user state, another may prioritize habits or recommendations, and one may use a card layout while another uses a scroll feed.<br /><br /><strong>User context</strong><br />The target user is a busy professional who checks this screen once or twice a day (morning/evening) to log their mood, energy, and sleep quality, and to receive small nudges or summaries from the app.<br /><br /><strong>Visual style</strong><br />Keep the tone calm and approachable.<br /><br /><strong>Format</strong><br />Each of the 3 prompt variations should be structured clearly and independently.<br /><br />Remember: The key difference between the three prompts should be the underlying IA and layout logic. You don’t need to over-explain &mdash; just guide the design generator toward different interpretations of the same user need.</blockquote>

<h3 id="5-from-cognitive-walkthrough-to-testing-hypothesis">5. From Cognitive Walkthrough To Testing Hypothesis</h3>

<p><em>Technique: Casual Tree of Thought + Causal Reasoning + Multi-Roles + Self-Reflection</em></p>

<p>Cognitive walkthrough is a powerful way to break down a user action and check whether the steps are intuitive.</p>

<p><strong>Example</strong>: “User wants to add a task” → Do they know where to click? What to do next? Do they know it worked?</p>

<p>We’ve found this technique super useful for reviewing our own designs. Sometimes there’s already a mockup; other times we’re still arguing with a PM about what should go where. Either way, GPT can help.</p>

<p>Here’s an advanced way to run that process:</p>

<blockquote><strong>Context</strong><br />You’ve been given a screenshot of a screen where users can create new tasks in a project management app. The main action the user wants to perform is “add a task”. Simulate behavior from two user types: a beginner with no prior experience and a returning user familiar with similar tools.<br /><br /><strong>Task & Instructions</strong><br />Go through the UI step by step and evaluate:<ol><li>Will the user know what to do at each step?</li><li>Will they understand how to perform the action?</li><li>Will they know they’ve succeeded?</li></ol>For each step, consider alternative user paths (if multiple interpretations of the UI exist). Use a casual Tree-of-Thought method.<br /><br />At each step, reflect: what assumptions is the user making here? What visual feedback would help reduce uncertainty?<br /><br /><strong>Format</strong><br />Use a numbered list for each step. For each, add observations, possible confusions, and UX suggestions.<br /><br /><strong>Limits</strong><br />Don’t assume prior knowledge unless it’s visually implied.<br />Do not limit analysis to a single user type.</blockquote>

<p>Cognitive walkthroughs are great, but they get even more useful when they lead to testable hypotheses.</p>

<p>After running the walkthrough, you’ll usually uncover moments that might confuse users. Instead of leaving that as a guess, turn those into concrete UX testing hypotheses.</p>

<p>We ask GPT to not only flag potential friction points, but to help define how we’d validate them with real users: using a task, a question, or observable behavior.</p>

<blockquote><strong>Task & Instructions</strong><br />Based on your previous cognitive walkthrough:<ol><li>Extract all potential usability hypotheses from the walkthrough.</li><li>For each hypothesis:<ul><li>Assess whether it can be tested through moderated or unmoderated usability testing.</li><li>Explain what specific UX decision or design element may cause this issue. Use causal reasoning.</li><li>For testable hypotheses:<ul><li>Propose a specific usability task or question.</li><li>Define a clear validation criterion (how you’ll know if the hypothesis is confirmed or disproved).</li><li>Evaluate feasibility and signal strength of the test (e.g., how easy it is to test, and how confidently it can validate the hypothesis).</li><li>Assign a priority score based on Impact, Confidence, and Ease (ICE).</li></ul></li></ul></li></ol><strong>Limits</strong><br />Don’t invent hypotheses not rooted in your walkthrough output. Only propose tests where user behavior or responses can provide meaningful validation. Skip purely technical or backend concerns.</blockquote>

<h3 id="6-cross-functional-feedback">6. Cross-Functional Feedback</h3>

<p><em>Technique: Multi-Roles</em></p>

<p>Good design is co-created. And good designers are used to working with cross-functional teams: PMs, engineers, analysts, QAs, you name it. Part of the job is turning scattered feedback into clear action items.</p>

<p>Earlier, we talked about how giving AI a “role” helps sharpen its responses. Now let’s level that up: what if we give it <strong>multiple roles at once</strong>? This is called <strong>multi-role prompting</strong>. It’s a great way to simulate a design review with input from different perspectives. You get quick insights and a more well-rounded critique of your design.</p>

<blockquote><strong>Role</strong><br />You are a cross-functional team of experts evaluating a new dashboard design:<ul><li>PM (focus: user value & prioritization)</li><li>Engineer (focus: feasibility & edge cases)</li><li>QA tester (focus: clarity & testability)</li><li>Data analyst (focus: metrics & clarity of reporting)</li><li>Designer (focus: consistency & usability)</li></ul><strong>Context</strong><br />The team is reviewing a mockup for a new analytics dashboard for internal use.<br /><br /><strong>Task & Instructions</strong><br />For each role:<ol><li>What stands out immediately?</li><li>What concerns might this role have?</li><li>What feedback or suggestions would they give?</li></ol></blockquote>
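
<p>If you’d rather run this review programmatically than in a chat window, one option is to loop over the roles and collect one pass per perspective. A sketch assuming the OpenAI Python client; the model name, role list, and design summary are placeholders:</p>

<pre><code class="language-python"># A sketch of a multi-role design review loop, assuming the OpenAI Python
# client; the model name, roles, and design summary are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

roles = {
    "PM": "user value and prioritization",
    "Engineer": "feasibility and edge cases",
    "QA tester": "clarity and testability",
}

design_summary = "A new internal analytics dashboard mockup (description here)."

for role, focus in roles.items():
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": f"You are a {role}. Focus on {focus}."},
            {"role": "user", "content": f"Review this design:\n{design_summary}\n"
                                        "What stands out, what concerns you, "
                                        "and what would you suggest?"},
        ],
    )
    print(f"--- {role} ---")
    print(response.choices[0].message.content)
</code></pre>

<p>Running each role as a separate call keeps the perspectives from bleeding into one another; a single multi-role prompt, as above, trades that isolation for speed.</p>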

<h2 id="designing-with-ai-is-a-skill-not-a-shortcut">Designing With AI Is A Skill, Not A Shortcut</h2>

<p>By now, you’ve seen that prompting isn’t just about typing better instructions. It’s about <strong>designing better thinking</strong>.</p>

<p>We’ve explored several techniques, and each is useful in different contexts:</p>

<table class="tablesaw break-out">
    <thead>
        <tr>
            <th>Technique</th>
            <th>When to use It</th>
        </tr>
    </thead>
    <tbody>
        <tr>
            <td>Role + Context + Instructions + Constraints</td>
            <td>Anytime you want consistent, focused responses (especially in research, decomposition, and analysis).</td>
        </tr>
        <tr>
            <td>Checkpoints / Self-verification</td>
            <td>When accuracy, structure, or layered reasoning matters. Great for complex planning or JTBD breakdowns.</td>
        </tr>
        <tr>
            <td>Reasoning Before Understanding (RBU)</td>
            <td>When input materials are large or ambiguous (like docs or screenshots). Helps reduce misinterpretation.</td>
        </tr>
    <tr>
            <td>Tree of Thought (ToT)</td>
            <td>When you want the model to explore options, backtrack, compare. Ideal for audits, evaluations, or divergent thinking.</td>
        </tr>
    <tr>
            <td>Meta-prompting</td>
            <td>When you're not sure how to even ask the right question. Use it early in fuzzy or creative concepting.</td>
        </tr>
    <tr>
            <td>Multi-role prompting</td>
            <td>When you need well-rounded, cross-functional critique or to simulate team feedback.</td>
        </tr>
     <tr>
            <td>Memory-updated “opponent” prompting</td>
            <td>When you want to challenge your own logic, uncover blind spots, or push beyond echo chambers.</td>
        </tr>
    </tbody>
</table>

<p>But even the best techniques won’t matter if you use them blindly, so ask yourself:</p>

<ul>
<li>Do I need precision or perspective right now?

<ul>
<li><em>Precision?</em> Try <strong>Role + Checkpoints</strong> for clarity and control.</li>
<li><em>Perspective?</em> Use <strong>Multi-Role</strong> or <strong>Tree of Thought</strong> to explore alternatives.</li>
</ul></li>
<li>Should the model reflect my framing, or break it?

<ul>
<li><em>Reflect it?</em> Use <strong>Role + Context + Instructions</strong>.</li>
<li><em>Break it?</em> Try <strong>Opponent prompting</strong> to challenge assumptions.</li>
</ul></li>
<li>Am I trying to reduce ambiguity, or surface complexity?

<ul>
<li><em>Reduce ambiguity?</em> Use <strong>Meta-prompting</strong> to clarify your ask.</li>
<li><em>Surface complexity?</em> Go with <strong>ToT</strong> or <strong>RBU</strong> to expose hidden layers.</li>
</ul></li>
<li>Is this task about alignment, or exploration?

<ul>
<li><em>Alignment?</em> Use <strong>Multi-Role prompting</strong> to simulate consensus.</li>
<li><em>Exploration?</em> Use <strong>Cognitive Walkthrough</strong> to push deeper.</li>
</ul></li>
</ul>

<p>Remember, you don’t need a long prompt every time. Use detail when the task demands it, not out of habit. AI can do a lot, but it reflects the shape of your thinking. And prompting is how you shape it. So don’t just prompt better. Think better. And design with AI &mdash; not around it.</p>

<div class="signature">
  <img src="https://www.smashingmagazine.com/images/logo/logo--red.png" alt="Smashing Editorial" width="35" height="46" loading="lazy" decoding="async" />
  <span>(yk)</span>
</div>


              </article>
            </body>
          </html>
        ]]></content:encoded></item></channel></rss>