Building AI Features: A 7-Step Guide for UX Researchers

Author: Theertha Raj · June 28, 2024

Heard about the AI bot that decided to be a stand-up comedian? 

It kept telling jokes...but they weren’t funny.

I guess some things are better left to humans after all.

Generative AI tools like ChatGPT and DALL-E have caused a lot of excitement around these parts. For product designers and researchers, it’s incredible stuff to behold. Poems, essays, videos of cats cuddling in front of the Eiffel Tower—you name it, AI can generate it.

But amidst all this hype and awe, there's an important question we need to ask—are we getting a little ahead of ourselves by dreaming up all the possibilities, without doing due diligence first?

AI may be incredibly powerful, but it's not a magic solution that can solve every problem right out of the box. It’s really useful when applied right, but potentially chaotic or even dangerous if used recklessly without regard for consequences.

This is why user experience (UX) research has never been more vital for the industry.

As companies race to integrate generative AI into their products and services, UX research ensures that we're introducing these powerful capabilities thoughtfully and responsibly into user experiences. Randomly injecting AI without context is a surefire way to build solutions that underwhelm at best, and infuriate users at worst.

When done right, AI-powered tooling can feel like magic. But building it needs proper groundwork, with meticulous research and design.

We have experience with building something like that. 

Have you checked out our AI-powered research analysis & repository tool Looppanel yet? It even auto-tags your notes! In the age of GenAI, though, the magic isn’t that it auto-tags your notes; it’s how it does so, and where it brings researchers into the mix.

Auto-tagged data on Looppanel

Consider this article your go-to guide on running UX research for building AI-powered features and products. You can find a detailed template to help you through every step of the process below, along with insights on:

  • Identifying the real user problems and inefficiencies to solve
  • Understanding the current capabilities and limitations of AI
  • Determining which tasks should stay human-centric vs. automated
  • Building user trust and mental models around AI capabilities  
  • Evaluating and iterating on AI outputs through usability testing
  • Practical tips like prompt engineering and considering technical constraints

UX Research for building AI-powered software

Before diving headfirst into building an AI-powered feature or product, take a step back and do some critical groundwork. 

Randomly adding an AI-powered feature is a surefire way to waste resources and create underwhelming experiences. 

Instead, start with a deep understanding of the user's needs, wants, and pain points. 

Step 1: Identify the user

Research’s role: Identify the types of users in your category and who you want to go after. Make sure you’re aligned with your team.

Even with the most exciting technology involved, the first step of UX research remains the same.

Think about the user. 

This may sound obvious, but it may be the most crucial step in your work. Very different people doing the same job have very different expectations. This has always been true of software—you have Figma, and then you have Canva. Both are design tools, but built very, very differently.

Understanding your core user persona becomes even more critical in an AI-powered workflow, where you’re constantly deciding what the AI should do, and what counts as good output.

Do NOT skip this step. 

Step 2: Identify the user’s problems

Research’s role: Do generative research and dive deep into knowledge about the user base, their workflow, and pain points. Make sure your team is aligned on these.

Once you know who you’re focused on, it’s time for strategic research to truly understand them. Jared Spool has a lot of notes on this, read them here and here.

Start with closely observing your users in their natural environments and workflows. Look for the parts where they seem to be spending an inordinate amount of time, or getting frustrated. 

Maybe they've explicitly complained about a certain task being a huge pain point. Perhaps you've noticed that they dedicate 80% of their time to one part of the process.

Such strategic thinking also gives your product a massive competitive advantage. The advent of AI tooling has made it much easier to code, develop apps and build websites from scratch in a couple of hours. Take any use case and do a simple Google search— you’ll find at least 20 tools jostling for attention and promising the same outcomes. In such a landscape, the only way to stand out is attacking the RIGHT problem the user cares about—even if they don't know it yet. 

So, how does one find the right problem?

Look for signs that users are willing to dedicate a large amount of time, money, or effort to a task. The amount of time or money they’re currently spending is a clue to the value of that task to the customer.

For example, we spoke to many, many researchers about their research workflows. We asked them—what takes you the most amount of time? We noticed what users were actually doing on the product—where were the inefficiencies?

Time and time again, it became clear that even with our AI-generated notes to speed up analysis, researchers were spending hours, if not days, tagging their data. They would often struggle with creating a taxonomy, and if multiple people were tagging together, consistency became a challenge.

Clearly this work was important—researchers were willing to spend significant time manually tagging hundreds of notes—but it was also repetitive and painful. 💡 Pain point identified.

Step 3: Understand AI’s capabilities as a tool

Research’s role: Align the team on AI’s capabilities—what it can and cannot do well.

Once you’ve pinpointed a genuine user problem or inefficiency, the next step is to assess what's actually possible with AI given its current state and capabilities.

Keep in mind—the AI landscape is constantly evolving, so the answer to this question may change in two months. While early AI releases primarily operated on text-based data, recent iterations can now process multi-modal inputs like images, videos, and even audio recordings.

That being said, there are still some general rules of thumb for what AI can and can’t do:

AI excels at:

  • Processing and summarizing large volumes of data, especially text data
  • Generating commonplace content types like email templates or code snippets
  • Repetitive tasks that require limited context

The key is to identify where AI is obviously better, and keep humans focused instead on the work it cannot do, such as:

  • Understanding nuance—picking up the human elements of an interaction
  • Creativity and originality that lets you write a hilarious joke, or a beautiful poem
  • Building empathy and human connection that enables your research participants to open up about their lives during an interview
  • Strategic, high-stakes decision-making where complex reasoning is required
  • Work that requires having contextual information about your stakeholders, company priorities, and which experiments you already tried last year—information the system simply doesn’t have

Identify users’ pain points that lie in the first bucket (what AI can do), and let them keep doing the work that lies in the second (what AI can’t do).

For example: AI can be useful in helping people find the answer to a question on your support page. But it’s not so handy at calming down an irate customer who’s been waiting on a support call for 35 minutes. At least so far.

Step 4: Define the solution

Research’s role: Figure out where your specific user persona needs efficiency versus control in their workflow.

There are many ways to make a video.

If you’re a filmmaker in Hollywood, you probably want Adobe Premiere Pro. These are multi-million dollar films—you need control over every frame, every scene.

If you’re an Instagram health influencer making sponsored ads, you just need something that allows you to change the lighting and filters to look just right, without having to understand the details of exposure and light theory.

If you’re me and you don’t know the first thing about creating videos, you’ll google “Video AI”, pick the first tool you see and expect AI to do most of the work.

AI is actually useful for all 3 of these users and use cases—but in fundamentally different ways.

A filmmaker in Hollywood may use AI to create the first draft of captions, or color-correct scenes. They want control over which takes to keep and which angles work best. Or they might need it for highly complex, specific use cases like de-aging Harrison Ford.

An Instagram influencer may use it to create a first draft reel that they tweak for lighting. They want control over how they / the food / the clothes / the suspicious-looking weight-loss shake they’re selling looks. Weight-loss shake is looking an icky color of green? That needs an edit.

Me? I don’t know the first thing about lighting, transitions, or timelines. I want AI to do almost everything, with a veto power to edit / remove something I hate.

Your job as a researcher is to figure out what your user’s priorities are. Where do they need control, and where do they need efficiency?

There will always be aspects of a workflow or creative process where human expertise, judgment, and real-world context are invaluable—and frankly, irreplaceable by AI. What do those look like for your users?


Do users need transparency or a black box?

Research’s role: Identify how much oversight and control your user persona needs for their use case.

Here's a great question to start with: How much transparency and user control should your AI tool provide?

On one end of the spectrum, you have tools that offer some degree of transparency, where users can see and influence every step of the AI's process. This model involves using AI as an assistant and force-multiplier, handling tedious computational tasks with speed and precision, while the human provides contextual expertise, strategic thinking, and oversight. The key is that the user always feels in control and understands what the AI is doing.

For complex, high-stakes decisions like—"Which user problem should we focus on?", "Which takes should we keep in this $100 million movie?", or "What's the right positioning strategy for our business?"—you want AI tooling that cedes some amount of control in the workflow to the human. Where the AI contributes and where the human takes over depends on what parts of the process are sacred to the user persona.

If your user persona prioritizes simplicity or speed over control, you may be able to take a black box approach.

Take me for example—I truly don’t know what camera angle to pick, so it’s easier and faster for the AI to take control away from me. I don’t care for the output to be 100% perfect, so I can afford to roll with its take. For me + video editing, a black box approach is 🤌. 

Integration is key

Research’s role: Identify the tools and workflow of the user before and after they use your AI features.

Whether you’re a filmmaker or an influencer, you probably want your video to go somewhere once it’s ready.

This is where understanding user workflows is crucial. Assuming the creator isn’t generating content just for the sake of it, they need to pull it out into some other tool or share it with an audience.

Think of an AI writing assistant for example. It’s introduced with the goal of enhancing your creativity and output quality. But if the AI assistant exists as a separate, disconnected tool from your preferred writing application like Google Docs or MS Word, you'd constantly have to context-switch, likely breaking your creative flow.

As a researcher it’s your job to identify the user workflow before and after your solution and make sure your product works seamlessly within it.

Step 5: Prompting!

Now comes the actual step where you figure out the AI prompt(s) that solve your user’s use case.

Here are a couple of guidelines for this step.

1. Have the target user create the prompt.

Yes, you read that right.

Prompting is actually no longer highly technical—anyone can do it. In fact, it’s better if technical folks don’t do it.

The challenging thing about prompting is knowing what “good” output looks like. AI can create so many types of videos—when is it good enough?

As discussed, the answer to this question depends on your persona. What may look like a really cool video to you and me, may be thrown out at a glance by a professional videographer.

This means that having the actual user in the driver’s seat, experimenting with prompts and deciding when they feel good enough, and when they feel magical, is really really helpful.

It allows you to try out, reject and iterate on prompts much faster than a traditional build, test, release workflow.

If you don’t have access to your user persona, get someone who really deeply understands them to do this step (but this is not ideal).

2. Keep technical constraints in mind.

Your user wants the moon. Great, you know what the solution is.

But your tech team can only release half a moon today and the other half in two weeks. That’s okay—it’s just important for you to know as you’re testing prompts.

What are the limitations we’re dealing with? What kind of data do we get? What kind of data do we need to output? Where can we introduce human interaction?

Understanding your limitations upfront will help your team stay aligned and prevent those pesky we-actually-can’t-build-this conversations after you’ve done all the work.

Speaking of human interaction…

3. Where’s the human in the loop?

(pun intended)

The cool thing about AI is that you can totally customize how you work with it.

You could hand it a transcript and say, “tell me the insights” (not recommended), or you could hand the same transcript one paragraph at a time and ask it to summarize just that.

The way you design the workflow again depends on the level of control your user persona needs to get an output they consider magical. 

The same thing applies to context—when do you ask for it, and how much do you ask for? 

You could generate a video with a one-sentence prompt, or you could ask your user to input 10 highly specific data points to get a more tailored output (Who is the audience? What style do you want? Is this for YouTube or Instagram?).

This is where knowing your user persona becomes crucial. That will help you decide where and when the human should be in the loop.
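The two ends of this spectrum can be sketched in code. This is a minimal illustration, not a real integration: `call_model` is a hypothetical stand-in for whichever LLM API you actually use.

```python
def call_model(prompt: str) -> str:
    # Hypothetical placeholder: a real implementation would call your LLM provider.
    return f"[summary of {len(prompt)} chars]"

def summarize_whole(transcript: str) -> str:
    """One-shot: hand the AI everything; the user reviews a single output."""
    return call_model(f"Summarize this transcript:\n{transcript}")

def summarize_per_paragraph(transcript: str) -> list[str]:
    """Chunked: one summary per paragraph, so the user can review, edit,
    or reject each piece before anything is combined."""
    paragraphs = [p for p in transcript.split("\n\n") if p.strip()]
    return [call_model(f"Summarize this paragraph:\n{p}") for p in paragraphs]
```

The chunked version trades speed for control: the user gets a checkpoint at every paragraph, which suits personas who need oversight more than efficiency.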

4. Keep track of prompts you test

Version history is critical. You’ll be testing many, many, many, many iterations—knowing which worked, which didn’t, and why is crucial to saving time and not losing track of that prompt that FINALLY worked!

Here’s a template to get you started.

Step 6: Testing, testing, 1 2 3

With some promising AI-powered concepts in hand, the next stage is user testing and validation.

Heard of the Wizard of Oz? Not the old movie with Judy Garland, but a prototyping technique of the same name.

With the Wizard of Oz technique, the core idea is to create an experience where the AI's role is simulated by a human "wizard" behind the scenes. Basically you have a person do the work you’ll eventually automate to see how users react.

You can use it at varying levels:

  • Have a person do the entire workflow instead of AI
  • Have AI do the prompt work, but let people manually feed the prompt in, and copy the output out

This allows you to thoroughly test the overall experience, user flows, and uncover potential points of confusion or distrust while minimizing the amount of code written.

If it’s hard to test these workflows with real data at the prototyping stage, you can also release features in beta and work closely with users to see how they work with your AI output.

It’s crucial to let users try the AI features in real-world settings with real use cases, rather than as a cool toy to play around with—the feedback and usage you’ll get will be wildly different.

You treat a blog post auto-generated by AI very differently when it’s for fun (“So cool!”) versus when it needs to be published on your website (“This sounds like a robot”).

During your testing phase, in addition to evaluating usability and flow of your product (where applicable), make sure to probe users on:

  • Discoverability of AI features (we got burnt by this a couple of times!)
  • Their level of trust in the AI's outputs or recommendations (and how to build it)
  • Points where they wanted more transparency into the AI's reasoning
  • Where human input would be most useful
  • What they do after they’ve got the AI output

Iterate and refine the prototypes based on this feedback until you arrive at an experience that works for your audience.

Then pick the next problem, and start again!

Step 7: Enable the feedback loop  

As you move closer to launching your AI-powered feature(s), build in mechanisms for continuously capturing user feedback. This feedback loop will be critical for promptly identifying issues, managing user trust, and ensuring your AI system keeps learning and improving over time.

You could start with a simple upvote / downvote system and a dialog box to collect feedback at scale.
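As a sketch, the store behind that kind of feedback can be tiny. The shape below is an assumption for illustration (an in-memory log; a real product would persist this), but even this much is enough to spot a rising downvote rate.

```python
from collections import Counter
from datetime import datetime, timezone

class FeedbackLog:
    """Minimal store for per-output AI feedback: a vote plus optional free text."""

    def __init__(self):
        self.entries = []

    def record(self, output_id: str, vote: str, comment: str = "") -> None:
        assert vote in ("up", "down")
        self.entries.append({
            "output_id": output_id,
            "vote": vote,
            "comment": comment,
            "at": datetime.now(timezone.utc).isoformat(),
        })

    def downvote_rate(self) -> float:
        # A rising downvote rate is an early signal that it's time
        # to go talk to customers.
        if not self.entries:
            return 0.0
        counts = Counter(entry["vote"] for entry in self.entries)
        return counts["down"] / len(self.entries)
```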

But frankly, our favorite way of collecting feedback: talk to your customers.

Especially in an all-new-world, you want to deeply understand what your users are doing in your system, outside the system, and why.

On building trust through transparency

When designing an AI-powered product or feature, one of the biggest challenges is getting users to actually trust the AI enough to use and adopt it.

Think about it—you're essentially asking people to hand over tasks and decisions to a machine intelligence that may or may not get things right all the time. That's a big leap of faith to make, even for the most tech-savvy users.

When it comes to building trust—be it within personal relationships or business teams—there are four pillars. The brilliant designers at Ericsson created their own version of this, within the context of human-AI relationships.

1. Competence

First and foremost, your AI tool needs to demonstrate clear competence in its core capabilities. Users need to see proof that this thing can actually deliver on its promised value.

You can prove your system’s competence to the user by using these techniques:

  • Explaining why the system generated a specific output and how confident it is in its quality
  • Letting users test the AI system in a quick and safe way

2. Benevolence

In this scenario, it means openness and transparency. Once competence is established, reinforce that your AI means no harm and has no hidden malicious intent. It's simply a tool created to make the user's life easier and more productive.

You don’t hire the intern who ignores your feedback. Users won’t trust an AI system that ignores their feedback.

You can show that your system is open to change by:

  • Giving users easy ways to edit or undo an action taken by AI
  • Enabling the system to take feedback from users and change its output accordingly

Don’t forget to build in human checkpoints or approval gates for especially critical tasks or decisions, where the AI's output must be manually reviewed and validated before being acted upon.

3. Integrity

Similarly, integrity is about ensuring your AI experience operates within clear ethical guardrails that align with your user's values and principles.

For example, if you're building an AI that interacts with private user data, it's crucial to be upfront about how that data is handled, used, and protected. Break that trust through unethical behavior, and users will abandon your product instantly. 

It's also crucial to be upfront about what AI can and cannot do from the very start. Don't try to present it as an infallible, all-knowing system. Instead, acknowledge that AI may occasionally make errors.

If your AI feature is built only for certain use cases, that’s okay—just tell the users upfront.

4. Charisma

Now, this doesn’t mean that you steal Scarlett Johansson’s voice to bring Hollywood star charisma to your bot. That’s not the lesson here.

Weird name aside, this pillar is actually not that different from what we do with product design today. Basically, we want UIs to be visually appealing and easy to use, and content to be in the tone of voice the customer expects.

The key is making your AI tool feel like a welcome addition to the user's world, not an impersonal set of utilities to begrudgingly put up with.

Template : 7 Steps to Run Research for AI Features

UX Research for AI: Template

We’ve created a detailed template you can easily duplicate and use for your own research on AI features.

It takes you through all the steps detailed above, and includes brainstorming spaces, checklists and all the essential notes you need on building AI tooling. You can thank us later.

Get Looppanel's FigJam Template for '7 Steps to Run Research for AI Features' here.
Get the Miro Template version here.

To sum it all up

Alright, let's wrap this up with a bow, shall we?

The key takeaway? AI might be the shiny new toy in the tech playground, but it's not a magic wand. Like any tool, its value lies in how well we wield it to solve real user problems. And that's where UX research comes in. A few points to remember.

  • It's not about cramming AI into every nook and cranny of your product. It's about finding those sweet spots where AI can genuinely enhance the user experience. Maybe it's automating tedious tasks, providing intelligent suggestions, or processing data at superhuman speeds. Whatever it is, make sure it aligns with your users' needs and workflows.
  • Get your target users involved in crafting prompts. They know best what "good" looks like in their world.
  • Trust and transparency is key. Be upfront about what your AI can and can't do. Show users how it works, let them peek behind the curtain, and give them an escape hatch if things go sideways.
  • Testing is crucial. Whether you're going full Wizard of Oz or releasing beta features, make sure you're getting real-world feedback. And once you launch? Keep that feedback loop flowing. Your AI system should be learning and improving.

In the end, successful AI integration is about finding that perfect balance: between human and machine, between efficiency and control, between innovation and familiarity. It's a tightrope walk, but with solid UX research as your safety net, you'll be wowing users in no time.

If you’re interested in learning more about designing with AI, read Google’s People + AI Research (PAIR) guidebook. It's chock-full of practical guidance for designing human-centered AI products, based on data and insights from over a hundred Googlers, industry experts, and academic research. Happy researching!
