You can be OK at your job if you just do what you're told. Or you can become a better UXer by asking the right questions.
We spoke to Dan Winer, who's been releasing extraordinary content to help people do just that.
With over a decade’s experience in UX design, Dan Winer has done his fair share of effective problem-solving. His LinkedIn posts about it have been immensely popular, with his FigJam Problem Discovery template clocking up over a thousand users.
Dan told us a great anecdote about how easy it is to spend hours working on the wrong solution if you don’t ask the right questions. Read on to learn about it, and the foolproof template that helped him save the day.
It’s tempting to put your head down and just do as you’re told, no questions asked. But as Dan points out, growing as a UXer begins with asking questions.
Let’s say you’re a UX Designer, breaking in from visual design or advertising as he did. At first, you’re concerned mainly with visual cues and making things look better. Over time, you start to build your UX interaction muscle.
That’s great—it’s important work. But if you want to become a great UXer, the real shift happens when you start asking questions about why a certain task needs to be done.
If you’re a UX Designer, how many times have you been told, “We need to make that button more prominent”? And diligently, you’ve gone about adding contrast and changing fonts and button sizes until your purported goal was achieved.
If you want to be a great designer, Dan says that you need to ask questions beyond “what should I do?”
Is my goal to make a button more prominent? Or is it to solve the underlying user problem?
This is when things get interesting. Follow these steps next, to meaningfully problem-solve.
1. Validate the problem: Is this something users are actually struggling with? Where’s the evidence?
2. Quantify the cost: Does this cost the company new customers, make us lose existing ones, or something else?
3. Articulate the problem: Can you articulate the problem in a single sentence? Do you have evidence to support this sentence, or are there built-in assumptions?
4. Check the history: Is there past data or experience we can lean on to get more clarity here? This is particularly important if your company has been around for a while.
5. Form and test a hypothesis: Here is where you identify a hypothesis on why the problem exists and find ways to rapidly test it.
Dan has laid out these questions, along with some other handy ones to refer back to when you’re in a pickle, in his FigJam Discovery template on asking the right questions.
To make this list of questions more meaningful, Dan graciously walked us through a real-world example of how you might apply them.
Let’s dig in!
Dan’s company, Smile.io, has an app with over 100,000 installs (woohoo!).
While that many installs is awesome, it does come with some challenges.
Smile’s support was getting inundated with questions about a certain feature (let’s call it Feature X).
Every minute the support team spent answering questions about the location of Feature X was a minute they weren’t attending to people who wanted to upgrade the app or solve truly unique challenges.
The company was spending a ton of resources solving a repeated UX problem related to Feature X, instead of bringing in more revenue.
Stakeholders saw the problem and decided to solve it. All pretty straightforward so far.
One valiant stakeholder marched up to Dan with a clear directive: “We need to make Feature X simpler to use”.
Now if Dan were any old run-of-the-mill designer, he might have said: sure, let me increase some contrast here, change the colors there, and voila! I’ve done what was asked of me.
But, as the truly great designer he is, instead of going blindly with the solution provided, Dan approached the problem using the Discovery Template.
In this case, the evidence for the problem was clear: hundreds of support tickets pouring in, asking the same questions over and over again.
Clearly there was something users weren’t understanding about Feature X.
Evidence can come in many forms, though. Analytics data may show that users drop off at a certain point during onboarding, or every user feedback call may end with a rant about the same feature.
Whatever resources you may rely on, the first thing to establish without a doubt is that there is a clear problem experienced across users (not just a complaint one person made in the most recent feedback call).
If stakeholders are coming to you with a problem, ask for evidence of it, or better yet—help them find it. If there’s really a problem across your users, it will show up in existing company data (quantitative metrics, support tickets, sales calls), or you can launch your own study to collect this information (for instance, via a survey).
While support tickets and analytics data can tell you that there is a problem, they often can’t tell you what that problem actually is.
This is a crucial spot to bring your questions out, because all of us have a human tendency to make assumptions.
For instance, Dan’s stakeholder told him they needed to make Feature X easier to use. This comes with a built-in assumption that the complexity of the feature is the issue, and that simplifying it will solve the challenge.
To be a great UXer, you need to question assumptions at this stage. Instead of saying OK to simplifying X, ask: do we know exactly what the problem is, or are we assuming something here?
This way, you actually spend your time and effort solving the right problem.
In Dan’s case, his team had a huge volume of support tickets—the easiest way to identify and articulate the problem was to read through these.
They parsed through dozens of tickets and saw a clear pattern: customers weren’t calling to ask how to use the feature. Instead, they were asking whether the feature existed at all!
Most of us, if not all, work at businesses. This means that while we would love to solve all problems all the time, there tends to be a hierarchy of what needs to be most urgently solved with the limited resources we have.
Quantifying the cost to the business if the problem goes ignored comes with incredible benefits.
In Dan’s case, they were able to see the consequences of leaving this problem unsolved in terms of support costs. Support people were being paid to resolve tickets related to Feature X ($ amount) and there was an important opportunity cost as well—every time a support person answered a ticket about Feature X, they weren’t able to help users upgrade or solve more complex problems.
Before jumping to a solution, examine what’s already been done. Always look into the history. If you’re new somewhere and sink a lot of work into something, it’s disappointing to find out later that it had already been tried before you arrived, and that it didn’t work.
Dan’s team, for example, found that this exact issue had cropped up and been addressed before. The feature had already been redesigned to simplify it, and it still remained one of the top topics on support chats.
This helped them eliminate false hypotheses on the problem and potential solutions quickly. Instead, they could skip to the good stuff.
Now that the team had done all the good stuff from the steps above, they started to create hypotheses about why users weren’t able to discover Feature X by themselves.
Based on all the evidence gathered, the leading contender was navigation: users simply couldn’t find the feature because it wasn’t where they expected it to be, and it wasn’t named intuitively. On further inspection, Dan’s team discovered that most of the users calling in about it were new to the software, and had either just installed it or were considering installing it.
Instead of the initial assumptions about the feature’s simplicity, the project evolved to be more about information architecture.
Once the hypothesis is formed, the next step is to test it out, through the cheapest, fastest way possible. This could be through A/B testing or a usability test.
In this case, the team ran tree testing to try out more obvious-sounding names and organizational structures. Tree tests could tell them within days whether their assumptions about naming and information architecture were correct.
If users were still equally confused, they’d go back to the drawing board. If the results showed that users had a completely different mental model, voila: they’d actually hit the nail on the head!
As a bonus, during tree testing Dan’s team also discovered a set of related issues—killing multiple birds with a single stone.
By asking meaningful questions and thinking deeply, the team was able to get to the root cause of multiple problems, instead of being derailed by assumptions or going in circles.
You can check out the template laying out Dan’s problem and solution here.
To read more from Dan, check out his LinkedIn here!
Looppanel automatically records your calls, transcribes them, and centralizes all your research data in one place.