Remote user research is a fast, reliable, and scalable way to get the insights you need to improve your
customer experience. Unlike traditional in-lab usability testing or focus groups, you can recruit participants
and get results in a matter of hours, not days or weeks. You’re not limited by your location, schedule, or
facilities. You’ll get candid feedback from real people in your target demographic in their natural environment:
at home, in a store, or wherever they’d ordinarily interact with your product.
In this eBook, we’ll cover how to plan, conduct, and analyze your remote user research. If you’re new
to research, you’ll learn the ropes of setting up a study and getting straight to the insights. And if
you’re an experienced researcher, you’ll learn best practices for applying your skills in a remote
study setting.
You don’t need to uncover every usability problem or every user behavior
in one exhaustive study. It’s much easier and more productive to run a
series of smaller studies with one specific objective each. That way, you’ll
get focused feedback in manageable chunks. Be sure to keep your
objective clear and concise so that you know exactly what to focus on.
As you set your objective, think about the outcomes your stakeholders
will care about. You may be interested in identifying how users interact
with the onboarding flow, for example, but that won’t be helpful if your
team needs to identify ways to improve the monetization process.
Keeping your objective front and center will help you structure your
studies to gain insights on the right set of activities. It’ll also guide you in
whom you recruit to participate in your study, the tasks they’ll perform,
and what questions you should ask them.
Consider which devices and/or browsers you’ll want to include in your study. If your product is available on
multiple devices, we recommend testing your mobile experience with at least as many users as your desktop
experience.
Next, consider who you will need to recruit to participate in your study.
With remote user testing, you don’t need to search far and wide to find people to give you feedback. You
simply select your demographics or other requirements when you set up your study.
The key to a successful study is a well-designed plan. Ideally, your test plan will result in qualitative and
quantitative feedback. Ask users to perform specific tasks and then answer questions that will give you the
insights you need in a measurable way.
A task should be an action or activity that you want a user to accomplish at that time.
Example of a task: Go through the checkout process as far as you can without actually making a purchase.
Use a question when you want to elicit some form of feedback from a user in their own words.
Rating scale, multiple choice, and written response questions can be helpful when you’re running a large
number of studies and you’re looking to uncover trends. You’ll be able to quickly glance at the resulting data
rather than having to watch every video. From there, you can zero in on the outliers and the most surprising
responses.
We recommend taking a few moments to think back to your objective and consider the best way to convey
results to your team. When the results come back, how do you want that feedback to look? Will you want
quantitative data so you can create graphs? Written responses that you can use to create word clouds?
Verbal responses so you can create a video clip and share it with your team?
Establishing the type of deliverable you need from the outset will help you determine the right way to collect
the information you need.
The structure of your study is important. We recommend starting with broad tasks (exploring the home page, using search, adding an item to a basket) and moving in a logical flow toward specific tasks. The more natural the flow is, the more realistic the study will be, and the better your results will be.
People are notorious for skimming through written content, whether they're interacting with a digital product or reading instructions for a user test. One way to ensure that your participants read your whole task is to make the task short and your language concise. For example:
1. Add the item to your cart.
2. Shop for another item and add it to your cart.
3. On the shopping cart, please update the quantity of the first item you added from 1 to 2.
4. Now proceed through the entire checkout process.
Ask yourself whether you're interested in discovering the users' natural journey or whether you need them to arrive at a particular destination. If it's about the journey, give the participants the freedom to use the product in their own way. But if you're more focused on the destination, guide them to the right location through your sequence of tasks.
If you’re interested in both the journey and the destination, give the
users the freedom to find the right place on their own. Then, in subsequent
tasks, tell them where they should be. You can even include the correct
URL or provide instructions on how to navigate to that location.
Also, if you think a specific task will require the user to do something
complicated or has a high risk of failure, consider putting that task
near the end of the study. This will help prevent the test participants
from getting stuck or off track right in the beginning, throwing off the
results of your entire test.
Once you’ve mapped out a sequence of tasks for users to attempt, it’s time to start drafting your questions.
It’s important to structure questions accurately and strategically to get reliable answers and gain the insights
that you really want.
Don’t use industry jargon. Terms like “sub-navigation” and “affordances” probably won’t resonate with the
average user, so don’t include them in your questions unless you’re certain your actual target customer uses
those words on a daily basis.
Define new terms or concepts in the questions themselves (unless the goal of your study is to see if they
understand these terms/concepts).
If you’re asking about some sort of frequency, such as how often a user visits a particular site, make sure you
define the timeline clearly. Always put the timeline at the beginning of the sentence.
BAD: How often do you visit Amazon.com?
BETTER: How often did you visit Amazon.com in the past six months?
BEST: In the past six months, how often did you visit Amazon.com?
After you’ve written the question, consider the possible answers. If the respondent could give you the answer
“It depends,” then you should make the question more specific. It’s best to ask about first-hand experiences.
People are notoriously unreliable in predicting their own future behavior, so ask about what people have actually
done, not what they would do. It’s not always possible, but try your best to avoid hypotheticals and hearsay.
To make sure your participants are all responding to the same stimulus, give them a reminder of which page or screen they should be looking at when they respond to the question. For example, "Now that you're in the 'My Contacts' screen, what three words would you use to describe this section?"
You're not judging the intelligence of your respondents when analyzing their results, so make sure that your questions don't make them feel that way. Place the fault on the product, not the test participant.
Bad example: "I was very lost and confused." (agree/disagree)
Good example: "The site caused me to feel lost and confused." (agree/disagree)
Be fair, realistic, and consistent with the two ends of a rating spectrum. On one scale, "Happy" may simply be the opposite of "Not happy"; on a "Happy / Neutral / Unhappy" scale, "Happy" is the best possible answer. The same label shouldn't mean different things on different questions. Plus, emotional states are very personal and mean different things to different people. Being "very confident" to a sheepish person may mean something very different from what it means to an experienced executive.
Good example:
"After going through the checkout process, to what extent do you trust or distrust this company?" I strongly distrust this company ←→ I strongly trust this company
Instead of asking about overall satisfaction, ask about all the criteria independently. When you’re analyzing
the results, you can create a composite “satisfaction” rating based on the results from the smaller pieces.
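As an illustration, a composite rating can be as simple as averaging the individual criteria. The criteria names and the 1-to-5 scale below are hypothetical, a minimal sketch of the idea rather than a prescribed formula:

```python
from statistics import mean

# Hypothetical per-participant ratings (1 = worst, 5 = best) for three
# criteria asked as separate rating scale questions.
ratings = {
    "ease_of_checkout": [4, 5, 3, 4],
    "clarity_of_pricing": [3, 4, 2, 3],
    "trust_in_brand": [5, 4, 4, 5],
}

# Average each criterion across participants...
per_criterion = {name: mean(scores) for name, scores in ratings.items()}

# ...then combine the criterion averages into one composite score.
composite_satisfaction = mean(per_criterion.values())

print(per_criterion)
print(round(composite_satisfaction, 2))
```

Keeping the per-criterion averages around (rather than only the composite) lets you see which specific piece is dragging satisfaction down.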
With leading questions, you influence the participants’ responses by including small hints in the phrasing of
the questions themselves. More often than not, you’ll subconsciously influence the outcome of the responses
in the direction that you personally prefer.
Leading questions will result in biased, inaccurate results, and you won’t actually learn anything helpful.
In fact, it might lead you to make flawed decisions. While it may lead to the answer you “want” to hear, it
doesn’t help your team make improvements, so don’t be tempted to use leading questions!
BAD: “How much better is the new version than the original home page?”
GOOD: “Compare the new version of the home page to the original. Which do you prefer?”
USERTESTING TIP:
If you’re asking about task success, remember to define what a success is. If the previous task instructs a
user to find a tablet on Amazon and add it to the cart and you ask “Were you successful?” be sure to clarify
whether you are asking about finding a tablet or adding it to the cart.
Rating scale questions allow you to measure participants' reactions on a spectrum. They're a great way to benchmark common tasks and compare the results with a similar test run on your competitor's product.
• Use relative extremes: Make the negative feeling have the lowest numerical value and the positive answer have the highest numerical value. In other words, make difficult = 1 and easy = 5, not the other way around.
• Stay consistent throughout the test! Use the same end labels and the same wording when you're repeating a question.
• Consider asking "why?" after a multiple choice or rating scale question. Then, when you get your results back, you can go back and hear the participants discuss their answers or responses. Asking "why?" also prompts people to think critically about their answer.
Multiple choice questions are great for collecting yes/no responses or answers that can't be applied to a scale.
• Multiple choice responses should be exhaustive, meaning that every possible response should be included in your response options. At the same time, you want a manageable number of responses. We recommend two to six response options per question. If you suspect that there are just too many options, do your best to guess which options will be mentioned most, and then include an "Other" option.
• Ask only one question at a time. Don't do this: "Did you find the tablet you were looking for, and was it where you expected to find it? Yes/No" Instead, break it up into two separate questions.
• Choose mutually exclusive responses, since users will only be able to select one answer. If it's possible for more than one answer to be true, include a "More than one of these" option.
Written response questions result in short answers that can be used to collect impressions and opinions.
• Ask questions that can be answered in a couple of words or sentences at most. Typing long responses can become frustrating for participants, especially on mobile devices. Good example: What three words would you use to describe this app?
• Use these questions sparingly. The greatest value in remote user research usually comes from hearing participants speak their thoughts aloud naturally. Written response questions are good for getting a snapshot of the users' impressions, but if you overuse them, the quality of the responses will often degrade after several questions.
• Create a word cloud from all your users' responses to quickly see which words they're using to describe their experience.
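For instance, once multiple choice results come back, a quick tally shows how the options were distributed and whether "Other" is absorbing answers you should have offered explicitly. The question and responses below are made up for illustration:

```python
from collections import Counter

# Hypothetical answers to: "Which feature did you use first?"
responses = [
    "Search", "Search", "Browse categories", "Other",
    "Search", "Other", "Browse categories", "Other",
]

counts = Counter(responses)
total = len(responses)

# Print each option with its share of all responses.
for option, count in counts.most_common():
    print(f"{option}: {count} ({count / total:.0%})")

# A large "Other" share suggests your options weren't exhaustive.
if counts["Other"] / total > 0.25:
    print("Consider revising your response options.")
```

The 25% threshold here is an arbitrary rule of thumb; the point is simply to flag when "Other" becomes one of your most common answers.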
SAMPLING ERROR
Sampling error occurs when you recruit the wrong participants for your study. When this happens, you may end up with a bunch of opinions from people outside your target market, and those opinions aren't very helpful.
For example, perhaps your target market includes SaaS sales executives, so you tried to recruit people who work
in software sales, but the actual participants ended up being electronics store retail associates.
SOLUTION:
Ask clear and precise screener questions to help qualify potential study participants. If you’re uncertain whether
your screener questions are accurately capturing the right users, do a dry run with a small handful of participants.
As the first step of your study, have them describe aloud what they do for a living (or how familiar they are with your
industry, or whatever criteria will help you determine whether they’re actually your target market).
RESEARCHER ERROR
With this type of error, participants misunderstand a task or question because of the way it was worded. Study
participants will often take instructions literally and search for the exact terminology that you include in your tasks.
SOLUTION 1:
Try out your study with several people and monitor their reaction to your questions. You’ll quickly learn whether
or not your questions are accurately communicating what you want them to.
SOLUTION 2:
Be aware of your target audience and ask questions in a manner that naturally resonates with them. Use plain
language. Slang, jargon, regionalisms, and turns of phrase can easily confuse participants during a study.
UNTRUTHFUL RESPONSES
In this case, the participants are giving you inaccurate or false information. There are several reasons that this may occur:
• They don't trust you with their personal information.
• They're uncomfortable sharing details of their personal lives.
• They've become fatigued and have resorted to bogus responses to get through the test quickly.
• They don't understand whether you're looking for their opinion or the "right" answer.
SOLUTION 1:
Reassure participants that their responses won't be shared publicly.
SOLUTION 2:
At the very beginning of your study, be sure to explain that if they have to fill out any personal information, their responses will be blurred out to protect their identity.
SOLUTION 3:
Keep your test short (around 15 minutes, in most cases) so you don't fatigue your participants.
FAULTY PARTICIPANT RECALL
These errors occur when a participant is unable to correctly remember the event you're asking about. This happens when your question asks them to recall something too far in the past or in too much detail.
SOLUTION:
Do a gut check. Can you remember the specifics of something similar? If not, revise your question.
SOCIAL DESIRABILITY BIAS
With this error, participants feel pressured to give a response that they think is most popularly accepted in society, even if it's not true. For example, if you ask people about their tech-savviness, people may over-report their abilities because they think it's "better" than not being tech-savvy.
SOLUTION 1:
When you're looking for test participants, be sure to explain that you value the skillsets or demographic characteristics you're requesting. Emphasize that you hope to learn how your product will be useful or beneficial to people like them.
SOLUTION 2:
Reassure your participants that they'll remain anonymous.
ACQUIESCENCE BIAS
When acquiescence occurs, the participant will tell you what they think you want to hear out of fear of offending you. For example, they may dislike your app but don't want to make you feel bad about your work. This is more common in moderated tests than unmoderated tests.
SOLUTION 1:
If you're too close to the product (for example, if you're the designer), you may want to use an impartial moderator to moderate your tests for you. Skilled researchers can help ensure impartiality, reducing barriers to sharing the truth.
SOLUTION 2:
Reassure participants that you value their truth and honesty and that none of their feedback will be taken personally.
Take note of user frustrations as well as items that users find particularly
helpful or exciting. These become discussion points for design teams
and can often help to uncover opportunities for improvements to future
releases. It’s important to identify the things that people love, too, so
that you don’t inadvertently try to “fix” something that’s not broken
when trying to improve the user experience of your product.
Here are a few ideas for successfully relaying your research findings:
• Use charts to represent any interesting metrics data from your questions.
• Use a word cloud to display the most common words used throughout your study.
• Be careful not to place blame on any of your teammates. If you have a lot of negative findings, choose your words carefully. "Users found this feature frustrating" is much easier to hear than "This feature is terrible."
• Encourage team members to ask questions about the findings, but remind them not to make excuses. They're there to learn about the customer experience, not to defend their design decisions.
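Under the hood, a word cloud is just word frequencies. As a rough sketch (with made-up responses and a tiny stopword list), you could tally the words yourself before handing them to whatever word cloud tool you use:

```python
import re
from collections import Counter

# Hypothetical written responses to "What three words describe this app?"
responses = [
    "Fast, simple, and clean",
    "Clean design but confusing checkout",
    "Simple, fast, confusing at checkout",
]

# Small illustrative stopword list; extend as needed for real data.
stopwords = {"and", "but", "at", "the", "a"}

words = []
for text in responses:
    # Lowercase, then split on anything that isn't a letter.
    tokens = re.split(r"[^a-z]+", text.lower())
    words += [w for w in tokens if w and w not in stopwords]

frequencies = Counter(words)
print(frequencies.most_common(5))
```

The most frequent words are the ones that would render largest in the cloud, so even this plain list of counts gives you the headline finding at a glance.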
Sharing research findings with stakeholders and colleagues in multiple departments can be a great way to
promote a user-centered culture in your company.
With a clear objective, the right tasks, and carefully planned and worded
questions, you’ll gather useful, actionable insights.