Usability testing is the practice of letting users try a product while observing them closely. Teams often record sessions and pore over the footage to understand how users feel and react at every step. This allows teams to adjust and improve the service before they launch, while problems are still cheap to fix.
Why does usability testing matter?
By testing a product’s usability, teams save themselves time and heartache by not launching apps, sites, or digital products that nobody wants. Teams often build unwanted features because, like all humans, they are subject to cognitive biases and project their own feelings and beliefs onto users. The less teams know about their users, the more confident they feel in their potentially misguided beliefs—a thought pattern psychologists call the Dunning-Kruger effect. Every team must test, especially when they don’t think they need to.
“You are not your user. Never assume you know what they want,” says Josh Decker-Trinidad, UX Researcher at the community organization startup Meetup. “Otherwise, whatever you build can’t reach its full potential. You have to interview your users to get the real stories—they’re often stranger than any fiction and those artifacts are the building blocks of good design.”
The sooner teams test users and catch problems, the easier issues are to correct. It’s less work to redraw the menu on the cardboard mockup of an iPhone app, for instance, than the app itself after it’s been coded. And it’s easier to patch a bug when a website has only been released to a group of forgiving beta testers than to a large, fickle user base.
How to conduct usability testing
Usability testing follows prototyping. It’s only after teams have conducted the user research, task analysis, and persona development to build a product that they can put it into users’ hands to see how they interact with it. And though last, it’s far from the least important design step: Usability testing serves as a final safety check that ensures launches are successful. Teams that neglect testing out of a desire to be agile, move fast, and see what real users think often regret it.
Agile development only works if a service’s early adopters like it enough to stick around. If a team launches a minimum viable product that fails basic usability standards and suffers low adoption, it never gains traction or enough users for the team to collect data and iterate. If the team conducts usability tests, on the other hand, it’s more likely to release a product that new customers enjoy, which allows the team to test and develop it further.
While the UX design resource usability.gov claims anyone can be a usability tester—“You don’t need a formal laboratory,” one article states—producing useful results depends on having someone who knows what they’re doing. Tests are easy to botch and untrained researchers often inject their own thoughts and feelings by leaning over a user’s shoulder to point at the screen when they get stuck or asking leading questions such as “This is a great menu, isn’t it?”
UX designers like Aryn Shelander, co-founder of the app design agency Logical Animal, are constantly reminding teams to remove themselves from the process. “Participants often want to make you happy. They react more positively when you’re with them than they would if they were alone,” said Aryn. “That’s why we always tell participants that we didn’t create the app, even if we did, and are only testing it, so they feel more free to be honest.”
There are five steps to usability testing:
Set objectives: Teams must first decide what they want to learn from their test. For example, a weather app could want to discover which reports are most useful to its users, a mobile banking app could want to know if users can figure out how to deposit checks, and a CRM could want to know whether salespeople can book deals faster with the new design.
Examples of common usability test objectives:
- Learn whether participants can complete a task
- Record how long users take to accomplish a task
- Find out how satisfied users are with a website or app
- Identify changes that can improve the product’s performance
- Determine whether the product meets the team’s usability objectives
Record hypotheses: What does the team think the outcome will be? A news site could hypothesize that a tool that allows readers to leave comments will increase read-time by five percent. A website A/B testing service could consider its newly retooled interface a success if it helps users create A/B tests in half the time.
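Hypotheses like these are easiest to act on when they reduce to a simple check. As a minimal sketch (the function name and sample numbers are illustrative, not from any real test), the news site’s five-percent read-time hypothesis could be evaluated like this:

```python
# Illustrative sketch: checking the hypothetical news-site hypothesis that
# a comments tool lifts average read-time by at least five percent.
# The function name, threshold default, and numbers are assumptions.

def hypothesis_holds(baseline_seconds, new_seconds, min_lift=0.05):
    """True if the new average read-time beats the baseline by min_lift."""
    return new_seconds >= baseline_seconds * (1 + min_lift)
```

With a 120-second baseline, an average of 127 seconds after launching comments would support the hypothesis, while 123 seconds would refute it.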
Outline the methods: How much time, and how many materials and participants, are needed? Testing and organizing the data for even a few users can be time-consuming, so most teams aim for the minimum number of user tests they believe will produce valid results. When Mixpanel redesigned its user analytics platform, the team initially tested with just ten participants.
Conduct the test: There are two ways to usability test: observing users in person, and observing their behavior at a distance, digitally. In person, UX researchers set up a research station, often a quiet, distraction-free room with cameras trained on the test device’s screen and on the user to capture body language and facial expressions. UX researchers invite users to visit one by one, ask them to try the product, and then interview them about their experience.
To test digitally, teams give users access to the product and then monitor them with a user analytics platform to examine their screen flows and identify excessive button taps, drop-offs, and any other indicator that suggests users are running into problems. Digital testing has the advantage of scale. Rather than monitor a handful of users, teams can observe a large percentage of their user base and pull the data into reports. For the most accurate and holistic view of how users behave, however, teams should combine results from both in-person and digital analyses.
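As a rough sketch of what such a platform computes under the hood, the following assumes a simplified event log; the event names and funnel order are hypothetical, not from any real product. It reports how many users reach each step and the share who drop off along the way:

```python
# Hypothetical sketch: measuring drop-off between funnel steps from raw
# event logs, the kind of analysis a user analytics platform automates.
# The event names and funnel order below are illustrative assumptions.

FUNNEL = ["open_app", "start_deposit", "photograph_check", "confirm_deposit"]

def funnel_dropoff(events):
    """events: list of (user_id, event_name) tuples.

    Returns, for each funnel step, the number of users who reached it and
    the share of users from the previous step who dropped off before it.
    """
    reached = [{u for u, e in events if e == step} for step in FUNNEL]

    report = []
    prev = None
    for step, users in zip(FUNNEL, reached):
        if prev is None:
            dropoff = 0.0  # first step: everyone who showed up "reached" it
        else:
            users = users & prev  # only count users who did the prior step
            dropoff = 1 - len(users) / len(prev) if prev else 1.0
        report.append((step, len(users), round(dropoff, 2)))
        prev = users
    return report
```

Steps with a sudden spike in drop-off are natural candidates for follow-up in-person sessions.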
Synthesize the findings: Teams decide whether the results support or refute their hypotheses. Results aren’t always conclusive: When Mixpanel tested its new app design, engagement increased for some features but not others. It was up to the design team to decide whether the good outweighed the bad, and whether to proceed.
Examples of usability testing
Test user goals
Users adopt services that help them achieve a goal of theirs, whether it’s as simple as being entertained or as serious as safeguarding their personal information. Most will only adopt the service if it’s faster and easier than its alternatives. Teams can determine whether that’s the case with testing.
The design team behind one ride-hailing app, for instance, usability tested its service by having individuals hail a cab: once with the app and once without. The team timed both tests and measured users’ feelings throughout the process, both by watching their expressions and interviewing them after. The results confirmed that using the app was much faster and more satisfying than not using the app, and the results helped the team decide it was ready for launch.
Test new features
Teams can check whether new features contribute to company goals before rolling them out. For example, a SaaS platform could test whether a new reporting suite helps meet the company goal of growing the user base by 20 percent. The team could find through testing that while its advanced customers deeply appreciate the added tools, new users find the additional tabs and pages overwhelming, and are less likely to adopt it because they don’t feel the product is designed for them.
Test new designs
When teams find their digital product has accumulated too many features, or they think they can organize the information more clearly, they give it a design refresh. But before teams commit to an overhaul, they can usability test their existing design to confirm users agree it needs a redesign—it may not. If the product does need a visual refresh, the team can ensure their new design improves upon the old one by running a usability A/B test.
The team at an e-commerce company, for example, could conduct in-person tests where they present users with the new design and gauge their reactions. They could also launch the new site to a cohort of beta testers. By including a link that allows beta testers to revert to the old design, the team can measure the opt-out rate, the percentage of users who prefer the old design. Depending on the hypothesis they set (say, that an opt-out rate below 20 percent counts as success), the team knows whether the new design needs more work.
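The opt-out check itself is simple arithmetic. As a minimal sketch (the function name, cohort size, and 20 percent threshold are illustrative, matching the hypothetical criterion above):

```python
# Illustrative sketch: judging a redesign by its beta opt-out rate.
# The function name, default threshold, and numbers are assumptions.

def redesign_succeeds(beta_testers, opt_outs, threshold=0.20):
    """Return (opt_out_rate, passed) for a beta cohort.

    beta_testers: number of users who received the new design.
    opt_outs: number who clicked the link to revert to the old design.
    """
    rate = opt_outs / beta_testers
    return rate, rate < threshold
```

For a cohort of 500 beta testers, 80 opt-outs (a 16 percent rate) would clear the hypothetical bar, while 120 opt-outs (24 percent) would send the design back for more work.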
Usability testing helps teams build more useful digital products. Testing taps into the single most valuable source of product feedback—users themselves—and ensures teams launch products that are easy, satisfying, and well-received.