Usability Testing

Please note: This content reflects industry best practices. We’ve provided links to third-party resources where appropriate.

What It Is

In usability testing, participants evaluate a design by performing tasks, with the goal of answering the questions: “Does the product help satisfy user needs?” and “Is the product easy to use?” Results can be quantitative (success rate, error rate, and time spent) and/or qualitative (UI expectations, thought process, and interactions). It’s often conducted multiple times during the design process.

Types of Usability Testing

  • Comparative usability testing compares up to three user interfaces that aim to accomplish the same task or goal. The comparison uses quantitative metrics, including task success, number of errors, time spent on task, user satisfaction, and overall usability, along with general feedback. Seeing two or more ways of doing the same task makes it much easier for participants to discuss the differences between the design solutions. You can compare the data to determine which design solution is most effective.

    Read more about how to conduct your own comparative usability testing.

    Please note: Comparative usability testing is different from A/B testing. A/B testing lets you test your designs with real-time traffic, without recruiting participants or scripting tasks and questions. It is performed on a live site, where some visitors are directed to version A and others to version B. Metrics, like click-through and conversion rates, are recorded for each design and used to compare their effectiveness (a minimal significance-test sketch follows this list).

    Read more about A/B testing.
  • A 5-second test measures what information users take away from a design and what impression it makes within the first five seconds of viewing. It’s a quick way to test whether a webpage effectively communicates its intended message.

    Watch this video to learn more about 5-second tests.
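
Because A/B tests run on live traffic, the recorded metrics can be compared statistically. Below is a minimal sketch in Python, using hypothetical visitor counts, of a standard two-proportion z-test for judging whether the difference in conversion rate between version A and version B is likely real. Note that a comparative usability test with only 5 to 8 participants is usually too small for this kind of test and relies more on qualitative findings.

```python
# Minimal sketch: comparing conversion (or task-success) rates for two
# designs from an A/B test. All counts below are hypothetical.
from math import sqrt, erf

def two_proportion_z_test(successes_a, n_a, successes_b, n_b):
    """Return the z statistic and two-sided p-value for the difference
    between two observed proportions."""
    p_a = successes_a / n_a
    p_b = successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Example: version A converted 120 of 1,000 visitors; version B, 150 of 1,000.
z, p = two_proportion_z_test(120, 1000, 150, 1000)
print(f"z = {z:.2f}, p = {p:.4f}")  # p < 0.05 suggests a real difference
```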

Usability testing is not the same as user acceptance testing (UAT). Usability testing is iterative; UAT, also known as beta testing, determines whether a product meets its functional requirements, usually toward the end of the software development lifecycle. Both usability testing and UAT are needed in product development.

Read more about UAT vs. usability testing.

When To Use It

  • Learning how best to navigate users to a new page.
  • Validating the intuitiveness and ease of use of a new design.
  • Choosing between two or three design options.

How It Works

Usability tests can be either moderated or unmoderated.

  • Moderated: A moderator observes participants as they interact with the product, either in person or remotely, and can ask follow-up questions. A remote setting, where participants work in their own environment, can reveal more realistic field insights.
  • Unmoderated: A participant interacts with the product on their own and provides feedback. The session is recorded for the researcher to review afterward, so it relies on the participant thinking out loud as they navigate the user interface.

Duration of test:

  • Moderated: 20 to 30 minutes
  • Unmoderated: 15 to 20 minutes

Suggested number of participants: 5 to 8

Steps:

  1. Develop a research plan and recruit your participants. Decide which product screens you want to test.
  2. Ensure that all participants receive the same information and instructions. Avoid bias by writing a script to provide an introduction and a task list for participants.
  3. Establish consistent success criteria for each task to evaluate each participant on the same scale.
  4. Sit with each participant in real time during a moderated test. Provide instructions and a task list for an unmoderated test.
  5. Monitor and record your observations as the participants go through the tasks. Track their actions, reactions, feedback, and spoken thoughts. If moderated, you may ask follow-up questions to engage further, but make sure to ask open-ended, neutral questions.
  6. Analyze your results and synthesize your findings in a report (a minimal metrics-aggregation sketch follows these steps).
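
To make step 6 concrete, here is a minimal sketch in Python, with hypothetical participants, tasks, and numbers, of how the quantitative measures named earlier (success rate, error rate, time on task) might be aggregated from per-participant observations.

```python
# Minimal sketch (hypothetical data): aggregating the observations from
# step 5 into the quantitative metrics named above.
from dataclasses import dataclass
from statistics import mean

@dataclass
class TaskResult:
    participant: str
    task: str
    completed: bool     # met the success criteria defined in step 3
    errors: int         # wrong clicks, dead ends, etc.
    seconds: float      # time on task

results = [
    TaskResult("P1", "checkout", True, 0, 74.0),
    TaskResult("P2", "checkout", True, 2, 121.5),
    TaskResult("P3", "checkout", False, 4, 180.0),
    TaskResult("P4", "checkout", True, 1, 95.0),
    TaskResult("P5", "checkout", True, 0, 68.0),
]

success_rate = mean(r.completed for r in results)           # fraction of passes
avg_errors = mean(r.errors for r in results)                # errors per session
avg_time = mean(r.seconds for r in results if r.completed)  # successful runs only

print(f"Success rate: {success_rate:.0%}")
print(f"Average errors per session: {avg_errors:.1f}")
print(f"Average time on task (successful runs): {avg_time:.0f}s")
```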

Tips:

  • Make scenarios and task instructions clear, simple, and straightforward.
  • Make tasks actionable.
  • Remind participants to explain their thoughts about the experience.
  • Phrase questions in a neutral way; avoid leading questions that may influence participants’ responses.
    • Don’t ask: “How easy was this product to use?”
    • Ask: “Describe your experience when you were using this product.”
  • Consider recording a participant on video to capture their facial reactions, with proper consent.
  • Don’t include terms in your scenarios or task instructions that are visible on the screen being evaluated.
    • For example, on an ecommerce checkout page, a designated textbox explicitly says: “Enter your promotion code.” Because this text is visible on the screen, don’t ask the participant: “Where would you go to enter a promotion code?” Instead, ask: “What could you do while on the current checkout page?”

Outcomes

  • Research report
  • Scenarios, use cases, and/or user stories
  • Personas

Resources

  • UX Research Plan (Template)
  • Usability Test Script & Observer Notes (Template)
  • Research Results Report (Template)
  • Tips on How to Ask Good Questions


Questions or feedback? Check out our Frequently Asked Questions (FAQ) or contact the Infor Design UX Insights team at uxinsights@infor.com.