Quantitative Research
Please note: This content reflects industry best practices. We’ve provided links to third-party resources where appropriate.
Quantitative research typically provides numerical or measurable data. It can answer questions like: How many? How much? Whereas qualitative research usually answers: Why? What are users thinking? Quantitative research collects data about behavior and attitudes and presents it through statistics, charts, and/or graphs. The following are the most popular types of quantitative research methods.
Analytics | Card Sorting | Tree Testing | Click Testing |
---|---|---|---|
Uses product metrics (user data, page views, etc.) to uncover potential insights into user behaviors, issues, and trends. | Participants sort various menu items into categories to help create or validate content structure and hierarchy. | Participants test an existing or predetermined content structure to evaluate how intuitive it is to find items in the categories provided. | Measures participants’ first impressions and tracks click paths to evaluate a design’s effectiveness (layout, clarity, or findability). |
Note: While a survey can technically be classified as quantitative, it can also produce qualitative data. Learn more about surveys here.
Analytics
What It Is
Analytics are product or application metrics collected from a tracking functionality within a product or from a web analytics tool such as Google Analytics or AWS QuickSight. Some tools may have limitations in metrics or integrations, so evaluate your exact needs before picking an analytics tool.
Metrics can include:
- Page views
- Bounce rates
- Form submissions
- User demographics
- Average time spent on a page
- Conversion rates (% of people who accomplish a desired action)
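Metrics like conversion rate and bounce rate reduce to simple ratios. A minimal sketch in Python (the visitor and session counts are invented for illustration):

```python
# Illustrative only: computing two basic analytics ratios by hand.
# The input counts are made up, not from any real analytics tool.

def conversion_rate(visitors: int, conversions: int) -> float:
    """Percentage of visitors who completed the desired action."""
    if visitors == 0:
        return 0.0
    return round(conversions / visitors * 100, 1)

def bounce_rate(sessions: int, single_page_sessions: int) -> float:
    """Percentage of sessions that viewed only one page."""
    if sessions == 0:
        return 0.0
    return round(single_page_sessions / sessions * 100, 1)

print(conversion_rate(2400, 96))   # → 4.0
print(bounce_rate(1800, 990))      # → 55.0
```

In practice an analytics tool computes these for you; the point is only that each metric is a plain percentage of a user population.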
Analytics can help provide surface-level insights into user behavior within a product. They’re great for identifying possible issues and trends. Analytics alone aren’t always sufficient to draw conclusions about user behavior. To validate insights collected from analytics, it’s best to cross-check them with qualitative research, such as usability tests, interviews, and/or surveys.
Product analytics are available via useranalytics.infor.com; availability varies by product. You can post here to request access: Infor User Analytics | General | Microsoft Teams.
When To Use It
- Certain product features need improvement, but you have limited resources. Prioritize based on product usage.
- You want regular benchmarking and improvement tracking to ensure user satisfaction.
How It Works
Valuable insights can be collected after a product is released, but you can only gather analytical data if the product has been set up with analytics measuring tools before being launched to users. These measuring tools assess what users actually do within your application/product. They show you metrics like page views, unique visitors, number of downloads, page-view duration, and more.
Analytics help uncover user behavior and potential usability issues. For example, if you expect users to spend at least 2 minutes on a page, but the average time spent is only 30 seconds, then there’s probably room for improvement.
This insight helps you determine next steps and if more research is required to validate your assumptions.
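As a sketch of that kind of check, the snippet below flags pages whose average view duration falls short of an expected threshold. The page names and durations are made up:

```python
# Illustrative sketch: flagging pages whose average view duration falls
# short of an expected threshold. All data below is invented.

page_views = {
    "pricing": [28, 35, 22, 30, 40],        # seconds per view
    "documentation": [140, 95, 180, 125],
}

EXPECTED_SECONDS = 120  # e.g. you expect at least 2 minutes on these pages

for page, durations in page_views.items():
    avg = sum(durations) / len(durations)
    status = "OK" if avg >= EXPECTED_SECONDS else "investigate"
    print(f"{page}: avg {avg:.0f}s -> {status}")
```

A page flagged "investigate" isn’t proof of a problem; as the text notes, follow up with qualitative research to find out why users leave early.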
Duration of test:
- Moderated: 20 to 30 minutes
- Unmoderated: 15 to 20 minutes
Suggested number of participants: 5 to 8
Steps:
- Determine the type of metrics or key performance indicators (KPIs) that you’re looking for, and what tools/resources are available to you.
- Set up the analytics measuring tool for your product.
- Once you have some user data, conduct an analysis to identify opportunities, potential issues, and/or friction points within your product.
- Keep a log of opportunities/issues, then consolidate your findings. This might lead to further qualitative research.
Learn More
- Google Analytics Beginners Tutorial 2023 (Video)
- 3 Uses for Analytics in User Experience Practice (Article)
- Conversion Rate in UX and Web Analytics (Article)
- How to Use Analytics in UX (Videos)
- Turning Analytics Findings Into Usability Studies (Video)
Card Sorting
What It Is
In card sorting, participants are asked to categorize items for application, navigation, or settings menus. Users are given written menu items on cards—either physical or virtual—and they’re asked to group them into categories that make sense to them. There are two main types of card sorting:
- Closed card sorting: Participants sort cards into predefined categories.
- Open card sorting: Participants sort cards into groups and write their own category names.
When To Use It
- Use closed card sorting when you need to add new content, but you aren’t sure where to place it on the navigation menu.
- Use open card sorting when you need to add several items to the navigation menu, but you’re unsure how to group them and/or what labels to use.
How It Works
Card sorts can be either moderated or unmoderated using UX research tools like UserZoom. They can also be used manually on paper or via whiteboarding tools like the MS Teams Whiteboard plug-in, Mural, or Figma.
- Moderated card sort: Participants perform the card sort during a one-on-one interview, and the researcher can ask follow-up questions to dig into the participants’ rationale.
- Unmoderated card sort: Participants organize content into groups on their own, typically using a UX research tool, with no interaction from the researcher. If a video recording tool is available, participants may be able to record their thoughts out loud.
Duration of test: 5 to 20 minutes
Suggested number of participants: 15 to 20
Steps:
- Develop a research plan and recruit participants.
- Select 20 to 50 menu items that represent the content to be sorted.
- Tip: Don’t exceed 50 items for closed card sorting and 30 items for open card sorting.
- Tip: Avoid menu items that contain the same words.
- Present participants with each menu item on a card, either one at a time or all at once. You can also provide a text list and any guidelines (such as the minimum or maximum number of categories).
- Participants place each card into groups.
- Note: If the card sort is moderated or you’re recording an unmoderated sort, encourage the participants to think out loud while they sort the cards.
- Participants categorize each group of cards.
- For closed card sorts, the participant will put the groups of cards into predefined categories.
- For open card sorts, the participant will write in a category name for each group.
- Tip: It’s important to do this naming step after all the groups are created, so that the participant doesn’t lock themselves into categories while they’re still working.
- Ask participants to explain the rationale behind the groups they created. (This step is highly recommended for moderated studies.)
- Ask questions like: Were any items especially easy or difficult to place? Did any items seem to belong in two or more groups? What thoughts do you have about the items left unsorted, if any?
- If needed, ask participants to adjust their groups to more practical sizes (for example, splitting very large groups or merging very small ones).
- Analyze the data. Look for common groups, category names, themes, and/or items that were frequently paired together.
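The analysis step above is often done with a similarity (co-occurrence) matrix: counting how often each pair of cards lands in the same group across participants. A minimal sketch, with invented participant sorts:

```python
# Minimal sketch of card-sort analysis: counting how often each pair of
# cards was grouped together. The sample sorts below are invented.

from collections import Counter
from itertools import combinations

# Each participant's sort: a list of groups, each group a set of card labels.
sorts = [
    [{"Invoices", "Payments"}, {"Profile", "Password"}],
    [{"Invoices", "Payments", "Password"}, {"Profile"}],
    [{"Invoices", "Payments"}, {"Profile", "Password"}],
]

pair_counts = Counter()
for participant in sorts:
    for group in participant:
        for a, b in combinations(sorted(group), 2):
            pair_counts[(a, b)] += 1

for pair, count in pair_counts.most_common():
    print(f"{pair}: grouped together by {count} of {len(sorts)} participants")
```

Pairs with high counts are strong candidates to live in the same category; UX research tools produce this matrix automatically from participants’ sorts.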
Tips:
- Set a limit of 5 to 7 categories.
- Randomize the order of items that you present to each participant.
- Consider including a category named “I’m not sure” to give participants a place for items they don’t feel confident categorizing.
- Reassure participants that it’s OK to change their mind as they work.
Outcomes
- Table of recommended groupings/categories
- Groups/categories requiring modification
Learn More
- Card Sorting Demonstration (Video)
- Card Sorting: Uncover Users’ Mental Models for Better Information Architecture
Tree Testing
What It Is
The aim of tree testing is to validate whether users can easily find what they’re looking for. In this test, participants are shown a text-only version of the site’s hierarchy and are asked to complete a series of tasks. Metrics from the test can include success rate, directness, average time to complete a task, and the site path taken by users.
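Those metrics can be computed directly from per-task results. A hedged sketch (the field names and data are illustrative, not from any particular tool; "directness" here means the share of successful participants who reached the answer without backtracking):

```python
# Illustrative sketch of tree-testing metrics for one task.
# All records below are invented.

results = [
    {"success": True,  "backtracked": False, "seconds": 18},
    {"success": True,  "backtracked": True,  "seconds": 41},
    {"success": False, "backtracked": True,  "seconds": 60},
    {"success": True,  "backtracked": False, "seconds": 22},
]

n = len(results)
successes = [r for r in results if r["success"]]
success_rate = len(successes) / n * 100
directness = (sum(not r["backtracked"] for r in successes)
              / len(successes) * 100) if successes else 0.0
avg_time = sum(r["seconds"] for r in results) / n

print(f"success rate: {success_rate:.0f}%")   # 75%
print(f"directness:   {directness:.0f}%")
print(f"avg time:     {avg_time:.1f}s")
```

Tools like UserZoom report these per task; comparing them across tasks shows which branches of the tree are hardest to navigate.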
When To Use It
- Early in the design process to test the effectiveness of your application’s navigation and content structure.
- If you find content or a feature that isn’t being used and you want to validate whether it’s because users can’t find it.
How It Works
Tree tests are typically unmoderated and conducted using UX research tools like UserZoom. If resources are limited, you can also conduct a moderated tree test using Excel. You can build your tree diagram/menu hierarchy in Excel and prompt the participants with tasks while manually taking notes. (See this example.)
Duration of test: 30 minutes
Suggested number of participants: 50+
Steps:
- Develop a research plan and recruit your participants.
- Define the tree structure by outlining the categories, subcategories, and pages in your site or application. Include specific subcategories because they’ll prompt realistic user behavior.
- Come up with a task list for participants to complete one at a time. This tests if they can find a page or location in a tree with a top-down approach.
- Don’t exceed 10 tasks.
- Don’t word tasks too precisely or specifically; echoing the tree’s exact labels can bias participants.
- You can ask questions via a survey before or after participants complete the tasks to help provide additional context, such as demographic information and product familiarity.
- Analyze the results to help inform changes to your navigation.
Outcomes
- Tree diagram
- Validated navigation structure
Learn More
- Optimal Workshop Tree Testing Demonstration (Video)
- Tree Testing: Fast, Iterative Evaluation of Menu Labels and Categories
- Tree Testing Part 2: Interpreting the Results
Click Testing
What It Is
Click testing, also known as first-click testing, is a quick and simple way to test and validate product designs. It can also test if your product’s navigation and linking structure are effectively helping users complete their tasks.
In this test, participants are presented with an image that represents the product screen. Then they’re prompted to complete a task. Success is measured by how many participants click the correct target on their first click. Click tests are unmoderated, and they don’t test interactive elements. If you want to test interactive elements, a usability test is more appropriate because it provides deeper insights.
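Scoring a first-click test reduces to a simple ratio of correct first clicks. A minimal sketch, with invented element names:

```python
# Minimal sketch: scoring a first-click test for one task.
# Element names and clicks below are invented.

first_clicks = ["nav_settings", "nav_settings", "nav_help",
                "nav_settings", "footer_link", "nav_settings"]
correct_target = "nav_settings"

hits = sum(click == correct_target for click in first_clicks)
rate = hits / len(first_clicks) * 100
print(f"first-click success: {hits}/{len(first_clicks)} ({rate:.0f}%)")
```

The misclicked elements ("nav_help", "footer_link" here) are as informative as the rate itself: they show where the design pulls attention away from the intended target.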
When To Use It
- A new design needs to be validated by observing users’ initial impressions via analysis of their first clicks.
- You want to rearrange the application based on the users’ journey, and you’d like to determine what users do first in each scenario.
Because you’re testing whether a user’s clicks carry out a task clearly and easily, it’s best to conduct a click test after you finalize your information architecture (via card sorting and/or tree testing).
How It Works
Usually in a UX research tool like Maze, participants are shown a static image, wireframe, or prototype of an application. Then they’re given a prompt, such as: “Where would you click to trigger a specific action, navigate to another page, or open a piece of content?” Once they decide where to click, the task is considered finished, and they can move on to the next task.
Duration of test: 5 to 20 minutes
Suggested number of participants: 20
Steps:
- Develop a research plan and recruit your participants.
- Write a task list and the success criteria for the relevant application displays.
- Tip: Don’t tell your participants that you’re going to test their clicks. This could influence their actions.
- Observe where the participant clicks, even if you don’t use a UX research tool to conduct your test.
- Record key metrics, including time on task, level of difficulty, and a user’s confidence level.
- Analyze and interpret your click test data. If you used a UX research tool, you can visualize your results through a click map, heat map, and/or dark map. This will show whether the navigation design is noticeable or whether elements conflict and create too much noise.
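As an illustration of what a heat map summarizes, the sketch below bins raw click coordinates into a coarse grid; in practice a UX research tool does this aggregation and rendering for you:

```python
# Illustrative only: binning click coordinates (in pixels) into a coarse
# grid to produce heat-map-style counts. The click data is invented.

from collections import Counter

clicks = [(102, 48), (110, 52), (300, 400), (98, 45), (105, 50)]
CELL = 50  # grid cell size in pixels

heat = Counter((x // CELL, y // CELL) for x, y in clicks)
for cell, count in heat.most_common():
    print(f"cell {cell}: {count} clicks")
```

Cells with high counts correspond to the hot spots on a heat map; cells with near-zero counts over important elements are what a dark map highlights.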
Outcomes
- Heat map
- Dark map
- Click clusters/map
Questions or feedback? Check out our Frequently Asked Questions (FAQ) or contact the Infor Design UX Insights team at uxinsights@infor.com.