Creating a research hypothesis: How to formulate and test UX expectations

User Research

Mar 21, 2024


A research hypothesis helps guide your UX research with focused predictions you can test and learn from. Here’s how to formulate your own hypotheses.

Armin Tanovic


All great products were once just thoughts—the spark of an idea waiting to be turned into something tangible.

A research hypothesis in UX is very similar. It’s the starting point for your user research; the jumping off point for your product development initiatives.

Formulating a UX research hypothesis helps you guide your UX research project in the right direction, collect insights, and evaluate not only whether an idea is worth pursuing, but how to go after it.

In this article, we’ll cover what a research hypothesis is, how it's relevant to UX research, and the best formula to create your own hypothesis and put it to the test.

Test your hypothesis with Maze

Maze lets you validate your design and test research hypotheses to move forward with authentic user insights.


What defines a research hypothesis?

A research hypothesis is a statement or prediction that needs testing to be proven or disproven.

Let’s say you’ve got an inkling that making a change to a feature icon will increase the number of users that engage with it—with some minor adjustments, this theory becomes a research hypothesis: “Adjusting Feature X’s icon will increase daily average users by 20%”.

A research hypothesis is the starting point that guides user research. It takes your thought and turns it into something you can quantify and evaluate. In this case, you could conduct usability tests and user surveys, and run A/B tests to see if you’re right—or, just as importantly, wrong.

A good research hypothesis has three main features:

  • Specificity: A hypothesis should clearly define what variables you’re studying and what you expect an outcome to be, without ambiguity in its wording
  • Relevance: A research hypothesis should have significance for your research project by addressing a potential opportunity for improvement
  • Testability: Your research hypothesis must be testable in some way, such as through empirical observation or data collection

What is the difference between a research hypothesis and a research question?

Research questions and research hypotheses are often treated as one and the same, but they’re not quite identical.

A research hypothesis acts as a prediction or educated guess of outcomes, while a research question poses a query on the subject you’re investigating. Put simply, a research hypothesis is a statement, whereas a research question is (you guessed it) a question.

For example, here’s a research hypothesis: “Implementing a navigation bar on our dashboard will improve customer satisfaction scores by 10%.”

This statement acts as a testable prediction; it doesn’t pose a question. Here’s what the same hypothesis would look like as a research question: “Will integrating a navigation bar on our dashboard improve customer satisfaction scores?”

The distinction is minor, and both are focused on uncovering the truth behind the topic, but they’re not quite the same.

Why do you use a research hypothesis in UX?

Research hypotheses in UX are used to establish the direction of a particular study, research project, or test. Formulating a hypothesis and testing it ensures the UX research you conduct is methodical, focused, and actionable. It aids every phase of your research process, acting as a north star that guides your efforts toward successful product development.

Typically, UX researchers will formulate a testable hypothesis to help them fulfill a broader objective, such as improving customer experience or product usability. They’ll then conduct user research to gain insights into their prediction and confirm or reject the hypothesis.

A proven or disproven hypothesis will tell you whether your prediction is right, and whether you should move forward with your proposed design—or whether it's back to the drawing board.

Formulating a hypothesis can be helpful in anything from prototype testing and idea validation to design iteration. Put simply, it’s one of the first steps in conducting user research.

Whether you’re in the initial stages of product discovery for a new product or a single feature, or conducting ongoing research, a strong hypothesis presents a clear purpose and angle for your research. It also helps you understand which user research methodology to use to get your answers.

What are the types of research hypotheses?

Not all hypotheses are built the same—there are different types with different objectives. Understanding the different types enables you to formulate a research hypothesis that outlines the angle you need to take to prove or disprove your predictions.

Here are some of the different types of hypotheses to keep in mind.

Null and alternative hypotheses

While a normal research hypothesis predicts that a specific outcome will occur based on a certain change in variables, a null hypothesis predicts that no difference will occur when you introduce a new condition.

By that reasoning, a null hypothesis would be:

  • Adding a new CTA button to the top of our homepage will make no difference in conversions

Null hypotheses are useful because they help outline what your test or research study is trying to disprove, rather than prove, through a research hypothesis.

An alternative hypothesis states the exact opposite of a null hypothesis. It proposes that a certain change will occur when you introduce a new condition or variable. For example:

  • Adding a CTA button to the top of our homepage will cause a difference in conversion rates

Simple hypotheses and complex hypotheses

A simple hypothesis is a prediction that includes only two variables in a cause-and-effect sequence, with one variable dependent on the other. It predicts that you'll achieve a particular outcome based on a certain condition. The outcome is known as the dependent variable and the change causing it is the independent variable.

For example, this is a simple hypothesis:

  • Including the search function on our mobile app will increase user retention

The expected outcome of increasing user retention is based on the condition of including a new search function. But, what happens when there are more than two factors at play?

We get what’s called a complex hypothesis. Instead of a simple condition and outcome, complex hypotheses include multiple results. This makes them a perfect research hypothesis type for framing complex studies or tracking multiple KPIs based on a single action.

Building upon our previous example, a complex research hypothesis could be:

  • Including the search function on our mobile app will increase user retention and boost conversions

Directional and non-directional hypotheses

Research hypotheses can also differ in the specificity of outcomes. Put simply, any hypothesis that has a specific outcome or direction based on the relationship of its variables is a directional hypothesis. That means that our previous example of a simple hypothesis is also a directional hypothesis.

Non-directional hypotheses don’t specify the outcome or difference the variables will see. They just state that a difference exists. Following our example above, here’s what a non-directional hypothesis would look like:

  • Including the search function on our mobile app will make a difference in user retention

In this non-directional hypothesis, the direction of difference (increase/decrease) hasn’t been specified; we’ve just noted that there will be a difference.

The type of hypothesis you write helps guide your research—let’s get into it.

How to write and test your UX research hypothesis

Now that we’ve covered the types of research hypotheses, it’s time to get practical.

Creating your research hypothesis is the first step in conducting successful user research.

Here are the four steps for writing and testing a UX research hypothesis to help you make informed, data-backed decisions for product design and development.

1. Formulate your hypothesis

Start by writing out your hypothesis in a way that’s specific and relevant to a distinct aspect of your user or product experience. Meaning: your prediction should include a design choice followed by the outcome you’d expect—this is what you’re looking to validate or reject.

Your proposed research hypothesis should also be testable through user research data analysis. There’s little point in a hypothesis you can’t test!

Let’s say your focus is your product’s user interface—and how you can improve it to better meet customer needs. A research hypothesis in this instance might be:

  • Adding a settings tab to the navigation bar will improve usability

By writing out a research hypothesis in this way, you’re able to conduct relevant user research to prove or disprove your hypothesis. You can then use the results of your research—and the validation or rejection of your hypothesis—to decide whether or not you need to make changes to your product’s interface.

2. Identify variables and choose your research method

Once you’ve got your hypothesis, you need to map out how exactly you’ll test it. Consider what variables relate to your hypothesis. In our case, the independent variable is adding a settings tab to the navigation bar, and the outcome we expect is improved usability.

Once you’ve defined the relevant variables, you’re in a better position to decide on the best UX research method for the job. If you’re after metrics that signal improvement, you’ll want to select a method yielding quantifiable results—like usability testing. If your outcome is geared toward what users feel, then research methods for qualitative user insights, like user interviews, are the way to go.

3. Carry out your study

It’s go time. Now you’ve got your hypothesis, identified the relevant variables, and outlined your method for testing them, you’re ready to run your study. This step involves recruiting participants for your study and reaching out to them through relevant channels like email, live website testing, or social media.

Given our hypothesis, our best bet is to conduct A/B and usability tests with a prototype that includes the additional UI elements, then compare the usability metrics to see whether users find navigation easier with or without the settings button.

We can also follow up with UX surveys to get qualitative insights and ask users how they found the task, what they preferred about each design, and to see what additional customer insights we uncover.

💡 Want more insights from your usability tests? Maze Clips enables you to gather real-time recordings and reactions of users participating in usability tests.

4. Analyze your results and compare them to your hypothesis

By this point, you’ve neatly outlined a hypothesis, chosen a research method, and carried out your study. It’s now time to analyze your findings and evaluate whether they support or reject your hypothesis.

Look at the data you’ve collected and what it means. Given that we conducted usability testing, we’ll want to look at some key usability metrics for an indication of whether the additional settings button improves usability.

For example, with the usability task of ‘In account settings, find your profile and change your username’, we can conduct task analysis to compare the time spent on task and misclick rates of the new design with the same metrics from the old design.
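To make that comparison concrete, here is a minimal sketch of how the numbers could be crunched once per-participant metrics are exported. It assumes Python with SciPy, and every value below is an invented placeholder; a tool like Maze surfaces these metrics for you, so the code only illustrates the underlying comparison.

```python
# A minimal sketch, not Maze's method: comparing time-on-task between the old
# and new design with SciPy. All numbers are invented placeholders.
import numpy as np
from scipy import stats

old_design_times = np.array([48, 62, 55, 71, 90, 66, 58, 80])  # seconds per participant
new_design_times = np.array([35, 41, 52, 38, 60, 44, 47, 39])

# Time-on-task data tends to be right-skewed, so compare log-transformed times.
t_stat, p_value = stats.ttest_ind(np.log(new_design_times), np.log(old_design_times))
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

# Misclick rates can be compared the same way, or simply side by side.
old_misclick_rate = np.mean([0.22, 0.31, 0.18, 0.40, 0.25])
new_misclick_rate = np.mean([0.10, 0.08, 0.15, 0.12, 0.05])
print(f"misclicks: old {old_misclick_rate:.0%} vs. new {new_misclick_rate:.0%}")
```

A shorter average time and lower misclick rate on the new design, backed by a small p-value, is the kind of evidence that supports the hypothesis.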

If you also conduct follow-up surveys or interviews, you can ask users directly about their experience and analyze their answers to gather additional qualitative data. Maze AI can handle the analysis automatically, but you can also manually read through responses to get an idea of what users think about the change.

By comparing the findings to your research hypothesis, you can identify whether your research accepts or rejects it. If the majority of users struggled to find the settings page in usability tests of the old design but had a higher success rate with your new prototype, you’ve proved the hypothesis.

However, it's also crucial to acknowledge if the findings refute your hypothesis rather than prove it as true. Ruling something out is just as valuable as confirming a suspicion.

In either case, make sure to draw conclusions based on the relationship between the variables and store findings in your UX research repository. You can conduct deeper analysis with techniques like thematic analysis or affinity mapping.

UX research hypotheses: four best practices to guide your research

Knowing the big steps for formulating and testing a research hypothesis ensures that your next UX research project gives you focused, impactful results and insights. But, that’s only the tip of the research hypothesis iceberg. There are some best practices you’ll want to consider when using a hypothesis to test your UX design ideas.

Here are four research hypothesis best practices to help guide testing and make your UX research systematic and actionable.

Align your hypothesis to broader business and UX goals

Before you begin to formulate your hypothesis, be sure to pause and think about how it connects to broader goals in your UX strategy. This ensures that your efforts and predictions align with your overarching design and development goals.

For example, implementing a brand new navigation menu for current account holders might work for usability, but if the wider team is focused on boosting conversion rates for first-time site viewers, there might be a different research project to prioritize.

Create clear and actionable reports for stakeholders

Once you’ve conducted your testing and proved or disproved your hypothesis, UX reporting and analysis is the next step. You’ll need to present your findings to stakeholders in a way that's clear, concise, and actionable. If your hypothesis insights come in the form of metrics and statistics, then quantitative data visualization tools and reports will help stakeholders understand the significance of your study, while setting the stage for design changes and solutions.

If you went with a research method like user interviews, a narrative UX research report including key themes and findings, proposed solutions, and your original hypothesis will help inform your stakeholders on the best course of action.

Consider different user segments

While getting enough responses is crucial for proving or disproving your hypothesis, you’ll want to consider which users will give you the highest-quality and most relevant responses. Remember to consider user personas—for example, if you’re only introducing a change for premium users, exclude testing with users who are on a free trial of your product.

You can recruit and target specific user demographics with the Maze Panel—which enables you to search for and filter participants that meet your requirements. Doing so allows you to better understand how different users will respond to your hypothesis testing. It also helps you uncover specific needs or issues different users may have.

Involve stakeholders from the start

Before testing or even formulating a research hypothesis by yourself, ensure all your stakeholders are on board. Informing everyone of your plan to formulate and test your hypothesis does three things:

Firstly, it keeps your team in the loop. They’ll be able to inform you of any relevant insights, special considerations, or existing data they already have about your particular design change idea, or KPIs to consider that would benefit the wider team.

Secondly, informing stakeholders ensures seamless collaboration across multiple departments. Together, you’ll be able to fit your testing results into your overall CX strategy, ensuring alignment with business goals and broader objectives.

Finally, getting everyone involved enables them to contribute potential hypotheses to test. You’re not the only one with ideas about what changes could positively impact the user experience, and keeping everyone in the loop brings fresh ideas and perspectives to the table.

Test your UX research hypotheses with Maze

Formulating and testing out a research hypothesis is a great way to define the scope of your UX research project clearly. It helps keep research on track by providing a single statement to come back to and anchor your research in.

Whether you run usability tests or user interviews to assess your hypothesis—Maze's suite of advanced research methods enables you to get the in-depth user and customer insights you need.

Frequently asked questions about research hypotheses

What is the difference between a hypothesis and a problem statement in UX?

A problem statement identifies a specific issue in your design that you intend to solve; it will typically include a user persona, an issue they have, and a desired outcome they need. A research hypothesis, on the other hand, describes the prediction or proposed method of solving that problem.

How many hypotheses should a UX research problem have?

Technically, there is no limit to the number of hypotheses you can have for a certain problem or study. However, in UX research you should limit it to one hypothesis per specific issue. This ensures that you can conduct focused testing and reach clear, actionable results.

MeasuringU

Hypothesis Testing in the User Experience


The science project is something we’ve all completed, and if you have kids, you might see one each year at the school science fair.

  • Does an expensive baseball travel farther than a cheaper one?
  • Which melts an ice block quicker, salt water or tap water?
  • Does changing the amount of vinegar affect the color when dying Easter eggs?

While the science project might be relegated to the halls of elementary schools or your fading childhood memory, it provides an important lesson for improving the user experience.

The science project provides us with a template for designing a better user experience. Form a clear hypothesis, identify metrics, and collect data to see if there is evidence to refute or confirm it. Hypothesis testing is at the heart of modern statistical thinking and a core part of the Lean methodology.

Instead of approaching design decisions with pure instinct and arguments in conference rooms, form a testable statement, invite users, define metrics, collect data and draw a conclusion.

  • Does requiring the user to enter their email address twice result in more valid email addresses?
  • Will labels on the top of form fields or the left of form fields reduce the time to complete the form?
  • Does requiring the last four digits of your Social Security Number improve application rates over asking for a full SSN?
  • Do users have more trust in the website if we include the McAfee security symbol or the Verisign symbol?
  • Do more users make purchases if the checkout button is blue or red?
  • Does a single long form generate more form submissions than splitting the form across three smaller pages?
  • Will users find items faster using mega menu navigation or standard drop-down navigation?
  • Does the number of monthly invoices a small business sends affect which payment solution they prefer?
  • Do mobile users prefer to download an app to shop for furniture or use the website?

Each of the above questions is testable and represents a real example. It’s best to have as specific a hypothesis as possible and isolate the variable of interest. Many of these hypotheses can be tested with a simple A/B test, unmoderated usability test, survey, or some combination of them all.

Even before you collect any data, there is an immediate benefit gained from forming hypotheses. It forces you and your team to think through the assumptions in your designs and business decisions. For example, many registration systems require users to enter their email address twice. If an email address is wrong, in many cases a company has no communication with a prospective customer.

Requiring two email fields would presumably reduce the number of mistyped email addresses. But just as legislation can have unintended consequences, so can rules in the user interface. Do users just copy and paste their email, negating the double fields? If you then disable pasting of email addresses into the field, does this lead to more form abandonment and fewer customers overall?

With a clear hypothesis to test, the next step involves identifying metrics that help quantify the experience. Like most tests, you can use a simple binary metric (yes/no, pass/fail, convert/didn’t convert). For example, you could collect how many users registered using the double email vs. the single email form, how many submitted using the last four numbers of their SSN vs. the full SSN, and how many found an item with the mega menu vs. the standard menu.

Binary metrics are simple, but they usually can’t fully describe the experience. This is why we routinely collect multiple metrics, both performance and attitudinal. You can measure the time it takes users to submit alternate versions of the forms, or the time it takes to find items using different menus. Rating scales and forced ranking questions are good ways of measuring preferences for downloading apps or choosing a payment solution.

With a clear research hypothesis and some appropriate metrics, the next steps involve collecting data from the right users and analyzing the data statistically to test the hypothesis. Technically we rework our research hypothesis into what’s called the Null Hypothesis, then look for evidence against the Null Hypothesis, usually in the form of the p-value. This is of course a much larger topic we cover in Quantifying the User Experience.
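For readers who want to see those mechanics spelled out, here is a minimal sketch of a two-proportion z-test on a binary metric, written out by hand rather than taken from the book. The counts are hypothetical, loosely following the double-email example above.

```python
# Hypothetical sketch: did the single-email form convert better than the
# double-email form? Null hypothesis: both variants convert at the same rate.
from math import sqrt
from scipy.stats import norm

conversions_single, visitors_single = 132, 1000   # invented counts
conversions_double, visitors_double = 118, 1000

p1 = conversions_single / visitors_single
p2 = conversions_double / visitors_double
p_pooled = (conversions_single + conversions_double) / (visitors_single + visitors_double)

se = sqrt(p_pooled * (1 - p_pooled) * (1 / visitors_single + 1 / visitors_double))
z = (p1 - p2) / se
p_value = 2 * (1 - norm.cdf(abs(z)))   # two-sided p-value against the null

print(f"single: {p1:.1%}, double: {p2:.1%}, z = {z:.2f}, p = {p_value:.3f}")
```

A small p-value is evidence against the null hypothesis of "no difference"; a large one means the data cannot distinguish the two forms at the sample size collected.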

While the process of subjecting data to statistical analysis intimidates many designers and researchers (recalling those school memories again), remember that the hardest and most important part is working with a good testable hypothesis. It takes practice to convert fuzzy business questions into testable hypotheses. Once you’ve got that down, the rest is mechanics that we can help with.

How to create a perfect design hypothesis


A design hypothesis is a cornerstone of the UX and UI design process. It guides the entire process, defines research needs, and heavily influences the final outcome.


Doing any design work without a well-defined hypothesis is like driving a car without headlights. Although still possible, it forces you to go slower and dramatically increases the chances of unpleasant pitfalls.

The importance of a hypothesis in the design process


There are three main reasons why no discovery or design process should start without a well-defined and framed hypothesis. A good design hypothesis helps us:

  • Guide the research
  • Nail the solutions
  • Maximize learnings and enable iterative design

Benefits of Hypotheses

A design hypothesis guides research

A good hypothesis not only states what we want to achieve but also the final objective and our current beliefs. It allows designers to assess how much actual evidence there is to support the hypothesis and focus their research and discovery efforts on areas they are least confident about.

Research for the sake of research brings waste. Research for the sake of validating specific hypotheses brings learnings.

A design hypothesis influences the design and solution

A design hypothesis gives much-needed context. It helps you:

  • Ideate right solutions
  • Focus on the proper UX
  • Polish UI details

The more detailed and robust the design hypothesis, the more context you have to help you make the best design decisions.

A design hypothesis maximizes learnings and enables iterative design

If you design new features blindly, it’s hard to truly learn from the launch. Some metrics might go up. Others might go down, so what?

With a well-defined design hypothesis, you can not only validate whether the design itself works but also better understand why and how to improve it in the future. This helps you iterate on your learnings.

Components of a good design hypothesis

I am not a fan of templatizing what a solid design hypothesis should look like. There are various ways to approach it, and you should choose whatever works best for you. However, there are three essential elements you should include to ensure you get all the benefits of using design hypotheses mentioned earlier:

  • Design change
  • The objective
  • Underlying assumptions

Elements of Good Design Hypothesis

Design change for your hypothesis

The fundamental part is the definition of what you are trying to do. If you are working on shortening the onboarding process, you might simply put “[…] we’d like to shorten the onboarding process […].”

The goal here is to give context to a wider audience and make it easy to quickly reference what the design hypothesis concerns. Don’t fret too much about this part; simply boil the problem down to its essentials. What is frustrating your users?

The objective of your hypothesis

The objective is the “why” behind the change. What exactly are you trying to achieve with the planned design change? The objective serves a few purposes.


First, it’s a great sanity check. You’d be surprised how many designers propose various ideas, changes, and improvements without a clear goal. Changing the design just for the sake of changing the design is a no-no.

It also helps you step back and see if the change you are considering is the best approach. For instance, if you are considering shortening the onboarding to increase the percentage of users completing it, are there any other design changes you can think of to achieve the same goal? Maybe instead of shortening the onboarding, there’s a bigger opportunity in simply adjusting the copy? Defining clear objectives invites conversations about whether you focus on the right things.

Additionally, a clearly defined objective gives you a measure of success to evaluate the effectiveness of your solution. If you believed you could boost the completion rate by 40 percent, but achieved only a 10 percent lift, then either the hypothesis was flawed (good learning point for the future), or there’s still room for improvements.

Last but not least, a clear objective is essential for the next step: mapping underlying assumptions.

Mapping underlying assumptions in your hypothesis

Now that you know what you plan to do and which goal you are trying to achieve, it’s time for the most critical question.

Why do you believe the proposed design change will achieve the desired objective? Whether it’s because you heard some interesting insights during user interviews or spotted patterns in users’ behavioral data, note it down.

Proposed Design Change

Even if you don’t have any strong justification and base your hypothesis on pure guesses (we all do that sometimes!), clearly name these beliefs. Listing out all your assumptions will help you:

  • Focus your discovery efforts on validating these assumptions to avoid late disappointments
  • Better analyze results post-launch to maximize your learnings

You’ll see exactly how in the examples of good design hypotheses below.

Examples of good design hypotheses

Let’s put it all into practice and see what a good design hypothesis might look like.

I’ll use two examples:

  • A simple design hypothesis
  • A robust design hypothesis

You should still formulate a design hypothesis if you are working on minor changes, such as changing the copy on buttons. But there’s also no point in spending hours formulating a perfect hypothesis for a fifteen-minute test. In these cases, I’d just use a simple one-sentence hypothesis.

Yet, suppose you are working on an extensive and critical initiative, such as redesigning the whole conversion funnel. In that case, you might want to put more effort into a more robust and detailed design hypothesis to guide your entire process.

A simple example of a design hypothesis could be:

Moving the sign-up button to the top of the page will increase our conversion to registration by 10 percent, as most users don’t look at the bottom of the page.

Although it’s pretty straightforward, it still can help you in a few ways.

First of all, it helps prioritize experiments. If there is another small experiment in the backlog, but with the hypothesis that it’ll improve conversion to registration by 15 percent, it might influence the order of things you work on.

Impact assessments (where the 10 percent or 15 percent estimates come from) are another quite advanced topic, so I won’t cover them in detail, but in most cases you can ask your product manager and/or data analyst for help.

It also allows you to validate the hypothesis without even experimenting. If you guessed that people don’t look at the bottom of the page, you can check your analytics tools to see what the scroll rate is or check heatmaps.

Lastly, if your hypothesis fails (that is, the conversion rate doesn’t improve), you get valuable insights that can help you reassess other hypotheses based on the “most users don’t look at the bottom of the page” assumption.
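One practical note on those numeric targets: before running the experiment, a rough sample-size estimate tells you whether your traffic can even detect the lift you predicted. The sketch below uses the standard normal-approximation formula for two proportions; the baseline conversion rate and the lift are assumptions you would replace with your own numbers.

```python
# Rough sketch: users needed per variant to detect a lift from 20% to 22%
# conversion (a 10% relative lift) at 5% significance and 80% power.
# Baseline and lift are invented assumptions.
from math import ceil, sqrt
from scipy.stats import norm

p1, p2 = 0.20, 0.22
alpha, power = 0.05, 0.80

z_alpha = norm.ppf(1 - alpha / 2)
z_beta = norm.ppf(power)
p_bar = (p1 + p2) / 2

n = ((z_alpha * sqrt(2 * p_bar * (1 - p_bar)) +
      z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2) / (p1 - p2) ** 2
print(f"~{ceil(n)} users per variant")
```

If the answer is far beyond your weekly traffic, that is useful to know before the experiment rather than after it.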

Now let’s take a look at a slightly more robust hypothesis. An example could be:

Shortening the number of screens during onboarding by half will boost our free trial to subscription conversion by 20 percent because:

  • Most users don’t complete the whole onboarding flow
  • Shorter onboarding will increase the onboarding completion rate
  • Focusing on the most important features will increase their adoption
  • Which will lead to aha moments and better premium retention
  • Users will perceive our product as simpler and less complex

The most significant difference is our effort to map all relevant assumptions.

Listing out assumptions can help you test them out in isolation before committing to the initiative.

For example, if you believe most users don’t complete the onboarding flow, you can check self-serve tools or ask your PM for help to validate if that’s true. If the data shows only 10 percent of users finish the onboarding, the hypothesis is stronger and more likely to be successful. If, on the other hand, most users do complete the whole onboarding, the idea suddenly becomes less promising.
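If raw product events are available for self-serve analysis, that first assumption can be checked in a few lines. The sketch below is hypothetical: it assumes an event export with user_id and event columns, and the file name and event names are invented.

```python
# Hypothetical sketch: what share of users who start onboarding actually finish it?
# Assumes an export with 'user_id' and 'event' columns; names are invented.
import pandas as pd

events = pd.read_csv("events_export.csv")

started = set(events.loc[events["event"] == "onboarding_started", "user_id"])
finished = set(events.loc[events["event"] == "onboarding_completed", "user_id"])

completion_rate = len(started & finished) / len(started) if started else 0.0
print(f"Onboarding completion rate: {completion_rate:.0%}")
```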

The second advantage is the number of learnings you can get from the post-release analysis.

Say the change led to a 10 percent increase in conversion. Instead of blindly guessing why it didn’t meet expectations, you can see how each assumption turned out.

It might turn out that some users actually perceive the product as more complex (rather than less complex, as you assumed), as they have difficulty figuring out some functionalities that were skipped in the onboarding. Thus, they are less willing to convert.

Not only can it help you propose a second iteration of the experiment, but that learning will also help you greatly when working on other initiatives based on a similar assumption.

Closing thoughts

Ensuring everything you work on is based on a solid design hypothesis can greatly help you and your career.

It’ll guide your research and discovery in the right direction, enable better iterative design, maximize learning, and help you make better design decisions.

Some designers might think, “Hypotheses are the job of a product manager, not a designer.”

While that’s partly true, I believe designers should be proactive in working with hypotheses.

If there are none set, do it yourself for the sake of your own success. If all your designs succeed, or worse, flunk, no one will care who set or didn’t set the hypotheses behind these decisions. You’ll be judged, too.

If there’s a hypothesis set upfront, try to understand it, refine it, and challenge it if needed.

Most senior and desired product designers are not just pixel-pushers that do what they are being told to do, but they also play an active role in shaping the direction of the product as a whole. Becoming fluent in working with hypotheses is a significant step toward true seniority.


How to Test UX Design: UX Problem Discovery, Hypothesis Validation & User Testing

Feb 23, 2022

Elena S.

Oleksandra I.

Head of Product Management Office

Customer-driven product development implies designing and developing a product based on customer feedback. The match between actual product capabilities and end users’ expectations defines the success of any software project.

At RubyGarage, we create design mockups, wireframes, and prototypes to communicate assumptions regarding how your app should look and perform. User testing is then needed to validate these ideas before the actual product development starts. Why? Because early UX validation takes significantly less time and effort than rebuilding a ready-made product.

Learn how to test UX design ideas with this ultimate practical guide from RubyGarage UX experts.

What is UX validation?

In a broad sense, UX validation is the process of collecting evidence and learning around an idea through experiments and user testing so that you can make informed decisions about product design. You can validate a business idea, a user experience, or a specific problem with an existing product, or you can choose the most viable solution among all available options.

There are two approaches to UX validation:

  • Waterfall: Validation of the whole concept at the final release
  • Lean: Validation of individual hypotheses through multiple experiments

Waterfall vs Lean UX validation

Lean UX validation is preferable for startups due to lower risks of failure compared to the Waterfall approach and optimized budget distribution.

Why do you need UX validation?

The validation phase of UX research gives the product team an understanding of what the future (or existing) product should be like to satisfy the end user. It helps the team:

  • Understand customer value more profoundly. Get precise feedback on each feature and every blocker on the user’s way to conversion.
  • Align the product concept with user expectations. There’s no better way to form the correct product value for customers than listening to what real users want.
  • Find product–market fit. You need to be on the same page with your target audience to build a viable product that will bring desired business results.
  • Save budget and resources. Early validation per iteration helps reveal mismatches between features and customer expectations so you don’t spend time and money on the wrong features and UX flows. This way, you’ll invest resources with maximum efficiency.

There is a lot of theory about UX validation methods you can use to run the design validation cycle with your team. However, very few teams know how to test a UX design in practice, and it’s rather challenging to organize all activities correctly and effectively. The RubyGarage UX design team has a well-thought-out UX validation workflow that we’ve polished through years of working on our own and clients’ products. Here is our practical step-by-step guide to running the UX validation process based on our in-house workflow.

Step 1: UX problem discovery

UX problem discovery involves researching the problem space. During this step, the team identifies the problems to be explored and solved after collecting evidence and determining what to do next.

1. Organizational preparation

The problem discovery process can differ depending on the project stage (whether a new or existing product is under development), the team structure, management, etc. However, here are the main preparation steps you should arrange before starting the UX validation process:

  • Define the purpose and objectives for conducting UX validation. What product design flows do you need to validate and what UX problems do you need to test? We recommend using the Project Charter template to outline these items.

Project Charter

  • Define stakeholders to participate in the UX validation process. Identify each process participant and their areas of responsibility. We highly recommend elaborating stakeholders’ roles through the Stakeholders Register, Stakeholders Influence Matrix, or Stakeholders RACI Matrix.

The stakeholder register contains contact information for each stakeholder, such as their name, category, analysis, job title, and address. A stakeholder register may look approximately like this:

Stakeholder register

In the stakeholder influence matrix, you structure stakeholders by two essential criteria: power (influence) and interest. Power defines the impact of each stakeholder from a decision-making standpoint. The level of interest is how likely a stakeholder is to take action to exercise their power.

Stakeholder influence matrix

The stakeholder RACI matrix serves to identify who should be responsible, accountable, consulted, and informed:

Stakeholder RACI matrix

To structure the roles and responsibilities of stakeholders, assign RACI roles based on the following characteristics:

  • Responsible: Who will be doing the task?
  • Accountable: Who is responsible for making decisions? Who is going to approve it?
  • Consulted: Who can tell me about this task, activity, etc?
  • Informed: Who has to be kept informed about the progress? Whose work depends on this task, activity, etc?

As a result, you’ll get an outline of your UX validation process with all required participants defined and ready for the following activities.

2. Kick-off workshop meeting preparations

The kick-off workshop meeting is conducted to align the entire team around the challenge in order to form a UX validation plan with activities each team member (stakeholder) will perform.

Here is how to prepare for the kick-off workshop step by step:

  • Define the workshop goals and participants. Define the workshop goals and activities to achieve those goals. Use the Stakeholder Register to outline the required participants.
  • Prepare a kick-off workshop agenda to give participants a clear overview of the meeting activities, the duration of each step, and who will lead each task.
  • Share the agenda with all participants before the workshop via established communication channels (email, Slack, etc.) to ensure they receive it.
  • Create a workshop canvas to structure and organize all activities and prepare the workspace for documenting progress.

A workshop canvas is required so that the facilitator can effectively conduct the kick-off workshop meeting and gain the target results through the range of activities. At RubyGarage, we use the Miro whiteboard tool to map templates for all planned activities.

UX research template in Miro

3. Conducting a kick-off workshop meeting

Run the following activities during the kick-off workshop:

  • Reframe the initial problem. Analyze the problem from three perspectives (desirability, feasibility, and viability) to more deeply understand the issues related to the briefing. We recommend using the Abstraction laddering tool. Put the problem statement in the middle. Go with ’why?’ questions up the ladder to get to the root cause of each challenge. Go down the ladder with ’how?’ to explore the issue more precisely and reveal sub-problems.
  • Map existing knowledge and assumptions. Map out what you already know as a fact and what is still unknown in the context of discussed problems.
  • Plan activities and pick tools to reach your objectives. Think about the practical activities to do in each discovery phase and pick those that are just enough to achieve your goals. Create a shared sprint plan that clearly defines the time frame for each activity, the participants responsible for each item, and how they will collaborate across roles. Put all milestone meetings, sprint ceremonies, and deadlines into a plan. Further, you’ll need to revisit this plan to track progress and adjust further steps if necessary.
  • Fill in the after-meeting section in the agenda file to outline the meeting results and document all findings.
  • Prepare a UX validation plan and approve it with the client and stakeholders. The final deliverable of the kick-off workshop meeting is a UX validation plan that includes the list of activities for the UX design validation process; the tasks backlog for each activity; and assignees, participants, and estimates for each task.

We recommend that you prepare a UX validation plan in the form of a presentation with clear descriptions for each activity, including its purpose, goals, execution, and deliverables. By doing so, it will be clear to the client and other decision-makers what you will do and why.

Step 2: User behavior analysis

During the kick-off meeting, you defined the UX problems that should be researched and validated. It’s essential to collect more data on how users currently interact with the product’s interface to understand these problems from the user’s point of view and prepare for user interviews. This is called behavioral analysis. This type of user testing helps you find answers to the following questions:

  • Where do users click within the product screens?
  • Where do users get stuck?
  • What screen areas and interface elements cause problems with the user experience?
  • How long does it take users from first click to conversion?
  • How can you nudge users to take actions?
  • What types of UI elements are most effective?

Analyzing behavior includes the following steps:

  • Select analysis goals, KPIs, and metrics. Outline your plan for behavior analysis and pick relevant tools to accomplish your goals. Define the time frame and people responsible for conducting the research.
  • Define user journeys for analysis. Users accomplish their actions via multiple scenarios through the product interface. Clearly outline user journeys for analysis to track the current activity and roadblocks. Map these user journeys in your UX validation canvas.
  • Set up the required tools. Follow the selected tools’ setup and configuration guides to make them ready to track user activity.
  • Set unique identifiers for each user to distinguish the activity of a specific user.
  • Collect and analyze results. Based on recorded activity, define patterns in user behavior. Outline deviations in expected user behavior and define problem areas in the UX flow.
  • Develop UX problem hypotheses. Formulate UX problems and define UX flows that must be validated during user interviews.

The deliverable at this step is a behavior analysis report to share results with the client and outline your findings for the next steps.
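As an illustration of the "collect and analyze results" step, here is a minimal sketch of step-to-step drop-off analysis on a raw event export. It assumes a CSV with user_id and event columns; the funnel steps and file name are invented placeholders for whatever your analytics tool produces.

```python
# Minimal sketch of funnel drop-off analysis on a raw event export.
# Column names, file name, and funnel steps are invented placeholders.
import pandas as pd

events = pd.read_csv("user_events.csv")   # columns: user_id, event, timestamp
funnel = ["viewed_signup", "started_form", "submitted_form", "activated"]

users_per_step = [events.loc[events["event"] == step, "user_id"].nunique()
                  for step in funnel]

previous = None
for step, count in zip(funnel, users_per_step):
    note = "" if previous in (None, 0) else f" ({count / previous:.0%} of previous step)"
    print(f"{step:<16}{count}{note}")
    previous = count
```

The step with the sharpest drop is a natural candidate for a UX problem hypothesis to explore further in user interviews.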

Tools for behavior analysis


The right tool depends on the specifics of your product and your research subject. Different tools present user behavior statistics in different formats, from visualized heatmaps in Hotjar to statistical charts in Amplitude. Here is a short overview of the most popular user behavior testing tools:

  • Mixpanel : A powerful mobile app analytics tool that helps you collect and analyze data on specific actions and events set up in advance.
  • Hotjar : We prefer using Hotjar to identify UX-related problems causing user drop-offs. Hotjar provides heatmaps, session recordings, and surveys to understand customer behavior.
  • Heap : This tool doesn’t require upfront configuration. It automatically tracks each user’s activity and aggregates the collected data in various reports.
  • Amplitude : A comprehensive analytics tool to analyze user behavior with a view to multiple marketing metrics including retention, conversion rate, and lifetime value. 

Step 3: UX hypotheses validation

Over the previous steps, you got a set of UX hypotheses about what causes problems for your users, where they get stuck, what user flows to improve, and so on. You now need evidence to determine if these hypotheses are valid and worth further elaboration. There are different ways to test UX design hypotheses. At RubyGarage, we prefer in-depth interviews due to their informative value. Here is how our UX audit team approaches this step.

1. Select the right users for testing

Define who from the product’s customer base or target audience should participate in user interviews.

  • Define user personas. Create a typical description of each category of customer to be interviewed. Decide whether you need to interview people from various demographics, with different experience, etc. Document the defined personas.
  • Define sample quantity. Select how many users you need for each persona to receive enough representative information to validate hypotheses. It all depends on the type of your study, the variety of user patterns, and how people use your product. The Nielsen Norman Group recommends 40 participants for most quantitative studies to obtain statistically reliable results (see the sketch after this list for why sample size matters).
  • Recruit enough participants. You should find participants who match your target audience. You can get them from among your teammates and coworkers (internal recruiting), or you can look for suitable interviewees outside your company (external recruiting). If you engage participants internally, aim for employees who are not on the product team to get objective feedback. If you go for external recruiting, focus on crowded places like malls or coffee shops to look for participants who represent your target audience. To speed up the recruitment process and find relevant participants for UX validation interviews, use specialized platforms like User Interviews or Ethnio. Both of these platforms offer an extensive database of vetted research participants with the ability to filter by multiple parameters.
  • Define methods of user allocation. Interviews can be conducted in person or remotely via phone, video call, or chat. Interviewees should be motivated to participate and provide high-quality insights.
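To show why that sample-size recommendation matters, here is a rough sketch of how wide a 95% confidence interval on a task completion rate is at different sample sizes. It uses a simple normal approximation, and the 70% completion rate is an invented assumption.

```python
# Rough sketch: width of a 95% confidence interval on a completion rate at
# different sample sizes (simple normal approximation; 70% is an assumption).
from math import sqrt
from scipy.stats import norm

z = norm.ppf(0.975)

for n in (10, 40, 100):
    completed = round(0.70 * n)
    p = completed / n
    margin = z * sqrt(p * (1 - p) / n)
    print(f"n={n:>3}: {p:.0%} ± {margin:.0%}")
```

With ten participants the interval is wide enough that almost any hypothesis survives; around forty it starts to become informative for quantitative claims.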

When describing user personas, focus on their characteristics and experience using your product. Here is a user persona template for your reference:

User persona template

2. User interview design

Getting ready for user interviews takes some time. Having all visuals, questions, and forms prepared in advance ensures interviews will run smoothly and that participants will feel comfortable, guided, and engaged for effective interaction. When preparing for user interviews, focus on the following:

  • Collect user data about actual users’ experiences with your product
  • Collect insights into what users think about your product
  • Reveal real UX problems that users face
  • Define user pains, gains, wants, needs, and wishes related to the product’s UX solution

Keep these goals in mind as you prepare for interviews through the following steps:

  • Prepare visual materials. Get a clickable prototype, working system version, or other visuals to use for your interviews. Define the strict order of showing visual materials and create a separate document with links to all visuals in the proper order.
  • Prepare the interview structure. Determine the sequence of interview steps with the appropriate timing for each step.
  • Prepare the user interview script. Define the questions for the interview in the proper sequence.
  • Allocate the necessary number of users. Schedule the required number of interviewees and set up personal meetings depending on the selected allocation method (remote or in-house).
  • Prepare a form for gathering feedback. This form must be filled out to sum up the findings after each interview session.

3. Running user interviews

Once you’ve created all required documents and scheduled meetings (or virtual sessions), begin running interviews:

  • Conduct an interview with each participant. Go through the interview questions, following the set order and timing.
  • Fill in all results in a feedback gathering form. Within one hour after each interview session, fill in the interview feedback form with your observations and information collected from the interviewee.
  • Prepare a user interview report. Summarize your observations, insights, and data collected during user interview sessions in a report. Prepare analytical conclusions based on the collected data and share them with core stakeholders and the client. Define the most crucial insights gathered during the interview for the next UX validation activities.

One approach for processing user interview findings is Affinity mapping:

Affinity mapping approach to structure user interview responses

To follow this UX validation method:

  • Record all notes or observations in a document (this can be a Google Document or a Miro board with sticky notes).
  • Look for patterns in your observations and group them accordingly.
  • Create a group for each pattern or theme.
  • Give each group a name.

If you validate a range of hypotheses during user interviews, you can run a hypothesis-driven analysis combined with Affinity mapping, grouping the findings for and against specific hypotheses:

Hypothesis-driven user interview analysis

Step 4: Form a list of UX problem hypotheses

After the problem discovery and user testing phases, you can form a backlog of UX problem hypotheses. Based on this backlog, the UX design team should ideate solutions during the next steps of UX design validation. Formulate your hypothesis using the problem hypothesis framework:

Problem hypothesis framework

Each hypothesis should contain a proposed solution, the definition of success (a goal whose completion defines that the solution is successful), and evidence of your statement (facts and data collected during user behavior analysis and user testing).
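One lightweight way to keep that backlog consistent is a small structured record per hypothesis. The sketch below is illustrative only; the field names mirror the three parts described above rather than any prescribed framework, and the example values are invented.

```python
# Illustrative only: a lightweight record for items in a UX problem hypothesis
# backlog, mirroring the three parts named above. Field names and values are
# invented, not a prescribed framework.
from dataclasses import dataclass, field

@dataclass
class ProblemHypothesis:
    proposed_solution: str
    success_definition: str          # the measurable goal that validates the solution
    evidence: list[str] = field(default_factory=list)

backlog = [
    ProblemHypothesis(
        proposed_solution="Merge shipping and payment into one checkout step",
        success_definition="Checkout completion rate rises from 54% to 60%",
        evidence=[
            "Session recordings show drop-off between the two steps",
            "6 of 8 interviewees called the flow 'too long'",
        ],
    ),
]

for h in backlog:
    print(f"- {h.proposed_solution} -> {h.success_definition} ({len(h.evidence)} pieces of evidence)")
```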

Final thoughts

As a result of the first part of the UX design validation process, you get a defined list of UX problems and some hypotheses of how to solve them. Instead of vague ideas, you get grounded reasons for making improvements to your product’s UX design. The next step is generating potential solutions and choosing the most viable one for implementation. We uncover these steps in Practical Guide to UX Design Validation Part 2: Problem Definition and Solution Validation.

When do I need to run UX design validation?

UX design validation is essential when you develop the design for a new product or solve usability problems in an existing solution. Determining the most suitable UX design approach before implementation helps save time, budget, and team resources, and prevents product failure at the delivery and release stages.

How long does it take to run UX design validation?

Depending on the project complexity and scope of challenges, it may take up to a couple of weeks. Most of this time is spent organizing the required activities and analyzing the obtained results.

Where can I find participants for user testing if my product has no real users yet?

You should define your target audience and recruit participants who match your customer personas. We recommend looking for suitable interviewees outside your product team to get objective feedback on the research questions. The best option is to use specialized platforms like User Interviews or Ethnio.


Top UX Research Methods to Build a Successful Product

  • Formulate hypotheses as a foundation for this method. The hypotheses can be statements from stakeholders or users, a research outcome, or even a possible Future Trend.
  • Conduct research to question the hypothesis. Depending on the size of the target group, it makes sense to conduct Surveys or perform User Interviews. Remember not to ask leading questions.
  • Record the results of your research. Interpret the recordings to match them against your hypotheses.
  • Verify or disprove each hypothesis if possible. If you were not able to do so, the hypothesis might be phrased incorrectly. In either case, continue researching around your hypotheses to bring them into more detailed shape and to stay aware of future changes.


5 rules for creating a good research hypothesis


UserTesting


A good hypothesis is critical to creating a measurable study with successful outcomes. Without one, you’re stumbling through the fog and merely guessing which direction to travel in. It’s an especially critical step in  A/B and Multivariate  testing. 

Every user research study needs clear goals and objectives. Writing a good hypothesis stands in the middle of that process, which looks like this:

1: Problem: Think about the problem you’re trying to solve and what you know about it.

2: Question: Consider which questions you want to answer.

3: Hypothesis: Write your research hypothesis.

4: Goal: State one or two SMART goals for your project (specific, measurable, achievable, relevant, time-bound).

5: Objective: Draft a measurable objective that aligns directly with each goal.

In this article, we will focus on writing your hypothesis.

Five rules for a good hypothesis

1: A hypothesis is your best guess about what will happen. A good hypothesis says, "this change will result in this outcome." The "change" is a variation on an element—a label, color, text, etc. The "outcome" is the measure of success, the metric—click-through, conversion, etc.

2: Your hypothesis may turn out right or wrong; it isn’t about getting ‘what you want’, so learn from it either way. The initial hypothesis might be quite bold, such as “Variation B will result in 40% conversion over variation A”. If the conversion uptick is only 35%, then your hypothesis is false. But you can still learn from it.

3: It must be specific. Stated values are important. Be bold while not being ridiculous. Believe that what you suggest is indeed possible. When possible, be specific and assign numeric values to your predictions.

4: It must be measurable. The hypothesis must lead to concrete success metrics for the key measure. If the outcome is click-through, measure clicks; if it’s conversion, measure conversions, even if conversion happens on a subsequent page. If you measure both, state in the study design which is more important: click-through or conversion.

5: It should be repeatable. You should be able to run multiple experiments testing different variants, and to re-test and get the same results. If you find that conversion went down, back up to a prior version and try a different direction.
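
To make the specificity and measurability rules concrete, here is a minimal sketch of checking a numeric hypothesis against collected counts. The variant names, traffic numbers, and the 6%-to-8% prediction are hypothetical illustrations, not figures from this article:

```python
# Hypothetical sketch: a specific, measurable hypothesis checked against experiment counts.
# Illustrative hypothesis: "Variant B will raise CTA click-through from 6% to at least 8%."

def click_through_rate(clicks: int, visitors: int) -> float:
    """Click-through rate as a fraction of visitors."""
    return clicks / visitors if visitors else 0.0

target_ctr = 0.08  # the numeric prediction stated in the hypothesis

observed = {
    "A (control)": {"clicks": 300, "visitors": 5000},  # 6.0%
    "B":           {"clicks": 355, "visitors": 5000},  # 7.1%
}

for name, counts in observed.items():
    print(f"{name}: CTR = {click_through_rate(**counts):.1%}")

ctr_b = click_through_rate(**observed["B"])
# Because the hypothesis states a number, it is clearly falsifiable either way.
print("Hypothesis supported" if ctr_b >= target_ctr else "Hypothesis not supported")
```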

How to structure your hypothesis

Any good hypothesis has two key parts: the variant and the result.

First, state which variant will be affected. Only state one (A, B, C) or the recipe if multivariate. Be sure you’ve included screenshots of each version in your testing documentation for clarity, or detailed descriptions of flows or processes.

Next, state the expected outcome: “Variant B will result in a higher rate of course completion.” After the hypothesis, be sure to document the specific metric that will measure the result (in this case, completion). Leave no ambiguity in your metric.

Quick tips for creating a good hypothesis

  • Keep it short—just one clear sentence
  • State the variant you believe will “win” (include screenshots in your doc background)
  • State the metric that will define your winner (a download, purchase, sign-up…)
  • Avoid adding attitudinal metrics with words like “because” or “since”

Hypothesis examples

A good hypothesis has its birth in data, whether the data is from web analytics, user research, competitive analysis, or your gut.

It should make sense, be easy to read without ambiguity, and be based on reality rather than pie-in-the-sky thinking or simply shooting for a company KPI (key performance indicator) or OKR (objectives and key results). The resulting data is incremental, yielding small insights that build up over time.

The images below are for A, B, and C variants. The ‘control’ is the orange box, while green and grey are variants B and C (Always state a control, which is generally the current design in use).


Hypothesis: Variant B will result in the highest click rate.

Let's look at some examples of hypotheses in the real world. Read the examples below and ask yourself how the hypothesis could be improved.

Example 1:   Variant designs for a call-to-action button (CTA) on a web page

Background:  It has been noted through web analytics that…

  • Only 30% of page visitors scroll past the first screen.
  • 6% of all page visitors click on the CTA button.
  • Of those users that click, 12% purchase (note: conversion can often be shown with a specific monetary value of $xx per year). A quick calculation combining these baseline figures follows below.
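
To anchor a numeric prediction, it can help to combine the background figures above into a single baseline. The sketch below does this; only the percentages come from the bullets above, and the visitor total is a hypothetical illustration:

```python
# Combining the baseline figures above (visitor total is a hypothetical illustration).
scroll_rate   = 0.30   # visitors who scroll past the first screen
click_rate    = 0.06   # of ALL page visitors, those who click the CTA
purchase_rate = 0.12   # of the visitors who click, those who then purchase

visitors  = 100_000                     # illustrative monthly traffic
clicks    = visitors * click_rate       # 6,000 clicks
purchases = clicks * purchase_rate      # 720 purchases

print(f"Visitors who scroll past the first screen: {visitors * scroll_rate:,.0f}")
print(f"Baseline visitor-to-purchase rate: {click_rate * purchase_rate:.2%}")  # 0.72%
print(f"Per {visitors:,} visitors: {clicks:,.0f} clicks, {purchases:,.0f} purchases")
```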


Example 2:   Variant designs for text narrative in a call-out/ad on a web page

Background :  It has been noted through web analytics that:

  • Same metrics as in example 1 above, AND …
  • 60% of all users are technical, i.e. IT professionals.
  • Basic usability testing has shown that technical users don’t like “marketing speak”.

NOTE:   In your background, it’s best to link to actual studies that show your insights. 


Ultimately, creating a solid hypothesis is about following a process. By thinking about the problem, your prior data, your experience, plus the design options you’ve created, you already have everything you need to write a great hypothesis.


UX Hypothesis Testing Resources

Delivered August 14th, 2021. Contributors: Mariana D.

Key Takeaways

  • UX Research published an article about how UX objectives can be written in the form of research hypotheses. It includes hypothesis examples and issues found around them.
  • In a recent article, Jeff Gothelf, a product designer, explains the Hypothesis Prioritization Canvas, which helps select useful hypotheses.
  • UXPin provides an article that contains the steps of the Lean UX process, as well as how to write a good hypothesis and test it.

Introduction

1. UX Research: Objectives, Assumptions, and Hypothesis

  • The author talks about how UX objectives can be written in the form of research hypotheses . It includes hypothesis examples and issues found around them.

2. Getting Started with Statistics for UX

  • This article explains two main types of hypotheses : null and alternative. It also covers how these should be tested.

3. The Hypothesis Prioritization Canvas

  • Through this article, the author explains the Hypothesis Prioritization Canvas , which helps select useful hypotheses.

4. Hypotheses in User Research and Discovery

  • This write-up focuses on UX hypotheses and how they can help organize user research. The author includes an explanation of testable assumptions (hypotheses), the unit of measurement, and the research plan.

5. Framing Hypotheses in Problem Discovery Phase

  • In this article, an expert from SumUp shares steps for problem discovery, including observation and hypothesis design.  

6. Lean UX: Expert Tips to Maximize Efficiency in UX

  • This piece contains the steps of the Lean UX process , as well as how to write a good hypothesis and test it.

7. The 6 Steps That We Use For Hypothesis-Driven Development

  • Throughout this paper, an expert explains hypothesis-driven development and its process, which includes the development of hypotheses , testing, and learning.

8. A/B Testing: Optimizing The UX

  • This paper explains how to effectively conduct A/B testing on hypotheses.

9. How Does Statistical Hypothesis Testing Work?

  • The author thoroughly explains the framework of hypothesis testing , which includes the definition of the null hypothesis, data collection, p-value computing, and determination of statistical significance.

10. How to Create Product Design Hypotheses: A Step-by-Step Guide

  • This article provides a guide to creating product design hypotheses and includes five steps to do so. It also contains a shorter, one-minute guide.

Sources

  • UX Research: Objectives, Assumptions, and Hypothesis - Rick Dzekman
  • Getting Started with Statistics for UX | UX Booth
  • The Hypothesis Prioritization Canvas | Jeff Gothelf
  • Hypotheses in User Research and Discovery
  • Framing Hypotheses in Problem Discovery Phase
  • Lean UX: Expert Tips to Maximize Efficiency in UX
  • The 6 Steps That We Use For Hypothesis-Driven Development
  • A/B Testing: Optimizing The UX - Usability Geek
  • How Does Statistical Hypothesis Testing Work?
  • How to Create Rock-Solid Product Design Hypotheses: A Step-by-Step Guide

UX Research: Objectives, Assumptions, and Hypothesis

by Rick Dzekman

An often neglected step in UX research

Introduction

UX research should always be done for a clear purpose – otherwise you’re wasting both your time and the time of your participants. But many people who do UX research fail to properly articulate that purpose in their research objectives. A major issue is that the research objectives include assumptions that have not been properly defined.

When planning UX research you have some goal in mind:

  • For generative research it’s usually to find out something about users or customers that you previously did not know
  • For evaluative research it’s usually to identify any potential issues in a solution

As part of this goal you write down research objectives that help you achieve it. But many researchers (especially more junior ones) miss some key steps:

  • How will those research objectives help to reach that goal?
  • What assumptions have you made that are necessary for those objectives to reach that goal?
  • How does your research (questions, tasks, observations, etc.) help meet those objectives?
  • What kind of responses or observations do you need from your participants to meet those objectives?

Research objectives map to goals but that mapping requires assumptions. Each objective is broken down into sub-objectives which should lead to questions, tasks, or observations. The questions we ask in our research should map to some research objective and help reach the goal.

One approach people use is to write their objectives in the form of research hypotheses. There are a lot of problems with trying to validate a hypothesis with qualitative research, and sometimes even with quantitative research.

This article focuses largely on qualitative research: interviews, user tests, diary studies, ethnographic research, etc. With qualitative research in mind, let’s start by looking at a few examples of UX research hypotheses and how they may be problematic.

Research hypothesis

Example hypothesis: users want to be able to filter products by colour.

At first it may seem that there are a number of ways to test this hypothesis with qualitative research. For example we might:

  • Observe users shopping on sites with and without colour filters and see whether or not they use them
  • Ask users who are interested in our products about how they narrow down their choices
  • Run a diary study where participants document the ways they narrowed down their searches on various stores
  • Make a prototype with colour filters and see if participants use them unprompted

These approaches are all effective, but they do not and cannot prove or disprove our hypothesis. It’s not that the research methods are ineffective; it’s that the hypothesis itself is poorly expressed.

The first problem is that there are hidden assumptions made by this hypothesis. Presumably we would be doing this research to decide between a choice of possible filters we could implement. But there’s no obvious link between users wanting to filter by colour and a benefit from us implementing a colour filter. Users may say they want it but how will that actually benefit their experience?

The second problem with this hypothesis is that we’re asking a question about “users” in general. How many users would have to want colour filters before we could say that this hypothesis is true?

Example Hypothesis: Adding a colour filter would make it easier for users to find the right products

This is an obvious improvement to the first example but it still has problems. We could of course identify further assumptions but that will be true of pretty much any hypothesis. The problem again comes from speaking about users in general.

Perhaps if we add the ability to filter by colour it might make the possible filters crowded and make it more difficult for users who don’t need colour to find the filter that they do need. Perhaps there is a sample bias in our research participants that does not apply broadly to our user base.

It is difficult (though not impossible) to design research that could prove or disprove this hypothesis. Any such research would have to be quantitative in nature. And we would have to spend time mapping out what it means for something to be “easier” or what “the right products” are.

Example Hypothesis: Travelers book flights before they book their hotels

The problem with this hypothesis should now be obvious: what would it actually mean for this hypothesis to be proved or disproved? What portion of travelers would need to book their flights first for us to consider this true?

Example Hypothesis: Most users who come to our app know where and when they want to fly

This hypothesis is better because it talks about “most users” rather than users in general. “Most” would need to be better defined but at least this hypothesis is possible to prove or disprove.

We could address this hypothesis with quantitative research. If we found out that it was true we could focus our design around the primary use case or do further research about how to attract users at different stages of their journey.

However, there is no clear way to prove or disprove this hypothesis with qualitative research. If the app has a million users and 15 of 20 research participants tell you that this is true, would your findings generalise to the entire user base? The margin of error on that finding is 20-25%, meaning that the true result could be closer to 50% or even 100% depending on how unlucky you are with your sample.
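
To see where a margin of error of that size comes from, here is a minimal sketch using a 95% normal-approximation confidence interval for a proportion; small-sample methods such as the Wilson interval give somewhat different, generally wider bounds:

```python
# Rough check of the margin of error for 15 of 20 participants agreeing,
# using a 95% normal-approximation confidence interval for a proportion.
import math

agree, n = 15, 20
p_hat = agree / n                                    # 0.75 observed
margin = 1.96 * math.sqrt(p_hat * (1 - p_hat) / n)   # ~0.19 with this approximation

print(f"Observed: {p_hat:.0%}, 95% CI roughly {p_hat - margin:.0%} to {p_hat + margin:.0%}")
# With n = 20 the interval spans from the mid-50s to the mid-90s in percentage points,
# so the finding says little about how the full user base behaves.
```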

Example Hypothesis: Customers want their bank to help them build better savings habits

There are many things wrong with this hypothesis but we will focus on the hidden assumptions and the links to design decisions. Two big assumptions are that (1) it’s possible to find out what research participants want and (2) people’s wants should dictate what features or services to provide.

Research objectives

One of the biggest problems with using hypotheses is that they set the wrong expectations about what your research results are telling you. In Thinking, Fast and Slow, Daniel Kahneman points out that:

  • “extreme outcomes (both high and low) are more likely to be found in small than in large samples”
  • “the prominence of causal intuitions is a recurrent theme in this book because people are prone to apply causal thinking inappropriately, to situations that require statistical reasoning”
  • “when people believe a conclusion is true, they are also very likely to believe arguments that appear to support it, even when these arguments are unsound”

Using a research hypothesis primes us to think that we have found some fundamental truth about user behaviour from our qualitative research. This leads to overconfidence about what the research is saying, and to poor quality research that could have been skipped in favour of simply making an assumption. To once again quote Kahneman: “you do not believe that these results apply to you because they correspond to nothing in your subjective experience”.

We can fix these problems by instead putting our focus on research objectives. We pay attention to the reason that we are doing the research and work to understand if the results we get could help us with our objectives.

This does not get us off the hook however because we can still create poor research objectives.

Let’s look back at one of our prior hypothesis examples and try to find effective research objectives instead.

Example objectives: deciding on filters

In thinking about the colour filter we might imagine that this fits into a larger project where we are trying to decide what filters we should implement. This is decidedly different research to trying to decide what order to implement filters in or understand how they should work. In this case perhaps we have limited resources and just want to decide what to implement first.

A good approach would be quantitative research designed to produce some sort of ranking. But we should not dismiss qualitative research for this particular project – provided our assumptions are well defined.

Let’s consider this research objective: Understand how users might map their needs against the products that we offer . There are three key aspects to this objective:

  • “Understand” is a common form of research objective and is a way that qualitative research can discover things that we cannot find with quant. If we don’t yet understand some user attitude or behaviour we cannot quantify it. By focusing our objective on understanding we are looking at uncovering unknowns.
  • By using the word “might” we are not definitively stating that our research will reveal all of the ways that users think about their needs.
  • Our focus is on understanding the users’ mental models. Then we are not designing for what users say that they want and we aren’t even designing for existing behaviour. Instead we are designing for some underlying need.

The next step is to look at the assumptions that we are making. One assumption is that mental models are roughly the same between most people: even though different users may have different problems, for the most part people tend to think about solving them with the same mental machinery. As we do more research we might discover that this assumption is not true and that there are distinctly different kinds of behaviours. Perhaps we know what those are in advance and we can recruit our research participants in a way that covers those distinct behaviours.

Another assumption is that if we understand our users’ mental models, we will be able to design a solution that works for most people. There are of course more assumptions we could map, but this is a good start.

Now let’s look at another research objective: Understand why users choose particular filters . Again we are looking to understand something that we did not know before.

Perhaps we have some prior research that tells us what the biggest pain points are that our products solve. If we have an understanding of why certain filters are used we can think about how those motivations fit in with our existing knowledge.

Mapping objectives to our research plan

Our actual research will involve some form of asking questions and/or making observations. It’s important that we don’t simply forget about our research objectives and start writing questions. This leads to completing research and realising that you haven’t captured anything about some specific objective.

An important step is to explicitly write down all the assumptions that we are making in our research and to update those assumptions as we write our questions or instructions. These assumptions will help us frame our research plan and make sure that we are actually learning the things that we think we are learning. Consider even high level assumptions such as: a solution we design with these insights will lead to a better experience, or that a better experience is necessarily better for the user.

Once we have our main assumptions defined the next step is to break our research objective down further.

Breaking down our objectives

The best way to consider this breakdown is to think about what things we could learn that would contribute to meeting our research objective. Let’s consider one of the previous examples: Understand how users might map their needs against the products that we offer

We may have an assumption that users do in fact have some mental representation of their needs that aligns with the products they might purchase. An aspect of this research objective is to understand whether or not this is true. So two sub-objectives may be to (1) understand why users actually buy these sorts of products (if at all), and (2) understand how users go about choosing which product to buy.

Next we might want to understand what our users’ needs actually are or, if we already have research about this, which particular needs apply to our research participants and why.

And finally we would want to understand what factors go into addressing a particular need. We may leave this open ended or even show participants attributes of the products and ask which ones address those needs and why.

Once we have a list of sub-objectives we could continue to drill down until we feel we’ve exhausted all the nuances. If we’re happy with our objectives the next step is to think about what responses (or observations) we would need in order to answer those objectives.

It’s still important that we ask open ended questions and see what our participants say unprompted. But we also don’t want our research to be so open that we never actually make any progress on our research objectives.

Reviewing our objectives and pilot studies

At the end it’s important to review every task, question, scenario, etc. and see which research objectives are being addressed. This is vital to make sure that your planning is worthwhile and that you haven’t missed anything.

If there’s time it’s also useful to run a pilot study and analyse the responses to see if they help to address your objectives.

Plan accordingly

It should be easy to see why research hypotheses are not suitable for most qualitative research. While it is possible to create suitable hypotheses, doing so is more often than not going to lead to poor quality research. This is because hypotheses create the impression that qualitative research can find things that generalise to the entire user base. In general this is not true for the sample sizes typically used for qualitative research, and it is also generally not the reason that we do qualitative research in the first place.

Instead we should focus on producing effective research objectives and making sure every part of our research plan maps to a suitable objective.


User Research – The Importance of Hypotheses

It is easy to be tempted to look at the objective of your user research and pump out a solution that fits your best idea of how to achieve those objectives. Experienced professionals can be quite good at that, but then again they can also be very bad at it.

It is better to take your objectives and generate some hypothetical situations and then test those hypotheses with your users before turning them into concrete action. This gives you (and hopefully your clients) more confidence in your ideas or it highlights the need for changing those hypotheses because they don’t work in reality.

Let’s say that your objective is to create a network where people can access short (say a chapter) parts of a full text before they decide to buy the text or not. (Rather like Amazon does).


You can create some simple hypotheses around this objective in a few minutes of brainstorming.

User-Attitude

We think that people would like to share their favourite clips with others on Facebook and Twitter.

User-Behaviour

We think that people will only share their favourite authors and books. They won’t share things that aren’t important to them.

User-Social Context

We think that people will be more likely to share their favourite authors and books if they are already popular with other users.

Why does this matter?

One of the things about design projects is that when you have a group of intelligent, able, and enthusiastic developers, stakeholders, etc., they all bring their own biases and understanding to the table when determining the objectives for a project. Those objectives may be completely sound, but the only way to know this is to test those ideas with your users.


You cannot force a user to meet your objectives. You have to shape your objectives to what a user wants/needs to do with your product.

What happens to our product if our users don’t want to share their reading material with others? What if they feel that Facebook, Twitter, etc. are platforms where they want to share images and videos but not large amounts of text?


If you generate hypotheses for your user-research; you can test them at the relevant stage of research. The benefits include:

  • Articulating a hypothesis makes it easy for your team to be sure that you’re testing the right thing.
  • Articulating a hypothesis often guides us to a quick solution as to how to test that hypothesis.
  • It is easy to communicate the results of your research against these hypotheses. For example:
  • We thought people would want to share their favourite authors on social networks and they did.
  • We believed that the popularity of an author would relate to their “sharability” but we found that most readers wanted to emphasize their own unique taste and are more likely to share obscure but moving works than those already in the public eye.




Hypothesis statement

  • Introduction to Hypothesis statement
  • Essential characteristics

Introduction to hypothesis statements


Brainstorming solutions is similar to making a hypothesis or an educated guess about how to solve the problem.

In UX design, we write down possible solutions to the problem as hypothesis statements. A good hypothesis statement requires more effort than just a guess. In particular, your hypothesis statement may start with a question that can be further explored through background research.

How do you write hypothesis statements? Unlike problem statements, there's no standard formula for writing hypothesis statements. For starters, let's try what's called an if-then statement.

It looks like this: If (name an action), then (name an outcome). For example: “If we simplify the sign-up form, then more new users will complete registration.”

Hypothesis statements don't have a standard formula. Instead of an if-then statement, you can formulate this hypothesis statement in a more flexible way.

Essential characteristics of a hypothesis statement

To formulate a promising hypothesis, ask yourself the following questions:

Is the language clear and purposeful?

What is the relationship between your hypothesis and your research topic?

Is your hypothesis testable? If so, how?

What possible explanations would you like to explore?

You may need to come up with more than one hypothesis for a problem. That's okay! There will always be multiple solutions to your users' problems. Your job is to use your creativity and problem-solving skills to decide which solutions are best for each user you are designing for.



What if we found ourselves building something that nobody wanted? In that case, what did it matter if we did it on time and on budget? —Eric Ries

Lean User Experience (Lean UX) is a team-based approach to building better products by focusing less on the theoretically ideal design and more on iterative learning, overall user experience, and customer outcomes.

Lean UX design extends the traditional UX role beyond merely executing design elements and anticipating how users might interact with a system. Instead, it encourages a far more comprehensive view of why a Feature exists, the functionality required to implement it, and the benefits it delivers. By getting immediate feedback to understand if the system will meet the fundamental business objectives, Lean UX provides a closed-loop method for defining and measuring value.

Generally, UX represents a user’s perceptions of a system—ease of use, utility, and the user interface’s (UI) effectiveness. UX design focuses on building systems that demonstrate a deep understanding of end users. It considers users’ needs and wants while making allowances for their context and limitations.

When using Agile methods, a common problem is how best to incorporate UX design into a rapid Iteration cycle, resulting in a full-stack implementation of the new functionality. When teams attempt to resolve complex and seemingly subjective user interactions while simultaneously trying to develop incremental deliverables, they can often churn through many designs, creating frustration with Agile.

Fortunately, the Lean UX movement addresses this using Agile development with Lean Startup implementation approaches. The mindset, principles, and practices of SAFe reflect this thinking. This process often begins with the SAFe Lean Startup Cycle described in the Epic article. It continues developing Features and Capabilities using the Lean UX process described here.

As a result, Agile Teams and Agile Release Trains (ARTs) can leverage a common strategy to generate rapid development, fast feedback, and a holistic user experience that delights users.

The Lean UX Process

In Lean UX , Gothelf and Seiden [2] describe a model we have adapted to SAFe, as Figure 1 illustrates.

Benefit Hypothesis

The Lean UX approach starts with a benefit hypothesis: Agile teams and UX designers accept that the right answer is unknowable up-front. Instead, teams apply Agile methods to avoid Big Design Up-front (BDUF), focusing on creating a hypothesis about the feature’s expected business result. Then they implement and test that hypothesis incrementally.

The SAFe Feature and Benefits matrix (FAB) can be used to describe the hypothesis as it moves through the Continuous Exploration aspect of the CDP:

  • Feature  – A short phrase giving a name and context
  • Benefit hypothesis – The proposed measurable benefit to the end-user or business

Note : Design Thinking practices suggest changing the order of the feature benefit hypothesis elements to identify the customer benefits first and then determine what features might satisfy their needs.

Outcomes are measured in the Release on Demand aspect of the CDP. They are best done using leading indicators (see Innovation Accounting in [1]) to evaluate how well the new feature meets its benefits hypothesis. For example, “We believe the administrator can add a new user in half the time it took before.”
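
As a rough sketch of how such a leading indicator might be checked from task-duration telemetry (the timing samples below are hypothetical placeholders, not SAFe guidance):

```python
# Hypothetical sketch: checking "add a new user in half the time it took before"
# from task-duration telemetry (all timing samples are placeholders).
from statistics import median

baseline_seconds = [95, 110, 102, 88, 120, 99, 105]   # before the feature shipped
current_seconds  = [48, 55, 60, 42, 51, 58, 47]        # after the feature shipped

baseline, current = median(baseline_seconds), median(current_seconds)
target = baseline / 2   # "half the time it took before"

print(f"Median before: {baseline:.0f}s, after: {current:.0f}s, target: {target:.0f}s")
print("Benefit hypothesis supported" if current <= target else "Benefit hypothesis not yet supported")
```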

Collaborative Design

Traditionally, UX design has been an area of high specialization. People with a talent for design, a feel for user interaction, and specialty training are often entirely in charge of the design process. The goal was ‘pixel perfect’ early designs, done before the implementation. But this work was often done in silos by specialists that may or may not know the most about the system and its context. Success was measured by how well the implemented user interface complied with the initial UX design. In Lean UX, this changes dramatically:

“Lean UX has no time for heroes. The entire concept of design as a hypothesis immediately dethrones notions of heroism; as a designer, you must expect that many of your ideas will fail in testing. Heroes don’t admit failure. But Lean UX designers embrace it as part of the process.” [2]

Continuous exploration takes the hypothesis and facilitates an ongoing and collaborative process that solicits input from a diverse group of stakeholders – Architects, Customers, Business Owners, Product Owners, and Agile Teams. This group further refines the problem and creates artifacts that clearly express the emerging understanding, including personas, empathy maps, and customer experience maps (see Design Thinking).

Agile teams are empowered to design and implement collaborative UX, significantly improving business outcomes and time-to-market. Moreover, another important goal is to deliver a consistent user experience across various system elements or channels (for example, mobile, web, kiosk) or even different products from the same company. Enabling this consistency requires balancing decentralized control with centralizing certain reusable design assets (following Principle #9 – Decentralize decision-making ). For example, creating a design system [2] with a set of standards that contains whatever UI elements ARTs and Value Streams find helpful, including:

  • Editorial rules, style guides, voice and tone guidelines, naming conventions, standard terms, and abbreviations
  • Branding and corporate identity kits, color palettes, usage guidelines for copyrights, logos, trademarks, and other attributions
  • UI asset libraries, which include icons and other images, templates, standard layouts, and grids
  • UI widgets, which include the design of buttons and other similar elements

These centralized assets are integral to the Architectural Runway , which supports decentralized control while recognizing that some design elements must be centralized. After all, these decisions are infrequent , long-lasting, and provide significant economies of scale across both the user base and enterprise applications, as described in Principle #9.

Building a Minimum Marketable Feature

With a hypothesis and design, teams can implement the functionality as a Minimal Marketable Feature (MMF). The MMF should be the smallest amount of functionality that must be provided for a customer to recognize any value and for the teams to learn whether the benefit hypothesis is valid.

By creating an MMF, the ARTs apply SAFe Principle #4 – Build incrementally with a fast, integrated learning cycle to implement and evaluate the feature. Teams may preserve options with Set-Based Design  as they define the initial MMF.

In many cases, extremely lightweight and not even functional designs can help validate user requirements (ex., paper prototypes, low-fidelity mockups, simulations, API stubs). In other cases, a vertical thread (full stack) of just a portion of an MMF may be necessary to test the architecture and get fast feedback at a System Demo . However, in some instances, the functionality may need to proceed to deployment and release, where application instrumentation and telemetry provide feedback data from production users.

MMFs are evaluated as part of deploying and releasing (where necessary). There are various ways to determine if the feature delivers the proper outcomes. These include:

  • Observation – Wherever possible, directly observe the actual usage of the system. It’s an opportunity to understand the user’s context and behaviors.
  • User surveys – A simple end-user questionnaire can obtain fast feedback when direct observation isn’t possible.
  • Usage analytics – Lean-Agile teams build analytics into their applications, which helps validate initial use and provides the application telemetry needed to support a Continuous Delivery model. Application telemetry offers constant operational and user feedback from the deployed system.
  • A/B testing – This is a form of statistical hypothesis comparing two samples, which acknowledges that user preferences are unknowable in advance. Recognizing this is liberating, eliminating endless arguments between designers and developers—who likely won’t use the system. Teams follow Principle #3 – Assume variability; preserve options to keep design options open as long as possible. And wherever it’s practical and economically feasible, they should implement multiple alternatives for critical user activities. Then they can test those other options with mockups, prototypes, or even full-stack implementations. In this latter case, differing versions may be deployed to multiple subsets of users, perhaps sequenced over time and measured via analytics.

In short, measurable results deliver the knowledge teams need to refactor, adjust, redesign—or even pivot to abandon a feature based solely on objective data and user feedback. Measurement creates a closed-loop Lean UX process that iterates toward a successful outcome, driven by evidence of whether a feature fulfills the hypothesis.
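
To make the A/B testing bullet above more concrete, here is a minimal sketch of the usual statistical machinery behind it: a two-proportion z-test comparing conversion between two variants. The counts are hypothetical; in practice they would come from the analytics or telemetry described above:

```python
# Minimal sketch: two-proportion z-test comparing conversion between two variants.
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the normal distribution.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_a, p_b, z, p_value

p_a, p_b, z, p = two_proportion_z(conv_a=480, n_a=10_000, conv_b=560, n_b=10_000)
print(f"A: {p_a:.2%}  B: {p_b:.2%}  z = {z:.2f}  p = {p:.3f}")
# A small p-value (e.g. < 0.05) is evidence that the difference is unlikely to be
# random noise; otherwise the benefit hypothesis is not yet supported.
```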

Implementing Lean UX in SAFe

Lean UX differs from the traditional, centralized approach to user experience design. The primary difference is how the hypothesis-driven aspects are evaluated by implementing the code, instrumenting where applicable, and gaining user feedback in a staging or production environment. Implementing new designs is primarily the responsibility of the Agile Teams, working in conjunction with Lean UX experts.

Of course, this shift, like many others with Lean-Agile development, can cause significant changes to the way teams and functions are organized, enabling a continuous flow of value. For more on coordinating and implementing Lean UX —specifically how to integrate Lean UX in the PI cycle—read the advanced topic article Lean UX and the PI Lifecycle .

Last update: 21 February 2023


Understanding Usability Testing Methods For Effective UI/UX Design

  • Written by John Terra
  • Updated on May 20, 2024

Usability Testing Methods

Website and application developers can pour all their time, talent, and resources into creating the perfect product that functions smoothly and does everything it’s designed to. Still, if users struggle to interact with it or have a bad experience, those efforts are doomed to failure. That’s why we need usability testing methods.

This article explores UI/UX testing methods, including website usability testing. We will define the terms, detail the various types, outline testing benefits, and explain when the testing should be performed. We’ll also share a comprehensive online UI/UX program that can help aspiring designers boost their careers.

What is Usability Testing?

It is a branch of user research that evaluates user experiences when interacting with an application or website. This testing method helps designers and product teams to assess how intuitive and easy-to-use products are.

It reveals issues that designers and developers may not have noticed by having real users complete a series of usability tasks with the product while observers note the users’ behavior and preferences. The paths taken to complete the tasks, the results, and the success rate are then analyzed to highlight potential issues and areas for improvement.

The ultimate goal is to create a product that remedies the user’s problems, helping them achieve their objectives while delivering a positive experience.

Also Read: How to Design a User-Friendly Interface?

What ISN’T Usability Testing?

Now that we’ve shown what usability testing is, let’s show what it isn’t. People often confuse usability testing with user testing and user research. Hey, they all sound the same, right?

However, user research describes collecting insights and feedback from product users and then using this data to guide and inform product decisions. Usability testing, on the other hand, is a specific type of user research conducted to assess the usability of a product or design. So yes, it can be considered a sub-group in the user research family.

User testing is an umbrella term that can describe user research as a whole or the specific process of testing ideas and products with real users. The latter adopts a quantitative approach to collecting user feedback, usually before usability testing. However, it doesn’t provide qualitative data on why users struggle to finish tasks.

The Key Benefits of Usability Testing

It brings many benefits to the table, including:

  • You can tailor products to your users. Even if you understand your product, users might have a different take. By talking to users directly and watching how they interact with and experience the product, you can better comprehend their needs and adjust the product to work for them. These changes will ultimately serve their needs and solve their issues more effectively.
  • It reduces developmental costs. Usability tests save time and money by avoiding costly development mistakes. For instance, if you discover users struggle to navigate a specific feature, you can fix it before launch. Changing a product before launch rather than after release is considerably cheaper.
  • It increases user satisfaction and brand reputation. It lets product teams identify potential issues and make necessary improvements before a release. This process can lead to a consistently better user experience, creating a loyal customer base and reflecting well on your overall brand reputation.
  • It increases accessibility to all. Accessible products are designed and developed to be enjoyed by as many people as possible, regardless of their visual, auditory, physical, or cognitive requirements. Of course, your product must comply with codified accessibility standards and regulations, but it will also benefit from prioritizing accessibility. When you use the usability testing process to include customers with diverse needs and abilities better, you promote and contribute to a more equitable digital marketplace and landscape.
  • It mitigates cognitive biases. Our minds love hastily making shortcuts to reach quicker decisions or inferences. Although this is just an effort to be efficient, it can lead to subconscious beliefs or assumptions, otherwise known as cognitive bias. Usability testing helps remedy biases such as the false-consensus effect by offering objective feedback from actual people, ensuring that product design decisions are based on actual user behavior instead of the assumptions and opinions of people who already work on the product and may hold very subjective views.

When Should You Perform Usability Testing?

You must continuously perform usability testing to ensure the product stays relevant and solves the customer’s most urgent issues throughout its lifecycle. Here’s a quick summary of when to conduct it:

  • Before you begin designing
  • Once you have created a wireframe or prototype
  • Before launching the product
  • At regular intervals after the product launch

Also Read: A Guide to Improving and Measuring User Experience

The Main Usability Testing Methods

Usability testing can be split into five categories, each offering two options. In many cases, usability testers can use more than one category simultaneously.

Qualitative vs. Quantitative

  • Qualitative. Qualitative testing emphasizes gathering in-depth insights and comprehending participants’ subjective experiences. It involves listening to and observing users while interacting with a service or product, identifying issues, and collecting detailed feedback.
  • Quantitative. On the other hand, quantitative testing involves gathering numerical data and analyzing measurable metrics to assess the product’s usability. The quantitative option gathers statistical information, like error rates, task completion time, and user satisfaction ratings; a brief sketch of computing such metrics follows this list.
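
Here is a small illustrative sketch of how such quantitative metrics might be computed from session records; the data and field names are hypothetical:

```python
# Illustrative sketch (hypothetical session records): task success rate,
# completion time for successful tasks, and errors per session.
from statistics import mean

sessions = [
    {"completed": True,  "seconds": 74,  "errors": 1},
    {"completed": True,  "seconds": 58,  "errors": 0},
    {"completed": False, "seconds": 140, "errors": 4},
    {"completed": True,  "seconds": 66,  "errors": 2},
]

success_rate = sum(s["completed"] for s in sessions) / len(sessions)
avg_time_success = mean(s["seconds"] for s in sessions if s["completed"])
errors_per_session = mean(s["errors"] for s in sessions)

print(f"Task success rate: {success_rate:.0%}")
print(f"Avg completion time (successful tasks): {avg_time_success:.0f}s")
print(f"Errors per session: {errors_per_session:.1f}")
```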

Explorative vs. Comparative

  • Explorative testing. Explorative testing uncovers insights and gathers feedback during the product’s early stages of development. It involves brainstorming sessions and open-ended discussions and collects participants’ thoughts, opinions, and perspectives.
  • Comparative. Comparative testing compares two or more versions of the interface, service, or product to determine which offers customers a better user experience. Participants are asked to evaluate different designs or complete assigned tasks, and their feedback and preferences are collected.

Moderated vs. Unmoderated

  • Moderated. As the name implies, moderated testing involves a moderator interacting with the participants, guiding them through tasks, and collecting qualitative data via questioning and observation. It can be performed in person or remotely.
  • Unmoderated. Unsurprisingly, unmoderated testing is performed without a moderator. Participants independently complete tasks and provide feedback using pre-designed surveys or tests.

Remote vs. In-Person

  • Remote. Remote testing occurs when the researchers and participants are in different locations. It can be moderated or unmoderated and conducted via online tools or platforms.
  • In-Person. In-person testing is conducted with participants in a physical setting, such as a usability lab or the user’s own environment.

Website vs. Mobile

  • Website. Website usability testing evaluates the usability of a website or web application and typically involves testing prototypes, newly launched websites, or digital product redesigns.
  • Mobile. As the name says, mobile usability testing is conducted on mobile devices. This testing evaluates the user experiences with a given mobile application or prototype. Mobile testing requires the user to install the app on their testing device and assess its usability, navigation, responsiveness, and overall mobile-specific interaction.

Usability Testing Methods

The following is a sample of specific usability testing methods broken down by their benefits, disadvantages, and when they should be run.

Guerilla Testing

This testing occurs casually and spontaneously. It typically involves user testers approaching people in coffee shops, public parks, or shopping malls.

  • Benefits. Low cost, fast feedback, minimally needed resources.
  • Disadvantages. It may not be as comprehensive as other testing methods.
  • When to run? When you want a quick, low-effort, and cheap way of getting a random sample of opinions.

Lab Testing

As the name says, these tests are conducted in a lab or controlled environment with equipment such as eye trackers, cameras, and testing software.

  • Benefits. Results in detailed analyses and precise data collection.
  • Disadvantages. Time-consuming, expensive, and may not capture the spirit of real-world usage scenarios.
  • When to run? When you’re looking for precision results in a controlled environment.

Card Sorting

Card sorting places concepts on virtual note cards and allows the participants to move the cards around into groups and categories. After sorting the cards, the users explain their logic in a moderated debriefing session.

  • Benefits. Shows how people (potential users) organize information.
  • Disadvantages. Limited information gained.
  • When to run? When you want feedback on layouts and navigational structure.
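
A common way to analyze the groupings collected in a card sort is to count how often each pair of cards lands in the same group across participants, producing a similarity (co-occurrence) matrix that informs navigation and categories. Below is a minimal sketch of that counting step; the card names and sorts are illustrative assumptions.

```python
# Minimal sketch: co-occurrence counts from open card-sort results.
# Each participant's sort is a list of groups; each group is a list of cards.
# The card names and sorts below are illustrative assumptions.
from collections import Counter
from itertools import combinations

sorts = [
    [["Pricing", "Plans"], ["Docs", "API Reference", "Tutorials"]],
    [["Pricing", "Plans", "Docs"], ["API Reference", "Tutorials"]],
    [["Pricing"], ["Plans"], ["Docs", "Tutorials", "API Reference"]],
]

pair_counts: Counter[tuple[str, str]] = Counter()
for participant_sort in sorts:
    for group in participant_sort:
        for a, b in combinations(sorted(group), 2):
            pair_counts[(a, b)] += 1

# Pairs grouped together most often suggest categories users expect.
for (a, b), n in pair_counts.most_common(5):
    print(f"{a} + {b}: grouped together by {n} of {len(sorts)} participants")
```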

Session Recording

This involves recording participants’ interactions with a system or product using screen-recording software or specialized usability testing tools. It measures things like mouse clicks, scrolling, and movement.

  • Benefits. Tracks how people interact with a site, pinpoints stumbling blocks, and measures CTA effectiveness.
  • Disadvantages. It may be costly and involve special tools and setup.
  • When to run? When you’re looking for possible issues with a website’s intended functionality, or want to see how users actually interact with your product (see the sketch below).
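
As an illustration of the kind of analysis session recordings enable, the sketch below scans simplified event logs for a call-to-action click and reports how many sessions reached it. The event format and element IDs are illustrative assumptions, not the export format of any particular recording tool.

```python
# Minimal sketch: measuring CTA effectiveness from simplified session-event logs.
# Each session is a list of (event_type, target) tuples; the events and element
# IDs below are illustrative assumptions, not a real tool's export format.

sessions = [
    [("page_view", "/pricing"), ("scroll", "75%"), ("click", "cta-signup")],
    [("page_view", "/pricing"), ("scroll", "40%")],
    [("page_view", "/pricing"), ("click", "nav-docs"), ("click", "cta-signup")],
]

reached_cta = sum(
    any(event == ("click", "cta-signup") for event in session)
    for session in sessions
)
print(f"CTA click-through: {reached_cta}/{len(sessions)} sessions "
      f"({reached_cta / len(sessions):.0%})")
```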

Phone Interviews

A moderator verbally guides participants through tasks on their computer and collects feedback, while the participant’s on-screen behavior is recorded remotely.

  • Benefits. Cost-effective, collects data from a wide geographic range, and gathers more data in a shorter period.
  • Disadvantages. Interviewees may struggle to follow instructions without visual guidance, and not everyone is willing to answer their phone.
  • When to run? When you want to gather test data from a large population sampling quickly.

Contextual Inquiry

This testing involves watching people in their natural contexts (e.g., home, workplace) as they interact with the product in ways they usually do. The researcher watches how the users perform their activities and asks them questions to comprehend why they acted as they did.

  • Benefits. Provides valuable insights into product context and identifies usability issues that other testing methods may otherwise overlook.
  • Disadvantages. Requires close collaboration with participants and risks disrupting the user’s typical daily routines.
  • When to run? When results need to reflect an organic scenario in the user’s real-world circumstances.


Before You Begin Usability Testing

Before initiating usability testing methods, the team should ask these questions:

  • What’s the goal?
  • What results does the team expect?
  • Who will conduct the testing?
  • Where will the team find the participants?
  • What usability testing software tools, if any, will be used?
  • How will the results be analyzed?
  • Which testing method will be used?

How to Conduct Usability Testing

When you’re ready to start, follow these simple steps.

  • Planning. In this initial stage, you define the testing’s goals and objectives. The test plan specifies the target audience, tasks to be performed, schedule, test environment, and needed resources. The testing scope should be clearly outlined, and any necessary specific test methods or tools, such as usability testing software, should be decided upon.
  • Recruitment. Assemble the testing team based on the requirements outlined in the previous phase’s test plan. The team typically comprises end-users representing the target audience and test engineers conducting the testing. Team members actively participate in all test sessions and supply valuable feedback designed to improve the product’s usability.
  • Test Execution. Now, we get to the actual testing! The test team executes the planned test cases, following the details outlined in the test plan from the first phase. The team sets up the test environment, and the users are guided through their tasks while the team observes and records their interactions. The team also notes any issues or difficulties encountered by the users.
  • Test Results. The data gathered during the test execution phase is now compiled and analyzed to identify problems, issues, and areas for improvement. This analysis categorizes and prioritizes the identified issues based on severity and how they impact the user experience. The test results will supply valuable insights into the product’s usability strengths and weaknesses.
  • Data Analysis. The collected data is now analyzed in detail to extract meaningful, actionable information. This involves reviewing the recorded survey responses, user interactions, and any qualitative or quantitative data collected during the testing phase. Analysis helps uncover trends, patterns, and specific usability problems to address, often with the help of usability testing software (a small prioritization sketch follows this list).
  • Reporting. The usability test report documents the findings and recommendations from the data analysis. It includes an analysis or summary of the test objectives, methodology used, identified issues, and suggested improvements. The report may also include video recordings, screenshots, or other supporting evidence to showcase the identified issues. The report is then circulated among relevant stakeholders, such as designers, developers, and project managers, to guide further usability improvements.
  • Repeat as needed. Repeat the entire process until the product or service passes with flying colors.
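
As a concrete illustration of the Data Analysis step above, the sketch below ranks usability issues by a simple severity-times-frequency score so the most impactful problems surface first. The severity rubric and the issues themselves are illustrative assumptions; teams often use comparable prioritization schemes.

```python
# Minimal sketch: prioritizing usability issues by severity x frequency.
# Severity: 1 = cosmetic ... 4 = blocker; frequency = share of participants
# affected. Both the rubric and the issues are illustrative assumptions.

issues = [
    {"issue": "Users miss the 'Save' button",       "severity": 3, "frequency": 0.6},
    {"issue": "Error message uses internal jargon", "severity": 2, "frequency": 0.8},
    {"issue": "Checkout crashes on invalid ZIP",    "severity": 4, "frequency": 0.2},
]

for item in sorted(issues, key=lambda i: i["severity"] * i["frequency"], reverse=True):
    score = item["severity"] * item["frequency"]
    print(f"{score:.1f}  {item['issue']}")
```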

Do You Want to Acquire Key UI/UX Design Skills?

If you want to gain valuable user interface and experience design skills, consider this intensive UI/UX bootcamp. This 20-week professional training offers live online classes, capstone projects, a designer toolkit, and Dribbble portfolio creation as you learn how to effectively use design tools like Balsamiq, Figma, InVision, Mural, and Sketch.

Glassdoor.com shows that UI/UX designers earn an average yearly salary of $88,246. So, if you want to expand your skill set or try a new career, check out this highly effective UI/UX design bootcamp and gain the necessary skills to build top-notch products for today’s digital-savvy market.


A/B Testing in UX Design: When and Why It’s Worth It


A/B testing (split testing) is a quantitative method of finding the best performing version of a CTA, copy, image, or any other variable. To start A/B testing, prepare two or more versions of a single element, randomly split your users between them, and see which version performs better. Popular tools for A/B testing include Unbounce, VWO, and Optimizely.
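
Splitting users between variants is typically done deterministically, so a returning visitor always sees the same version. Dedicated A/B testing tools handle this for you; the sketch below shows the underlying idea under the assumption that a stable user ID is available (the experiment name and IDs are made up for illustration).

```python
# Minimal sketch: deterministic 50/50 variant assignment by hashing a user ID,
# so the same user always sees the same variant. The experiment name and user
# IDs are illustrative assumptions; A/B testing tools provide this out of the box.
import hashlib

def assign_variant(user_id: str, experiment: str = "cta-copy-test") -> str:
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100          # stable bucket in 0-99
    return "A" if bucket < 50 else "B"      # A = control, B = variant

for uid in ["user-001", "user-002", "user-003"]:
    print(uid, "->", assign_variant(uid))
```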

Designing a digital product brings about numerous dilemmas: which font reads best? What call-to-action copy converts more? The multitude of options to choose from can give designers a headache. Sure, following best practices and gut feeling is a good place to start, but it won’t take you far in a business setting, and bad design choices can negatively impact your revenue stream. So, what should you do? Base all your UX decisions on solid data. Where do you get that data? From A/B testing. Continue reading to learn all about it.

What is A/B Testing?

An A/B test – also called split testing – is a simple experiment where users are shown two variants of a design (e.g. background image on a webpage, font size or CTA copy on a homepage, etc.) at random to find out which one performs better. The variant that makes the most users take the desired action (e.g. click the CTA button) is the winner and should be implemented, while the alternative should be discarded.

What can be tested using this method? Well, pretty much everything – from text or phrasing on buttons, through different sizes, colors, or shapes of buttons, to button or CTA placement on the page.


You can also test using different images in the project, compare using photography vs. illustration, test different tones of voice for the copy, as well as different form lengths, labels and placement, and so on.

Why is A/B Testing Important in UX?

As mentioned, A/B testing allows you to base your product design decisions on data rather than on opinion or guesswork. It both democratizes design and lets your users participate in your decision-making. A/B testing can help you learn how small changes influence user behavior, decide which design approach to implement, and confirm that a new design is going in the right direction. Using A/B testing for different elements of your digital product will also improve the overall user experience over time and increase your conversion rate.

Importantly, a good UX will make users stay on a website or in the app – or come back to it – while a bad UX will do the opposite. So, running A/B tests is a great way of conducting UX research while your product is live and deciding what does and doesn’t work for your target users. It’s also an efficient approach, as it saves the time and resources otherwise spent on expensive, controlled-environment testing before bringing a product to market.

How to Conduct A/B Testing Just Right

You need to base your A/B test on an educated guess – try to figure out your target users’ pain point, i.e. what could be preventing them from taking a desired action. To conduct an A/B test you need to define a goal (e.g. I want my “Request a demo” page to generate more leads ), formulate a solid hypothesis (e.g. I think that changing the CTA copy from “Contact us” to “Book demo” will engage our website visitors more and increase the number of leads ), and prepare two versions of a single variable (e.g. Book demo and Contact us ). The changed version is called the variant (test B), while the control (test A) is the original version you compare the variant against.


Create the two versions of a single variable and make your prototype ready to share for testing. Then monitor it to make sure the test is running correctly. For high-traffic websites, test the smallest change possible; for low-traffic websites you can go bigger and test e.g. two completely different versions of a web design.
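
Traffic matters because smaller changes usually produce smaller lifts, which require larger samples to detect. The sketch below estimates the sample size needed per variant using the standard normal-approximation formula at 95% confidence and 80% power; the baseline conversion rate and minimum detectable lift are illustrative assumptions.

```python
# Minimal sketch: rough sample size per variant for an A/B test, using the
# normal approximation (two-sided alpha = 0.05, power = 0.80).
# Baseline conversion rate and minimum detectable lift are illustrative.
from math import ceil, sqrt
from statistics import NormalDist

def sample_size_per_variant(baseline: float, lift: float,
                            alpha: float = 0.05, power: float = 0.80) -> int:
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    p1, p2 = baseline, baseline + lift
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / lift ** 2)

# e.g. detect a 1-percentage-point lift over a 5% baseline conversion rate
print(sample_size_per_variant(baseline=0.05, lift=0.01))  # roughly 8,000+ users per variant
```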

Your test should run long enough to provide you with meaningful, statistically significant results. The bigger the sample size and the more information you collect, the more reliable your test results will be. Remember to only analyze the results of a completed A/B test and only implement the clear winner into your digital product. And what should you do with a “no difference” result? Well, be glad about it – it suggests you can implement whichever design you prefer with little risk.
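
Statistical significance for a completed test of two conversion rates is commonly checked with a two-proportion z-test. Below is a minimal sketch using only the Python standard library; the visitor and conversion counts are invented for illustration.

```python
# Minimal sketch: two-proportion z-test comparing conversion rates of
# control (A, "Contact us") vs. variant (B, "Book demo").
# Counts below are invented for illustration.
from math import sqrt
from statistics import NormalDist

def ab_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> tuple[float, float]:
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)                 # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))   # standard error
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))             # two-sided p-value
    return z, p_value

z, p = ab_test(conv_a=120, n_a=2400, conv_b=156, n_b=2400)
print(f"z = {z:.2f}, p = {p:.4f}")  # p < 0.05 suggests a real difference
```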

Remember: don’t be afraid to formulate different hypotheses and test them. In A/B testing there are no stupid questions! Just make sure you prioritize your tests according to what you know from your customer research.

A/B Testing Tools

There are a lot of tools dedicated to A/B testing out there. Among the most popular are:

  • Unbounce – a drag-and-drop landing page builder that allows you to create and publish landing pages without writing code. It is an easy-to-use and fast tool for getting more conversions from your traffic.
  • VWO – the world’s leading web testing and conversion optimization platform. It allows you to conduct qualitative and quantitative user research, build an experimentation roadmap and run continuous experiments on your digital products.
  • Optimizely – an experimentation platform that helps build and run A/B tests on websites. The service allows you to create and run a variety of experiments for making design choices that will increase your conversion rates.

As you can see, A/B testing is not brain surgery, and you can get started all by yourself. It will give you valuable data to base your UX decisions on – even when it proves that your design assumptions were wrong.


A/B testing is a powerful addition to any UX designer’s toolkit (check what UX design is). Using A/B testing on a regular, consistent basis with end-users in mind will incrementally make your digital product much more user-friendly. And that will have a very positive impact on your bottom line.

Ready to start designing your digital product in the UXPin prototyping tool? Get started with a 14-day free trial to experience its powerful features!

UXPin is a product design platform used by the best designers on the planet. Let your team easily design, collaborate, and present from low-fidelity wireframes to fully-interactive prototypes.

No credit card required.

