

Evidence-based decisions. This phrase has gained popularity in the public sphere, reflecting a desire for people to base their actions on proof that they have a reason to expect a certain outcome. Unfortunately, it is easy to use good data in a bad way, even with the best of intentions. Professionals need to be ever-vigilant not to mislead people with evidence, and consumers of information need to be wary about what they are being told.

How can you mislead people when you present real evidence? Without ever saying something false, you can make people come to the wrong conclusion. With user experience (UX) research, presenting evidence is a little more nuanced. Whether it’s discovering customer needs or designing a new product or website, UX research should be carefully woven into the process to achieve the best results.

A large part of UX research involves qualitative data. Qualitative data come from observations and open-ended responses, and each data point can be a full sentence or paragraph. These data points come from talking to people in interviews, in focus groups, and through many kinds of media. Qualitative data specialize in answering the “whys” and “hows” behind how people make decisions. You can ask very specific questions, get clear responses, and follow up when something is unclear. Furthermore, because of the nature of qualitative research and the questions it addresses, you may only need a dozen or fewer people to find most of the answers you are looking for—assuming you do it correctly. Nielsen reports that five to eight participants can find 85–90% of the usability issues through iterative testing.
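The diminishing-returns claim behind that five-to-eight-participant figure comes from Nielsen and Landauer's problem-discovery model, which estimates the share of usability problems found by n participants as 1 − (1 − L)^n. A minimal sketch, using their published per-participant discovery rate of roughly 31% (that rate is their estimate, not a figure from this article):

```python
def problems_found(n, L=0.31):
    """Expected share of usability problems uncovered by n test
    participants, per the Nielsen-Landauer discovery model.
    L is the probability that one participant reveals a given problem."""
    return 1 - (1 - L) ** n

# With L = 0.31, five participants uncover roughly 84% of problems,
# and eight participants roughly 95% -- each added participant mostly
# re-finds problems the earlier ones already surfaced.
for n in (1, 5, 8):
    print(f"{n} participants: ~{problems_found(n):.0%} of problems found")
```

The curve flattens quickly, which is why the article's advice favors several small, iterative rounds of testing over one large study.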

The downside, of course, is that qualitative data alone can be misleading if they are presented and interpreted the same way as quantitative data. Have you ever seen a commercial advertising a drug or other medical product? This may sound familiar: “Nine out of 10 doctors recommend you use what we’re selling.” There are two problems with this. First, it gives the impression that 90% of doctors would tell everyone to do something (a claim that would usually require tens or hundreds of responses to be meaningful), when in reality the producers may have spoken to only 10 doctors during the study. Second, the lack of context around the claim invites misinterpretation: the nine doctors may merely have declined to advise against the product because they did not consider it harmful—hardly the enthusiastic endorsement the ad implies. By attempting to quantify findings from qualitative research, it is easy to mislead audiences even with the best of intentions. Audiences can be anyone from consumers evaluating a product to product teams deciding how to create one.

Luckily, other research methods balance out the weaknesses of qualitative research. Quantitative data involve numbers and measurements: accuracy rates, task times, and visit counts. You can also create standardized measures to quantify abstract concepts, such as how satisfied people are, how likely they are to visit your website, and how likely they are to use it.

The benefit of having numbers is that you can answer questions about “how much.” How much did people tend to use this feature? How big of a reaction did this message generate compared to the other one? The problem comes when the numbers reveal something that is unexpected and people start asking the “why” questions. “Why is it that people spend an average of five hours on this page? They must love it!” It is easy, even for experts, to jump to these conclusions improperly without the right evidence. With numbers alone, it is incredibly hard to reasonably suggest the cause of something, or the nature of the relationship between things.

Luckily, used in the right way, qualitative and quantitative methods can be combined to cover each other’s weaknesses. Qualitative research sets the stage by identifying the core problems and potential solutions, and quantitative research follows up by confirming the size of the impact and verifying that it applies broadly. Qualitative and quantitative experts on the Customer Experience (CX) team at Fors Marsh Group work together every day to provide evidence using the most cutting-edge techniques, and leverage their experience to make sure it is put to good use.

About the author

David Tang

David Tang joined the Customer Experience Division at Fors Marsh Group as a user experience (UX) researcher in September 2016. He is trained in both quantitative and qualitative research methods and experimental design, including both online (e.g., surveys, video interviews) and in-person procedures (e.g., usability tests, cognitive walk-throughs). Further, he is experienced with the collection, analysis, and interpretation of several physiological measures (e.g., electroencephalography [EEG] and eye tracking). As a researcher on the Customer Experience team, David helps clients create and improve products and services that address user needs. Before joining Fors Marsh Group, David conducted UX research at X (formerly known as Google X), helping teams develop and evaluate large-scale innovations.

His research has been published in more than 10 peer-reviewed journal articles and numerous conference presentations and book chapters. His specialties include using surveys to assess emotional responses, cognitive functions, and behavior relating to willpower, behavioral assessment, and psychophysiological techniques. David holds a Ph.D. in Social Psychology from Texas A&M University and a B.S. in Psychology from Stony Brook University.
