Qualitative and quantitative research

[Illustration: a person weighing quantitative versus qualitative research, with two thought bubbles representing the decision]

The members of our Ad Hoc research team come from a variety of backgrounds — we are a diverse group of social scientists, designers, and information scientists. In addition to caring deeply about the people who will use the digital tools our teams build, we are all data mavens, and we focus on turning our research into action by grounding key decisions in data. Most of our work consists of qualitative studies, and we find that stakeholders often lack clarity on the benefits of qualitative versus quantitative data, which methods are appropriate for which kinds of questions, and how different types of data sets can work together to inform design, development, and product roadmaps.

My own background is in anthropological archaeology, and my training included both qualitative, observational methods and quantitative statistical analyses. I’ve used both throughout my career in design and innovation. The key to knowing what methods and approaches to use is to start with the questions. The questions vary at different stages of a project, and in fact the open questions themselves may inform what stage a project is in.

Coming up with a plan

While there are many different ways of dividing up research stages, much of the qualitative work we do at Ad Hoc falls into what we call the “front end” of the development process — in other words, we try to learn as much as we can before we start to build. Discovery research helps us form narratives around what and why. Observations of people “in the wild” (doing the relevant tasks in the way they normally do them, in the environment in which they normally do them) tell us what they really do and inform us about their experiences, needs, and values. In-depth interviews and contextual interviews are also useful at this stage. This work puts boundaries around the problems and tells us in broad strokes what it is important to change and what it is important to maintain.

Helping to define solutions

Once the shape of the problems and solutions is clear, our research focuses on defining solutions that will work for users. To do this, we iterate with people using participatory methods that let them interact with artifacts such as early-stage designs or post-its. These artifacts become touch points for conversation, enabling individuals to talk about what is important to them and revealing their mental models in ways that direct questioning does not. From this definitional research, more specific concept and solution ideas emerge.

Validation

Validation research tests the solutions, enabling product improvements and measuring changes and impact. This is when we look for usability problems — where users are confused, or where the tools do not operate as they expect. Often these studies are “task based,” so every participant is asked to do the same thing, and it is possible to measure the frequency of particular problems. Nonetheless, even validation studies are qualitative and necessarily limited in the number of participants.
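As one way to picture that kind of task-based tallying (the sessions and issue names below are hypothetical, not from a real study), here is a minimal Python sketch that counts how often each usability problem appeared across participants who attempted the same task.

```python
from collections import Counter

# Hypothetical notes from a task-based usability study: every participant
# attempted the same task, and the observer logged the issues they ran into.
sessions = [
    {"participant": "P1", "issues": ["unclear button label", "error message jargon"]},
    {"participant": "P2", "issues": ["unclear button label"]},
    {"participant": "P3", "issues": []},
    {"participant": "P4", "issues": ["unclear button label", "file type confusion"]},
]

# Count how many sessions surfaced each issue, to help prioritize fixes.
issue_counts = Counter(issue for s in sessions for issue in s["issues"])

for issue, count in issue_counts.most_common():
    print(f"{issue}: {count} of {len(sessions)} participants")
```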

Qualitative work

When we conduct qualitative research with a small number of users, we are frequently asked by worried stakeholders how we can justify recommendations without “statistically significant” data. The answer is that the questions we are addressing do not need statistically significant answers, but they do need what we refer to as a “saturation” of data. Qualitatively, we stop when we are no longer gaining new information from new participants. There is a wide range of recommendations on when this occurs. One study found that it occurs within 12 interviews, and frequently in as few as six. Jakob Nielsen recommends as few as five participants of each user type.
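To illustrate how saturation might be tracked in practice (this is a rough sketch, not a procedure from the studies cited above), the Python snippet below assumes interview notes have already been coded into themes and flags the point at which several consecutive interviews contribute no new themes.

```python
# Hypothetical coded themes per interview, in the order the interviews were run.
coded_interviews = [
    {"eligibility confusion", "paper forms"},
    {"eligibility confusion", "phone support"},
    {"login friction", "paper forms"},
    {"phone support"},
    {"eligibility confusion"},
    {"login friction"},
]

def saturation_point(interviews, stable_run=3):
    """Return the 1-based index of the interview at which no new themes have
    appeared for `stable_run` consecutive interviews, or None if not reached."""
    seen, run = set(), 0
    for i, themes in enumerate(interviews, start=1):
        new_themes = themes - seen
        seen |= themes
        run = 0 if new_themes else run + 1
        if run >= stable_run:
            return i
    return None

print(saturation_point(coded_interviews))  # prints 6 for this sample data
```

The `stable_run` threshold is a judgment call, not a standard; in practice we decide when to stop by reviewing the coded data as a team rather than by running a script.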

Quantitative measures

At the same time, as researchers, we also value key quantitative measures that can tell us more about how people use the tools we build and the demographics of the groups using them. At the discovery stage, this data can help us frame our interview questions and decide what to be alert for in observations. In later stages, when websites are live, it gives us a view into where people abandon the tools and can help resolve specific design questions through A/B testing. As with qualitative research, defining the questions up front is crucial. In order to learn from how real users use the real site, we need to ensure the site is instrumented properly to collect data that we can use to make ongoing decisions. It is not enough to know the number of clicks. We need to address questions that tell us whether we have made significant improvements for users, such as the following (a rough sketch of how we might test one of them appears after the list):

  • Are more people able to complete tasks they started?
  • Are people able to complete the tasks in less time?
  • Do more people come back to use the tools again?
  • Are there fewer calls to the call center to address basic questions that should be clear on the website?
  • Do more individuals create accounts?
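As a rough illustration of the kind of analysis that good instrumentation enables (the event counts below are made up, and the helper functions are hypothetical), this Python sketch compares task completion rates between two design variants with a two-proportion z-test, the sort of check we might run after an A/B test.

```python
import math

def completion_rate(started, completed):
    """Share of people who finished a task they started."""
    return completed / started

def two_proportion_z_test(started_a, completed_a, started_b, completed_b):
    """Two-sided z-test for a difference in task completion rates between
    design variants A and B (e.g. from an A/B test)."""
    p_a = completed_a / started_a
    p_b = completed_b / started_b
    p_pool = (completed_a + completed_b) / (started_a + started_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / started_a + 1 / started_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal distribution.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical analytics counts: how many people started and completed the
# same task under the current design (A) and a proposed change (B).
rate_a = completion_rate(1200, 780)
rate_b = completion_rate(1180, 850)
z, p = two_proportion_z_test(1200, 780, 1180, 850)
print(f"A: {rate_a:.1%} completed, B: {rate_b:.1%} completed, z = {z:.2f}, p = {p:.4f}")
```

In practice this kind of comparison usually comes from an analytics or experimentation platform rather than hand-rolled code, but the underlying question is the same: did the change make a measurable difference for users?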

Our work does not end here, as these metrics lead to new questions that we can address with new rounds of qualitative research. These what and why questions constantly inform one another, allowing us to learn from people to create tools that work better for them.

Illustration by Barb Denney