How to do design research without blowing the bank

Today we welcome Milly, Director of Product at UsabilityHub, for our “Learn from” series. The UsabilityHub platform offers a comprehensive suite of testing tools that helps teams around the world (including Amazon, Google, Asana, and many more) uncover design issues early, preventing wasted time, effort, and user frustration. Whenever you're building an app or a website, user tests are a gold mine. More on that below.

Take the stage, UsabilityHub!

Have you been hoping to get into design research, but feeling overwhelmed? Is the idea of many in-person interviews already making you feel tired? Not sure how you or your team will afford the inevitable time and expense of a deep research process when you mainly just want to check that your website design is going to hit all the right notes? 

It’s not just you. Even experienced researchers and designers hesitate over in-depth research projects, which can be difficult to integrate into a fast-paced innovation environment. Luckily, there has been an explosion of great online tools to help you get started with (or expand your practice of) design research at a much lower cost to your time and hip pocket.

But don’t give up on in-person research. There will still be times and places in your process where you’ll want to do it. Sometimes, though, something lightweight is more appropriate. In this article, I’ll walk you through five great reasons to do more lightweight research that won’t make you feel like you’ve cheated.

1. Your research toolkit should be diverse

As the saying goes, “If all you have is a hammer, everything looks like a nail.”

Just like your team, your research toolkit should be diverse. Different research tools allow you to probe and investigate, test, and measure with specificity and precision.

It’s certainly true that in-person interviews are a rich and rewarding experience, allowing you to really see the participant as a full human being and empathize with them. But constantly forcing the team to prioritize and run depth interview studies can be time-consuming, costly, and – dare I say it – sometimes inappropriate. Eventually, trust in the research process can be eroded by an insistence on using the heaviest methodology available at all times – especially for teams that are early in their research maturity journey.

UX designers new to research will inevitably be excited to get some face time with users – but as in any discipline, the senior practitioner knows how to use a variety of tools. A good MVP research toolkit might consist of:

  • Depth interviews (in-person and remote)
  • In-person usability studies
  • Surveys
  • Remote usability studies (moderated and unmoderated)
  • Pulse surveys (e.g. NPS).

Knowing how to design and run various types of studies using different methodologies is a critical part of building an efficient and effective research practice.

2. Choose the right tool for the job

So, assuming you know how to use all the tools in your toolkit – how do you know when to use which one?

A good first lens is checking what part of the double diamond you’re in. The Design Council UK’s double diamond is a helpful, high-level illustration of the two big stages that we call “the problem space” and “the solution space”.

[Image: the Design Council’s double diamond design process]

In the problem space, you’re still learning about what customers or users are struggling with and where their pain points are. This is where you deploy your depth interviews, card sorts, surveys, and other types of exploratory/generative research. You may dip into the world of usability studies if you’re benchmarking a user experience that is part of the pain point.

In the solution space, you’re testing the ideas you’ve formulated that you believe will help solve these customer problems. This is where you deploy evaluative research – usability tests, pulse surveys, tree tests, and other methods that gather responses to designed artifacts.

The second lens that we use is around idea fidelity. For lower fidelity ideas, it’s more appropriate to go deeper and broader, and as the ideas progress to higher fidelity solutions, you can be more specific. In particular, we use remote unmoderated usability testing in later, higher-fidelity iterations of a design, where we have discovered and validated the broader user problems and are now focusing on the finer details of the solution.

The third lens we use to decide on which research tool is appropriate is a lens of risk. Sometimes, for example, it might not be necessary to spend a lot of time exploring customer problems if the possible solutions are obvious and – crucially – low effort. For example, in the case of a bug fix, where the problem is clear and the solution is clear, it’s not necessary to run depth interviews with users.

3. Double check lingering assumptions

As much as we want to eradicate assumptions as we develop our ideas as designers and researchers, we can never entirely eliminate risk. Further, insisting on proceeding only with perfect information can hold teams back when a lean build-measure-learn loop might be more appropriate.

Often, towards the end of the solution development process, when we start thinking about launching our work to production, we notice that new assumptions have crept in over the course of design and implementation iterations. As new people join the project, new ideas and interpretations of various decisions are woven in.

Rather than insisting that every part of the solution be policed by the design and research team, we prefer to allow input from various team members, while making sure that any parts of the solution that have gathered their own assumptions are tested before we launch – especially if there is a risk that, should the design hypothesis be wrong, the efficacy of the solution would be diminished.

For example, if part of the implementation means that we can’t use the same layout as was tested in earlier iterations, we’ll throw together a quick remote unmoderated usability test on UsabilityHub based on screenshots from the local development environment to double-check that our target users can still achieve their goals. It’s much easier and a lot better for the cross-functional team dynamic than being dogmatic – but it doesn’t replace the earlier research done to develop the solution in the first place.

4. Leverage short feedback loops for rapid iterations

Short feedback loops are critical in enabling teams to integrate research into their workflow smoothly. Not all research feedback loops are short – some longitudinal studies might take weeks or months to gather data, and that’s even before synthesis has begun.

As we go along the process from problem space to solution space, our feedback loops get shorter and our iterations faster. Usually, in the early stages, we will be spending more time in conversation, going deep on data, and thinking carefully about the implications of the work. But as we enter the solution space, those feedback loops speed up in order to allow us to test multiple ideas and learn faster.

Toward the very tail-end of the process, we sometimes run our research rounds so quickly that the results are instantly integrated into the design iteration by a developer. At this point, it might even make sense for the developer to be involved in the research to get them the insights as soon as possible!

We love using remote unmoderated research at this stage and really forcing ourselves to run short, sharp tests (rather than long, in-depth ones) in order to stay laser-focused as we push to the finish line. Earlier on in the process, it’s unlikely we’ll be running at that same pace, and other methodologies and tools are more appropriate.

5. Reach more participants with increased flexibility

One huge reason why we have started using unmoderated remote research more is that it helps us diversify our participant pool.

After emailing our beta tester list asking them to take part in some remote moderated tests, we noticed that the response rate was lower than we expected. Rather than guessing, we sent a super quick survey to ask why they couldn’t participate, with the following options:

  • the incentive is too small
  • too busy
  • not convenient
  • not interested
  • something else.

We found that the majority of our participants simply couldn’t find overlap between their calendars and ours, as we are based in Australia. By shifting to unmoderated sessions, we were instantly able to test with more customers, as they could complete the session on their own, in their own time.

For us, this is important as our customers are spread all across the globe, but in general, being able to test outside of your local area is a huge win for research, especially when your users are not necessarily your neighbors.

Another big advantage is that because we don’t have to moderate every session, sessions can be done concurrently, allowing us to massively decrease the time from hypothesis to insights. Using the UsabilityHub panel allows us, for example, to turn around results from 50 participants in less than half an hour – something that could take us close to 50 hours if we had to be part of every session.

Obviously, this would be less helpful for a conversation, and in that situation, a remote moderated approach is preferable. But where it’s possible, we find remote unmoderated to be convenient for us AND for our participants, so it’s a win-win.

Part of a balanced diet

Hopefully, I’ve convinced you that remote unmoderated research isn’t just a poor imitation of in-person, moderated research, but instead a complementary tool that you can deploy in addition to your existing research practice.

Sometimes you need to go deep, spend time, lengthen your feedback loop, and invest in mitigating as many risks as you can; sometimes speedy, lightweight, and simple research is more appropriate.

If you’re looking to build a more complete research toolkit, head to UsabilityHub and start testing today.

Milly Schmidt

Milly Schmidt is the Product Director of UsabilityHub. Her background spans writing, editing, teaching, photography, engineering, UX research and design, people and project management, and entrepreneurship. Above all, her two superpowers are being a technical person and being deeply obsessed with people.

 
