An easy guide to the most common user-experience research methods
By Martyn McDermott
5 min read
We're pretty big on user-experience research here.
When the average person makes around 122 decisions a day, how could we not be?
Thorough UX research should inform every design choice we make online. It allows us to empathise with users and put the right ideas front and centre, because it's based on real data, and that helps designers make the right decisions for users.
Sounds simple, right?
However, there are 20 user-experience research methods that we most commonly employ on our projects, so that's a great place to start. Broadly, they fall into two camps:
Behavioural - what people actually do
Attitudinal - what people say they do
Don't worry, though. UX is all about making experiences accessible, so we'll get straight to the point with the science behind each method too. Without further ado, let's get into it...
#1. Usability testing
This is where participants are brought into a controlled setting (usually one-on-one) to evaluate how usable a product or service is. They do so by completing a set of typical tasks whilst testers observe, take notes and record insights.
#2. Concept testing
Essentially, this is where you ask participants their thoughts and opinions on a concept or idea, e.g. your brand's value proposition. It's an effective way of figuring out whether the concept or product is a good fit for the market and meets customer expectations - crucially, before going live.
#3. Field studies
The difference with field studies is that they take place in the user's context as opposed to the office or lab. Home or work is a more natural environment for the user to test in and can, consequently, provide the most realistic results.
#4. Diary studies
A long-running (longitudinal) research method that collects qualitative data. Participants are asked to keep a diary (or use a camera) to record their thoughts and feelings about a product or service.
#5. Contextual inquiry
This is where testers and participants collaborate in the latter's own environment. It involves in-depth observation so that testers can understand how each user undertakes tasks in a real-world setting. It's similar to a field study as participants are being observed in their own natural context. But in this scenario, the participant plays a more leading role in the process by describing what they are doing as they are doing it.
#6. Customer feedback
This is information given by a sample of users. It can come in via a feedback button, link, contact form or email. Alternatively, users can provide verbal feedback in an interview.
#7. Participatory design
This approach invites all stakeholders, e.g. customers and employees, into the design process. The idea is that you get a well-rounded picture of what matters most to a variety of audiences, because they've all been able to participate and express their feelings about the experience.
#8. Desirability studies
The goal is to learn how each design influences a participant's perceptions of the product, e.g. its trustworthiness or how well it communicates an idea. This method is most effective in the early stages of product development and can be qualitative or quantitative.
#9. Focus groups
These are informal ways of assessing a group of users' thoughts, feelings and concerns about a product. Generally, these groups bring together 3-12 users who are led through a selection of topics and exercises.
#10. Card sorting
This involves a group of participants organising subject or topic labels (often written on notecards) in a way that makes the most sense to them. It's usually a helpful activity for refining information architecture, e.g. category pages on a website.
#11. Interviews
This is where researchers meet participants one-on-one to understand how they feel about the product or service.
#12. Eye tracking
Eye-tracking tools measure where a participant is looking as they perform tasks or interact with online products, tools, apps or websites. They give researchers a unique insight into how users engage with a product or design.
#13. Tree testing
This method helps you evaluate the hierarchy and findability of topics in a website or app. In a tree test, users are presented with a text-only version of the site's hierarchy and asked to complete a series of tasks. Combined with a card sorting exercise, it can be used to improve the information architecture of a site or app.
#14. Usability benchmarking
Usability (or UX) benchmarking is the process of evaluating a product or service's user experience via precise and predetermined measures of performance. These metrics allow researchers to gauge relative performance against a meaningful standard. Data is usually collected using quantitative usability testing, analytics, or surveys.
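To make "precise and predetermined measures of performance" a little more concrete, here's a minimal sketch in Python of how two common benchmark metrics - task success rate and mean time on task - could be calculated and compared against a target standard. The session data, metric names and thresholds are invented for illustration, not taken from a real study.

```python
# Illustrative only: hypothetical usability-benchmarking sessions.
sessions = [
    {"participant": "P1", "task_completed": True,  "time_on_task_s": 48},
    {"participant": "P2", "task_completed": True,  "time_on_task_s": 72},
    {"participant": "P3", "task_completed": False, "time_on_task_s": 120},
    {"participant": "P4", "task_completed": True,  "time_on_task_s": 55},
]

# Example benchmark targets - the "meaningful standard" to compare against.
TARGET_SUCCESS_RATE = 0.80  # 80% of participants complete the task
TARGET_TIME_ON_TASK_S = 90  # within 90 seconds on average

success_rate = sum(s["task_completed"] for s in sessions) / len(sessions)
mean_time_s = sum(s["time_on_task_s"] for s in sessions) / len(sessions)

print(f"Task success rate: {success_rate:.0%} (target {TARGET_SUCCESS_RATE:.0%})")
print(f"Mean time on task: {mean_time_s:.0f}s (target {TARGET_TIME_ON_TASK_S}s)")
print("Meets benchmark:", success_rate >= TARGET_SUCCESS_RATE and mean_time_s <= TARGET_TIME_ON_TASK_S)
```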
#15. Analytics
This is the measurement and analysis of user activity on a website or app. It provides valuable insights into how users behave and, consequently, how a design could be adapted to better satisfy the needs of users.
#16. Clickstream analytics
This method records exactly where the user clicks or which pages they navigate to while using a website or an app.
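As a rough illustration of what clickstream data looks like once it's been collected, the short Python sketch below summarises a hypothetical log of page visits into page-view counts and the most common page-to-page transitions. The sessions and URLs are made up for the example.

```python
from collections import Counter

# Hypothetical clickstream: (session_id, page) pairs in the order they were visited.
events = [
    ("s1", "/home"), ("s1", "/pricing"), ("s1", "/signup"),
    ("s2", "/home"), ("s2", "/blog"), ("s2", "/pricing"),
    ("s3", "/home"), ("s3", "/pricing"), ("s3", "/signup"),
]

# How often each page was viewed.
page_views = Counter(page for _, page in events)

# The most common page-to-page moves within a single session.
transitions = Counter(
    (page_a, page_b)
    for (sess_a, page_a), (sess_b, page_b) in zip(events, events[1:])
    if sess_a == sess_b
)

print("Most viewed pages:", page_views.most_common(3))
print("Most common paths:", transitions.most_common(3))
```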
#17. Remote moderated testing
These are real-time user research sessions run over distance, with the help of tools like video conferencing and screen sharing.
#18. Unmoderated testing
This is an automated form of testing where no one's present but the user. They usually complete tasks around a site, app or prototype but do so on their own time and in a location of their choosing.
#19. A/B testing
This method randomly shows users two or more variants of a design to find out which one performs better.
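To show the mechanics behind this, here's a minimal Python sketch of an A/B comparison: users are assigned to one of two variants at random, and the observed conversion rates are compared with a simple two-proportion z-test. All of the numbers are invented, and a real experiment would also plan its sample size and significance threshold up front.

```python
import math
import random

# Hypothetical results: conversions out of visitors for each variant.
results = {"A": {"visitors": 1000, "conversions": 110},
           "B": {"visitors": 1000, "conversions": 135}}

def assign_variant(user_id: str) -> str:
    """Assign a user to variant A or B at random, but repeatably per user."""
    return random.Random(user_id).choice(["A", "B"])

print("User u42 would see variant:", assign_variant("u42"))

# Conversion rate per variant.
rates = {v: d["conversions"] / d["visitors"] for v, d in results.items()}

# Two-proportion z-test: how unlikely is a difference this big by chance?
n_a, n_b = results["A"]["visitors"], results["B"]["visitors"]
p_a, p_b = rates["A"], rates["B"]
p_pool = (results["A"]["conversions"] + results["B"]["conversions"]) / (n_a + n_b)
z = (p_b - p_a) / math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))

print(f"Variant A: {p_a:.1%}  Variant B: {p_b:.1%}  z = {z:.2f}")
# As a rule of thumb, |z| above roughly 1.96 suggests the difference is
# statistically significant at about the 95% level.
```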
#20. Surveys
This is a questionnaire that's sent to a targeted set of users. Surveys are typically quantitative and ask relatively closed-ended questions.