Tips on creating and conducting your first usability test.
At this point, you’ve learned a little about scoping and planning a session, writing a test script, recruiting users, and moderating sessions. Now, let’s talk about analyzing results.
In truth, I think analyzing results is the hardest step because even if you ask all the right questions and “do everything right”, it’s still incredibly easy to misinterpret results. But the following rules always set me up for good data collection and analysis:
Consider your biases.
There are lots of biases that can affect your research, so do the best you can to stay aware of them as you plan and conduct it. This isn’t something to be scared of; just know that biases affect how you construct and analyze datasets. The more sessions you conduct, the easier it becomes to recognize a bias. Until then, know that even if an accidental bias creeps in, you’ll still gather lots of helpful data. Every session will help you create a better experience.
Record. Record. Record. I say this a lot, and I can’t overstate its importance. Even with a partner assisting you, something will be missed, and having a record allows you to relax during the session instead of worrying about detailing every possible moment.
If you can, include the cost of transcription in your budget: an accurate transcript makes finding key moments much faster and lets you search the document for quotes and keywords, deepening your evidence along the way.
Here are some transcription services I’ve used (and liked):
Don’t trust your memory.
Our memories are imperfect. So, as a researcher, rely on measurable evidence as much as possible. When you think you see a pattern or story, go back to your notes and recordings to confirm it. It doesn’t take long and is very much worth the extra effort. (You might be surprised how often session details are incorrectly remembered.)
A few years ago, I conducted seven hour-long sessions in one day. (As an aside, I don’t recommend such a schedule.) At the end of the day, I was talking with the note-taker and one of the observers, and we all had very different recollections of one specific session. The participant and session were spectacular and confirmed many of our hypotheses. On one particular task, however, we disagreed on both her performance and her commentary. When I looked back at my notes and the transcription, I found that we were all shades of right and wrong. I shared this with my team, and we were able to easily agree on what had actually happened. It’s important to note, though, that if I hadn’t been able to verify the facts of that session, we would have continued to disagree about a moment we all experienced together.
For an interesting read on memory, check out You Have No Idea What Happened by Maria Konnikova.
Use a framework to document data.
It’s important to identify the data you want before you begin testing. Luckily, for usability testing, this is usually pretty straightforward: the tasks your participants perform correlate directly to your quantitative measurements. But you’ll also gather plenty of qualitative data. I use the following documents to log both types of data so that I can easily access them later.
Excel is a great way to organize all the data that comes with your tasks, such as whether a task was completed, how long it took, whether the participant struggled with it, etc.
Here’s an example template.
This template is not perfect, but it should give you a place to start. When you’ve finished all your sessions and recorded all your data, you can apply filters to the spreadsheet to quickly find patterns across users, screens, and prototypes.
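If your task log lives in a spreadsheet, the same pattern-finding can be scripted. Here is a minimal Python sketch that aggregates per-task rows into completion rates and average times; the column names, task names, and numbers are invented for illustration and are not part of the template above.

```python
# Hypothetical task log: one row per participant/task pair, mirroring
# spreadsheet columns (these field names are illustrative assumptions).
task_log = [
    {"participant": "P1", "task": "Checkout", "completed": True,  "seconds": 95},
    {"participant": "P2", "task": "Checkout", "completed": False, "seconds": 210},
    {"participant": "P3", "task": "Checkout", "completed": True,  "seconds": 180},
    {"participant": "P1", "task": "Search",   "completed": True,  "seconds": 40},
    {"participant": "P2", "task": "Search",   "completed": True,  "seconds": 55},
]

def summarize(rows):
    """Group rows by task, then compute completion rate and average time."""
    grouped = {}
    for row in rows:
        s = grouped.setdefault(row["task"], {"n": 0, "done": 0, "total": 0})
        s["n"] += 1
        s["done"] += row["completed"]   # True counts as 1, False as 0
        s["total"] += row["seconds"]
    return {
        task: {
            "completion_rate": s["done"] / s["n"],
            "avg_seconds": s["total"] / s["n"],
        }
        for task, s in grouped.items()
    }

results = summarize(task_log)
# A low completion rate or high average time flags a task worth a closer look.
```

The same grouping is what a spreadsheet filter or pivot table gives you; a script is just handy once the row count grows across many sessions.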
A one-page summary sheet for each session is a great way to quickly note high-level themes and pain points. Ask everyone in the session (other than the participant) to complete one before they leave. These sheets also double as a quick summary of the session.
It’s a good idea to transcribe summary sheets into the quantitative spreadsheet as well. Then, all of your testing data is in one place!
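As one way of picturing that “everything in one place” step, the sketch below merges summary-sheet notes into per-session records and then counts recurring pain points. All of the session IDs, field names, and notes here are made up for the example.

```python
from collections import Counter

# Hypothetical quantitative rows, keyed by session (invented data).
sessions = {
    "S1": {"participant": "P1", "tasks_completed": 4},
    "S2": {"participant": "P2", "tasks_completed": 2},
}

# Hypothetical summary-sheet notes for the same sessions.
summary_sheets = {
    "S1": {"themes": ["liked onboarding"], "pain_points": ["confusing nav"]},
    "S2": {"themes": ["wanted search"], "pain_points": ["slow checkout", "confusing nav"]},
}

# Fold the qualitative notes into each session's record.
for session_id, notes in summary_sheets.items():
    sessions[session_id].update(notes)

# One pass over the merged data surfaces pain points seen across sessions.
pain_counts = Counter(p for s in sessions.values() for p in s["pain_points"])
```

A pain point mentioned in several sessions (here, twice) is a stronger signal than a one-off complaint, which is exactly why pooling the sheets with the task data pays off.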
Keep in mind that this is not the only way to analyze information, and it might not be feasible for everything you test. This is merely a framework to help you understand some of the things you need to consider.