While working as a web developer, I saw many project features fail due to a lack of usability testing before launch. Stakeholders often believe they know what their users want and build a finished version based on their own wishes rather than on collected user data. This eventually costs the company site visitors, and therefore revenue.
Usability testing doesn’t have to be a long-term, large-scale project; it can be conducted quickly and still deliver results that save developers a lot of re-coding. Let’s run through a few easy steps that show how to run cost-effective, reliable, face-to-face usability tests.
Make Tests a Part of Your Design Process Early on
Firstly, we test the competitors’ websites (or the client’s existing website if we’re building a new one) to get an idea of what users want from them. After that, it’s time to start building our own wireframe based on those results.
Because I prefer to test several ideas as early as possible instead of fully developing one straight away, I tend to use clickable low-fidelity wireframes and keep them very simple. The idea comes from Jake Knapp’s Design Sprints, which show that it’s more productive to test prototypes early rather than after the product is already built. By that stage, there’s a higher risk of failing to meet the users’ needs and then having to rebuild the product.
I usually start out using the online tool Balsamiq to create fast, low-fidelity wireframes. There, we can draw the product with uneven lines so that it almost looks handmade. This helps show the customer and user the structure, rather than colours and graphic elements that aren’t relevant in the early stages of the process. As the tests are run and the results come in, I make the wireframe more and more similar to the finished product, until all the questions have been answered and no more tests need to be run.
How to Run a Face-to-Face Usability Test
Whether I’m testing a competitor’s website, a low-fidelity wireframe, or a high-fidelity prototype, I always follow the same steps to conduct the test. These steps assume we already have a user base that fits the target group we need to test with.
Step 1: Preparation
- Find a maximum of five people to run the test with. We want to keep each test round as small as possible so that we can iterate the design quickly and then test again. Make sure the participants fit the demographic; if they don’t, ask them to put themselves in the situation of the user who will use the product.
- Write down a maximum of five scenarios that the user should test. For example: “Book a restaurant for two people in London tonight at 6pm” or “Send an email via the contact form with the subject Opening Hours”. More scenarios than that will take too long and can make us focus on too many design aspects at the same time. We want to make small changes effectively. Write down instructions for these tasks.
- Set up a recording program that will capture how the users move the mouse on the screen; QuickTime is fine for this. If we want to take it one step further, we can also record their voice and turn on the webcam. This is useful if we forget something and want to watch or listen to the session at a later stage to remember where the user was struggling. Just make sure their consent is given first.
- Write down an introduction that explains the test and how the participant will help. Also write down any questions you might have for them such as questions about their demographics or experience with your product. For example “How old are you?” or “How many hours a day do you use social media apps on your phone?”.
- Prepare the wireframe prototype on your laptop, phone, or on paper and bring it with you to the session.
Step 2: Conducting the Usability Test
Everything is prepared and we’re now 100% confident that our testing session will go as planned!
- Let’s start by inviting the first participant and running the introduction with them. I usually start by saying the following:
Before we start, I thought I’d give you some information. We’re currently building a solution for *name of project here*. We have designed a clickable prototype that the product will be based on, and we now need to test it to see that we’ve implemented the right features. I’ll be watching while you use the prototype. Since this is a prototype and not the finished product, you may experience some bugs. I ask you to still try to use it as if it were a real app/website.
It’s important that you’re aware that we’re observing how the workflow can be improved and not how you use this product. You can’t do anything wrong! If something doesn’t work, it’s the product’s fault, not yours. During the course of the test, I’ll ask you to try to think out loud as much as possible: say what’s not going right, what you’re trying to do and what you’re thinking. This’ll be very helpful.
The test will take *time here*. I’ll record this session with a microphone, webcam and the QuickTime recording program, which will help us a lot when analysing the data and lets us take fewer notes. Is this OK with you? Do you have any questions before we start?
- Give the participant a consent form if you have one and ask them to sign it.
- Start recording.
- Ask the demographics and experience questions that were prepared. Start filling out the following user test form:
- Participant’s first name, age, technical level or other demographics:
- Observation Notes:
- Problem points:
- Stop the recording and save it with their name as the title.
- Thank the participant.
- Give them their “salary”.
- Write down any additional notes and complete the above user test form.
- Set up a new recording session.
- Ask the new participant to come in.
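To keep notes consistent from one participant to the next, the user test form above can be captured as a simple data structure. Here’s a minimal sketch in Python — the class and field names are illustrative, not from any particular tool:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class UserTestForm:
    """One participant's record from a face-to-face usability session."""
    first_name: str
    age: int
    technical_level: str              # or any other demographic notes
    observation_notes: List[str] = field(default_factory=list)
    problem_points: List[str] = field(default_factory=list)

# Example: filling the form in during a session
session = UserTestForm(first_name="Alex", age=29, technical_level="intermediate")
session.observation_notes.append("Hesitated before finding the booking button")
session.problem_points.append("No filter for vegetarian restaurants")
```

Storing each session this way also makes it easy to collect all the problem points across participants later, in the analysis step.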
Step 3: Analysing the Results
When the tests have been completed, the analysis can go one of two ways depending on where we are in the design process. If we’re at a later stage and the test data is more quantitative, I would recommend using a free online text-analysis tool like Voyant, where the frequency of a problem can be measured by adding your notes from the current and previous user tests. The tool shows which sentences and words have been used before, so we can see whether any problems are recurring. But if we’re in the early stages, we can analyse the tests manually:
- Take the notes from the test and write down five user goals and five user problems. See if there are any duplications amongst users. It can also be helpful to write down specific quotes from the participants that are related to the goals and problems. For example, a goal might be “Alex wanted to book a restaurant with vegetarian options” and a problem might be “There were no filters for vegetarian restaurants”.
- After that, we gather the UX team and discuss what worked and what needs to be improved. Present your results and, if they ran tests as well, ask them to do the same. Here we can nail down the main concerns and come up with possible solutions.
- Then we build a new wireframe based on the UX team’s discussion, and test and analyse again. We repeat this cycle until we’re satisfied with the functions and content.
- For minor improvements in the future, HotJar can be used to measure whether users are using the app or website correctly through heat maps, user recordings and feedback polls.
- Now that the wireframe is complete, we show it to the customer and match it to their vision. Again, there will be more changes and the wireframe design will go back and forth a few times before being approved by the customer.
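The recurrence check described above — spotting which problem points come up across several sessions — can also be done with a simple tally. This is a hedged sketch of that idea in Python (the sample notes are made up; this is not what Voyant does internally, just a quick manual equivalent):

```python
from collections import Counter

# Problem points gathered from several sessions (illustrative data)
problem_points = [
    "no filter for vegetarian restaurants",
    "booking button hard to find",
    "no filter for vegetarian restaurants",
    "confusing date picker",
    "no filter for vegetarian restaurants",
]

# Tally identical notes to surface the most frequently recurring problems
counts = Counter(problem_points)
for problem, n in counts.most_common():
    print(f"{n}x  {problem}")
```

A problem mentioned by three of five participants is usually a safe candidate for the next design iteration, whereas a one-off note may just reflect personal preference.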
When we’ve worked with a product for a while, it’s hard to predict how the user will behave on the site since we already know which button goes where. A face-to-face usability test can show us all kinds of unpredictable user scenarios and allows us to ask the “Why did you do that?” question, rather than just “What did you click on?”.
This gives us a greater understanding of the product and its users, and makes the design (and coding) process much faster. The developers are given a bullet-proof design that has been tested several times, which minimises the risk of having to re-code the product straight after release. Instead, we can start monitoring the revenue. It’s a great way to uncover and repair potential usability issues.