Creating a website project with a great user experience takes much more than testing and designing. There’s also a critical step in between the two: interpreting those test results and using them to build highly usable sites. Usability reporting is the process of delivering your test results to other designers, key decision makers and anyone else directly involved in creating the final project.
Not surprisingly, there are established protocols for sharing this valuable information, although there’s also room to develop your own best strategy. To get you started, this article covers how you can use usability reporting to turn your usability testing into actionable results.
The Value of Usability Reporting
Before we can begin talking about ways to report the results of your usability tests, it is helpful to understand why we conduct usability tests in the first place. Usability tests help create positive user experiences by ensuring a website has high usability. Keep in mind that usability and user experience are not necessarily the same thing: creating a highly usable site is only one aspect of ensuring a positive user experience. Even so, improving usability is never a bad idea.
A usability test is the best way to improve a website’s usability; however, its value is limited by how well the results are communicated. Obtaining feedback from test participants is useful, but to improve a site, that feedback must be incorporated into its final design wherever possible. Of course, not every suggestion you receive from a usability test will be actionable. The trick to keeping most people happy is making compromises.
Usability reporting, regardless of how useful (or actionable) the feedback is, helps project owners understand the needs of the intended user. How that feedback is used is often a unique balance between schedule, availability, and budget.
Reportable Usability Test Metrics
Usability testing can produce a lot of data, some of it more relevant than the rest. Before you can begin reporting, you must first know what you should be reporting. Here are some of the metrics that usability.gov suggests you consider.
Successful Task Completion
At its core, usability is a measure of how easy it is to complete a task, so it makes sense that the task completion success rate is a primary metric. How often a task is completed is its success rate. A task can mean answering a certain question, or it can mean performing a specific action on the site.
Critical Errors
Sometimes there are circumstances that keep participants from completing the selected task or answering a question. These can include technical difficulties during the test or any other issue that prevents them from finishing their task. Include these in your results to account for any deviations.
Non-Critical Errors
Not every issue leads to unsuccessful task completion. Sometimes users get distracted from their task or have temporary difficulty completing it. These types of disruptions are called non-critical errors since they do not impact task completion.
Error-Free Rate
The number of problems you encounter during usability testing is important, but so is the number of times the test goes smoothly. Keep track of the percentage of test participants who complete their tasks without any critical or non-critical errors. This useful metric makes it easy to see how much of the testing was error-free.
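As a rough illustration, both rate metrics can be computed from simple per-attempt records. The data below is hypothetical, a minimal sketch in Python:

```python
# Hypothetical per-attempt records: "completed" marks task success and
# "errors" counts non-critical errors observed during the attempt.
results = [
    {"participant": "P1", "completed": True,  "errors": 0},
    {"participant": "P2", "completed": True,  "errors": 2},
    {"participant": "P3", "completed": False, "errors": 1},
    {"participant": "P4", "completed": True,  "errors": 0},
]

# Task completion success rate: share of attempts that were completed.
success_rate = sum(r["completed"] for r in results) / len(results)

# Error-free rate: share of attempts completed with no errors at all.
error_free_rate = sum(
    r["completed"] and r["errors"] == 0 for r in results
) / len(results)

print(f"Success rate: {success_rate:.0%}")        # 75%
print(f"Error-free rate: {error_free_rate:.0%}")  # 50%
```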
Time Spent on Task
Make a note of how much time is spent completing each task. Tasks that take far longer than expected often point to usability problems, so this metric can significantly improve your website’s user experience.
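A quick sketch of the calculation, using made-up timings:

```python
# Hypothetical timings (in seconds) for one task across four participants.
times_s = [42.0, 65.0, 38.0, 55.0]

avg_time = sum(times_s) / len(times_s)
slowest = max(times_s)  # outliers are often worth a closer look

print(f"Average time on task: {avg_time:.1f}s")  # 50.0s
print(f"Slowest attempt: {slowest:.1f}s")        # 65.0s
```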
Likes, Dislikes and Recommendations
Feedback such as likes, dislikes, and recommendations is also worth tracking and reporting. Keep records of what test participants liked about the site as well as what they didn’t. If they make any recommendations, include those in your usability reporting as well.
Subjective Measures
Whereas much of what is reported from a usability test is objective, there is room for more subjective measures. Subjective metrics are highly pertinent to building a great user experience and capture user-reported qualities such as ease of use and overall satisfaction. These metrics are best collected by having participants rate their answers on a 5- to 7-point Likert scale.
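For example, a hypothetical set of 7-point ease-of-use ratings could be summarized like this (the ratings and the top-2-box cutoff are assumptions for illustration):

```python
# Hypothetical 7-point Likert ratings for "ease of use"
# (1 = very difficult, 7 = very easy).
ratings = [6, 5, 7, 4, 6]

mean_rating = sum(ratings) / len(ratings)
# Top-2-box: share of participants choosing one of the top two points (6 or 7).
top_2_box = sum(r >= 6 for r in ratings) / len(ratings)

print(f"Ease of use: {mean_rating:.1f} / 7")  # 5.6 / 7
print(f"Top-2-box: {top_2_box:.0%}")          # 60%
```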
Analyzing Your Usability Test Data
The metrics collected during a usability test can be separated into two types: quantitative and qualitative. Quantitative data includes measurable metrics such as task completion, time spent on tasks, and error-free rate, as well as ratings from subjective measures. Qualitative data, on the other hand, is more descriptive and includes any answers to open-ended questions, comments that are not rated on a scale, and critical and non-critical errors. Qualitative data also consists of any observations made by the test administrators about the pathways that participants took to complete the selected tasks.
Quantitative and qualitative data are very different, and your usability reports should be structured to accommodate both. Once the data is in the proper format, it is easy to get a broad picture of the overall results. This step also increases the likelihood that your usability reporting produces actionable results. Here are some general guidelines on how to set up usability testing data for optimal usability reporting.
- Enter your data into a spreadsheet and calculate overall rates
- Add sorting variables, such as demographics, to identify potential trends by groups
- Identify the task scenarios and link them to the appropriate metrics
- Record all qualitative data in a separate document
- Keep data concise
- Use exact language for all problem statements
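The first two guidelines might look like this in practice – a hypothetical sketch that writes rows in CSV (spreadsheet) form and breaks the completion rate down by a demographic sorting variable:

```python
import csv
import io

# Hypothetical per-attempt rows, including a demographic sorting variable
# ("age_group") so results can later be broken down by group.
rows = [
    {"participant": "P1", "age_group": "18-34", "task": "search", "completed": 1},
    {"participant": "P2", "age_group": "35-54", "task": "search", "completed": 0},
    {"participant": "P3", "age_group": "18-34", "task": "search", "completed": 1},
]

# "Enter your data into a spreadsheet": write the rows as CSV
# (swap io.StringIO for open("results.csv", "w", newline="") to save a file).
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=rows[0].keys())
writer.writeheader()
writer.writerows(rows)

# "Calculate overall rates", then break them down by the sorting variable.
overall = sum(r["completed"] for r in rows) / len(rows)
by_group = {}
for r in rows:
    by_group.setdefault(r["age_group"], []).append(r["completed"])

print(f"Overall completion: {overall:.0%}")  # 67%
for group, vals in sorted(by_group.items()):
    print(f"  {group}: {sum(vals) / len(vals):.0%}")
```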
Once you have organized your data into qualitative and quantitative, it is time to take a more in-depth look at the problems identified during testing. Chances are, your usability testing identified more than a few issues, and you won’t be able to solve all of them. The best way to deal with this is to assign a severity to each problem, essentially choosing which ones should be addressed first.
Include these rankings in your usability reporting. While they aren’t technically part of the test, they are particularly helpful in empowering business decision-makers, and any other non-technical members of the development team, to decide how to approach the results of the testing. After all, if they don’t know how significant a problem is, they may not be inclined to address it.
Here’s a general guideline for applying severity to usability problems:
- Critical: Users cannot complete the task if it is not fixed
- Serious: Users may not complete the task if it is not fixed
- Minor: User experience will be impacted, but the task can be completed
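One simple way to apply these rankings is to sort the problem list by severity so the most urgent issues surface first. The problem statements below are invented for illustration:

```python
# Severity ranks follow the guideline above; problem statements are made-up
# examples, not findings from a real test.
SEVERITY_ORDER = {"critical": 0, "serious": 1, "minor": 2}

problems = [
    {"statement": "Search filter labels are ambiguous", "severity": "minor"},
    {"statement": "Checkout button is hidden on mobile", "severity": "critical"},
    {"statement": "Form errors appear only after submission", "severity": "serious"},
]

# Sort so the problems that must be fixed first appear first in the report.
problems.sort(key=lambda p: SEVERITY_ORDER[p["severity"]])

for p in problems:
    print(f"[{p['severity'].upper()}] {p['statement']}")
```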
Presentation is Key
Now that you have organized your usability test results and assigned severity, it’s time to do some usability reporting. If you thought you did that in the last step, you were wrong. Usability reporting is much more than delivering a bunch of numbers in a spreadsheet attached to a list of ranked problems. How you present that information directly affects how easily the test results can be incorporated into your end project. Think of it as usability for usability reporting.
Usability testing is a scientific process; therefore, it should come as no surprise that your results should resemble a scientific report. Include these four categories in every report to help identify actionable steps that everyone involved can take.
Background Summary
Start your report with a summary of what was tested, such as the website or a specific application. Include details such as when and where the test took place, the equipment used, the process followed, and the members of the testing team. You’ll also want to mention the problems that occurred during the test, but keep it brief – readers can find the details in the results section. Testing materials can be listed in an appendix.
Methodology
Here’s where you explain what you did so that the process can be duplicated. Why is this necessary? Well, as mentioned above, usability testing is a scientific process, and scientific tests can be recreated to learn more later. Your methodology should describe the test sessions, explain which interfaces were used and which metrics were collected, and provide an overview of the task scenarios. Also use this section to describe the participants, perhaps summarizing their background and demographics, but never disclosing identifying information, such as full names.
Test Results
All that hard work writing problem statements and creating spreadsheets pays off here. Highlight the tasks with the lowest and highest completion rates. Summarize successful task completion rates by participant and by task. Illustrate your average success rate and show tables of your data. Organize and present your data in a way that is easy for everyone to understand.
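The by-participant and by-task summaries can be produced from the same per-attempt records collected earlier; a minimal sketch with hypothetical data:

```python
# Hypothetical per-attempt records: (participant, task, completed?).
attempts = [
    ("P1", "search", 1), ("P1", "checkout", 0),
    ("P2", "search", 1), ("P2", "checkout", 1),
    ("P3", "search", 0), ("P3", "checkout", 1),
]

def rate(rows):
    """Completion rate over a list of (participant, task, completed) rows."""
    return sum(ok for _, _, ok in rows) / len(rows)

by_task, by_participant = {}, {}
for p, t, ok in attempts:
    by_task.setdefault(t, []).append((p, t, ok))
    by_participant.setdefault(p, []).append((p, t, ok))

print("By task:", {t: f"{rate(v):.0%}" for t, v in by_task.items()})
print("By participant:", {p: f"{rate(v):.0%}" for p, v in by_participant.items()})
print(f"Average success rate: {rate(attempts):.0%}")  # 67%
```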
Findings and Recommendations
Review all of your data and list your findings and recommendations. Be sure to link your findings to the relevant data (as presented in your test results section). This section can be customized to explain your general findings, specific findings by scenario, or a combination of both.