Customer engagement is a perennial concern in digital customer experience. For long-term customer success, you need to design better products that reach more people.
Usability testing is how you test your product with real people.
As a method, it helps companies test the success of their design, raise their game, and keep their conversions healthy. This post will share how to perform usability testing on your enterprise website to fix issues and optimize your conversions.
Whether you’re evaluating the user experience of a new site before launch or upgrading an existing one, we’ve got you covered with principles and tricks you can use to find out what’s working and what’s not.
What is usability testing?
Usability testing is the process of evaluating your product by seeing how your end-users interact with your system while completing a list of tasks. You observe their interactions to see how usable your site is while noting areas for improvement.
It’s about understanding your product's functionality and seeing if users can make intuitive sense of their experience of it. You then let the feedback guide you as to how you’re enabling users, further informing your design decisions.
Usability testing will help you understand problems that are negatively affecting conversions.
One case where usability went badly wrong comes from Myanmar, where much of the web used the non-standard Zawgyi text encoding. Users couldn’t read UI or content encoded in Unicode and had to rely on Unicode-to-Zawgyi converters just to understand what international apps were saying. But rendering text legibly is a low bar.
We can distill the issues that can compromise the user experience down to three primary usability metrics: effectiveness - how well users can accomplish their goals on your website; efficiency - how quickly they can do so; and satisfaction - how pleasant they find the experience.
The best time to do usability testing is... anytime. It’s always good to know how usable your product is, whether before you start designing your current site or once you have a prototype ready, for example.
Usability testing helps you probe these indicators about what the user thinks and feels when they approach your site.
Once you’ve determined which metrics you’re using, you need to create task analyses.
Typically, a designer - with a deep understanding of app lifecycle management, for example - will start the ball rolling and walk UX researchers through the prototype.
They annotate assumptions and break down the complex interaction flow within business applications for UX researchers who facilitate testing sessions.
Task analysis involves breaking down your overarching goal into small steps - as many as you need to achieve your objective and develop an idea of the path users take.
Use your task analysis to set the metrics you’ll use to benchmark effectiveness and efficiency - such as task completion rate and time spent on any given task. Typically, you build your task flow by laying out and connecting shapes - diamonds for decisions and rectangles for process steps.
Having built your task flowchart, it now functions as your guide for adjusting your metrics and conducting testing sessions.
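To make the two benchmark metrics above concrete, here is a minimal sketch of how you might compute task completion rate (effectiveness) and mean time on task (efficiency) from session records. The data structure and function names are illustrative assumptions, not part of any standard tool.

```python
from statistics import mean

# Hypothetical session records: (user_id, completed_task, seconds_on_task)
sessions = [
    ("u1", True, 42.0),
    ("u2", False, 90.0),
    ("u3", True, 55.0),
    ("u4", True, 38.0),
]

def task_completion_rate(records):
    """Effectiveness: share of sessions in which the task was completed."""
    return sum(1 for _, done, _ in records if done) / len(records)

def mean_time_on_task(records, completed_only=True):
    """Efficiency: average seconds spent, conventionally over completed sessions."""
    times = [t for _, done, t in records if done or not completed_only]
    return mean(times)

print(task_completion_rate(sessions))  # 0.75
print(mean_time_on_task(sessions))     # 45.0
```

Benchmarking these per task in your flowchart lets you spot which steps users stall or fail on.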
Narrow down test type
Suppose you're a business that doesn't need to ask, 'what is SaaS marketing?' because you're serious about selling your products. You recognize that the user experience covers every interaction users have with your brand and your product.
Once you develop quality software, you know that differentiating yourself means showing how your product improves the lives or businesses of your customers. Well, how do you ensure it does exactly that?
That's where usability testing comes in.
First, you need to identify the kind of usability test you should perform for your enterprise website.
Your choice will depend on the tasks you’re looking to measure and the metrics you’re using.
Let’s take a look at a few common use cases.
Card sorts are an early-stage usability test. They can be run quickly and reveal how well users’ mental models match your site’s information architecture.
However, because card sorts involve minimal user feedback, they’re not well-suited to testing other metrics like satisfaction and effectiveness.
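A common way to analyze card sort results is a co-occurrence matrix: counting how often each pair of cards lands in the same group across participants. The sketch below assumes a simple open card sort with hypothetical card names; the data format and function are illustrative, not from any specific tool.

```python
from itertools import combinations
from collections import Counter

# Hypothetical open card sort results: one dict per participant,
# mapping their own group label -> the cards they placed in it.
sorts = [
    {"Shop": ["Pricing", "Checkout"], "Help": ["FAQ", "Contact"]},
    {"Buy":  ["Pricing", "Checkout", "FAQ"], "Support": ["Contact"]},
]

def cooccurrence(results):
    """Count how often each pair of cards is grouped together."""
    pairs = Counter()
    for participant in results:
        for group in participant.values():
            for a, b in combinations(sorted(group), 2):
                pairs[(a, b)] += 1
    return pairs

matrix = cooccurrence(sorts)
print(matrix[("Checkout", "Pricing")])  # 2 -> grouped together by everyone
```

High-count pairs suggest cards that belong together in your navigation; low counts flag labels users don’t associate the way your architecture does.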
Then there are field studies, in which UX researchers embed with users and have them walk through how they interact with the product in its real context.
Also known as contextual inquiries, field studies are perfect for usability testing because they allow you to gain face-to-face feedback. While they let you take a deep dive into usability issues, they’re less time-effective.
Another usability testing option is eye-tracking tests. These studies can be done in person or remotely via webcams. They’re excellent at uncovering problem areas where users disengage from your site or content that might be irrelevant. They help you discover what users care about.
A/B tests, focus groups, and surveys do have their uses as techniques in user research. And if you don’t need to ask ‘what is automation testing?’, you’ll know that automated code analysis tools can catch some usability issues with far less time and effort.
But these don’t qualify as usability tests in the strict sense, given the absence of real people interacting with your product.
When performing usability testing, you can choose from several approaches that help you assess your product from the user’s viewpoint.
Observe and record their actions - whether they had trouble accomplishing their tasks or whether the experience was frictionless.
These are your actionable insights - the basis for optimizing the user experience.
How to find participants
The vast majority of businesses use fewer than ten users to perform a usability test, and for most, a handful is enough to make a difference.
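Why a handful is enough is often explained with the classic problem-discovery model attributed to Nielsen and Landauer: the expected share of usability problems found by n users is 1 − (1 − p)^n, where p is the average probability that a single user uncovers a given problem (often cited as roughly 0.31). A quick sketch, assuming that model:

```python
def problems_found(n_users, p=0.31):
    """Expected share of usability problems uncovered by n test users,
    per the 1 - (1 - p)^n model (p ~ 0.31 is a commonly cited average)."""
    return 1 - (1 - p) ** n_users

for n in (1, 3, 5, 10):
    print(n, round(problems_found(n), 2))  # 5 users find ~84% of problems
```

Under these assumptions, five users surface most problems, and each extra user adds less than the last - which is why iterating with several small rounds usually beats one big study.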
When you’re seeking participants for your study, be sure to pick valid proxies for real users to bake in external validity from the start.
In the worst-case scenario, you could roll out a redesign that turns out to be a flop with your core audience and cost yourself a fortune in the process.
If you can’t recruit representative users yourself, you can turn to third-party platforms where you can pay for them. UserTesting lets you arrange live testing and filter user-testers by a range of demographics. Userlytics is another highly customizable subscription-based service that makes it easy to find the right participants.
If you have difficulty sourcing an adequate sample of users and gaining quick feedback is your biggest concern, you can always use internal testing.
How to run internal usability tests
Internal usability testing can be an alternative when sourcing participants takes too much time, effort, or expense. It’s also known as dogfooding - alluding to the pet food quality-control worker who went above and beyond to validate the products by eating them himself.
This remarkable case of ethnographic immersion not only saved the company by highlighting issues before the product went into mass production - it also made for great marketing fodder.
It’s a legendary backstory that illustrates the importance of including anyone involved in the project in testing out the user experience.
Qualitative vs. quantitative data and results
Depending on how you carry out your testing and your original objectives, you’ll likely be working with some mix of both quantitative and qualitative data.
Consider the task completion rate metric in the chart above. While it’s a fundamental usability measure, on its own it doesn’t provide much insight for business applications, because longstanding users of the product - power users - can complete tasks unassisted even when real usability issues exist.
In such cases, you’ll aim to observe their behavior and solicit feedback while they’re engaged in tests to learn about potential usability issues.
A mental model refers to a user’s beliefs about how the experience works. How does their experience match their expectations? (See chart below.) You need to understand this to make sense of any pain points that teams responsible for improving product UX should know about. That involves asking open-ended questions that lead to richer insights.
Generally, qualitative testing is good for unveiling insights and ideas or confirming initial assumptions.
Quantitative testing is usually for testing a larger sample of participants and is ideal for tweaking your design once you’ve come up with a finished product.
It will show you how many of your users decided to bail out at the checkout before buying your products. But you’ll need qualitative data to find out their reasons why.
Note you will need to rely on qualitative data for tests taken before you launch a site since you’ll lack the numbers to collect significant quantitative insights at this stage.
Post-launch is when to use A/B testing: home in on any trouble spots in your site’s experience that are costing you conversions. How you review your findings and iterate your projects will vary according to your goals and the mix of data you’ve collected.
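When you do run a post-launch A/B test on a trouble spot such as checkout, the usual question is whether the observed difference in conversion rate is statistically meaningful. One standard approach is a two-proportion z-test; the sketch below implements it with only the standard library, and the conversion numbers are purely hypothetical.

```python
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is variant B's conversion rate
    significantly different from variant A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical checkout data: 120/2000 conversions on A vs 160/2000 on B
z, p = two_proportion_z(120, 2000, 160, 2000)
print(round(z, 2))  # 2.48
print(p < 0.05)     # True: the lift is unlikely to be noise
```

A significant result tells you the redesign changed behavior; the qualitative feedback from your sessions tells you why.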
Do note that quantitative data lends itself particularly well to data visualization.
When, where, and who
Now that we’ve explained the basic principles of running your usability test - the “what” - decide if you’re going to carry it out online or in-person and whether you need a moderator.
Your goals and your platform - a high-definition conference call tool, for example - will determine whether you use a moderator for remote tests.
Facilitators in remote sessions should be clear in advance about which screen-share platform they intend to use and give participants a heads-up at the warm-up about how they want to run the session.
Perhaps you’ll go with what’s known as the think-aloud protocol, whereby you test the participant’s use of the system by getting them to think out loud continuously.
Whatever methods you use to observe and record your conversation, tell participants what they need to know before sharing the prototype and starting the test.
In-person tests are generally performed in the user’s natural setting to avoid introducing artificial conditions.
Moderated testing involves having a usability specialist present to talk things over and support the user while carrying out the test. Employed in tests of incomplete interfaces or when there are enterprise cybersecurity threats, it allows you to dig deeper into your testing. That said, arranging to have a tester present is a challenge in itself.
Unmoderated tests offer more flexibility and can be taken at users' leisure.
Use these dry-run opportunities to flush out issues that are preventing you from moving forward with your projects. Bring in fresh eyes, subject-matter experts, and designers - the people who know how to build an app - to spark new ideas.
Whether you’re developing a new product or improving the developer experience in a headless CMS - helpful because they need much less coding knowledge than traditional systems - your design should be an evolving project built on iteration, a key principle that accelerates innovation.
That means you create, test, iterate, repeat.
Amend until you enable users as you intend.
Such iterative testing is at the core of great usability.
Again, there’s virtually no wrong time to conduct usability studies.
So get to testing early and test often.
It’s the only way of ensuring you’re involving real folks and minimizing your reliance on assumptions that could lead you astray.
‘70s jazz-rock legends Steely Dan put it best: ‘You go back, Jack, do it again.’
It's no secret that companies that invest in UX enjoy higher customer retention rates, among many other benefits. It’s also no secret that usability testing can increase leads and click-throughs and convert online visitors into customers.
Usability tests represent the point of view of the real people who'll use your product once it's completed - your actual users undertaking real tasks.
It’s a simple idea in theory with endless moving parts in practice.
Look at people interacting with your website to identify any steps in the user journey that could potentially confuse or confound visitors and take what you learn to diagnose problems. Then, once your test concludes, review the results to check what changes your website could use and make them before the next round of testing.
Lastly, remember that usability tests aren't just good for testing usability and improving your results by analyzing your data - they also help you iterate the usability testing process itself.
Iteration improves the usability of your product and delivers the kind of changes to your UX that leave no gob unsmacked.
Use it all.
Kate Priestman - Head Of Marketing, Global App Testing
Kate Priestman is the Head of Marketing at Global App Testing, a trusted and leading end-to-end functional testing solution for QA challenges and app testing. Kate has over 8 years of experience in the field of marketing, helping brands achieve exceptional growth. She has extensive knowledge on brand development, lead and demand generation, and marketing strategy — driving business impact at its best. You can connect with her on LinkedIn.