User Experience (UX) means making better products and services by involving users in the design process. Sounds simple, but we’re always amazed by how often we come across businesses claiming to be doing ‘UX’ without a real live user in sight. If you don’t involve users, it's not UX! (That’s what we shout when we see this. Inwardly.) But this isn’t a quibble about terminology. If users aren’t involved in your design process then you’re seriously weakening that process, and any research you are doing is potentially wasted.
These are the top four things we see companies doing because they think it's helping their UX, but which aren't UX at all.
Good designers are experts at what they do; they've seen a lot of great design and were probably the creative force behind some of it. They'll have read a lot about user behaviour and hopefully worked on services and products that were successful and loved by users. But that doesn't mean that asking them what you should build is a substitute for user testing. Because designers aren't your users. In fact, they're probably very different to your users. For starters, they're likely to be highly experienced and tech-savvy web users, whereas your actual users may not be. What a designer thinks is user-friendly might not be friendly at all to an actual customer.
Expertise is no substitute for user testing. It has its place in the process, of course, but it should be backed up by proper user research that tests ideas on the people who will actually be using the end product.
You might think that psychological study from 20 years ago that tells you exactly how people behave in a certain situation is good scientific evidence to build your design around. But is it, actually?
The reality is, long-held and much-quoted psychological theories are failing replication tests. A 2015 study attempted to replicate a hundred psychology experiments and found that fewer than half produced the same results. This is forcing researchers to question the strength of their methods and the assumptions that underlie them.
So while you might use psychological research to inform your design, you still need to test those principles on users. We carry out hundreds of hours of user research every year and we’ve seen some long-held theories around usability fail with specific audiences.
Of course, the issue is sometimes down to how someone interprets and applies psychological theories to their process, but this reinforces the benefit of testing. Without testing, how do you know your interpretation is right?
And users aren’t infallible either. People don’t experience design rationally and predictably. They respond to words on a page differently and absorb information differently. There’s an unpredictable emotive reaction to design that is based on that individual’s subjective experience and which won’t be uncovered without user research.
And the senior execs in your business are not your users. Yet we often hear tales from friends who work in UX of companies where putting a design in front of stakeholders is considered a form of user testing.
Stakeholders offer valuable input from a business perspective, and they should be the ones who decide what action to take based on evidence from user research. But with the exception of internal software projects, it's very likely that these internal decision makers aren't your users. Ask your stakeholders what users will think of your design, and the answers they give will be unreliable.
The trouble is, your stakeholders know too much. They’re vastly more knowledgeable about their products, services and industry than your users are likely to be.
Imagine you’re designing a pension advice service. Your stakeholders are experts on pensions. They know their lifetime allowance from their annual contribution allowance; they understand tax relief and annuities. These people live and breathe pensions. Ask them about the new service design and you’ll probably get a valuable perspective, but these people are not representative of a typical consumer who may know nothing about pensions and who might use the service differently because of this.
The views of your stakeholders are assumptions and they need to be tested on real users. They can offer valuable perspectives on resolving issues that users experience, but their assumptions are no substitute for user testing.
This might seem counterintuitive, but relying on analytics data is not the same as user testing. 'But the data comes from users who are using our site!' you might be thinking. And yes, analytics data is excellent for finding out what users do. But it doesn't answer a key question: why? Why do they do that?
Analytics data provides empirical evidence on how users are currently interacting with a website, but it doesn't get behind the curtain to show motivation and emotion. And why users do things is the most useful piece of information when you're trying to resolve issues with a design. Data can be a blunt instrument and is open to interpretation. For example, is a longer or shorter average time on a page better? Does a long visit mean users are happily absorbing the information they need, or searching fruitlessly for what they want?
There are types of user research that use analytics methods, like split testing, which can show you which of two options converts better, but it’s even more powerful to back this up with other types of testing that give you more nuanced information. Without knowing about motivation, you can make educated guesses, but user testing shortcuts this guessing game and may well save you time and money down the line.
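To make the split-testing point concrete: deciding which of two options "converts better" ultimately comes down to comparing two conversion rates and checking whether the difference is bigger than chance alone would produce. A minimal sketch of that comparison in Python, using a standard two-proportion z-test (the visitor and conversion numbers here are entirely hypothetical):

```python
from math import sqrt, erf

def two_proportion_z_test(conv_a, total_a, conv_b, total_b):
    """Compare conversion rates of two split-test variants.

    Returns (z, p) where p is the two-tailed p-value: the probability
    of seeing a difference at least this large if the two variants
    really convert at the same underlying rate.
    """
    p_a = conv_a / total_a
    p_b = conv_b / total_b
    # Pooled conversion rate under the null hypothesis (no difference)
    pooled = (conv_a + conv_b) / (total_a + total_b)
    se = sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    z = (p_a - p_b) / se
    # Two-tailed p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical numbers: variant A converted 120 of 2400 visitors,
# variant B converted 90 of 2350.
z, p = two_proportion_z_test(120, 2400, 90, 2350)
print(f"z = {z:.2f}, p = {p:.3f}")
```

Note what the test can and cannot tell you: it says whether variant A outperformed variant B, but nothing about why, which is exactly the gap that qualitative user research fills.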
UX covers a wide range of disciplines: product design, service design, information architecture, usability, visual design and many more. But the common thread through them all, the thing that makes them UX in the first place, is that they take a user-centred approach to design. Leave the user out of that, and what you're doing isn't UX. And it might not be getting you the right answers.