Adam at Digital Gaggle – validating ideas fast with remote user research

October 24, 2016, by Adam Babajee-Pycroft, Managing Director (UX), in Research

Adam spoke at the Digital Gaggle conference in October 2016 at Colston Hall, organised by Noisy Little Monkey

Involving users in the design process is critical to digital success, but how many times have you been told things like: ‘we don’t have time’, ‘we can’t afford it’, ‘we’ll do it after the site is built’? You needed to get the feature live yesterday, but there’s no reason that user research should suffer. The solution? Remote user research. In this talk, Adam will share what he’s learnt from executing hundreds of hours of remote research on a variety of projects, including common mistakes and issues, how to find the right participants, and the advantages and disadvantages of certain tools.

Transcript

The following transcript has been prepared for people who are hard of hearing or those of you who’d rather read the talk than watch the video.

Introduction

So let’s start by looking at the definition, what do I mean by remote user research?

Definition

Remote user research can include almost anything:

- Self-moderated or remotely moderated usability testing: users work through tasks either on their own or with someone guiding them.
- Idea validation: testing whether there’s real demand for a feature, a service or an idea.
- Card sorts: understanding how users would organise content on a website if it was up to them, then analysing that across multiple users in order to understand how things should be structured.
- Depth interviews: asking people a lot of questions to get an understanding of how they interact with a product or a service.
- Tree tests: coming up with a structure and validating it quantitatively to understand whether people can locate content.

These are all types of research that we can carry out remotely. Simply put, it’s the same research that we’ve hopefully all been doing, but done remotely.

I used to be a sceptic

I used to be a sceptic when it came to remote research. I used to think: it can’t be as good. You’re not in the same room as people, you can’t see the whites of their eyes, you’re not going to build the same relationships, so can it really be worthwhile? Then a client came along who wanted a project done but didn’t have as much budget as they normally would. Given the nature of what we were doing, it was still important that some user research and usability work was carried out. So I decided to acquiesce and go down the remote research route. The project manager originally suggested it, and after convincing myself I said ‘well, it will be better than nothing, so let’s do it.’

Advantages

Speed

And I’ve got to admit now that I’m definitely a convert, because I think there are many advantages to it. For example, speed. It’s quicker to arrange: you don’t need to arrange for someone to be in a specific place, because often they can do it in the context where they actually use the product, be that at home, at work, probably not running for the bus unless 4G speeds in the UK improve. Multiple tests can happen simultaneously as well. Once you’ve set up some tasks, people can often conduct them on a self-moderated basis; you don’t need to be sitting in the room while they do them, which can save time.

Cost

That then has implications for cost. If users are doing self-moderated research, it doesn’t consume your time during the research itself, only during the analysis and reporting. And participants can be cheaper to recruit, because you don’t need them to come into your office in central Bristol or London or South Carolina or wherever it is. Instead, you can get people to carry out the research in a context and at a time convenient to them, which could often be out of hours, when they can fit it in.

Participant locations

Often if you’re in a room with a participant, specifically during a usability test, even the best of us are sometimes tempted to intervene when someone is really struggling to complete a task and getting really frustrated with a product; it’s almost socially awkward not to intervene. But because you’re not sat there, they carry on, and it might be 15 or 20 minutes of pain they go through, but I think that’s a good thing, because really it’s stopping us releasing poor products into the world. Everyone here’s brilliant, so I’m sure you’re not responsible for those terrible design decisions that were made; it’s someone else, probably someone who’s paid a lot more. That’s fine, because then you can take the evidence from that, play it back to them, and help them understand the implications of the design decisions they’ve made and the impact on the users.

And you can recruit participants from anywhere. I love to travel, but if I’m doing a global site with users across multiple continents, client budgets generally don’t stretch to that many business class flights. So realistically, remote research is a good option. If anyone here is willing to fly me somewhere, though, get in touch afterwards.

Another advantage is that you can see users in their natural habitat. You’re not pulling them into your comfortable meeting room full of biscuits and coffee and lovely agency things. Instead, they’re probably on their slightly ropey old PC, or sitting in their office with people working in the background. In some ways it’s a less artificial situation, and that improves the quality of the research you can carry out.

Disadvantages

However, it’s not all sunshine and rainbows.

Building relationships

Actually, I’d say it’s sometimes harder to build relationships, especially with self-moderated research, where you basically set up the test, someone goes and does it, and then you don’t necessarily follow it up individually and talk to them afterwards. But it’s not impossible: you can still follow those things up. You can speak to people on Skype, or maybe use Google Hangouts, that kind of thing, to bridge the gap.

Wider context

Another disadvantage is that you can’t always see the wider context in which people operate. Things like ethnography sometimes don’t work so well remotely. Sometimes there’s no substitute for going and sitting there.

Connection speeds

Connection speeds vary as well. One of the tools I’ll talk about later, Validately, works by uploading video as people go through and complete tasks. And although broadband and download speeds have increased in recent years, upload speeds still aren’t particularly good. So if someone’s going to live stream video of what they’re doing, that can often lead to connection issues, which can waste people’s time and effort. So you have to be careful around those things.

Choosing tools

So choosing research tools. There are a number of different types of research you can carry out remotely.

Surveys

For example, surveys. There’s SurveyMonkey, which everyone knows as a good commercial product. Typeform are probably the nearest competitor I could find to them. There’s also Google Forms, which is free, does a lot of that stuff, and is fairly usable as well.

Interview tools

There are a litany of video conferencing tools out there. I’m sure we could name 20 or 30 more and put them on the slide, but really, what’s the point? Essentially, find one that’s easy for the users to get on. It might be that it’s not necessarily the same one for every participant: for example, if some of your customers are on Apple devices then FaceTime might be a good option, but Hangouts might be better if they’re on Android. And you can screen record those sessions for future analysis.

Quantitative tools

Quantitative research, for example validating the structure of a website, or understanding statistically whether people click in a certain place or are interested in clicking certain things, can be done with tools such as Optimal Workshop and Notable. Notable also does things like annotation, where you can get people to write all over your designs about what they think of them; memory tests, where you can show a design and ask people ‘what did you remember about that page?’ or whatever it is you’re showing them; and mood, ‘how did that make you feel?’. I haven’t used that last one, but if you want to gauge mood there are also satisfaction-type questions, and things like Net Promoter Score that you could potentially survey at the end of a research piece, be it a depth interview or something like a usability test. There’s also User Zoom: as well as doing card sorting and tree testing, they do quantitative self-moderated usability testing, which I’ll show you a bit more about in a minute.

Qualitative tools

There’s plenty of choice of qualitative tools. There’s going to be a giant table now, so this is the warning: if you’re offended by giant tables, look away. These are the five most popular remote usability testing and user research tools. There’s Usertesting.com; I’d say they’re the market leader. What Users Do, UK based, which is quite handy because they price in pounds, so you’re not beholden to currency fluctuations. Validately, which is a US start-up, as is Loop 11. And User Zoom, which I mentioned before.

Price

When it comes down to the commercial side of these tools, I’m not going to talk much about price, other than to say that essentially the first four are fairly affordable; your exact price will probably depend on your ability to negotiate with them, and it’s definitely worth talking to all of them. User Zoom, I think, are a bit more enterprisey in their pricing, as in, it costs thousands and thousands, and I’ve never been able to get a sensible price out of them for what they’re offering compared to the rest. But I have used it as a tool: one of my corporate clients had paid for User Zoom, and it was pretty good in terms of the actual features. It also offers all of the stuff that Optimal Workshop, from a couple of slides ago, offer, which is quite useful.

Speed

So, in terms of judging these tools: first, there’s speed, i.e. how quickly can you get responses? With User Testing you can sometimes get people back within 2 hours; there are downsides to that, which I’ll touch on in a second. With What Users Do you can do things within a couple of days, or about 5 days for mobile with a simple screener. With Validately you can get something within 1 to 2 hours, or 5 days for a more complex screener. Loop 11 just say the same day, when I’ve quizzed them on that, and User Zoom is 1 to 2 hours.

Recruitment

But what’s interesting with these tools is that it comes down to recruitment, to a certain extent. User Testing can get you a response in 2 hours because they have a big panel of users you can test with; I think the panel is about 2 million people worldwide now. That’s fantastic, but the problem is they don’t limit the number of tests that people do, so you need to be very careful with your screeners in order to root out people who may be too accustomed to the process and are not representative of your actual user base. What Users Do have a similar panel, but they limit participation. Validately just do bespoke recruiting through social media. And with What Users Do, Validately, Loop 11 and in fact User Zoom, you can bring your own participants. So if you want really specific participants, such as cognac-drinking Jaguar drivers with a PhD, you can go and bother the poor people at a research participant recruiting company (there’s one in Bristol, People for Research, and a few others out there as well), who will source those people for you; then you can get them to do the research and incentivise them yourself. Alternatively, you can dig into your CRM or your wider network, depending on the budget available to you, and recruit participants.

Options

All of them do self-moderated testing, where you set up some tasks, someone goes away and does them, and then you get to review the video. Validately also offer remote moderation, which means you can actually sit there and carry out a moderated session like you might if you were in the room with someone: talking them through a task, asking them follow-up questions, traditional usability testing. Or there’s the quantitative stuff I touched on when talking about User Zoom before.

Another factor which can limit which tools are available to you is what the users have to do at their end to use them. A lot of them use native applications on Mac OS or Windows and on the mobile platforms. Validately use a Chrome plug-in, which is good because users can often install that much more easily, without needing any administrative permissions, and they do talk them through it. The downsides are a) they have to have Chrome, which isn’t always possible in a more enterprise-type environment, and b) as I mentioned before, it uploads video as the user goes, so quite often participant videos can go missing, which is not good if you’re having to incentivise people and have paid a lot to recruit them. So that’s a consideration. And there’s the table in all its glory.

How to do it

So how do you do it? Well, if you’re new to usability testing, typically the first thing you need to do is actually work out what your objectives are. What is it you’re hoping to research or test?

Writing tasks

You need to write tasks. Whenever you write tasks, especially for self-moderated research where people have to do things alone, you need to make sure they’re succinct and easy to understand. A rule I use is: can it be written on a Post-it note? If it can’t be, maybe break it down into smaller tasks to ensure that people can be very clear on what you’re asking of them.
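The Post-it rule can even be turned into a rough sanity check when drafting tasks. This is purely an illustrative sketch: the 20-word threshold is my assumption here, not a standard, and the example tasks are made up.

```python
# Rough proxy for "fits on a Post-it note": flag tasks over ~20 words.
MAX_WORDS = 20  # assumed threshold; tune to taste


def needs_splitting(task: str) -> bool:
    """Return True when a task is probably too long for one Post-it."""
    return len(task.split()) > MAX_WORDS


tasks = [
    "Find an item of furniture you would use to store books.",
    "Imagine you are planning a dinner party for eight guests and need to find, "
    "compare and purchase a dining table, matching chairs and suitable tableware "
    "within a budget of five hundred pounds.",
]

for t in tasks:
    print(needs_splitting(t))  # first is short enough, second should be split
```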

Ambiguity

Avoid ambiguity. Especially with self-moderated research, you’re probably not going to be there to explain what you’re asking for, so if anything can be interpreted ambiguously, go through it and make it as succinct and clear as possible. Use plain English, try not to use jargon, and don’t assume that people will know what you’re on about.

Leading questions

Don’t use leading questions. Jared Spool, a globally renowned UX author and general expert, did some user research for a Swedish furniture store a few years ago. They knew there was a conversion issue around bookshelves. They had a product called Bookcase, and they said ‘OK, we know from analytics that people aren’t buying the bookshelves online; what’s going on?’ So they set up some remote usability testing to understand what was actually happening. They wrote a task which was ‘find a bookcase and add it to your basket’. That’s all well and good, but everyone they tested with was able to complete it, no issues at all, and they said ‘well, this isn’t really helping us get to the bottom of the real issue.’ That’s because it was a leading question.

So they repeated the test and changed the question to ‘find an item of furniture you would use to store books’. Pretty much all of their participants went to the search bar at the top, typed ‘bookshelves’, and no results were returned, because the products had been indexed under ‘bookcase’ rather than ‘bookshelf’. That’s the impact of a leading question. You need to make sure you don’t lead the user; there’s a fine line between being unambiguous and not just saying ‘find a bookcase’.
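The failure the unleading task exposed is, at root, a missing-synonym problem in the site search. As a hedged sketch (this is not the retailer’s actual system; the product name and synonym list are illustrative), mapping synonyms onto the indexed term fixes exactly the behaviour described above:

```python
# Illustrative only: products indexed under one term, users searching another.
PRODUCTS = {"Bookcase": ["bookcase"]}  # indexed under "bookcase" only

# Synonym map an index could apply before matching (assumed, not real data).
SYNONYMS = {"bookshelf": "bookcase", "bookshelves": "bookcase"}


def search(query, use_synonyms=False):
    term = query.strip().lower()
    if use_synonyms:
        term = SYNONYMS.get(term, term)  # normalise to the indexed term
    return [name for name, keywords in PRODUCTS.items() if term in keywords]


print(search("bookshelves"))                     # what the participants hit: no results
print(search("bookshelves", use_synonyms=True))  # the product is found
```

The leading task never exercised the search path at all, which is why it found nothing; only the open task surfaced the gap.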

Rhythm

You need to establish a rhythm as well. Some of these tools allow you to move the user onto another page at the start of each task. If you do that for three out of four tasks but the remaining task doesn’t move the user over, the chances are they’ll get confused. They’ll think it’s broken: ‘why has a new page not loaded?’, because you weren’t explicit about it. There’s an implied rhythm that people pick up as they go through and complete your tasks.

Common mistakes

There are common mistakes as well, such as poor screeners. Especially on some of these sites that offer panels, some people will lie in order to participate, so you need to be very careful about the screener questions you ask. Rather than asking, for example, ‘are you a teacher?’ with a yes/no answer, ask a question like ‘what do you do for a living?’, list multiple choices, and only allow people who identify as teachers through. It’s not foolproof, but nine times out of ten that’s going to eliminate participants who wouldn’t be useful to your research and are not representative of your audience. You also need to avoid participants who are too familiar with the process. As I mentioned, on some of the sites, such as User Testing, you need to screen them out, maybe even asking ‘when did you last do a usability test?’ or ‘have you done a usability test before?’ Often those sites are quite good: if participants are almost too adept at it, they will occasionally replace them for free. But generally, you need to avoid drawing conclusions from people who use these sites heavily, because unfortunately some people just sit there doing usability testing on these sites repeatedly, over and over again.
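The multiple-choice screener described above is simple routing logic. As an illustrative sketch only (the question wording, options and qualifying set are hypothetical, and this is not any panel tool’s actual API):

```python
# Hypothetical screener: an open multiple-choice question is harder to game
# than a guessable yes/no, and only the target option routes people through.
SCREENER = {
    "question": "What do you do for a living?",
    "options": ["Accountant", "Nurse", "Teacher", "Software developer", "Other"],
    "qualifying": {"Teacher"},  # only these answers pass the screener
}


def screen(answer: str) -> bool:
    """Return True if the participant qualifies for the study."""
    if answer not in SCREENER["options"]:
        raise ValueError("answer must be one of the listed options")
    return answer in SCREENER["qualifying"]


print(screen("Teacher"))     # True  - routed into the study
print(screen("Accountant"))  # False - screened out
```

The point of listing several plausible options is that a participant who is guessing can no longer infer which answer unlocks the incentive.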

A quick tip is to have a look at the bookmarks bar in their browser: if they have paid surveys and about three or four different usability testing sites, ask the companies to replace them, because they’re not representative of typical people. That’s probably their only source of income; they might be students, or they might actually make a living from it. Another common mistake: not testing the test yourself.

I’ve seen so many cases where people have launched usability tests and users have got hung up on certain things not working, and that could have been avoided by testing to make sure that a) everything you’re asking them to do can be achieved, and b) doing functional testing to make sure that what you’re putting in front of people works as well as it can. Because if something is atrocious and doesn’t work at all, your participant may get hung up on it for the rest of the test, and that will affect what you can find out from it.

So even if you just guerrilla test it (maybe send a link to a friend and get them to go through it, or go through it yourself), you need to make sure that when you actually put your participants on there, they have a chance of completing the tasks you’ve set. You need to remember the country filter as well. Once I was doing some user research for MINI and forgot to set the country screener, and ended up with a really nice video of a drunken man in the Las Vegas area telling me why he would swap his wife for a Mini Paceman. Hilarious at the time, and the client found it funny as well, but in reality that’s not particularly useful user research when you’re designing for the UK.

Getting permission

To touch a bit on the theme for today, and to go back to what Leonie was saying about accessibility: you can ask for permission, but the best thing really is to use free trials, demonstrate the benefit to stakeholders, share evidence, and then seek forgiveness for doing so. If you’ve made a tangible and positive impact, usually people will begin to buy into it. I think the same applies with clients: quite often, using some of these free trials, you can do one or two usability tests in order to convince people to commit to a larger project.

Conducting analysis

In terms of analysing this stuff, there’s far too much to cover; we could probably do a whole day on how to analyse usability testing in remote research. But you need to hold yourself to a high standard of proof. Don’t draw conclusions from just one participant. Try to ask unambiguous yes/no questions about what users have done or what you’ve observed them do. Try not to make too many assumptions, and if you do make assumptions, declare them in your report, because if people see something in a fairly official-looking, well-presented report, they will believe it, and those things can live on for years within certain organisations. Show your workings: if you aren’t sure, or if you’re demonstrating something, it’s best to show video evidence. If you’ve got the tools and can record, produce highlight reels which show your stakeholders or clients exactly what you learnt, and allow them to share that evidence with other people in the organisation. And finally, remember your research objectives. When you analyse, look specifically at how the site performed against those objectives. It may be that you were way off, and that’s fine, because you can go back and repeat things in the future. But always remember the objectives and use them as a lens for your analysis.
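Holding yourself to a high standard of proof can start with something as plain as tallying unambiguous yes/no observations per task before drawing any conclusion. A minimal sketch, where the task names and observation data are entirely made up for illustration:

```python
from collections import Counter

# Hypothetical observations: for each task, did each participant complete it?
observations = {
    "Find a product and add it to the basket": ["yes", "yes", "no", "yes", "no"],
    "Locate the returns policy": ["no", "no", "no", "yes", "no"],
}

for task, results in observations.items():
    counts = Counter(results)
    rate = counts["yes"] / len(results)  # completion rate across participants
    print(f"{task}: {counts['yes']}/{len(results)} completed ({rate:.0%})")
```

Reporting the raw counts alongside the percentage makes the small sample size visible to stakeholders, which is part of declaring your assumptions rather than hiding them behind a polished figure.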

Takeaways

1) Ideally try remote research at least once. Try to avoid preconceptions.

2) Don’t ask for less budget for user research. Try to get the same budget as you would have used before, but do more iterations: rather than producing something, usability testing it once and releasing it, test it, and assuming you find issues, fix those issues and then retest using the remaining budget, in order to validate that you’ve actually fixed the problems (or the problems you perceived were happening before) and haven’t unexpectedly introduced anything new.

3) Ensure instructions are easily understood. Pretty unambiguous there.

4) Use the right tool for the job.

5) Be rigorous in your analysis; hold yourself to a high standard of proof.

Adam Babajee-Pycroft

Managing Director (UX)

Our founder Adam has over 13 years of experience in UX. He’s fuelled almost exclusively by coffee (using one of his seven coffee making devices), curry and heavy metal. Before founding Natural Interaction in 2010, Adam managed UX for AXA Life’s UK business. Since then, he’s worked with a range of clients across the automotive, eCommerce and tech startup sectors, delivering impressive results for brands including BMW, Mini, The Consortium and National Trust.

Get in touch

To find out more about how our UX services can help your business, contact Alex now.

Contact us