Research should inform every step of designing user experiences. And as we move through a UX project, research at each step builds on what we have already figured out, like a high school chemistry class: starting with the basics, like the periodic table, then learning about electron configuration, working out bonds, and finally building complex molecules.
But what if your chemistry textbook had a single error at the start: it told you that hydrogen has two electrons instead of one? Sure, as a percentage of facts in the book that one error is tiny, but because our working knowledge is cumulative, this one glitch can end up having huge consequences when you try to construct molecules for your final exam.
I’ve seen similar things happen with UX projects, in which an assertion early on turns out to be dramatically false, and the team only realizes it at the end. Often the error can be traced back to the team relying too heavily on one form of user research at the expense of other perspectives. This is why it’s crucial to involve both analytical and anecdotal data at all steps in a UX process — especially at the start.
For example, website analytics is often a first step when it comes to pulling in hard numbers. How many people visit your website each day? How did they arrive? What are the most popular pages? How long do users remain on the site? What is the most common conversion path for a sale or email signup? All useful stuff, but without context, it could point us in a disastrously wrong direction.
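The numbers an analytics dashboard reports are just aggregations of raw events. A minimal sketch in Python, using made-up event records (the visitor IDs, pages, and time-on-page values are all hypothetical, not from any real analytics API), shows how a couple of the metrics above fall out of that data:

```python
# Hypothetical example: turning raw analytics events into summary metrics.
# The event tuples below are invented for illustration.
from collections import defaultdict

events = [
    # (visitor_id, page, seconds_on_page)
    ("v1", "/home", 30), ("v1", "/pricing", 90),
    ("v2", "/home", 15), ("v2", "/blog", 120), ("v2", "/signup", 45),
    ("v3", "/home", 60),
]

def summarize(events):
    """Return views per page and the average session length in seconds."""
    page_views = defaultdict(int)
    session_time = defaultdict(int)
    for visitor, page, seconds in events:
        page_views[page] += 1          # popularity of each page
        session_time[visitor] += seconds  # total time per visitor
    avg_session = sum(session_time.values()) / len(session_time)
    return dict(page_views), avg_session

views, avg = summarize(events)
print(views["/home"])  # 3 — the most-visited page
print(avg)             # 120.0 — average seconds per session
```

Note what this sketch can and can't tell you: it describes *what* happened (which pages, how long) but says nothing about *why*, which is exactly the gap the next example illustrates.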
Say we discover that users on average spend twice as much time on our client’s website as on their major competitor’s, and both scroll and click more as well. That’s hard data! Great! But what does it mean? Here are two possibilities:
A: Our client’s website is better-designed, more interesting and engaging than the competitor’s.
B: Our client’s website is more complicated, confusing and disorienting than the competitor’s.
Going with A or B without any evidence to back up your choice (essentially going on a hunch) is the kind of error made at the start that can — and will — come back to bite you down the road. And it’s bad UX practice.
So which one is correct? The only surefire way to know is to take that analytics data and pair it with anecdotal data.
Sit users down in front of a computer or their phone. Watch them try to accomplish tasks. Watch how they fail. See what they’re truly doing and feeling during the 2.25 minutes Google says they spend on your client’s site. Interview them afterward. Only then will you know for sure whether they’re happy to explore casually because the content is so compelling, or whether your client has made the login button so damn hard to find that they’re plowing through most of the site before figuring it out.
User surveys, phone interviews, focus groups (however overrated they are), one-on-one sessions, beta testers — you’ve got to make sure to put the “user” in “user interface.” And only by combining analytical and anecdotal data will you succeed.