
Erika Hall – Cultivating Shared Understanding from Collaborative User Research

by Sean Carmichael



Traditionally, user research has taken on more of a scientific identity. You would do usability testing and research, take a ton of notes, and then compile all of your findings into a report. The effectiveness of that research depended on whether anyone read the report, and then on whether they could do anything actionable with the data.

Erika Hall, of Mule Design, has taken a more effective approach she calls team-based research. She says her goal is to shift the focus away from a more academic way of thinking. Rather than being concerned only with doing high-quality research, you determine how much the research actually helps you build and create better products.

Another part of team-based research is to view research as having its own customer. Research is sometimes done as a checklist item and doesn’t go much further than just being able to say you’ve done it. When you consider that the “customer” for your research is the people who are going to change or design the product, research becomes more of a service itself. You can measure how well you’re doing the research by how well the results help that “customer.”

Erika will be presenting one of 8 daylong workshops at UI20, November 2–4, in Boston. For more information, visit uiconf.com.

Recorded: August 2015

Full Transcript


Jared Spool: Hello everyone. Welcome to another episode of “SpoolCast.” Today I’d like to introduce you to a wonderful woman who is one of the smartest people I know when it comes to user research. She is the author of the fantastic book “Just Enough Research.” We’re of course talking about Erika Hall.

Erika is going to be teaching a workshop on November 4th at the User Interface Conference, which is going to be in Boston, Massachusetts, November 2nd through 4th. She’ll be doing a full-day workshop called “Cultivating Shared Understanding from Collaborative User Research.”

Ladies and gentlemen, let me introduce you now to Miss Erika Hall. Erika, how are you?

Erika Hall: I am fantastic. Thank you for having me on your podcast.

Jared: Thank you so much for coming. Your session is an interesting approach to user research that is a change from the way a lot of us have done it over the years. I’ve been doing usability testing and ethnographic research, gosh, since the late ’70s, early ’80s. When we used to do it, we used to approach it very much like a scientific project where you would go out and you would collect all sorts of notes.

You’d come back and you’d assemble the notes into a giant report. You would write the report in passive voice, and then distribute it. Of course, the thicker you made the report, the more of a thump it made when you dropped it on the table, which was, of course, the most impressive way to do reporting, and then nobody read it. Then you wondered why it was there.

I remember one of my clients having this wall, this bookshelf wall of all the research data ever done. They didn’t know how to deal with any of that. That ineffective thing I think has given way to a more effective thing, which you’re calling team-based research. Am I right?

Erika: Yes, absolutely, because the goals are maybe not different, but I’ve really looked at changing the focus. Especially when people who come from more academic backgrounds think about research, they really have a focus, and it’s not necessarily the wrong focus: the rigor, finding new information, thinking of the goal of the research as being to do very high-quality research.

Because I’ve been working in client services for a very long time, my goal is always to create the most effective design. While the research should still be sufficiently rigorous, ethical, and sufficiently documented, the way you measure the success of the research isn’t the thud factor of the report. It’s how much it actually helps you create something that changes the world to some degree.

Jared: To me, that’s really fascinating, because this idea that research has a customer, and the customer is the people who then use it to change the product or service, is not something that I have seen a lot of discussion of. It’s a new idea that we don’t just do this research as a checkbox part of the process, but that it’s actually a service we provide, and that there are clear ways we can measure whether we did a good job.

Erika: The way you measure it can, I think, be a little bit uncomfortable for people who come from a more traditional research background. The person who introduced me to this way of working, who really got me interested in doing research at all, did come out of academia. But he had a much more collaborative perspective and approach than a lot of other traditionally trained anthropologists or ethnographers.

Jared: Who was that?

Erika: Jared Braiterman. I worked with him at Studio Archetype way back in the day.

Jared: How was what he was doing a bit different?

Erika: At the time, I didn’t know it was different, and that was the beauty of it, because working there was the first time I had worked with a research-based process.

I thought it was completely normal and the way things were done to have somebody who was academically and professionally trained as a researcher act as more of a point person, a facilitator, and a coach to our team while the whole design team, everybody, was involved in the research, the developers, the strategists, the project managers.

We all conducted the activities together. Jared led the planning and assisted with the documentation. But because we all participated in it directly, we had that direct experience, and we contributed to compiling and creating the insights, and had him there to keep us honest, essentially. I thought that was the way it was done.

Of course, you go out in the world, work with other people, and you find out that was a very special way of doing things. That, while it seemed to make sense, was not necessarily the most common.

Jared: I wonder if what happened…I was part of the very first usability tests that were ever done on software. We were back at Digital Equipment Corporation, and we had one of the first usability labs that was ever built. Now I’m talking like 1980, 1981. We were building out process, and equipment, and stuff.

We had no idea what we were doing, literally making it up. A lot of the protocols, a lot of the things that we take for granted about how to moderate a session, or how to do a usability test plan, even just how not to ask a leading question, we were completely naive.

There weren’t any rules about any of this. There had been some work done in the way of psychological testing in the past, but to actually work on behaviors, and then change the design and see if the behavior changed, that was all very new. I’m trying to think now, as you’re talking, about how we evolved into this very scientific thing.

I think maybe what happened was that we had gotten it into our heads, that if we didn’t somehow suggest that there was a lot of science behind what we were doing, that people wouldn’t pay attention to the outcomes. They wouldn’t pay attention to our recommendations.

They would just say, “Well you know, you just asked four people. So how scientific can that be?” We created this illusion of scientific rigor, and then used the structure of the reporting, and the methodology. All of our reports had these long sections on the protocols, and you put the whole test script in the appendix.

You did all your data analysis out in the open. All of that was to prove that we had used science. It had the effect of alienating anybody who actually wanted to use it.

Erika: No designer is going to go back in, and read a report. Realistically.

Jared: There was a period where if you didn’t have statistical p-values to show the statistical significance, you couldn’t say whether a finding was true or not. It was so silly. The whole thing was silly. [laughs] I’m embarrassed. It’s like when you look back and you imagine what it was like to watch black-and-white TV, and you go, “Really? People liked that?”

Erika: That’s the whole problem I set out to solve, and the whole reason that I bothered to write a book. I’m really concerned about the basis people use to make decisions in design. There’s a saying we have in our business that a design project is just a series of decisions.

As a designer, I want those decisions to be as good as possible. What I saw happening was that all of this work and effort was going into the research. If you were lucky enough to get a design project with a research budget, then the focus was on creating these reports, which were then going to be ignored.

The design decisions were still made based on politics, or the designer’s preferences, or things that had inspired them. All of the value in that research was lost, and that was killing me. If you got the research done, the problem wasn’t that the research was insufficiently rigorous, or insufficiently statistically valid.

The problem was the knowledge wasn’t making the leap into the heads of the people, actually working on the project together, and actually making decisions.

Jared: It would take us so long to get the reports done, that by the time we got them done, the project had moved on. It was no longer an issue. Back in the day, we were working on fundamental things. One of the things we had to figure out was when you have arrow keys on a keyboard, what’s the best configuration?

When I started, the arrow keys were in a row: up, down, left, right. It was pretty clear that was wrong. The idea that we use now, which is the inverted T, came about in the early ’80s. We did it first on the DEC LK201, and then IBM copied it.

It took us months of data to prove that the inverted T was the best. We tried an actual cross configuration. We tried like six other configurations, including a sideways T, where up and down and then left and right were to the right of it. We did all these different types of studies where we were looking at how fast someone could arrow around a text editor.

It was all crazy stuff. Months. Months it took us to do these things, and analyze the data, and prove that we had removed all of the flaws from our hypothesis to get to the results that we needed to get to. Now, none of that rigor helps. It made sense back then for very specific tasks, but that time is completely gone.

Given that, where we’re at is a different place now in that we need something different from our teams.

Erika: Everything’s moving much, much faster, for one. People have to move very quickly. I had a funny conversation with our lawyer about terms of a contract and talking about the software we were creating. I said, “I’m not sure if front-end code really qualifies as software.”

We’re working with so many existing conventions, and so many constraints, and so much that we don’t control. When you’re talking about the configuration of the arrow keys, you really do control the material they’re made out of and the location they have on the keyboard.

Now, we’re talking about creating something that’s 1 of 50 apps on somebody’s phone, and you don’t control the design of the phone, or you don’t control which model of the phone they’re going to have, or you’re talking about something that’s going to exist in 1 of 50 tabs that someone has open.

It’s not the foundational things. It’s really finding the things that will differentiate you and finding the things that will help people the most and get their attention. All of those things are changing very quickly, because everything you don’t control is changing just as quickly. What you find right now is true might not be true in six months.

It’s not about the rigor, because you’re going to be wrong. The value in research is not the answer. This is something I talk about a lot. It’s not finding the right answer. It’s creating that culture where you’re constantly asking the same question, and you’re in a position to keep asking the question, finding what today’s answer is, and finding a way to respond to that in your work.

No one person can do this. You can’t have the research specialist off in the corner finding this stuff out and then running back saying, “Hey everybody, I have a new answer.”

All of the people working together on an interface or on an experience have to be aware of this and have to be attuned to this or you’re going to be having a constant argument over whose interpretation of reality wins, and you’re going to waste a lot of time.

Jared: That’s interesting. There’s something to this idea that people come in with a different notion of reality. There might be a person on the team who keeps looking introspectively and saying, “This is how I would use it. If it were me, this is how I would want to use this.” That’s an important part of reality, but it’s not necessarily the right part for that project.

Erika: Right. That’s something a lot of startups get criticized for: the founders come in and say, “I’m designing a product that I would use, and I’m going to design it for myself,” without taking into consideration that there are far more people who aren’t like you in some critical ways than who are like you.

There are far fewer people who share the qualities of, say, a Silicon Valley entrepreneur than there are people with other lives, and other priorities, and other ways of living out there in the world who are going to be potential customers or users for your product or service.

Jared: Then, there’s the product owners who say, “My customers talk to me all the time, so I know what they want.” That’s similar, right? Customers are coming to them and saying, “Wouldn’t it be cool if we actually had the text upside down on the screen, because then I could read it when I’m hanging from the ceiling?”

Erika: That’s actually one of the criticisms of research: that you’re asking people what they want. People will speculate, and this is something you have to be really careful of when you do research about people and their actual behaviors and habits.

If you ask the question the wrong way, what you’ll hear is what people are speculating about, which might have no connection to how they actually behave. We humans are terrible, terrible reporters of our own behavior.

That’s one of the key research skills: getting through that, through how people think of themselves, which is usually a little bit more optimistic than how they actually behave. The product owner’s going to be really optimistic about, “Our customers are going to work a little bit harder to get what we offer.” Whereas, people in the real world are actually pretty lazy and pretty habit-driven.

If you talk to people about, “Would you use this feature? What do you like? What do you want?” they’ll imagine these scenarios that may have very little relationship to what they actually do, and what they actually need, and the choices that they make if they’re using something in a real-world scenario.

Jared: How does team-based research, how does that approach change this?

Erika: It changes this by making the documentation secondary. The goal is not to produce a report. The goal is to create this shared understanding so that everybody in the team knows here’s what our goal is, and we’re very clear about our goal. Here’s what our constraints and requirements are.

To really think about the assumptions together and develop the shared vocabulary about here’s what we’re betting on, and here’s our evidence that those are good bets.

Once you work together to establish that, and figure out what your research questions are, and work together to find the answers to those questions, then throughout the entire rest of the design process, you don’t have to have a discussion about what your basis is.

You don’t have to argue about, “This is the behavior we’re designing for.” “No, this is the behavior we’re designing for,” or, “This is the most important business goal,” or, “This one is…” What you can say is, “Given that we all know the same things, then you use that as a basis to evaluate something,” and say, “This design is going to be more successful based on what we already know.”

It changes where the discussion happens to a much more productive place. I’m not trying to eliminate arguments, because I think having strong different points of view on a team is very, very productive and very useful.

That’s one of the greatest things about working in client services is we have a perspective, and our clients have the perspective based on their business, and we argue things out. Through that debate, you get the really strong solution. What you’re doing is not arguing about your givens or your fundamentals. You’re arguing about how you’re going to achieve your goals.

That makes everything move more quickly, and it makes everything more productive. It’s also a lot more fun. That’s what gets lost in the whole rigorous research, report-writing thing. The research process is actually really fun and interesting. There’s all this resistance.

There’s this stereotypical resistance you get from clients, or designers, or engineers who just want to start working, start coding, start designing. You say, “We have to do some research first,” and there’s this collective groan. It’s like doing homework.

Once you’re actually in the process, once you’re doing the user interviews, or doing the contextual inquiry, or doing some competitive usability testing or something like that, everybody participating in that is all of a sudden seeing the world with new eyes and really enjoying this new information.

Not just picking their own brains, thinking about what they already know. It’s an amazing experience that makes you want to go out and do more and do more. It also removes that antagonistic relationship, which is not really antagonistic, between the researcher and the design and development team.

It’s like everybody’s on the same team. Everybody’s on the same mission to have the best possible information that they can get given their time and resources in order to solve a problem.

Jared: I could see how researchers might feel that they’re somehow giving up control if they let folks participate in the research this way. “Somehow, they might not see the same problems that I see, or they may not come to the same recommendations that I would.”

That would frustrate and potentially push someone who’s been doing research for a while back from taking this more collaborative approach. Do you think that that transition from the older approach to this is difficult, or do you think that this actually is something that if they give it a shot, they may realize that times have changed and there’s a new way of thinking about this stuff?

Erika: What you’re talking about is absolutely a real thing that happens. Researchers can really feel that they’ve lost control or, even more than control, that they’ve lost value in an organization, or in a project, or in a process.

Jared: That’s true. If everybody can do research, what do you need me for?

Erika: Right. I was on a panel at a grad school…

Jared: I’m sorry.

Erika: [laughs] …last year. It was really interesting. It was a user experience class. That evening’s class was about research, so they brought in various professionals. I was the only person at the table who didn’t have a PhD.

Jared: If you pay enough money, you could have bought one that evening.

Erika: Yeah, seven years of my life and a lot of money. The instructor asked the panel, “Which do you think is more important, specialists or generalists?” Of course, I raised my hand, and I said, “Generalist,” because of my perspective. There was a woman sitting next to me who was like, “Specialist, definitely.”

She was a career researcher who had worked in a couple of very large corporations, and she loved being able to really go deep, and go rigorous, and do these very expansive studies, and work with her team of specialist researchers.

As she was talking, I felt this cold feeling in the pit of my stomach, because the way she was describing her career was pretty much the opposite of everything I enjoy about my work and what I do. It took me a few minutes to really separate that and realize she’s a great person. She’s really smart, and she really enjoys her work, and she’s a delightful person.

It’s just like what she likes in her career is very different, and that’s OK. I had this whole set of feelings about what she was saying.

[laughter]

Erika: I’m like, “You’re fine, but wow, I’m glad I don’t have your job.” She told this really heartbreaking story about an organization she was working at, a giant Internet company. She was working with one executive who was advocating for this new product direction. They were going to be redesigning some things.

She knew that he was wrong in some of his assumptions, and she said, “You know what? Let’s talk, and I’m going to go over some of the work we’ve been doing that I think is really pertinent to what we’re talking about here.” She set up a meeting with this executive, and then she went back to her research.

She spent days, as she described it. She even worked over the weekend going back over her findings and making sure she had her case all laid out. She put together a really nice presentation and an accompanying report. She was all set. She walked in Monday morning to find out that this executive had canceled their meeting and never rescheduled it.

That right there, to me, is the crux of the issue. It doesn’t matter how good your research is if you’re not integrated into the decision-making process. You will be ignored, and it doesn’t matter how good the research is if it doesn’t have an impact.

That’s very different from the academic perspective, where you are doing research for research’s sake. There isn’t this sense that your research is valuable to the extent that it assists some external goal. When you talk about applied research, not pure research, what we do, the research only matters…

When you come right down to it, you’re not doing this for publication in a journal. You’re doing this to help a business, so the value of your research is measured against how much it helps the business. I’m saying business in the broadest possible sense.

For people who still have that academic perspective, that can be just as distressing and repellent as that career path looked to me. [laughs] That whole moment and that whole conversation both helped me empathize more with people that come from that type of career and that type of background and made me feel much more confident in my own position.

When I go out and talk to people, a lot of times I get researchers who are very happy that they have a way to bring people into the process, but I do still find that there are people who come from academia, which is not a traditionally collaborative environment. A lot of it has to do with that.

When you get your professional training in an environment that absolutely does not reward collaboration and then you go into an environment where you’re really supposed to help make a team succeed, you don’t have the tools for that. This is one of the things I really emphasize and what I’ve really been working on.

People think that working together, talking about the same things in the same room with a group of people means you’re collaborating. Collaboration, really, really doing it, is both unnatural for people and very difficult and requires this high level of attention and commitment.

The nice part about working with research is doing this research together makes teams more collaborative. By being collaborative and doing the research, it makes the research more effective. It’s this really virtuous cycle, but it doesn’t happen on its own. It absolutely does not happen on its own.

Researchers, as humans, will do what’s habitual and comfortable for them, which is to want to be specialists, and go off in a corner, and do a rigorous course of study, and write up a report. Then, designers and developers will do what they want to do, which is not read anything and go off and do things that are interesting and feel productive to them.

You have to recognize that changing these behaviors is not insurmountable, but it requires intentional effort on the part of everyone involved. Once you have that, then it’s all great, and you get people working together. Sometimes, you have to make a change if people come from a much different culture.

If people come from a really strong, engineering-driven Agile culture or people come from an academic research culture and you get them all together, you have to work on change.

Jared: In order to make change in the team, part of it I think people run into is this idea, “Well, if I’m going to do research and the team’s going to be heavily involved, now I have to train them how to do the research. What if they do it crappily and the research is not as rigorous as my standards allow?” They get themselves wrapped up in this.

There’s all this practice that has to happen that if it doesn’t happen, we’ve made something that we can’t base decisions on. One of the things that I’ve seen when people start to do this collaborative stuff is that none of that really matters.

The real epiphany is that moment where, I don’t know if this has ever happened to you, you have a stakeholder or developer or somebody who has been working on this project for months or maybe even years. The first time they see a real user using the thing that they worked on, they go, “Oh my, why didn’t we do this two years ago? It would’ve settled so many arguments.”

Even though it was one participant, or even half a participant by the time they come to this epiphany, they’re suddenly all engaged, and they’re like, “Wow, this is so different than what I expected.”

Erika: That’s an amazing moment. That just points to something sociologists are studying right now, the fact that data doesn’t change minds. It’s just like that researcher in that big organization who couldn’t physically reach that executive. People’s minds are very good at shutting out data that they don’t want to hear.

The first thing you’ve got to do is rally everybody around what the goal is. Once you know what your goal is, yeah, then you start showing people how going through this process makes you more likely to reach that goal. It’s a very experiential thing.

If you come from a place of “I use data to make my case,” then you’ve also got to be a salesperson, which, again, is something that most academically trained researchers aren’t accustomed to being.

That’s a sales moment for research, when you bring a stakeholder in and you’re like, “Watch this, and see the power, and feel the change in your own mind when you see all of your assumptions blown away.” Those are really, really powerful moments.

It’s incumbent on everybody working to think, “What is my real goal?” and try to be honest with yourself. That’s how working with a team really helps. People can check each other’s biases. One of the big researcher biases is wanting to be treated as special because you have the keys to these methodologies.

Once you break it down and explain it, when you really boil it down, the fine statistics is hard, because humans aren’t naturally statistical thinkers, but the actual process, the research processes that we talk about, is really, really straightforward.

You have to, as you talked about, learn how not to ask leading questions. Learn how to ask open-ended questions. Learn how to identify things like sample bias or confirmation bias.

If you distribute that into your team and say, “Here’s what these things are. Let’s all keep each other safe out there and learn to check each other and say, ‘Gosh, I think your interpretation might be a little off, because I think you were looking to see that pattern. That pattern’s maybe not really there…”

One of the other big obstacles is that that open dialogue is also not a part of every organization’s culture. The idea that you could just ask questions or that you could challenge somebody’s interpretation of data, that’s really scary for a lot of people. They’re like, “I’m going to ask a question, and I’m going to get in trouble because I’m going to be challenging authority.”

That’s what research is. Research is challenging given ideas, so it’s naturally anti-authoritarian. If you’re in an authoritarian business culture, you have to work very carefully to change that.

Jared: So many organizations say they’re data-driven, and then ignore all the data that they have at their fingertips. They want things put on a chart, and so they’ll use data like net promoter score, which asks people on an 11-point scale whether they would recommend this product to a friend, and then say, “Look. Our number has dropped from 7.2 to 6.9. We obviously have a huge problem.”

But then they don’t ask any questions to find out why it might have dropped, or whether that data’s even meaningful. Maybe the difference between 7.2 and 6.9 is just noise.
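To make the “just noise” question concrete, here’s a minimal sketch in Python. Every number in it is fabricated for illustration; nothing comes from a real survey. It bootstraps a 95% confidence interval for the gap between two batches of a hundred 0-to-10 ratings whose averages land near 7.2 and 6.9:

    # Hypothetical data: can a drop from ~7.2 to ~6.9 be told apart from
    # sampling noise when each average comes from about 100 responses?
    import random

    random.seed(42)

    def bootstrap_diff_ci(sample_a, sample_b, trials=10_000):
        """Bootstrap a 95% confidence interval for mean(a) - mean(b)."""
        diffs = []
        for _ in range(trials):
            a = random.choices(sample_a, k=len(sample_a))
            b = random.choices(sample_b, k=len(sample_b))
            diffs.append(sum(a) / len(a) - sum(b) / len(b))
        diffs.sort()
        return diffs[int(0.025 * trials)], diffs[int(0.975 * trials)]

    # Two invented batches of 100 ratings, clamped to the 0-10 scale.
    last_quarter = [min(10, max(0, round(random.gauss(7.2, 2.5)))) for _ in range(100)]
    this_quarter = [min(10, max(0, round(random.gauss(6.9, 2.5)))) for _ in range(100)]

    low, high = bootstrap_diff_ci(last_quarter, this_quarter)
    print(f"95% CI for the drop: {low:+.2f} to {high:+.2f}")

With a spread typical of 0-to-10 ratings and only about a hundred responses behind each average, the interval usually straddles zero, which is exactly the “maybe it’s just noise” point: the chart shows a drop, but the data can’t distinguish it from no change at all.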

Erika: That’s what happens with a lot of decision makers is they like the idea of making decisions based on quantitative data, on things you can count. There’s all these industries that have popped up to turn really sketchy data into something measurable. Then people who want to think of themselves…it’s the great irony.

People who want to think of themselves as more logical and more rational end up irrationally preferring this numerical stuff over really useful, qualitative information.

Jared: That’s fascinating to me, right? Because this whole idea of people…let’s say you survey a thousand people and you get back a hundred responses, which would be a good return on something like a net promoter score. First, no one’s asking the question, “Would the other 900 responses have mimicked the data we saw?”

Maybe people are so fed up that they don’t think you care anymore, so they don’t bother to answer your survey. If that’s the case, then the other 900 people would not rate you the same as the hundred who were in love with you and were like, “I’ll do anything for you. Let me fill out your survey.” That’s the first thing.

The second thing is, let’s say out of the hundred people — I don’t know — half of them, 50 percent, rate you an eight. If we asked each of those 50 people why they rated an eight, would we get the same answer? And none of these questions get asked. We just look at this result and say, “Our NPS is 7.2.”
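That missing-900 problem is easy to see in a tiny simulation. The response model below is invented purely for illustration, the assumption being that the happier a customer is, the likelier they are to reply at all:

    # Hypothetical illustration of non-response bias: survey 1,000 customers,
    # but assume only the happier ones tend to answer.
    import random

    random.seed(7)

    # Invented "true" feelings of all 1,000 customers, on a 0-10 scale.
    population = [random.randint(0, 10) for _ in range(1000)]

    # Assumed response model: a 10 replies 20% of the time, a 0 never does.
    respondents = [s for s in population if random.random() < s * 0.02]

    print(f"responses received: {len(respondents)}")  # roughly 100 of 1,000
    print(f"true average:     {sum(population) / len(population):.2f}")
    print(f"observed average: {sum(respondents) / len(respondents):.2f}")

Under these made-up assumptions, roughly a hundred of the thousand reply, and the observed average comes out around 7 even though the true average is about 5. The survey never hears from the people who were too fed up to bother.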

No one asks, “What does that actually mean?” It feels like to question it is crazy, but then you go and you have some qualitative research. The qualitative research involves this question where you say, “This thing I just showed you, can you tell me how this would be useful in your work?”

They go, “I can’t imagine a single time I would want to use this. It doesn’t do the thing here, it doesn’t do the thing there, it doesn’t do the thing there.” They’re telling you all these things that when you’re listening to them you go, “Yeah, it doesn’t do that. That would be something it would need to do.”

You’re doing all this and then you have some product owner say, “That was just one respondent. How statistically significant is that?” It’s really statistically significant, because everything they said made perfect sense from what we just learned.

It’s like, “I have a car, and I don’t have a way to turn it on.” If we developed bridges the way we develop online products, the way people seem to want to do it, we would build the bridge and then we would send a car across it. We would watch the car inevitably plummet into the depths below, and then we’d go, “Huh. Maybe there’s something wrong with the bridge.”

Someone on the team would go, “I don’t know. One car doesn’t seem statistically significant. We should watch a thousand cars and see if they go.” It’s like, seriously? That sort of formalization of thought that wants to ignore the data that’s right in front of us which is telling us the story about our products and services in such rich, meaningful ways…sorry, you’ve caught me on a grumpy day.

Erika: [laughs] It’s true, because this is what we’re up against. Surveys especially are so seductive and so dangerous, because they seem objective. That’s what’s going on here. We always talk about it in our work: we’re dealing with ego and fear, when you think about how people make decisions.

People are afraid of a situation where there’s something to interpret. But that’s how humans work. You can’t reduce humans to an engineering problem to solve. There’s been a lot in the news lately about the failure of Google+. We can call it a failure now; I think the business press is officially calling it that. That didn’t work out so well.

Jared: You mean it’s supposed to do something more than just keep track of Google employees?

Erika: [laughs] One of the things that has happened there is that Google is very good at engineering problems. They’re excellent, that’s why the search is fantastic. Everything having to do with data is amazing and fantastic, and all the mapping is fantastic.

But if you’re designing something to be used by humans, not by other computers, you have to get comfortable operating in this irrational, subjective realm, because it’s not like you plug data into a human. People are so irrational. That’s the whole problem with traditional economics: the model of a rational actor selecting this product at this price, or this other product at this price.

That’s not how people make decisions at all. There have been all these fascinating studies. You’ve probably seen the studies with juries, where the verdict they’ll return might depend a bit on whether the deliberations were before lunch or after lunch.

Jared: Yeah. There’s all sorts of crazy stuff, and other types of bias too, right? I read this study where they would pass around resumes…some place where they were interviewing for a police chief. They had two flavors of the resume.

One was very academically oriented, the other one was very street oriented. They were giving it to people who were saying, “Yeah, we don’t have any gender bias,” but then they would have the male name on the academic one, and the female name on the street experience one.

The people would say, “This position requires a real academic approach, so that’s why we chose the male candidate over the female. It wasn’t because they were male. It was because it was academic.” But then they would switch the names on the resumes so that it was obvious that the female was now academic and the male was street.

Then the evaluators would say, “This job requires someone with real street smarts, so we chose…but we’re not biased.” Then they found — this was interesting to me — then they found that if they had the folks rate up front which was more important, street smarts or academics, and write that down before they looked at the resumes, then they were more likely to choose the person based on the actual criteria they had decided on, not based on gender bias.

The thing is that these people were completely unaware that they had this built-in bias in them.

Erika: Right, and that’s the metaphor that I always use: if you have a sprained ankle, you know that you can’t run. Your body will give you some feedback that’s like, “Oh, I have an issue.” But if your thought process is biased, there’s no signal that says, “Oh, you can’t think clearly about this issue, because there’s a bias.”

There’s no sensation that you have for that. We get a lot of great signals about our physical well-being, but we get no feedback on our cognition. That’s why you absolutely have to work with a team, because somebody else can help check you.

You get your meta-cognition buddy to say, “OK, we’re going to work together. We’ve agreed that we want to remove as much bias as possible, and so we’re going to help each other.”

Jared: Is part of this team-based approach to have conversations up front about what does success in our product look like, and how will we interpret the data?

Erika: Absolutely, because you need to know what your overall goal is, that the research is supposed to be helping you with. You really need to know, why are we doing research at all? We’re doing research so that we can be more successful in this way. Then you say, “OK, what are our major assumptions that we’re betting on, that carry a lot of risk?”

You’re like, “OK, so we all agree that these are the major assumptions and these are areas of risk.” Then you say, “OK, what questions do we need to ask? What are we trying to find out before we get down to work, or as we continuously work, to help us validate our assumptions?”

Once you agree on those questions, then you can go off and say, “We’re going to do these activities to get the data.” Then you use those questions in your analysis of the data, because you can’t just look at what you get out of any data, whether qualitative or quantitative, in isolation. The value of it absolutely depends on your goal, and on the quality of your questions.

Otherwise, the human brain is a pattern recognition machine. It’s so easy: if you don’t have a really clear question, and those really clear priorities and that focus, you’ll start seeing patterns. This happens a lot with that bad quantitative data that you were talking about: you’ll have some data without a clear question to ask of it, and then you’ll start seeing things that might not be there, because people want to see patterns.

Jared: Yeah. I’ve been calling this the “agenda amplification effect.”

Erika: [laughs] I like that.

Jared: I believe that we need to change our pages. I go and I look at some analytics, say bounce rate. I see that the bounce rate’s really high on the pages that I think we need to change. I walk into the meeting and I show the statistic, the bounce rate, which measures whether this is the last page that people visit on our site or whether they continue on to the rest of the site from this page.

I say, “Look at this high bounce rate. We need to change the page so we lower the bounce rate.” I’ve got this rationale that this bounce rate is bad, but then maybe my thing is that I don’t want to change the page. The page is fine the way it is.

I go to the analytics and I find the exact same bounce rate and I say, “Look at this high bounce rate. People are finding exactly what they want. They don’t need to look anywhere else on the site. The page does everything it needs to do for them. Our bounce rate’s perfect.” It’s the same data; we’ve just interpreted it differently. It’s through the lens of whatever agenda we walked into the room with.

We use the data to amplify our agenda, but that’s much harder to do with qualitative data when you have someone sitting at the page and they’re going, “Yeah, I came to the page and the page did what I wanted it to do. I probably would have stayed, except I couldn’t find anything on the page that told me where I should go next.” That seems a lot more valuable to me.

Erika: The truth is you really do need both together, right? You need to know what’s going on, what you can measure. You need to know why it’s happening. But neither one of those things is helpful unless you actually know, what does your business need to happen? That’s why people ask me about, “What about net promoter scores? What about your KPIs and all this?”

I say, “You have to start with what makes your business successful?” No metric matters except the ones that draw a direct line to the things that make your business successful. What needs to happen to cut your costs, or increase your profits, or do something else that means your business is thriving and doing what you need it to do?

That’s why people talk about these vanity metrics. Some people get really excited about it like, “Oh, we have this many visits, or this much time on site.” I still hear people talking about time on site, whereas it might be that you are very, very successful if people get the information they need and get out. Time on site might mean people are lost, and really frustrated.

Jared: Or engagement.

Erika: Engagement, [laughs] I like that one. The only company that makes money off engagement is De Beers.

Jared: Yes. That’s good. That’s tweetable.

Erika: You like that one?

Jared: I do.

Erika: [laughs] It’s true, it’s actually true.

Jared: Yeah, the engagement thing is like, “Oh, yeah. We need to increase engagement on our website.” What does that actually mean?

Erika: It means you don’t know what actually makes your business successful. I think in the case like especially out here…you’re more in the Boston area where you have companies that actually have a business, that need to make money. But out here, we have a lot of early stage companies that don’t have a business model yet.

They need something to show somebody so that somebody gives them more money to keep doing what they’re doing, because they can’t just talk about profit and loss, or something normal, or old-timey. They have to find something that’s going up and to the right, and I think that’s where you start to get these other things.

We can’t measure things that don’t exist, so what do we have to measure, and then let’s put some spin on those.

Jared: You know, it’s the same thing in large organizations. You can be a large multi-national company that makes monitors for shipping crates, or something like that. You still don’t know what the value of your home page is. You still don’t know how to measure the press part of your website.

You don’t know how to tell if you’ve done a good job on the knowledge base and FAQs that you’ve surfaced. It’s not just startups that have this problem of not understanding what to measure. It happens in large organizations, too, particularly large organizations with a lot of moving parts.

I heard Tony Hsieh of Zappos go on about how he hoped that every customer of Zappos would at some point have a problem, so that they could call and see how amazing the customer support team is. As a customer who often has problems with companies, I’d really prefer not to learn how amazing the customer support team is. That feels to me like a reactive solution to something that shouldn’t have happened.

But that’s often the metric that folks use, is some crazy thing like, “We’ve put so much investment into our customer support, we should actually create problems so people need to call them.” They’re not looking at the whole experience.

Erika, this has been really fun and exciting, and I’m really thrilled about the workshop that you are going to be giving at UI20, which is going to be November 4th in Boston. The workshop’s on cultivating shared understanding from collaborative user research.

You’re going to go into this team-based research approach, and talk about creating strategic research plans, running studies, and then presenting results for the biggest influence. It sounds like such a fun and fantastic day to be able to get a research process that really is actionable and makes things happen in your organization. I’m really looking forward to that.

Erika: Yeah, I am too. It’s always really fun for me, because people who attend this workshop really walk out with a different perspective, and maybe more optimism, optimism based on having tools to start with. There are things you can start doing right away to get more of that collaborative, research-oriented, question-asking mindset going in your organization, and that makes everybody more successful at working on anything.

Jared: I really want to encourage people to go check out the workshop description, because we’ve put a lot of great information there about what you are going to be talking about, so people can get a really good sense of what they’re going to get.

Again, that’s at the UI20 conference. If you want to check it out, it’s uiconf.com. It’s going to be one of the really popular sessions. If people want to come, they’re going to need to register early, because it’s likely to fill up.

Erika: I’m looking forward to it a lot.

Jared: Thank you for taking the time to talk to me today.

Erika: Thank you, it’s been a fantastic conversation.

Jared: I want to thank our audience again. If you want to check out the conference it’s uiconf.com, and UIe.com. We have a lot of great stuff there. As always, thank you for encouraging our behavior. We’ll talk to you soon. Take care.