Eyetracking: Worth The Expense?
Is eyetracking a valuable usability tool? I’m not sure.
One benefit, recently pointed out to me, is that it's eye-opening for clients to see how a user's eyes bounce around the page. Clients believe people read page content in an orderly fashion. Watching a user's gaze meander across the screen like a drunken sailor, hopping from one element to another, can convince clients their users don't view the content the same way they do.
And Maria Stone, from the User Experience group over at Google, told me that of the two labs they use for usability testing, the developers prefer the one equipped with the eyetracker because it's more fun. She believes the developers pay more attention to a test when the little dot is bouncing around the screen.
I do agree that it's always good when your clients and developers become aware of how users behave, and an eyetracker is a great demonstration of fine-grained behavior. In mere moments, you can easily see how users gaze at the screen. So, I agree eyetrackers have demonstrative value.
But do they have diagnostic value? Can we actually learn what to change in our designs from them?
Well, after watching hundreds of eyetracking tests, I can tell you it’s still really hard to know what you can learn from them.
First, they are expensive devices. It's not cheap to outfit a lab with a decent eyetracker. (Even Google, which as far as I can tell has all the money in the world, has outfitted only one of its many labs.) On top of the equipment, you have to spend money training people to use it. Not a cheap proposition.
Second, not every participant can work with an eyetracker. Depending on the hardware, people with a variety of attributes are automatically disqualified from eyetracking. Everything from contact lenses to long eyelashes can keep the device from working properly.
Third, they reduce the amount of time you actually collect data from your users. Getting a participant set up and calibrated takes time away from learning about your design, and the most valuable part of any usability test is the time the participant spends interacting with your design, not setting up the measurement equipment. What's worse, many devices lose calibration quickly, forcing the test to stop while the participant futzes with recalibration. This tool time is distracting and adds nothing to the session's value.
Fourth, the results are really hard to analyze. The colorful heatmaps are cool (or warm?) to look at, but what are they actually telling you? When someone is gazing at something, is it because they want to look there? Or because the page made them look there? Or because they are resting their eyes there?
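To see why, it helps to remember what a heatmap actually is: fixation coordinates and dwell times summed into a grid and blurred. Nothing in that input records why the eyes were there. Here's a minimal sketch in Python of that aggregation (the function name, screen size, and blur radius are illustrative assumptions of mine, not any vendor's actual algorithm):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaze_heatmap(fixations, width, height, sigma=30):
    """Sum (x, y, duration) fixations into a smoothed intensity map."""
    heat = np.zeros((height, width))
    for x, y, duration in fixations:
        heat[int(y), int(x)] += duration  # every second of dwell counts the same
    # Blur each fixation out to roughly the area the fovea covers on screen
    return gaussian_filter(heat, sigma=sigma)

# A deliberate read, a forced glance, and idly resting eyes all reduce to
# the same (x, y, duration) triples, so they produce identical "heat."
fixations = [(400, 300, 2.5), (410, 305, 1.0), (120, 80, 0.4)]
heat = gaze_heatmap(fixations, width=800, height=600)
```

The intent behind each fixation is discarded before the map is ever drawn, which is exactly why the pretty picture can't answer the questions above.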
When we first started conducting eyetracking tests, we noticed some interesting behaviors:
- Participants would often acquire the scroll bar without looking at it. They'd move their mouse to the right edge of the screen and start scrolling, but their gaze wouldn't leave the center of the display. It seemed they were using their peripheral vision to acquire and operate the scroll bar.
- Participants would tell us they couldn't see something their gaze was focused on. (Women in my life have referred to this as "Male Refrigerator Blindness": the inability to see something right in front of you.)
- Participants would often click on objects they had barely gazed at. They'd focus their vision on one part of the screen, then move their mouse somewhere else entirely to click.
From this, we began to question what the eyetracker was actually trying to tell us. It seemed to us that what the user focused their gaze on was not necessarily what they were seeing. So, if the eyetracker doesn’t tell us what a user sees, what does it tell us? I’m not sure.
Eyetracker vendors go to great lengths to justify the value of their devices. One vendor, for example, used eyetracking to show how replacing a useful portion of the SFPD web site with a non-useful graphic encouraged users to spend more time focusing on the site's navigation, claiming this somehow improved the site's usability. But the vendor never explained how focusing more on the navigation made the site more usable. In fact, users likely spent more time gazing at the navigation because the site had become less usable.
Eyetracking is fun to watch and produces cool output. It can serve as a good demonstration that users approach designs differently than we imagine. But can we find a place for it in our research process that's worth all the hassle and expense? I'm still not convinced.