For this blog post I wanted to address an issue user researchers often encounter when conducting an eye tracking study with different sized areas of interest (AOIs). Specifically, researchers often attempt to identify which AOIs are attracting the most attention.
For example, imagine the heat map in Figure 1 represents the results from 30 participants asked to identify how many followers this Twitter profile has. The image on the left shows the five main AOIs, including Profile, Trends, Feed, and Suggestions.
In general, the heat map tells a clear story: most people looked at the "Profile" AOI, which makes sense since that is where followers are listed. However, let's say we wanted to provide some quantitative support for this conclusion. One logical next step would be to compute the proportion of fixations that fell in each AOI on average (Figure 2).
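Computing those per-AOI proportions is straightforward once each fixation is assigned to an AOI. Here's a minimal sketch of that step; the AOI bounding boxes and fixation coordinates below are made-up illustrations, not the actual values from this study:

```python
# Sketch: compute a participant's proportion of fixations per AOI.
# AOI bounding boxes and fixation points here are hypothetical examples.
from collections import Counter

# AOI name -> (x_min, y_min, x_max, y_max) in pixels (illustrative boxes)
aois = {
    "Profile": (0, 0, 300, 400),
    "Feed": (300, 0, 700, 1504),
}

def aoi_for(x, y):
    """Return the name of the AOI containing point (x, y), or None."""
    for name, (x0, y0, x1, y1) in aois.items():
        if x0 <= x < x1 and y0 <= y < y1:
            return name
    return None

def fixation_proportions(fixations):
    """Proportion of the participant's fixations falling in each AOI."""
    counts = Counter(aoi_for(x, y) for x, y in fixations)
    total = len(fixations)
    return {name: counts[name] / total for name in aois}

# Example: three fixations, two landing in Profile, one in Feed
props = fixation_proportions([(100, 100), (150, 200), (400, 800)])
```

Averaging these per-participant proportions across all 30 participants gives the chart in Figure 2.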
However, the quantitative results don't quite match the story the heat map is telling. Instead, the chart suggests that the feed attracted the largest proportion of attention of the five AOIs. The problem should be obvious: the feed is by far the largest AOI, so many fixations fell inside it simply by chance. So should we abandon any further quantitative analysis? Hopefully not!
We can correct for this issue by identifying what proportion of fixations we'd expect to fall into each AOI by chance. That is, if we assume each pixel has an equal chance of being fixated, then the proportion of fixations expected to fall into each AOI equals the proportion of the total area that the AOI occupies.
In this case the page in question is 852 × 1504 pixels, for a total area of 1,281,408 pixels. Table 1 shows the expected proportion of fixations for each AOI based on its area. We can use this information to provide better context for interpreting our quantitative results.
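The expected proportions in Table 1 reduce to a one-line calculation per AOI. A quick sketch, using the page dimensions from the text but hypothetical AOI sizes:

```python
# Sketch: expected fixation proportion per AOI = AOI area / total page area.
# The page size comes from the text; the AOI dimensions are illustrative.
page_area = 852 * 1504  # 1,281,408 pixels

# Hypothetical AOI sizes as (width, height) in pixels
aoi_sizes = {
    "Profile": (300, 400),
    "Feed": (400, 1504),
}

expected = {name: (w * h) / page_area for name, (w, h) in aoi_sizes.items()}
```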
Figure 3 displays the same observed proportions but now we can evaluate the results relative to the expected proportions given the size of each AOI. For example, although the feed had the highest proportion of fixations, because it's so big that proportion is not much different from what we'd expect by chance. Conversely, the profile AOI garnered more than double the proportion of fixations that would be expected by chance. Now our quantitative results are starting to appear more in line with the qualitative results of the heat map. In order to aid interpretation we can also directly visualize the proportions of observed fixations over expected fixations as in Figure 4.
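The observed-over-expected ratio in Figure 4 is just an element-wise division; a ratio above 1 means an AOI attracted more attention than its size alone would predict. A sketch with made-up numbers:

```python
# Sketch: ratio of observed to expected fixation proportions per AOI.
# Values > 1 indicate more attention than chance; the numbers are made up.
observed = {"Profile": 0.25, "Feed": 0.30}
expected = {"Profile": 0.10, "Feed": 0.28}

ratio = {name: observed[name] / expected[name] for name in observed}
# Here, Profile's ratio of 2.5 means 2.5x the chance-level attention,
# while Feed sits near 1.0 (about what its size predicts).
```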
Finally, we can take things a step further by including some inferential statistics. If we compute a 95% two-tailed confidence interval around our average observed proportion in Figure 3 [online calculator], we can do two important things. First, we can compare confidence intervals between AOIs and identify where significant differences exist. Second, we can perform the equivalent of a single-sample t-test to determine whether our observed proportion differs significantly from the expected proportion.
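Both pieces can be done with standard formulas rather than an online calculator. A stdlib-only sketch, assuming n = 10 participants with made-up per-participant proportions and a hypothetical chance-level expectation of 0.10 (the critical t value of 2.262 is the two-tailed 95% value for 9 degrees of freedom):

```python
# Sketch: 95% CI around the mean observed proportion, and a one-sample
# t statistic against the chance-expected proportion. Data are made up.
from math import sqrt
from statistics import mean, stdev

props = [0.22, 0.30, 0.18, 0.25, 0.28, 0.21, 0.27, 0.24, 0.26, 0.19]
expected = 0.10  # chance-level proportion from the AOI's relative area

n = len(props)
m = mean(props)
se = stdev(props) / sqrt(n)          # standard error of the mean
t_crit = 2.262                       # two-tailed 95% critical t, df = 9

ci = (m - t_crit * se, m + t_crit * se)
t_stat = (m - expected) / se         # |t_stat| > t_crit => significant
```

If the confidence interval excludes the expected proportion (equivalently, |t_stat| exceeds t_crit), the AOI attracted significantly more (or less) attention than its size alone would predict.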
Ultimately, incorporating information about AOI area will provide a richer context for the interpretation of quantitative eye tracking data and provide opportunities for stronger inferences within and between AOIs. The computations proposed here are no more complicated than standard statistical procedures, but they provide the potential for a stronger connection between qualitative interpretation of heat maps and the quantitative data that underlies them.
A version of this blog post was presented at UserFocus 2014. Slides available here.