As I write this, I have just received what feels like my 50th email this week about privacy updates, likely thanks to the General Data Protection Regulation (GDPR), which went into effect in the European Union (E.U.) on May 25 and inspired a flood of privacy notification emails. As a researcher and someone who cares deeply about the intersection of technology and attitudes, I brewed a cup of tea, found a comfy chair, and set aside a few hours of my afternoon to get reacquainted with these companies’ terms of service. That’s my story and I’m sticking with it.
In one of my first psychology classes, my professor lamented that studying humans was nothing like studying the physical sciences. After all, a rock doesn’t know it’s being watched. Humans, on the other hand, are well aware when you’re paying attention to them and, as a result, change their behavior, a fact that has led to interesting, unintended findings. The effects of social presence are profound enough that the phenomenon has been dubbed the “mere presence effect,” and research on the topic goes back to 1898.
But do these presence effects exist when we’re not being watched by a person at all, but instead watched by an algorithm or organization? How does the idea of being watched through our phones or computer activity—whether we are really being observed or not—affect us? Most of us are unaware of all the data that are being collected about us at any given time. Luckily, there’s a range of research on how tracking influences the way we think, and it highlights why we need more transparent, ethical, and controllable data collection policies. Here are three takeaways from research and practice:
We’re More Open to Data Collection When It Helps Us.
Remember the utterly frozen sensation you felt when the teacher walked by your desk while you were taking a test in high school? Or perhaps you close every window when your boss stops by to talk, even if you’re not doing anything wrong? Why don’t you get that feeling when you use your cell phone, which is arguably collecting far more information about you than your boss or teacher ever could? It turns out that the reason someone is monitoring our information matters quite a bit. Simply put, it matters whether we feel we’re being tracked to help us do something better (like your Fitbit) or to evaluate us (like your boss or teacher). We tend to like monitoring more when we perceive it as developmental rather than evaluative (assuming, of course, you don’t think your Fitbit is judging you). Proponents of targeted advertisements argue they help consumers by providing more relevant ads, but it’s important to take a user-centric approach here: Do users feel they are getting some benefit out of how their data are being used? Or do they feel the data are simply used to help someone else make more money?
Transparency Matters, but Has Consequences.
Consider a recent study showing that people were actually less interested in purchasing something from an ad if they had been told they were targeted for it based on personal information. By bringing the idea of being watched back to the forefront of their minds, consumers were forced to confront the fact that someone had information about them and had been tracking them without their knowledge. That said, we are generally at least somewhat suspicious that someone is tracking our behaviors on the web, and our willingness to knowingly share that information often depends on the context and the potential benefits we could receive. This data collection happens in the background, though, and isn’t something that occupies our everyday thoughts. As that ad-targeting study suggests, making people aware that you are collecting information about them results in increased feelings of stress and tension.
We Feel Better When We’re in Control.
When FMG conducted focus groups about online privacy, a common theme we heard was that when participants weren’t sure who would be able to see something they put on social media, they would simply choose not to post it. Perceived control is an important part of motivation and behavior, and it’s no surprise that this carries over to our digital lives; even the ability to pause being tracked for a little while can make us feel better. This means that organizations eager to have users add content are well served by making sure that their users understand exactly how the process works, have no questions about who can see and use their data, and feel that they ultimately control what they share and how.
Where does this leave us? There is both an academic and an ethical argument to be made for transparency and control, even if they do affect user behavior. Much as my psychology professor lamented, collecting information from people is very different from studying rocks. In these situations, data are not some abstract entity; rather, they are the sum of people’s behaviors, and it is the right of those people to know what information is being collected about them, how it is being used, and how they can control or change it. Providing transparency about the information being collected might make businesses or system owners uncomfortable and may make users less likely to buy something from a targeted ad. But being transparent about data usage and giving users clear, easy ways to control their data privacy can help ensure user buy-in and support a long-term relationship built on trust.
To learn more about Fors Marsh Group, visit our About page.