Privacy Self-Management in a Click-Happy Society


Throughout our daily lives we interact with numerous websites, apps, and other data collection systems. A study conducted by Lorrie Faith Cranor and Aleecia McDonald of Carnegie Mellon University in Pittsburgh calculates the amount of time it would take each one of us to actually read all the privacy policies we encounter while we surf the web. According to their estimates, the average American visits between 1,354 and 1,518 websites in a year, and it would take 76 working days to actually read all of their privacy policies (Madrigal, 2012), and this is just websites. Imagine how the numbers would increase if we counted the different data collection systems we engage with daily while moving through public spaces. As we shift toward a future in which ubiquitous computing is more widespread, we will need to rethink what privacy is and how we protect it. At this moment we rely heavily on Notice and Consent, or what Solove (2013) calls the self-management framework, which, as many scholars and organizations agree, is not enough to protect our privacy. In this paper, I will work through the reasons why the privacy self-management framework is not enough on its own, but can serve as a platform on which to build better privacy protections.

Part I will focus on privacy and Big Data. It is important to understand these two concepts before going any further, because they will frame how we understand what we are trying to protect. As pointed out by Kagal and Abelson (2010), there are two main ways to understand privacy. Both the UN Declaration of Human Rights and Warren and Brandeis’s The Right to Privacy focus on (1) privacy as the “right to be let alone”, while Alan Westin’s definition focuses on (2) privacy as the ability to control one’s own information. In addition to understanding what privacy is, we also need to spend some space discussing Big Data. In their report, Big Data and Privacy: A Technological Perspective, the President’s Council of Advisors on Science and Technology (PCAST) conceptualizes Big Data as the collection of large amounts of data and its analysis at large scale. As PCAST points out, this is possible due to the ubiquity of both analog and digital data capture systems (surveillance cameras, GPS, cookies, and so on).

Part II will discuss the Notice and Consent framework we currently use to protect our data. This framework has its roots in the Organization for Economic Cooperation and Development’s (OECD) Guidelines on the Protection of Privacy and Transborder Flows of Personal Data, which were conceptualized back in 1980. As we will see, although this framework is useful, it is inadequate to protect our privacy in our current context. Notice and Consent has many issues, from people simply not reading privacy policies to the way those policies are written: they are not accessible to the average person, being written by lawyers for lawyers. In addition, the Notice and Consent framework asks individuals for consent at the moment they first open an app or website, which does not allow them to really foresee the ways the data they provide could be used in the future.

Part III will explore the different approaches that can be used to protect users’ privacy. Many scholars [see Cate and Mayer-Schönberger (2013); Solove (2013); Kagal and Abelson (2010); Mundie (2014)] argue that we need to shift our approach toward protecting and regulating the use of data, instead of its collection. Others, like Alissa Cooper, John Morris, and Erica Newland, propose using a system like the Creative Commons, in which the user can choose the level of privacy she wants. All of these frameworks can work together, on top of the self-management framework that is already in place.

Finally, Part IV will offer some concluding remarks on the self-management framework, the pressure it puts on the individual, and the need to add other mechanisms to protect our privacy.

Part I: Privacy and Big Data

With the advent of “Big Data” and the spread of ubiquitous computing, data collection practices have become simpler to perform. As explained by PCAST, “since early in the computer age, public and private entities have been assembling digital information about people”; however, with the large amount of data now being collected, through digital and analog means, and the infrastructure in place to analyze it, it is possible to process all of it, for both beneficial and harmful uses.

This has raised many concerns about users’ privacy being violated. As pointed out by many scholars and organizations [Solove (2013); Cate and Mayer-Schönberger (2013); Gindin (2009); FTC (2015)], users do consent to give up some data in exchange for certain types of services; at the moment of doing so, however, users/consumers generally do not see how an innocuous piece of personal data could become harmful in the future. With the use of “Big Data” and analytics, corporations and other groups are capable of merging large amounts of data, predicting future behaviors, and even deducing consumers’ identities. This brings us back to the two approaches discussed by Kagal and Abelson (2010) for understanding privacy. The UN Declaration of Human Rights states that “[no] one shall be subjected to arbitrary interference with his privacy, family, home or correspondence, nor to attacks upon his honor and reputation”. As Kagal and Abelson see it, this builds upon Warren and Brandeis’s definition of privacy, “the right to be let alone”.

The other way to understand privacy is based on the work of Alan Westin, who defines it as “the ability for people to determine for themselves when, how, and to what extent, information about them is communicated to others” (in Kagal and Abelson, 2010; 1). The difference between Westin’s definition and the UN and Warren and Brandeis definitions is that the former is based on how information is accessed, while the latter are based on how information is used. Many of the policies and regulations in place today use Westin’s approach and focus on access control, in which the user treats her privacy as a digital currency to be exchanged for services.

Kagal and Abelson, like many other privacy experts, call for a shift in how we approach privacy. Instead of focusing on what information is being collected and accessed, regulations and policies should start restricting how that information is used. In this section of the paper, we have discussed the different ways to understand privacy and how “Big Data” has changed the game. In Part II we will define the self-management framework and the issues it has.

Part II: The Self-Management Framework

As mentioned both in the introduction and the section above, we live in a world in which data from consumers is collected in large amounts and processed to infer behavioral patterns, create targeted advertisements, and offer consumers services they might need. However, this collection might result in harmful effects for the consumer when attributes such as her political views or sexual orientation are inferred by combining different datasets for certain uses. In order to prevent these harmful effects, we rely on the self-management framework, in which the consumer consents to the ways the data she is giving up will be used, after reading a notice that explains everything. Unfortunately, this is not what actually happens.

The self-management framework we rely on is almost 40 years old. The Fair Information Practice Principles (FIPPs) were first articulated back in 1973 by the U.S. Department of Health, Education and Welfare; the OECD eventually adopted them as guidelines for privacy in 1980. These principles are notice, choice, access, accuracy, data minimization, security, and accountability (FTC, 2015; 19). As useful as these principles are, and as much as they do provide a framework to protect privacy, we are stretching their use in the digital context we live in today. Companies are able to access our data as long as consumers are notified and consent to the terms of service; however, there are many issues with this. As noted in the introduction, it would take 76 working days to actually read all the privacy policies we come across on the World Wide Web, which no one really has time to do. On top of that, if someone does take the time to read one, the documents are full of legalese that an average person would not easily understand. Consumers nevertheless want access to these services and do consent by clicking accept on all the notices they receive, even though those notices are not meaningful. These issues are what Solove (2013; 1883) identifies as cognitive problems: individuals are not well informed about the privacy policies they are agreeing to, and they are making decisions (exchanging their information) based on conceptions that are not quite right. Many consumers give up sensitive information for small benefits. The problem here is that users do not quite understand how information they perceive as innocuous could be used through methods like data fusion.

Furthermore, Solove describes three types of structural problems regarding privacy management. First, there is the problem of scale: individuals are bombarded with “Notice and Choice” notifications from many of the services they use, and it is not possible for a single person to keep track of how all her data is being collected and used. Second, there is the problem of aggregation: consumers cannot possibly know how the data that is out there will be put together and what it can reveal about them. Finally, there is the problem of assessing harm: the consumer will usually agree when the data is initially collected, without fully understanding the long-term harms that could arise in the future. Taken together, these problems make the Notice and Choice, or privacy self-management, framework very flawed for the context we are using it in.

It is not necessary to discard this framework; however, it is imperative to build other mechanisms on top of it to help consumers better protect their privacy.

Part III: Different Privacy Management Frameworks

So, if privacy self-management does not work that well and people are not engaging in meaningful Notice and Choice practices, where do we go from here? Privacy experts and scholars have engaged in discussions about which other possibilities could be used. Most agree that it is not necessary to get rid of the framework that is in place now, but that it needs other mechanisms to help protect people’s privacy. The next two subsections focus on two specific mechanisms: (A) data usage and (B) privacy profiles.

  A. Focusing on Use, not Collection

    Mundie (2014) argues that instead of focusing on how data is being collected, it would be more meaningful to focus on how it is being used. As he points out, when people are asked how their privacy is being violated, the vast majority tend to answer by referring to ways their data is used that they are not fully informed about. What Mundie proposes is to create data wrappers, which are encrypted and can only be used by authorized users on authorized devices. This idea borrows heavily from the Digital Rights Management (DRM) used in media such as music and movies.

    The user would consent to have her data collected, but it would be enclosed in a “wrapper” with associated metadata that would not reveal the identity of the user. In order to open the encrypted wrapper, a person would need authorization, not just for himself, but also for the device on which it is being opened and the program in which it is being run. This would limit the ways information is used and might provide the consumer with more meaningful notice about how her data is being used. However, this might turn cumbersome if the consumer keeps getting notices every time her data is used; on top of that, it might impair innovation on the other end, where the data could be used to discover new things.
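    To make the mechanics of this proposal concrete, the wrapper idea can be sketched in code. The sketch below is purely illustrative: the names (`DataWrapper`, `wrap`, `unwrap`) are hypothetical, and the XOR step is a toy stand-in for real encryption; an actual system would rely on vetted cryptography and device attestation.

```python
import hashlib
import json
from dataclasses import dataclass

def _xor(data: bytes, key: bytes) -> bytes:
    # Toy stand-in for real encryption (e.g. AES); for illustration only.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

@dataclass
class DataWrapper:
    ciphertext: bytes   # the encrypted payload
    metadata: dict      # non-identifying metadata travels in the clear
    allowed: set        # fingerprints of authorized (user, device, program) triples

    @staticmethod
    def fingerprint(user: str, device: str, program: str) -> str:
        # Authorization covers the user, the device, and the program together.
        return hashlib.sha256(f"{user}|{device}|{program}".encode()).hexdigest()

def wrap(payload: dict, key: bytes, metadata: dict, authorized: list) -> DataWrapper:
    allowed = {DataWrapper.fingerprint(*triple) for triple in authorized}
    return DataWrapper(_xor(json.dumps(payload).encode(), key), metadata, allowed)

def unwrap(w: DataWrapper, key: bytes, user: str, device: str, program: str) -> dict:
    # Refuse to decrypt unless this exact user/device/program combination is authorized.
    if DataWrapper.fingerprint(user, device, program) not in w.allowed:
        raise PermissionError("use not authorized for this user/device/program")
    return json.loads(_xor(w.ciphertext, key).decode())
```

    Authorizing the (user, device, program) triple as a single unit mirrors Mundie’s requirement that not just the person, but also the device and the program doing the opening, be approved.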

  B. Privacy Profiles

    Another approach being discussed is the idea of having a set of privacy profiles. Jones (2015) proposes turning Notice and Consent on its head, enabling people to choose from a set of preferences that lets the collector know how the user wants her data handled. Cooper, Morris and Newland (2010) propose following a system similar to the Creative Commons, which “offers four simple license conditions (Attribution, Share Alike, Non-Commercial, and No Derivative Works) that users can combine to form licenses for creative works”. This rule-set lets other users know how they can use an individual’s intellectual property.

    As PCAST points out, such “privacy profiles” could be built by organizations that work in the field of privacy, such as the FTC, and consumers could choose from a wide variety of options depending on whom they trust more. Such an approach would lessen the weight that self-management puts on the average consumer and distribute it across more entities.
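    As a rough illustration, combinable profiles could mimic the Creative Commons pattern of composable conditions. The sketch below is a minimal assumption-laden example: the rule names, the pre-built profiles, and the `permits` helper are all hypothetical, not part of any proposal cited above.

```python
from enum import Flag, auto

class PrivacyRule(Flag):
    # Hypothetical rule conditions, analogous to Creative Commons license terms.
    NO_THIRD_PARTY_SHARING = auto()
    NO_BEHAVIORAL_ADS = auto()
    DELETE_AFTER_90_DAYS = auto()
    AGGREGATE_ONLY = auto()

# Pre-built profiles that a trusted body (for instance, the FTC) might publish;
# a user picks one instead of reading each collector's policy.
PROFILES = {
    "minimal": PrivacyRule.NO_THIRD_PARTY_SHARING,
    "strict": (PrivacyRule.NO_THIRD_PARTY_SHARING
               | PrivacyRule.NO_BEHAVIORAL_ADS
               | PrivacyRule.DELETE_AFTER_90_DAYS
               | PrivacyRule.AGGREGATE_ONLY),
}

def permits(profile: PrivacyRule, restriction: PrivacyRule) -> bool:
    # A use is permitted when the chosen profile does NOT include the restriction.
    return not (profile & restriction)
```

    Because the conditions combine with a bitwise OR, a small vocabulary of rules yields many distinct profiles, just as four Creative Commons conditions yield the family of CC licenses.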

Part IV: Conclusions

As discussed above, the privacy self-management approach, although useful, is not sufficient to actually protect consumers’ privacy. It needs other mechanisms built on top of it in order to provide more meaningful notice and consent. The present paper discussed two such ideas, data “wrappers” and privacy profiles. Neither of them is perfect, but they do provide different ways to control one’s own data. In the case of the data “wrappers”, the user would need to continue giving consent every time her data was going to be used, which is not feasible, while the privacy profiles (or preferences) might take agency away from the user. However, these approaches are not mutually exclusive and can be combined. It is beyond the scope of this paper to propose a solution; however, borrowing ideas from systems that have been successful, such as DRM and the Creative Commons, is important and useful now that it is time to look to the future and think about how to keep privacy protected.


  1. Hal Abelson and Lalana Kagal, Access Control is an Inadequate Framework for Privacy Protection, W3C Workshop on Privacy for Advanced Web APIs, London (2010).
  2. Fred H. Cate and Viktor Mayer-Schönberger, Notice and Consent in a World of Big Data, International Data Privacy Law 3.2 (2013): 67–73.
  3. Alissa Cooper, John Morris, and Erica Newland, Privacy Rulesets: A User-Empowering Approach to Privacy on the Web, W3C Workshop on Privacy for Advanced Web APIs, London (2010).
  4. Federal Trade Commission, Internet of Things: Privacy & Security in a Connected World, (2015).
  5. Federal Trade Commission, “Privacy Online: Fair Information Practices in the Electronic Marketplace,” (2000).
  6. Susan E. Gindin, Nobody Reads Your Privacy Policy or Online Contract: Lessons Learned and Questions Raised by the FTC's Action against Sears, Northwestern Journal of Technology and Intellectual Property 1:8, (2009–2010).
  7. Meg Leta Jones, Privacy Without Screens & the Internet of Other People's Things, 51 Idaho Law Review 639 (2015).
  8. Craig Mundie, Privacy Pragmatism: Focus on Data Use, Not Data Collection, Foreign Affairs, (2014).
  9. Alexis C. Madrigal, Reading the Privacy Policies You Encounter in a Year Would Take 76 Work Days, The Atlantic (March 2012).
  10. President’s Council of Advisors on Science and Technology, Big Data and Privacy: A Technological Perspective, (May 2014).
  11. Lee Rainie and Janna Anderson, Digital Life in 2025: The Future of Privacy, Pew Research Center (2014).
  12. Daniel J. Solove, Privacy Self-Management and the Consent Dilemma, 126 Harvard Law Review 1880 (2013).