

Title: Trends in Privacy Dialog Design after the GDPR: The Impact of Industry and Government Actions
Prior research found that a significant portion of EU-based websites responded to the GDPR by implementing privacy dialogs with inadequate consent options or dark patterns nudging visitors towards accepting tracking. Less attention has so far been devoted to how those privacy dialogs evolve over time. We study the evolution of privacy dialogs over the 18 months after the GDPR became effective, using screenshots of the homepages of 911 US and EU news and media websites. We assess how government and third-party actions that provided additional guidance and tools for compliance affected the choice architecture of privacy dialogs. Over time, we observe an increase in privacy dialogs that offer the option to accept or reject tracking, and a reduction in nudges that encourage users to accept tracking. While the debate over the extent to which various stakeholders' responses to the GDPR meaningfully improved EU residents' privacy remains open, our results suggest that exogenous shocks (such as government interventions) may prompt websites to enact changes that bring the on-the-ground implementation of the GDPR at least nominally closer to its intended goals (such as making it easier for visitors to reject tracking).
Award ID(s):
2237329
NSF-PAR ID:
10488641
Author(s) / Creator(s):
Publisher / Repository:
ACM WPES '23: Proceedings of the 22nd Workshop on Privacy in the Electronic Society
Date Published:
Page Range / eLocation ID:
107 to 121
Format(s):
Medium: X
Location:
Copenhagen Denmark
Sponsoring Org:
National Science Foundation
More Like this
  1. The EU General Data Protection Regulation (GDPR) is one of the most demanding and comprehensive privacy regulations of all time. A year after it went into effect, we study its impact on the landscape of online privacy policies. We conduct the first longitudinal, in-depth, at-scale assessment of privacy policies before and after the GDPR, covering the complete consumption cycle of these policies, from first user impressions to compliance assessment. We create a diverse corpus of two sets of 6,278 unique English-language privacy policies from inside and outside the EU, covering their pre-GDPR and post-GDPR versions. The results of our tests and analyses suggest that the GDPR has been a catalyst for a major overhaul of privacy policies both inside and outside the EU. This overhaul, manifesting in extensive textual changes, especially for EU-based websites, brings mixed benefits to users. While privacy policies have become considerably longer, our user study with 470 participants on Amazon MTurk indicates a significant improvement, from the users' perspective, in the visual representation of privacy policies on EU websites. We further develop a new workflow for the automated assessment of requirements in privacy policies. Using this workflow, we show that privacy policies cover more data practices and are more consistent with seven compliance requirements after the GDPR. We also assess how transparent organizations are about their privacy practices by performing a specificity analysis. In this analysis, we find evidence of positive changes triggered by the GDPR, with the specificity level improving on average. Still, we find the landscape of privacy policies to be in a transitional phase; many policies still do not meet several key GDPR requirements, or their improved coverage comes with reduced specificity.
  2. Recent developments in online tracking make it harder for individuals to detect and block trackers. Some sites have deployed indirect tracking methods, which attempt to uniquely identify a device by asking the browser to perform a seemingly unrelated task. One type of indirect tracking, Canvas fingerprinting, causes the browser to render a graphic and uses statistics from that rendering as a unique identifier. In this work, we observe how indirect device fingerprinting methods are disclosed in privacy policies and consider whether the disclosures are sufficient to enable website visitors to block the tracking methods. We compare these disclosures to the disclosure of direct fingerprinting methods on the same websites. Our case study analyzes one indirect fingerprinting technique, Canvas fingerprinting. We use an existing automated detector of this technique to conservatively detect its use on Alexa Top 500 websites that cater to United States consumers, and we examine the privacy policies of the resulting 28 websites. Disclosures of indirect fingerprinting vary in specificity. None described the specific methods with enough granularity for a visitor to know that the website used Canvas fingerprinting. Conversely, many sites did provide enough detail about their use of direct fingerprinting methods to allow a website visitor to reliably detect and block those techniques. We conclude that indirect fingerprinting methods are often difficult to detect and are not identified with specificity in privacy policies. This makes indirect fingerprinting more difficult to block and therefore risks disturbing the tentative armistice between individuals and websites currently in place for direct fingerprinting. This paper illustrates differences in fingerprinting approaches and explains why technologists, technology lawyers, and policymakers need to appreciate the challenges of indirect fingerprinting.
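To make the Canvas fingerprinting mechanism described above concrete, here is a minimal, hypothetical TypeScript sketch of the general idea. It is not the detector or any code from the studied websites; the drawing parameters and the function name canvasFingerprint are invented for illustration. The technique draws fixed text and shapes on a hidden canvas; because fonts, anti-aliasing, and GPU rendering differ slightly across devices, a hash of the rendered pixels can act as a device identifier without storing a cookie.

```typescript
// Illustrative sketch of Canvas fingerprinting (hypothetical names, not from the paper).
async function canvasFingerprint(): Promise<string> {
  const canvas = document.createElement("canvas");
  canvas.width = 280;
  canvas.height = 60;
  const ctx = canvas.getContext("2d");
  if (!ctx) return "no-canvas";

  // Draw text and shapes whose rasterization varies subtly across devices.
  ctx.textBaseline = "alphabetic";
  ctx.fillStyle = "#f60";
  ctx.fillRect(125, 1, 62, 20);
  ctx.fillStyle = "#069";
  ctx.font = "11pt Arial";
  ctx.fillText("Cwm fjordbank glyphs vext quiz \u{1F600}", 2, 15);
  ctx.fillStyle = "rgba(102, 204, 0, 0.7)";
  ctx.font = "18pt Arial";
  ctx.fillText("Cwm fjordbank glyphs vext quiz \u{1F600}", 4, 45);

  // Read back the rendered pixels and hash them into a compact identifier.
  // (crypto.subtle requires a secure context, e.g. an https page.)
  const dataUrl = canvas.toDataURL();
  const bytes = new TextEncoder().encode(dataUrl);
  const digest = await crypto.subtle.digest("SHA-256", bytes);
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
}

// Usage: the identifier can be attached to requests without any client-side storage.
canvasFingerprint().then((id) => console.log("canvas fingerprint:", id));
```

Automated detectors of this technique typically look for exactly this pattern: drawing calls on a canvas followed by a read-back such as toDataURL or getImageData.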
  3. The European Union (EU) General Data Protection Regulation (GDPR) has expanded data privacy regulations regarding personal data for over half a billion EU citizens. Given the regulation’s effectively global scope and its significant penalties for non-compliance, systems that store or process personal data in increasingly complex workflows will need to demonstrate how data were generated and used. In this paper, we analyze the GDPR text to explicitly identify a set of central challenges for GDPR compliance for which data provenance is applicable; we introduce a data provenance model for representing GDPR workflows; and we present design patterns that demonstrate how data provenance can be used realistically to help in verifying GDPR compliance. We also discuss open questions about what will be practically necessary for a provenance-driven system to be suitable under the GDPR. 
  4. The computer science literature on identification of people using personal information paints a wide spectrum, from aggregate information that does not contain information about individual people, to information that itself identifies a person. However, privacy laws and regulations often distinguish between only two types, often called personally identifiable information and de-identified information. We show that the collapse of this technological spectrum of identifiability into only two legal definitions results in a failure to encourage privacy-preserving practices. We propose a set of legal definitions that spans the spectrum.
We start with anonymous information. Computer science has created anonymization algorithms, including differential privacy, that provide mathematical guarantees that a person cannot be identified. Although the California Consumer Privacy Act (CCPA) defines aggregate information, it treats aggregate information the same as de-identified information. We propose a definition of anonymous information based on the technological possibility of logical association of the information with other information. We argue for the exclusion of anonymous information from notice and consent requirements.
We next consider de-identified information. Computer science has created de-identification algorithms, including generalization, that minimize (but do not eliminate) the risk of re-identification. The GDPR defines anonymous information but not de-identified information, and the CCPA defines de-identified information but not anonymous information; the definitions do not align. We propose a definition of de-identified information based on the reasonableness of association with other information. We propose legal controls to protect against re-identification. We argue for the inclusion of de-identified information in notice requirements, but its exclusion from choice requirements.
We next address the distinction between trackable and non-trackable information. Computer science has shown how one-time identifiers can be used to protect reasonably linkable information from being tracked over time. Although both the GDPR and the CCPA discuss profiling, neither formally defines it as a form of personal information, and thus both fail to adequately protect against it. We propose definitions of trackable information and non-trackable information based on the likelihood of association with information from other contexts. We propose a set of legal controls to protect against tracking. We argue for requiring stronger forms of user choice for trackable information, which will encourage the use of non-trackable information.
Finally, we address the distinction between pseudonymous and reasonably identifiable information. Computer science has shown how pseudonyms can be used to reduce identification. Neither the GDPR nor the CCPA makes a distinction between pseudonymous and reasonably identifiable information. We propose definitions based on the reasonableness of identifiability of the information, and we propose a set of legal controls to protect against identification. We argue for requiring stronger forms of user choice for reasonably identifiable information, which will encourage the use of pseudonymous information.
Our definitions of anonymous information, de-identified information, non-trackable information, trackable information, and reasonably identifiable information can replace the over-simplified distinction between personally identifiable information and de-identified information. We hope that this full spectrum of definitions can be used in a comprehensive privacy law to tailor notice and consent requirements to the characteristics of each type of information.
  5. Fitness trackers are undoubtedly gaining in popularity. As fitness-related data are persistently captured, stored, and processed by these devices, the need to ensure users' privacy is becoming increasingly urgent. In this paper, we apply a data-driven approach to the development of privacy-setting recommendations for fitness devices. We first present a fitness data privacy model that we defined to represent users' privacy preferences in a way that is unambiguous, compliant with the European Union's General Data Protection Regulation (GDPR), and able to represent both user and third-party preferences. We collect a crowdsourced dataset based on current scenarios in the fitness domain and use it to identify privacy profiles by applying machine learning techniques. We then examine different personal tracking data and user traits that can potentially drive the recommendation of privacy profiles to users. Finally, we design a set of privacy-setting recommendation strategies with different guidance styles based on the resulting profiles. Interestingly, our results show several semantic relationships among users' traits, characteristics, and attitudes that are useful in providing privacy recommendations. Even though several works exist on privacy preference modeling, this paper contributes a model of privacy preferences for data sharing and processing in the IoT and fitness domain, with specific attention to GDPR compliance. Moreover, the identification of well-defined clusters of preferences, and of predictors of such clusters, is a relevant contribution for user profiling and for the design of interactive recommendation strategies that aim to balance users' control over their privacy permissions with the simplicity of setting those permissions.
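The abstract above does not name the machine learning techniques used to derive privacy profiles, but the general idea of clustering crowdsourced allow/deny answers into a small number of profiles can be sketched. The following minimal, hypothetical TypeScript illustration uses k-means over binary preference vectors purely as a stand-in; the type and function names (PreferenceVector, clusterProfiles) and the toy data are invented for this sketch and are not from the paper.

```typescript
type PreferenceVector = number[]; // one entry per sharing scenario: 1 = allow, 0 = deny

// Euclidean distance between two preference vectors.
function distance(a: PreferenceVector, b: PreferenceVector): number {
  return Math.sqrt(a.reduce((sum, v, i) => sum + (v - b[i]) ** 2, 0));
}

// Component-wise mean of a group of vectors (the cluster centroid).
function mean(vectors: PreferenceVector[], dim: number): PreferenceVector {
  const centroid = new Array(dim).fill(0);
  for (const v of vectors) {
    v.forEach((x, i) => (centroid[i] += x / vectors.length));
  }
  return centroid;
}

// Plain k-means: each returned centroid is a candidate "privacy profile".
function clusterProfiles(
  data: PreferenceVector[],
  k: number,
  iterations = 50
): PreferenceVector[] {
  const dim = data[0].length;
  let centroids = data.slice(0, k).map((v) => [...v]); // naive initialization
  for (let iter = 0; iter < iterations; iter++) {
    const groups: PreferenceVector[][] = Array.from({ length: k }, () => []);
    for (const v of data) {
      let best = 0;
      for (let c = 1; c < k; c++) {
        if (distance(v, centroids[c]) < distance(v, centroids[best])) best = c;
      }
      groups[best].push(v);
    }
    centroids = groups.map((g, c) => (g.length > 0 ? mean(g, dim) : centroids[c]));
  }
  return centroids;
}

// Toy usage: three scenarios (e.g. share step count with the app, an insurer, friends).
const responses: PreferenceVector[] = [
  [1, 0, 1],
  [1, 0, 1],
  [0, 0, 0],
  [1, 1, 1],
  [0, 0, 1],
];
console.log(clusterProfiles(responses, 2)); // two candidate privacy profiles
```

In practice the resulting centroids would be mapped to default privacy settings, which is the kind of profile-based recommendation the abstract describes.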