

Title: Deficiencies in the Disclosures of Privacy Policy and in User Choice
Development of a comprehensive legal privacy framework in the United States should be based on identification of the common deficiencies of privacy policies. We attempt to delineate deficiencies by critically analyzing the privacy policies of mobile apps, application suites, social networks, Internet Service Providers, and Internet-of-Things devices. Whereas many studies have examined the readability of privacy policies, few have specifically identified the information that should be provided in privacy policies but is not.

Privacy legislation invariably starts with a definition of personally identifiable information. We find that privacy policies' definitions of personally identifiable information are far too restrictive, excluding information that does not itself identify a person but which can be used to reasonably identify a person, and excluding information paired with a device identifier which can be reasonably linked to a person. Legislation should define personally identifiable information to include such information, and should differentiate between information paired with a name and information paired with a device identifier.

Privacy legislation often excludes anonymous and de-identified information from notice and choice requirements. We find that privacy policies' descriptions of anonymous and de-identified information are far too broad, including information paired with advertising identifiers. Computer science has repeatedly demonstrated that such information is reasonably linkable. Legislation should define these categories of information to align with technological abilities. Legislation should also not exempt de-identified information from notice requirements, to increase transparency.

Privacy legislation relies heavily on notice requirements. We find that, because privacy policies' disclosures of the uses of personal information are disconnected from their disclosures about the types of personal information collected, we are often unable to determine which types of information are used for which purposes. Often, we cannot determine whether location or web browsing history is used solely for functional purposes or also for advertising. Legislation should require the disclosure of the purposes for each type of personal information collected. We also find that, because privacy policies' disclosures of sharing of personal information are disconnected from their disclosures about the types of personal information collected, we are often unable to determine which types of information are shared. Legislation should require the disclosure of the types of personal information shared.

Finally, privacy legislation relies heavily on user choice. We find that free services often require the collection and sharing of personal information. As a result, users often have no choices. We find that whereas some paid services afford users a wide variety of choices, paid services in less competitive sectors often afford users few choices over the use and sharing of personal information for purposes unrelated to the service. As a result, users are often unable to dictate which types of information they wish to allow to be shared, and which types they wish to allow to be used for advertising. Legislation should differentiate between take-it-or-leave-it, opt-out, and opt-in approaches based on the type of use and on whether the information is shared. Congress should consider whether user choices should be affected by the presence of market power.
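The claim that information paired with a device identifier is "reasonably linkable" to a person can be made concrete with a small sketch. The snippet below uses hypothetical data and field names (not drawn from any policy analyzed here) to show how location and browsing records keyed only by an advertising identifier become identifying once any single app maps that identifier to an account email.

```python
# Hypothetical illustration: two datasets that each look "non-identifying" on
# their own become identifying once joined on a shared advertising identifier.
# All field names and records below are invented for this sketch.

ad_network_log = [
    {"ad_id": "38400000-8cf0-11bd-b23e-10b96e40000d",
     "lat": 41.88, "lon": -87.63, "url": "clinic.example.org"},
]

# A single app that collects the user's email and also reads the advertising
# identifier is enough to bridge the two datasets.
app_registration = {
    "38400000-8cf0-11bd-b23e-10b96e40000d": "jane.doe@example.com",
}

for event in ad_network_log:
    person = app_registration.get(event["ad_id"])
    if person is not None:
        # Location and browsing history are now tied to a named individual,
        # even though neither dataset contained a name by itself.
        print(person, event["lat"], event["lon"], event["url"])
```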
Award ID(s):
1956393
NSF-PAR ID:
10349058
Author(s) / Creator(s):
; ;
Date Published:
Journal Name:
Loyola Consumer Law Review
ISSN:
1530-5449
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. The computer science literature on identification of people using personal information paints a wide spectrum, from aggregate information that doesn’t contain information about individual people, to information that itself identifies a person. However, privacy laws and regulations often distinguish between only two types, commonly called personally identifiable information and de-identified information. We show that the collapse of this technological spectrum of identifiability into only two legal definitions results in the failure to encourage privacy-preserving practices. We propose a set of legal definitions that spans the spectrum. We start with anonymous information. Computer science has created anonymization algorithms, including differential privacy, that provide mathematical guarantees that a person cannot be identified. Although the California Consumer Privacy Act (CCPA) defines aggregate information, it treats aggregate information the same as de-identified information. We propose a definition of anonymous information based on the technological possibility of logical association of the information with other information. We argue for the exclusion of anonymous information from notice and consent requirements. We next consider de-identified information. Computer science has created de-identification algorithms, including generalization, that minimize (but do not eliminate) the risk of re-identification. GDPR defines anonymous information but not de-identified information, and CCPA defines de-identified information but not anonymous information. The definitions do not align. We propose a definition of de-identified information based on the reasonableness of association with other information. We propose legal controls to protect against re-identification. We argue for the inclusion of de-identified information in notice requirements, but the exclusion of de-identified information from choice requirements. We next address the distinction between trackable and non-trackable information. Computer science has shown how one-time identifiers can be used to protect reasonably linkable information from being tracked over time. Although both GDPR and CCPA discuss profiling, neither formally defines it as a form of personal information, and thus both fail to adequately protect against it. We propose definitions of trackable information and non-trackable information based on the likelihood of association with information from other contexts. We propose a set of legal controls to protect against tracking. We argue for requiring stronger forms of user choice for trackable information, which will encourage the use of non-trackable information. Finally, we address the distinction between pseudonymous and reasonably identifiable information. Computer science has shown how pseudonyms can be used to reduce identification. Neither GDPR nor CCPA makes a distinction between pseudonymous and reasonably identifiable information. We propose definitions based on the reasonableness of identifiability of the information, and we propose a set of legal controls to protect against identification. We argue for requiring stronger forms of user choice for reasonably identifiable information, which will encourage the use of pseudonymous information.
Our definitions of anonymous information, de-identified information, non-trackable information, trackable information, and reasonably identifiable information can replace the over-simplified distinction between personally identifiable information and de-identified information. We hope that this full spectrum of definitions can be used in a comprehensive privacy law to tailor notice and consent requirements to the characteristics of each type of information.
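To make the proposed spectrum concrete, the minimal Python sketch below (our illustration, not part of the article) contrasts three of the categories: a persistent pseudonym that is trackable across sessions, a one-time identifier that is not, and a differentially private aggregate count that is anonymous in the sense discussed above. The server secret, user ID, and epsilon value are illustrative assumptions.

```python
import hashlib
import secrets
import random

USER_ID = "user-1234"           # a reasonably identifiable account identifier
SERVER_SECRET = b"example-key"  # assumed server-side salt for pseudonymization

def persistent_pseudonym(user_id: str) -> str:
    """Same output every session: records remain linkable over time (trackable)."""
    return hashlib.sha256(SERVER_SECRET + user_id.encode()).hexdigest()[:16]

def one_time_identifier() -> str:
    """Fresh random value per session: records cannot be linked over time."""
    return secrets.token_hex(8)

def dp_count(true_count: int, epsilon: float = 1.0) -> float:
    """Laplace mechanism for a count query (sensitivity 1): the difference of two
    exponential draws with rate epsilon is Laplace noise with scale 1/epsilon."""
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

print(persistent_pseudonym(USER_ID), persistent_pseudonym(USER_ID))  # identical
print(one_time_identifier(), one_time_identifier())                  # differ
print(dp_count(42))  # noisy aggregate; no per-person record is released
```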
  2. The dominant privacy framework of the information age relies on notions of “notice and consent.” That is, service providers will disclose, often through privacy policies, their data collection practices, and users can then consent to their terms. However, it is unlikely that most users comprehend these disclosures, which is due in no small part to ambiguous, deceptive, and misleading statements. By comparing actual collection and sharing practices to disclosures in privacy policies, we demonstrate the scope of the problem. Through analysis of 68,051 apps from the Google Play Store, their corresponding privacy policies, and observed data transmissions, we investigated the potential misrepresentations of apps in the Designed For Families (DFF) program, inconsistencies in disclosures regarding third-party data sharing, as well as contradictory disclosures about secure data transmissions. We find that of the 8,030 DFF apps (i.e., apps directed at children), 9.1% claim not to be directed at children, while 30.6% claim to have no knowledge that the received data comes from children. In addition, we observe that 10.5% of the 68,051 apps share personal identifiers with third-party service providers yet do not declare any in their privacy policies, and only 22.2% of the apps explicitly name third parties. This ultimately makes it not only difficult, but in most cases impossible, for users to establish where their personal data is being processed. Furthermore, we find that 9,424 apps do not use TLS when transmitting personal identifiers, yet 28.4% of these apps claim to take measures to secure data transfer. Ultimately, these divergences between disclosures and actual app behaviors illustrate the ridiculousness of the notice and consent framework.
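The kind of policy-versus-behavior consistency check described above can be sketched as follows. This is not the authors' measurement pipeline; the app name, endpoints, and flow records are invented to show how observed transmissions might be compared against the third parties a policy actually names, and how non-TLS transmissions of identifiers could be flagged.

```python
# Simplified, hypothetical consistency check between observed network flows
# and a privacy policy's named third parties. All records below are invented.

observed_flows = [
    {"app": "com.example.kidsgame", "dest": "tracker.adnet.example",
     "fields": ["ad_id", "geo"], "tls": False},
    {"app": "com.example.kidsgame", "dest": "api.example.com",
     "fields": ["username"], "tls": True},
]

policy_disclosures = {
    # third parties the policy explicitly names, per app
    "com.example.kidsgame": {"api.example.com"},
}

for flow in observed_flows:
    declared = policy_disclosures.get(flow["app"], set())
    if flow["fields"] and flow["dest"] not in declared:
        print(f'{flow["app"]}: shares {flow["fields"]} with undeclared party {flow["dest"]}')
    if flow["fields"] and not flow["tls"]:
        print(f'{flow["app"]}: sends {flow["fields"]} without TLS')
```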
  3. Between 2018 and 2021, PIs for National Science Foundation Awards #1758781 and #1758814 EAGER: Collaborative Research: Developing and Testing an Incubator for Digital Entrepreneurship in Remote Communities, in partnership with the Tanana Chiefs Conference, the traditional tribal consortium of the 42 villages of Interior Alaska, jointly developed and conducted large-scale digital and in-person surveys of multiple Alaskan interior communities. The survey was distributed via a combination of in-person paper surveys, digital surveys, social media links, verbal in-person interviews, and telephone-based responses. Analysis of the survey data using SAS demonstrated the statistically significant need for enhanced digital infrastructure and reworked digital entrepreneurial and technological education in the Tanana Chiefs Conference region. 1. Two statistical measures were created during this research: Entrepreneurial Readiness (ER) and Digital Technology needs and skills (DT), both of which showed high internal consistency (.89, .81). 2. The measures revealed entrepreneurial readiness challenges and evidence of specific addressable barriers that currently hinder regional digital economic activity. The survey data showed statistically significant correlation with the mixed-methods in-person focus group and interview research conducted by the PIs and TCC collaborators in Hughes and Huslia, AK, which further corroborated the stated barriers to entrepreneurship development in the region. 3. Data generated by the survey and fieldwork is maintained by the Tanana Chiefs Conference under data sovereignty agreements. The survey and focus group data contains aggregated statistical/empirical data as well as qualitative/subjective detail that runs the risk of becoming personally identifiable, especially due to (but not limited to) concerns with exceedingly small Arctic community population sizes. 4. This metadata is being provided in order to serve as a record of the data collection and analysis conducted, and also to share some high-level findings that, while revealing no personal information, may be helpful for policymaking, regional planning, and efforts towards educational curricular development and infrastructural investment. The sample demographics consist of 272 women, 79 men, and 4 respondents who did not indicate gender. Barriers to Entrepreneurial Readiness were a component of the measure. Lack of education is the #1 barrier, followed closely by lack of access to childcare. Among women who participated in the survey, 30% with 2 or more children report lack of childcare to be a significant barrier to entrepreneurial and small business activity. For entrepreneurial readiness and digital economy, the scales perform well from a psychometric standpoint. The summary scores are roughly normally distributed. Cronbach’s alphas are greater than 0.80 for both. They are moderately correlated with each other (r = 0.48, p < .0001). Men and women do not differ significantly on either measure. Education is significantly related to the digital economy measure. The detail provided in the survey related to educational needs enabled optimized development of the Incubator for Digital Entrepreneurship in Remote Communities. Enhanced digital entrepreneurship training with clear cultural linkages to traditions and community needs, along with additional childcare opportunities, are two of several specific recommendations provided to the TCC.
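The reliability and correlation figures above came from a SAS analysis; as a rough illustration only, the Python sketch below computes Cronbach's alpha and a Pearson correlation on invented toy responses. The 355-respondent count matches the reported sample demographics, but the item scores, scale sizes, and noise levels are assumptions and will not reproduce the reported values.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x item-scores matrix."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(size=(355, 1))                         # 355 toy respondents
er_items = latent + rng.normal(scale=0.5, size=(355, 6))   # toy ER scale items
dt_items = latent + rng.normal(scale=0.7, size=(355, 6))   # toy DT scale items

print("alpha(ER):", round(cronbach_alpha(er_items), 2))
print("alpha(DT):", round(cronbach_alpha(dt_items), 2))
r = np.corrcoef(er_items.sum(axis=1), dt_items.sum(axis=1))[0, 1]
print("Pearson r between summary scores:", round(r, 2))
```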
The project PIs are working closely with the TCC administration and community members on elements of culturally-aligned curricular development that respect tribal data sovereignty, local data management protocols, data anonymity, and adherence to human subjects (IRB) protocols. While the survey data is currently embargoed and cannot be submitted publicly for reasons of anonymity, the project PIs are working with the NSF Arctic Data Center to determine pathways for sharing personally-protected data with the larger scientific community. These approaches may consist of aggregating and digitally anonymizing sensitive data in ways that cannot be de-aggregated and that meet agency and scientific community needs (while also fully respecting and protecting participants’ rights and personal privacy). At present, the data sensitivity protocols are not yet adapted to TCC requirements, and the datasets will remain in their care.
  4. Research on keystroke dynamics has good potential to offer continuous authentication that complements conventional authentication methods in combating insider threats and identity theft before more harm can be done to genuine users. Unfortunately, the large amount of data required by free-text keystroke authentication often contains personally identifiable information, or PII, and personally sensitive information, such as a user's first name and last name, username and password for an account, bank card numbers, and social security numbers. As a result, there are privacy risks associated with keystroke data that must be mitigated before the data are shared with other researchers. We conduct a systematic study to remove PII from a recent large keystroke dataset. We find substantial amounts of PII in the dataset, including names, usernames and passwords, social security numbers, and bank card numbers, which, if leaked, may lead to various harms to the user, including personal embarrassment, blackmail, financial loss, and identity theft. We thoroughly evaluate the effectiveness of our detection program for each kind of PII. We demonstrate that our PII detection program can achieve near-perfect recall at the expense of losing some useful information (lower precision). Finally, we demonstrate that the removal of PII from the original dataset has only negligible impact on the detection error tradeoff of the free-text authentication algorithm by Gunetti and Picardi. We hope that this experience report will be useful in informing the design of privacy removal in future keystroke-dynamics-based user authentication systems.
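A minimal sketch of regex-based PII detection of the sort described above might look like the following. This is not the authors' actual detection program; the patterns, placeholder labels, and sample text are illustrative, and a high-recall configuration like this one will also match some non-PII strings (the precision trade-off noted above).

```python
import re

# Illustrative high-recall patterns for two PII types mentioned in the abstract.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace each detected PII span with a type placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"<{label.upper()}>", text)
    return text

sample = "my ssn is 123-45-6789 and card 4111 1111 1111 1111"
print(redact(sample))
# -> "my ssn is <SSN> and card <CARD_NUMBER>"
```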
  5. Furnell, Steven (Ed.)
    A huge amount of personal and sensitive data is shared on Facebook, which makes it a prime target for attackers. Adversaries can exploit third-party applications connected to a user’s Facebook profile (i.e., Facebook apps) to gain access to this personal information. Users’ lack of knowledge and the varying privacy policies of these apps make them further vulnerable to information leakage. However, little has been done to identify mismatches between users’ perceptions and the privacy policies of Facebook apps. We address this challenge in our work. We conducted a lab study with 31 participants, in which we collected data on how they share information on Facebook, their Facebook-related security and privacy practices, and their perceptions of the privacy aspects of 65 frequently used Facebook apps in terms of data collection, sharing, and deletion. We then compared participants’ perceptions with the privacy policy of each reported app. Participants also reported their expectations about the types of information that should not be collected or shared by any Facebook app. Our analysis reveals significant mismatches between users’ privacy perceptions and reality (i.e., the privacy policies of Facebook apps): we identified over-optimism not only in users’ perceptions of information collection, but also in their self-efficacy in protecting their information on Facebook, despite having experienced negative incidents in the past. To the best of our knowledge, this is the first study of the gap between users’ privacy perceptions around Facebook apps and the reality. The findings from this study offer directions for future research to address that gap by designing usable, effective, and personalized privacy notices that help users make informed decisions about using Facebook apps.