

Title: Aligning Legal Definitions of Personal Information with the Computer Science of Identifiability
The computer science literature on identification of people using personal information paints a wide spectrum, from aggregate information that doesn’t contain information about individual people, to information that itself identifies a person. However, privacy laws and regulations often distinguish between only two types, often called personally identifiable information and de-identified information. We show that the collapse of this technological spectrum of identifiability into only two legal definitions results in the failure to encourage privacy-preserving practices. We propose a set of legal definitions that spans the spectrum. We start with anonymous information. Computer science has created anonymization algorithms, including differential privacy, that provide mathematical guarantees that a person cannot be identified. Although the California Consumer Privacy Act (CCPA) defines aggregate information, it treats aggregate information the same as de-identified information. We propose a definition of anonymous information based on the technological possibility of logical association of the information with other information. We argue for the exclusion of anonymous information from notice and consent requirements. We next consider de-identified information. Computer science has created de-identification algorithms, including generalization, that minimize (but not eliminate) the risk of re-identification. GDPR defines anonymous information but not de-identified information, and CCPA defines de-identified information but not anonymous information. The definitions do not align. We propose a definition of de-identified information based on the reasonableness of association with other information. We propose legal controls to protect against re-identification. We argue for the inclusion of de-identified information in notice requirements, but the exclusion of de-identified information from choice requirements. 
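To make the abstract's claim about mathematical guarantees concrete, here is a minimal sketch (ours, not from the paper) of the Laplace mechanism, the canonical differential-privacy algorithm for releasing an aggregate count. The dataset and the epsilon value are illustrative assumptions.

```python
import random

def dp_count(values, predicate, epsilon):
    """Release a count with epsilon-differential privacy (Laplace mechanism).

    A counting query has sensitivity 1: adding or removing one person's
    record changes the count by at most 1, so Laplace noise with scale
    1/epsilon suffices. The difference of two independent Exp(epsilon)
    draws is distributed as Laplace(0, 1/epsilon).
    """
    true_count = sum(1 for v in values if predicate(v))
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

# Hypothetical dataset of ages; release a noisy count of people over 40.
ages = [23, 45, 31, 67, 52, 29, 41]
noisy = dp_count(ages, lambda a: a > 40, epsilon=1.0)
```

Smaller epsilon means more noise and stronger privacy; the guarantee holds regardless of what auxiliary information an adversary has, which is why such output can plausibly be treated as anonymous rather than merely de-identified.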
We next address the distinction between trackable and non-trackable information. Computer science has shown how one-time identifiers can be used to protect reasonably linkable information from being tracked over time. Although both GDPR and CCPA discuss profiling, neither formally defines it as a form of personal information, and thus both fail to adequately protect against it. We propose definitions of trackable information and non-trackable information based on the likelihood of association with information from other contexts. We propose a set of legal controls to protect against tracking. We argue for requiring stronger forms of user choice for trackable information, which will encourage the use of non-trackable information. Finally, we address the distinction between pseudonymous and reasonably identifiable information. Computer science has shown how pseudonyms can be used to reduce identification. Neither GDPR nor CCPA distinguishes between pseudonymous and reasonably identifiable information. We propose definitions based on the reasonableness of identifiability of the information, and we propose a set of legal controls to protect against identification. We argue for requiring stronger forms of user choice for reasonably identifiable information, which will encourage the use of pseudonymous information. Our definitions of anonymous information, de-identified information, non-trackable information, trackable information, and reasonably identifiable information can replace the over-simplified distinction between personally identifiable information and de-identified information. We hope that this full spectrum of definitions can be used in a comprehensive privacy law to tailor notice and consent requirements to the characteristics of each type of information.
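As an illustration of how per-context identifiers limit tracking, the following sketch (our own, not from the paper) derives a keyed pseudonym that is stable within one context but unlinkable across contexts without the key. All names and the key-management setup are hypothetical.

```python
import hashlib
import hmac
import secrets

# Hypothetical setup: the data processor holds a secret key and derives a
# pseudonym from (user id, context). Without the key, identifiers from
# different contexts cannot be linked to each other or to the real ID.
SECRET_KEY = secrets.token_bytes(32)

def context_pseudonym(user_id: str, context: str) -> str:
    msg = f"{user_id}|{context}".encode()
    return hmac.new(SECRET_KEY, msg, hashlib.sha256).hexdigest()[:16]

# The same user gets unlinkable identifiers in different contexts...
a = context_pseudonym("alice@example.com", "news-site")
b = context_pseudonym("alice@example.com", "shopping-site")
assert a != b
# ...but a stable identifier within one context, enabling per-site function
# (e.g., frequency capping) without cross-context tracking.
assert a == context_pseudonym("alice@example.com", "news-site")
```

Rotating the key periodically turns these into one-time identifiers, moving the information further toward the non-trackable end of the spectrum.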
Award ID(s): 1956393
NSF-PAR ID: 10349059
Journal Name: Research Conference on Communication, Information and Internet Policy (TPRC)
Sponsoring Org: National Science Foundation
More Like this
  1. Development of a comprehensive legal privacy framework in the United States should be based on identification of the common deficiencies of privacy policies. We attempt to delineate deficiencies by critically analyzing the privacy policies of mobile apps, application suites, social networks, Internet Service Providers, and Internet-of-Things devices. Whereas many studies have examined readability of privacy policies, few have specifically identified the information that should be provided in privacy policies but is not. Privacy legislation invariably starts with a definition of personally identifiable information. We find that privacy policies’ definitions of personally identifiable information are far too restrictive, excluding information that does not itself identify a person but which can be used to reasonably identify a person, and excluding information paired with a device identifier which can be reasonably linked to a person. Legislation should define personally identifiable information to include such information, and should differentiate between information paired with a name versus information paired with a device identifier. Privacy legislation often excludes anonymous and de-identified information from notice and choice requirements. We find that privacy policies’ descriptions of anonymous and de-identified information are far too broad, including information paired with advertising identifiers. Computer science has repeatedly demonstrated that such information is reasonably linkable. Legislation should define these categories of information to align with technological abilities. Legislation should also not exempt de-identified information from notice requirements, to increase transparency. Privacy legislation relies heavily on notice requirements. 
We find that, because privacy policies’ disclosures of the uses of personal information are disconnected from their disclosures about the types of personal information collected, we are often unable to determine which types of information are used for which purposes. Often, we cannot determine whether location or web browsing history is used solely for functional purposes or also for advertising. Legislation should require the disclosure of the purposes for each type of personal information collected. We also find that, because privacy policies’ disclosures of sharing of personal information are disconnected from their disclosures about the types of personal information collected, we are often unable to determine which types of information are shared. Legislation should require the disclosure of the types of personal information shared. Finally, privacy legislation relies heavily on user choice. We find that free services often require the collection and sharing of personal information. As a result, users often have no choices. We find that whereas some paid services afford users a wide variety of choices, paid services in less competitive sectors often afford users few choices over use and sharing of personal information for purposes unrelated to the service. As a result, users are often unable to dictate which types of information they wish to allow to be shared, and which types they wish to allow to be used for advertising. Legislation should differentiate between take-it-or-leave-it, opt-out, and opt-in approaches based on the type of use and on whether the information is shared. Congress should consider whether user choices should be affected by the presence of market power. 
  2. We compare the notice and consent requirements of the three recent privacy regulations that are most likely to serve as the starting points for the creation of a comprehensive consumer privacy bill in the United States: the European General Data Protection Regulation, the California Consumer Privacy Act/California Privacy Rights Act, and the Federal Communications Commission’s Broadband Privacy Order. We compare the scope of personal information under each regulation, including the test for identifiability and exclusions for de-identified information, and identify problems with their treatment of de-identified information and of pseudonymous information. We compare notice requirements, including the level of required detail and the resulting ability of consumers to understand the use and flow of their personal information, and identify deficiencies with consumers’ ability to track the flow of their personal information. Finally, we compare consumer choices under each regulation, including when a consumer must agree to the use of their personal information in order to utilize a service or application, and find that none of the regulations take full advantage of the range of options, and thereby fail to disincentivize tracking. 
  3. Mobile tracking has long been a privacy problem, where the geographic data and timestamps gathered by mobile network operators (MNOs) are used to track the locations and movements of mobile subscribers. Additionally, selling the geolocation information of subscribers has become a lucrative business. Many mobile carriers have violated user privacy agreements by selling users’ location history to third parties without user consent, exacerbating privacy issues related to mobile tracking and profiling. This paper presents AAKA, an anonymous authentication and key agreement scheme designed to protect against mobile tracking by honest-but-curious MNOs. AAKA leverages anonymous credentials and introduces a novel mobile authentication protocol that allows legitimate subscribers to access the network anonymously, without revealing their unique (real) IDs. It ensures the integrity of user credentials, preventing forgery, and ensures that connections made by the same user at different times cannot be linked. While the MNO alone cannot identify or profile a user, AAKA enables identification of a user under legal intervention, such as when the MNOs collaborate with an authorized law enforcement agency. Our design is compatible with the latest cellular architecture and SIM standardized by 3GPP, meeting 3GPP’s fundamental security requirements for User Equipment (UE) authentication and key agreement processes. A comprehensive security analysis demonstrates the scheme’s effectiveness. The evaluation shows that the scheme is practical, with credential presentation generation taking ~52 ms on a constrained host device equipped with a standard cellular SIM. 
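AAKA itself is built on anonymous credentials, which are beyond a short sketch, but its core property, sessions that the operator cannot link or identify yet an authority can identify under legal process, can be illustrated with a toy identity-escrow scheme. This is our own simplification, not AAKA's actual protocol; the key handling and identifiers are hypothetical.

```python
import hashlib
import os

# Toy identity escrow (NOT the actual AAKA protocol): each session, the
# device encrypts its real ID under a key held only by a hypothetical
# escrow authority, using a fresh nonce so tokens from the same user
# are unlinkable to the network operator.
ESCROW_KEY = os.urandom(32)  # known only to the escrow authority

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Hash-based stream: SHA-256(key || nonce || counter), concatenated.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def session_token(user_id: bytes) -> tuple[bytes, bytes]:
    nonce = os.urandom(16)
    ks = _keystream(ESCROW_KEY, nonce, len(user_id))
    return nonce, bytes(a ^ b for a, b in zip(user_id, ks))

def escrow_recover(nonce: bytes, ciphertext: bytes) -> bytes:
    ks = _keystream(ESCROW_KEY, nonce, len(ciphertext))
    return bytes(a ^ b for a, b in zip(ciphertext, ks))

# Two sessions by the same subscriber look unrelated to the operator...
n1, t1 = session_token(b"IMSI-001")
n2, t2 = session_token(b"IMSI-001")
assert t1 != t2
# ...but the escrow authority can recover the identity under legal process.
assert escrow_recover(n1, t1) == b"IMSI-001"
```

The real scheme replaces this symmetric-key toy with anonymous credentials and zero-knowledge presentations, which also prevent forgery and avoid any single party holding a master decryption key in the clear.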
  4. Data privacy regulations like GDPR and CCPA define a right of access empowering consumers to view the data companies store about them. Companies satisfy these requirements in part via data downloads, or downloadable archives containing this information. Data downloads vary in format, organization, comprehensiveness, and content. It is unknown, however, whether current data downloads actually achieve the transparency goals embodied by the right of access. In this paper, we report on the first exploration of the design of data downloads. Through 12 focus groups involving 42 participants, we gathered reactions to six companies’ data downloads. Using co-design techniques, we solicited ideas for future data download designs, formats, and tools. Most participants indicated that current offerings need improvement to be useful, emphasizing the need for better filtration, visualization, and summarization to help them home in on key information. 
  5. Involving the public in scientific discovery offers opportunities for engagement, learning, participation, and action. Since its launch in 2007, the CitSci.org platform has supported hundreds of community-driven citizen science projects involving thousands of participants who have generated close to a million scientific measurements around the world. Members using CitSci.org follow their curiosities and concerns to develop, lead, or simply participate in research projects. While professional scientists are trained to make ethical determinations related to the collection of, access to, and use of information, citizen scientists and practitioners may be less aware of such issues and more likely to become involved in ethical dilemmas. In this era of big and open data, where data sharing is encouraged and open science is promoted, privacy and openness considerations can often be overlooked. Platforms that support the collection, use, and sharing of data and personal information need to consider their responsibility to protect the rights to and ownership of data, the provision of protection options for data and members, and at the same time provide options for openness. This requires critically considering both intended and unintended consequences of the use of platforms, data, and volunteer information. Here, we use our journey developing CitSci.org to argue that incorporating customization into platforms through flexible design options for project managers shifts the decision-making from top-down to bottom-up and allows project design to be more responsive to goals. To protect both people and data, we developed—and continue to improve—options that support various levels of “open” and “closed” access permissions for data and membership participation. 
These options support diverse governance styles that are responsive to data uses, traditional and indigenous knowledge sensitivities, intellectual property rights, personally identifiable information concerns, volunteer preferences, and sensitive data protections. We present a typology for citizen science openness choices, their ethical considerations, and strategies that we are actively putting into practice to expand privacy options and governance models based on the unique needs of individual projects using our platform. 