

Title: Spatial Privacy Pricing: The Interplay between Privacy, Utility and Price in Geo-Marketplaces
A geo-marketplace allows users to be paid for their location data. Users concerned about privacy may want to charge more for data that pinpoints their location accurately, and less for data that is more vague. A buyer would prefer to minimize data costs, but may have to spend more to get the necessary level of accuracy. We call this interplay between privacy, utility, and price spatial privacy pricing. We formalize the issues mathematically with an example problem in which a buyer decides whether or not to open a restaurant by purchasing location data to determine whether the number of potential customers is sufficient. We express this as a sequential decision-making problem, where the buyer first makes a series of decisions about which data to buy and concludes with a decision about opening the restaurant or not. We present two algorithms to solve this problem, along with experiments showing that they outperform baseline approaches.
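To make the setup concrete, here is a minimal Python sketch of a sequential buying loop in the spirit of the abstract. The Gaussian prior, the price/accuracy menu, and the myopic stopping rule below are all illustrative assumptions, not the paper's actual algorithms.

```python
import random

random.seed(0)

THRESHOLD = 100.0   # open the restaurant if estimated demand exceeds this

# Prior belief about customer demand: Normal(mu, var)  (assumed numbers)
mu, var = 90.0, 400.0

# Menu of offers: (noise_std, price) -- more accurate location data costs more
offers = [(30.0, 1.0), (20.0, 2.0), (10.0, 5.0), (5.0, 12.0)]

true_demand = 110.0   # hidden ground truth used to simulate purchases
spent = 0.0

for noise_std, price in offers:
    # Myopic stopping rule (our assumption): stop buying once the belief
    # is confident enough to decide.
    if var ** 0.5 < 5.0:
        break
    y = random.gauss(true_demand, noise_std)   # the purchased noisy report
    spent += price
    # Conjugate Gaussian update of the belief with the new observation
    w = var / (var + noise_std ** 2)
    mu, var = mu + w * (y - mu), (1.0 - w) * var

decision = "open" if mu >= THRESHOLD else "do not open"
print(f"posterior mean={mu:.1f}, std={var ** 0.5:.1f}, spent={spent:.1f} -> {decision}")
```

The tension the paper formalizes is visible even in this toy version: cheaper offers carry more noise, so driving the uncertainty down enough to decide confidently costs money.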
Award ID(s):
1910950
NSF-PAR ID:
10192046
Author(s) / Creator(s):
; ;
Date Published:
Journal Name:
SIGSPATIAL '20: Proceedings of the 28th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Data sets and statistics about groups of individuals are increasingly collected and released, feeding many optimization and learning algorithms. In many cases, the released data contain sensitive information whose privacy is strictly regulated. For example, in the U.S., census data are regulated under Title 13, which requires that no individual be identified from any data released by the Census Bureau. In Europe, data release is regulated under the General Data Protection Regulation, which addresses the control and transfer of personal data. Differential privacy has emerged as the de facto standard for protecting data privacy. In a nutshell, differentially private algorithms protect an individual's data by injecting random noise into the output of a computation that involves such data. While this process ensures privacy, it also impacts the quality of data analysis, and, when private data sets are used as inputs to complex machine learning or optimization tasks, they may produce results that are fundamentally different from those obtained on the original data and may even raise unintended bias and fairness concerns. In this talk, I will first focus on the challenge of releasing privacy-preserving data sets for complex data analysis tasks. I will introduce the notion of Constrained-based Differential Privacy (C-DP), which allows casting the data release problem as an optimization problem whose goal is to preserve the salient features of the original data. I will review several applications of C-DP in the context of very large hierarchical census data, data streams, energy systems, and the design of federated data-sharing protocols. Next, I will discuss how errors induced by differential privacy algorithms may propagate within a decision problem, causing biases and fairness issues. This is particularly important because privacy-preserving data is often used for critical decision processes, including the allocation of funds and benefits to states and jurisdictions, which ideally should be fair and unbiased. Finally, I will conclude with a roadmap for future work and some open questions.
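As a reference point for the noise-injection idea described above, here is a minimal sketch of the standard Laplace mechanism for a counting query. This is a textbook construction, not anything specific to the C-DP framework discussed in the talk, and the data are invented:

```python
import math
import random

def laplace(scale: float) -> float:
    """Sample zero-mean Laplace noise via the inverse-CDF transform."""
    u = random.random() - 0.5
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def dp_count(records, predicate, epsilon: float) -> float:
    """epsilon-DP counting query: a count changes by at most 1 when one
    record changes (L1 sensitivity 1), so Laplace(1/epsilon) noise suffices."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace(1.0 / epsilon)

ages = [23, 35, 41, 29, 52, 60, 31]   # toy private data
print(dp_count(ages, lambda a: a >= 40, epsilon=0.5))
```

Smaller epsilon means stronger privacy but noisier answers, which is exactly the utility degradation the talk examines downstream.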
  2. Recently, the ubiquity of mobile devices has led to increasing demand for public network services, e.g., WiFi hotspots. As part of this trend, modern transportation systems are equipped with public WiFi devices to provide Internet access for passengers, since people spend a large amount of time on public transportation in their daily lives. However, one of the key issues with public WiFi spots is the privacy concern arising from their open-access nature. Existing works have either studied location privacy risk in human traces or privacy leakage in private networks such as cellular networks, based on data from cellular carriers. To the best of our knowledge, none of these works has focused on bus WiFi privacy based on large-scale real-world data. In this paper, to explore the privacy risk in bus WiFi systems, we focus on two key questions: how likely bus WiFi users are to be uniquely re-identified if partial usage information is leaked, and how we can protect users from the leaked information. To answer these questions, we conduct a case study in a large-scale bus WiFi system, which contains 20 million connection records and 78 million location records from 770 thousand bus WiFi users over a two-month period. Technically, we design two models for our uniqueness analyses and protection: a PB-FIND model to identify the probability that a user can be uniquely re-identified from leaked information, and a PB-HIDE model to protect users from potentially leaked information. Specifically, we systematically measure user uniqueness on users' finger traces (i.e., connection URL and domain), foot traces (i.e., locations), and hybrid traces (i.e., both finger and foot traces). Our measurement results reveal that (i) 97.8% of users can be uniquely re-identified by 4 random domain records from their finger traces, and 96.2% of users can be uniquely re-identified by 5 random locations on buses; (ii) 98.1% of users can be uniquely re-identified by only 2 random records if both their connection records and locations are leaked to attackers. Moreover, the evaluation results show that our PB-HIDE algorithm protects more than 95% of users from the potentially leaked information by inserting only 1.5% synthetic records into the original dataset while preserving their data utility.
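To illustrate the kind of uniqueness measurement described above, here is a toy Python estimate of how often k leaked records pinpoint a single user. It is a deliberate simplification, not the paper's PB-FIND model, and the traces are invented:

```python
import random

# Toy domain traces per user (invented); real traces are far larger.
traces = {
    "u1": ["news.com", "mail.com", "maps.com", "shop.com"],
    "u2": ["news.com", "mail.com", "video.com", "shop.com"],
    "u3": ["blog.com", "mail.com", "maps.com", "game.com"],
}

def unique_fraction(traces, k, trials=1000):
    """Fraction of trials in which k randomly leaked records of a user
    are consistent with that user's trace and no one else's."""
    hits = 0
    for _ in range(trials):
        user = random.choice(list(traces))
        leaked = set(random.sample(traces[user], k))
        matches = [u for u, t in traces.items() if leaked <= set(t)]
        if matches == [user]:
            hits += 1
    return hits / trials

print(unique_fraction(traces, k=2))
```

Even in this tiny example, a couple of records often suffice to single out a user, which is the effect the paper quantifies at scale.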
  3. Open data sets that contain personal information are susceptible to adversarial attacks even when anonymized. By performing low-cost joins on multiple datasets with shared attributes, malicious users of open data portals might get access to information that violates individuals’ privacy. However, open data sets are primarily published using a release-and-forget model, whereby data owners and custodians have little to no cognizance of these privacy risks. We address this critical gap by developing a visual analytic solution that enables data defenders to gain awareness about the disclosure risks in local, joinable data neighborhoods. The solution is derived through a design study with data privacy researchers, where we initially play the role of a red team and engage in an ethical data hacking exercise based on privacy attack scenarios. We use this problem and domain characterization to develop a set of visual analytic interventions as a defense mechanism and realize them in PRIVEE, a visual risk inspection workflow that acts as a proactive monitor for data defenders. PRIVEE uses a combination of risk scores and associated interactive visualizations to let data defenders explore vulnerable joins and interpret risks at multiple levels of data granularity. We demonstrate how PRIVEE can help emulate the attack strategies and diagnose disclosure risks through two case studies with data privacy experts. 
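Below is a minimal sketch of the low-cost join attack that motivates this work, assuming two invented tables that share quasi-identifier attributes. PRIVEE's actual risk scores and visual workflow are far richer than this:

```python
# Invented "anonymized" table: no names, but quasi-identifiers remain.
health = [
    {"zip": "90210", "age": 34, "diagnosis": "flu"},
    {"zip": "90210", "age": 67, "diagnosis": "diabetes"},
]
# Invented public table sharing the same attributes.
voters = [
    {"zip": "90210", "age": 34, "name": "A. Smith"},
    {"zip": "90210", "age": 67, "name": "B. Jones"},
    {"zip": "90211", "age": 34, "name": "C. Lee"},
]

quasi_ids = ("zip", "age")   # shared attributes enabling a low-cost join

def key(row):
    return tuple(row[q] for q in quasi_ids)

# A record whose quasi-identifier combination matches exactly one row in
# the joinable dataset can be linked back to a named individual.
for rec in health:
    matches = [v for v in voters if key(v) == key(rec)]
    if len(matches) == 1:
        print(f"{rec['diagnosis']!r} re-identified as {matches[0]['name']}")
```

Counting such unique join keys across dataset pairs is one simple way to score the disclosure risk that a tool like PRIVEE surfaces interactively.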
  4. This paper explores how individuals' privacy-related decision-making processes may be influenced by their pre-existing relationships to companies in a wider social and economic context. Through an online role-playing exercise, we explore attitudes to a range of services including home automation, Internet-of-Things and financial services. We find that individuals do not only consider the privacy-related attributes of applications, devices or services in the abstract. Rather, their decisions are heavily influenced by their pre-existing perceptions of, and relationships with, the companies behind such apps, devices and services. In particular, perceptions about a company's size, level of regulatory scrutiny, relationships with third parties, and pre-existing data exposure lead some users to choose an option which might otherwise appear worse from a privacy perspective. This finding suggests a need for tools that support users to incorporate these existing perceptions and relationships into their privacy-related decision making. 
  5. Social media (SM) has become a principal information source, and the vast amounts of generated data are increasingly used to inform various disciplines. Most platforms, such as Instagram, Twitter or Flickr, offer the option to tag a post with a location or a precise coordinate, which also fosters applications of the data in geospatial fields. Notwithstanding the many ways in which these data can be analyzed and applied, scandals such as Cambridge Analytica have also shown the risks to user privacy that seem inherently part of the data. Is it possible to mitigate these risks while maintaining the collective usability of this data for societal questions? We identify urban planning as a key field for socio-spatial justice and propose an open-source, map-based, cross-platform dashboard, fueled by geospatial SM data, as a supporting tool for municipal decision-makers and citizens alike. As a core part of this tool, we implement a novel privacy-aware data structure that allows for both a more transparent, encompassing data basis for municipalities and a reduced data-collection footprint, preventing misuse of the data and compromise of user privacy.
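One simple way such a privacy-aware structure could work is coordinate generalization with a release threshold. The sketch below is our illustration under assumed parameters, not the dashboard's actual design:

```python
from collections import defaultdict

GRID = 0.01       # cell size in degrees (~1 km); resolution is an assumption
MIN_COUNT = 5     # k-anonymity-style release threshold (assumption)

cells = defaultdict(int)

def ingest(lat: float, lon: float) -> None:
    """Store only the coarse grid cell a post falls into, never the raw point."""
    cell = (round(lat / GRID) * GRID, round(lon / GRID) * GRID)
    cells[cell] += 1

def release():
    """Publish only cells whose counts are large enough to hide individuals."""
    return {c: n for c, n in cells.items() if n >= MIN_COUNT}

# Synthetic posts near Berlin to exercise the structure
for i in range(200):
    ingest(52.52 + (i % 10) * 0.001, 13.405)
print(release())
```

Because exact coordinates are discarded at ingestion time, the stored structure itself carries a smaller data-collection footprint than a raw post archive, which matches the trade-off the abstract describes.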