Abstract Let $G$ be a linear real reductive Lie group. Orbital integrals define traces on the group algebra of $G$. We introduce a construction of higher orbital integrals in the direction of higher cyclic cocycles on the Harish-Chandra Schwartz algebra of $G$. We analyze these higher orbital integrals via Fourier transform by expressing them as integrals on the tempered dual of $G$. We obtain explicit formulas for the pairing between the higher orbital integrals and the $K$-theory of the reduced group $C^{*}$-algebra, and we discuss their applications to $K$-theory.
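The trace referred to in the first sentence can be written out explicitly (in standard notation, not taken from this source): for $f$ in the Harish-Chandra Schwartz algebra and a semisimple element $g \in G$ with centralizer $Z_g$,

```latex
% Orbital integral of f at a semisimple element g with centralizer Z_g:
\tau_g(f) \;=\; \int_{G/Z_g} f\bigl(h g h^{-1}\bigr)\, d(hZ_g),
% and tau_g is a trace: tau_g(f_1 * f_2) = tau_g(f_2 * f_1).
```

The higher orbital integrals of the abstract generalize this degree-zero trace to higher-degree cyclic cocycles.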
Checking δ-Satisfiability of Reals with Integrals
Many synthesis and verification problems can be reduced to determining the truth of formulas over the real numbers. These formulas often involve constraints that contain integrals. To this end, we extend the framework of δ-decision procedures with techniques for handling integrals of user-specified real functions. We implement this decision procedure in the tool ∫dReal, which is built on top of dReal. We evaluate ∫dReal on a suite of problems that include formulas verifying the fairness of algorithms, formulas verifying the privacy and utility of privacy mechanisms, and formulas synthesizing parameters that achieve a desired utility for privacy mechanisms. The tool's performance in these experiments demonstrates the effectiveness of ∫dReal.
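∫dReal's concrete input language is not shown here; the sketch below illustrates only the underlying idea of δ-decision with integrals, replacing an integral constraint by a numerical approximation and accepting the formula when it holds up to the tolerance δ. The function, bound, and δ are invented for illustration.

```python
import math

def integral(f, a, b, n=10_000):
    """Composite trapezoidal approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return s * h

def delta_sat(lhs, rhs, delta):
    """delta-relaxed check of the constraint `lhs <= rhs`:
    report sat if lhs <= rhs + delta."""
    return lhs <= rhs + delta

# Example formula:  integral of e^{-x^2} over [0, 1]  <=  0.74,  with delta = 0.01.
lhs = integral(lambda x: math.exp(-x * x), 0.0, 1.0)
print(round(lhs, 4))               # → 0.7468
print(delta_sat(lhs, 0.74, 0.01))  # → True (sat only under the delta relaxation)
```

The exact constraint is false (0.7468 > 0.74), but the δ-relaxed one is satisfiable, which is precisely the kind of answer a δ-decision procedure returns.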
- Award ID(s):
- 1900924
- PAR ID:
- 10642031
- Publisher / Repository:
- Association for Computing Machinery (ACM)
- Date Published:
- Journal Name:
- Proceedings of the ACM on Programming Languages
- Volume:
- 9
- Issue:
- OOPSLA1
- ISSN:
- 2475-1421
- Format(s):
- Medium: X
- Size(s):
- p. 704-729
- Sponsoring Org:
- National Science Foundation
More Like this
Spam phone calls have rapidly grown from a nuisance into an increasingly effective scam delivery tool. To counter this increasingly successful attack vector, a number of commercial smartphone apps that promise to block spam phone calls have appeared on app stores, and are now used by hundreds of thousands or even millions of users. However, following a business model similar to that of some online social network services, these apps often collect call records or other potentially sensitive information from users' phones with little or no formal privacy guarantee. In this paper, we study whether it is possible to build a practical collaborative phone blacklisting system that uses local differential privacy (LDP) mechanisms to provide clear privacy guarantees. We analyze the challenges and trade-offs of using LDP, evaluate our LDP-based system on real-world user-reported call records collected by the FTC, and show that it is possible to learn a phone blacklist under a reasonable overall privacy budget while preserving users' privacy and maintaining utility for the learned blacklist.
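The paper's specific LDP mechanism is not described in this abstract. The sketch below shows plain randomized response, a standard ε-LDP primitive, applied to one-bit spam reports for a single phone number; the counts, ε, and threshold are all illustrative.

```python
import math
import random

def randomized_response(report: bool, epsilon: float) -> bool:
    """Flip a user's one-bit report ("did this number spam me?") with
    probability 1 / (1 + e^epsilon), which satisfies epsilon-LDP."""
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1)
    return report if random.random() < p_truth else not report

def estimate_spam_rate(noisy_reports, epsilon):
    """Debias the observed rate: E[observed] = (2p - 1) * true + (1 - p)."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1)
    observed = sum(noisy_reports) / len(noisy_reports)
    return (observed + p - 1) / (2 * p - 1)

random.seed(0)
eps = 1.0
true_reports = [True] * 700 + [False] * 300   # 70% of 1000 users flag the number
noisy = [randomized_response(r, eps) for r in true_reports]
est = estimate_spam_rate(noisy, eps)
# Blacklist the number if the debiased estimate exceeds a threshold.
print(round(est, 3), est > 0.5)
```

The server only ever sees the flipped bits, yet the aggregate estimate stays close to the true 0.7, which is the utility/privacy trade-off the paper evaluates at scale.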
-
Dynamic spectrum sharing between licensed incumbent users (IUs) and unlicensed wireless industries is well recognized as an efficient approach to addressing spectrum scarcity as well as creating spectrum markets. Recently, both U.S. and European governments issued rulings opening up spectrum that was initially licensed to sensitive military/federal systems. However, this raises serious national-security concerns about the operational privacy (e.g., location, time, and frequency of use) of IUs. Although several works have proposed obfuscation methods to address this problem, these techniques rely only on syntactic privacy models and lack rigorous privacy guarantees. In this paper, we propose a comprehensive framework to provide real-time differential location privacy for sensitive IUs. We design a utility-optimal differentially private mechanism to reduce the loss in spectrum efficiency while protecting IUs from harmful interference. Furthermore, we strategically combine differential privacy with another privacy notion, expected inference error, to provide double-shield protection for IUs' location privacy. Extensive simulations validate our design and demonstrate significant improvements in utility and location privacy compared with other existing mechanisms.
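The abstract does not spell out the paper's mechanism. One standard way to obtain differential location privacy is the planar Laplace mechanism of geo-indistinguishability, sketched below with the standard-library `random` module; the coordinates and ε are made up, and this is not claimed to be the paper's utility-optimal design.

```python
import math
import random

def planar_laplace(x: float, y: float, epsilon: float):
    """Perturb a 2-D location with planar Laplace noise, whose density is
    proportional to exp(-epsilon * distance). After integrating out the
    angle, the radius is Gamma-distributed with shape 2 and scale 1/eps."""
    theta = random.uniform(0.0, 2.0 * math.pi)
    r = random.gammavariate(2.0, 1.0 / epsilon)
    return x + r * math.cos(theta), y + r * math.sin(theta)

random.seed(42)
eps = 0.5  # per unit of distance; smaller eps means more noise and more privacy
nx, ny = planar_laplace(10.0, 20.0, eps)
print(round(math.hypot(nx - 10.0, ny - 20.0), 2))  # displacement; its mean is 2/eps
```

Because the noise depends only on the distance between locations, any two IU positions d apart produce output distributions within a factor e^(εd) of each other, which is the location analogue of ε-differential privacy.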
-
To quantify trade-offs between increasing demand for open data sharing and concerns about sensitive information disclosure, statistical data privacy (SDP) methodology analyzes data release mechanisms that sanitize outputs based on confidential data. Two dominant frameworks exist: statistical disclosure control (SDC) and the more recent differential privacy (DP). Despite differences in framing, SDC and DP share the same statistical problems at their core. For inference problems, we may either design optimal release mechanisms and associated estimators that satisfy bounds on disclosure-risk measures, or adjust existing sanitized output to create new statistically valid and optimal estimators. Regardless of design or adjustment, in evaluating risk and utility, valid statistical inferences from mechanism outputs require uncertainty quantification that accounts for the bias and/or variance introduced by the sanitization mechanism. In this review, we discuss the statistical foundations common to both SDC and DP, highlight major developments in SDP, and present exciting open research problems in private inference.
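As a concrete instance of the point about uncertainty quantification: if a sample mean is sanitized with Laplace noise, a valid standard error must add the mechanism's variance (2b² for Laplace scale b) to the usual sampling variance. A minimal sketch, with invented data, bounds, and ε:

```python
import math
import random

def private_mean(data, lo, hi, epsilon, rng):
    """Release the mean of values clamped to [lo, hi] under eps-DP.
    The sensitivity of the clamped mean is (hi - lo) / n."""
    n = len(data)
    clamped = [min(max(v, lo), hi) for v in data]
    b = (hi - lo) / (n * epsilon)  # Laplace scale
    # Laplace(0, b) as the difference of two exponentials with mean b:
    noise = rng.expovariate(1 / b) - rng.expovariate(1 / b)
    return sum(clamped) / n + noise, b

rng = random.Random(7)
data = [rng.gauss(50.0, 10.0) for _ in range(1000)]
est, b = private_mean(data, 0.0, 100.0, epsilon=1.0, rng=rng)

# Valid inference adds the mechanism's variance (2 b^2 for Laplace) to the
# usual sampling variance of the mean (sigma^2 / n, sigma assumed known):
sampling_var = (10.0 ** 2) / len(data)
mech_var = 2 * b * b
std_err = math.sqrt(sampling_var + mech_var)
print(round(std_err, 3))  # → 0.346
```

Ignoring `mech_var` here would understate the standard error and produce confidence intervals that are too narrow, which is exactly the invalid-inference failure mode the review warns about.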
-
Differential privacy (DP) data synthesizers are increasingly proposed to enable public release of sensitive information, offering theoretical guarantees for privacy (and, in some cases, utility), but limited empirical evidence of utility in practical settings. Utility is typically measured as the error on representative proxy tasks, such as descriptive statistics, multivariate correlations, the accuracy of trained classifiers, or performance over a query workload. The ability of these results to generalize to practitioners' experience has been questioned in a number of settings, including the U.S. Census. In this paper, we propose an evaluation methodology for synthetic data that avoids assumptions about the representativeness of proxy tasks, instead measuring the likelihood that published conclusions would change had the authors used synthetic data, a condition we call epistemic parity. Our methodology consists of reproducing empirical conclusions of peer-reviewed papers on real, publicly available data, then re-running these experiments on DP synthetic data and comparing the results. We instantiate our methodology over a benchmark of recent peer-reviewed papers that analyze public datasets in the ICPSR social science repository. We model quantitative claims computationally to automate the experimental workflow, and model qualitative claims by reproducing visualizations and comparing the results manually. We then generate DP synthetic datasets using multiple state-of-the-art mechanisms and estimate the likelihood that these conclusions will hold. We find that, for reasonable privacy regimes, state-of-the-art DP synthesizers achieve high epistemic parity for several papers in our benchmark. However, some papers, and particularly some specific findings, are difficult to reproduce for any of the synthesizers. Given these results, we advocate for a new class of mechanisms that reorder the priorities for DP data synthesis: favor stronger guarantees for utility (as measured by epistemic parity) and offer privacy protection focused on application-specific threat models and risk assessment.
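Epistemic parity, as defined above, can be operationalized claim by claim. The toy sketch below checks whether a hypothetical qualitative claim (two variables are positively correlated) survives when the analysis is re-run on a noisy stand-in for synthetic data; it does not implement any real DP synthesizer, and all data are simulated.

```python
import random
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def claim_holds(xs, ys):
    """A paper's hypothetical qualitative claim: x and y are positively
    correlated."""
    return pearson(xs, ys) > 0

rng = random.Random(3)
real_x = [rng.gauss(0, 1) for _ in range(500)]
real_y = [x + rng.gauss(0, 0.5) for x in real_x]

# Stand-in for a DP synthesizer's output: the real data plus heavy noise.
synth_x = [x + rng.gauss(0, 2) for x in real_x]
synth_y = [y + rng.gauss(0, 2) for y in real_y]

# Epistemic parity for this one claim: does the conclusion survive synthesis?
print(claim_holds(real_x, real_y) == claim_holds(synth_x, synth_y))
```

The paper's methodology does this at the level of whole published analyses, both quantitative (automated) and qualitative (visual comparison), rather than for a single correlation.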
