

Title: New Risks for Workers at Heights: Human-Drone Collaboration Risks in Construction
Award ID(s):
2024656
PAR ID:
10345742
Author(s) / Creator(s):
; ; ;
Date Published:
Journal Name:
2021 ASCE International Conference on Computing in Civil Engineering (i3CE)
Page Range / eLocation ID:
321 to 328
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Property inference attacks reveal statistical properties about a training set but are difficult to distinguish from the primary purpose of statistical machine learning, which is to produce models that capture statistical properties about a distribution. Motivated by Yeom et al.’s membership inference framework, we propose a formal and generic definition of property inference attacks. The proposed notion describes attacks that can distinguish between possible training distributions, extending beyond previous property inference attacks that infer the ratio of a particular type of data in the training data set. In this paper, we show how our definition captures previous property inference attacks as well as a new attack that reveals the average degree of nodes of a training graph, and report on experiments giving insight into the potential risks of property inference attacks. 
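The distinguishing experiment behind this definition can be illustrated with a toy game: a challenger secretly picks one of two candidate training distributions, releases a model trained on a sample from it, and the adversary guesses which distribution was used. In the minimal sketch below, the two distributions differ only in the fraction of "positive" records (standing in for the ratio-style properties the abstract mentions), the released "model" is reduced to a single leaked summary statistic, and the adversary thresholds at the midpoint; all of these simplifications are illustrative assumptions, not the paper's construction.

```python
import random

def train_model(dist_ratio, n=200, rng=random):
    """'Train' a toy model: return the fraction of positive records seen,
    standing in for any statistic leaked through a released model."""
    sample = [1 if rng.random() < dist_ratio else 0 for _ in range(n)]
    return sum(sample) / n

def adversary_guess(model_stat, r0=0.3, r1=0.7):
    """Guess which candidate distribution the training set came from
    by thresholding the leaked statistic at the midpoint of r0 and r1."""
    return 1 if model_stat > (r0 + r1) / 2 else 0

def attack_advantage(trials=1000, r0=0.3, r1=0.7, seed=0):
    """Play the distinguishing game many times and report the
    adversary's accuracy; 0.5 is chance, near 1.0 is full leakage."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        b = rng.randint(0, 1)                 # challenger's secret bit
        stat = train_model(r1 if b else r0, rng=rng)
        correct += adversary_guess(stat, r0, r1) == b
    return correct / trials

print(attack_advantage())
```

With training sets of 200 records and candidate ratios 0.3 versus 0.7, the leaked fraction concentrates tightly around the true ratio, so the adversary wins essentially every round; shrinking the gap between the ratios or the sample size pushes the advantage back toward chance.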
  2. Abstract: Many instances of scientific research impose risks, not just on participants and scientists but also on third parties. This class of social risks unifies a range of problems previously treated as distinct phenomena, including so‐called bystander risks, biosafety concerns arising from gain‐of‐function research, the misuse of the results of dual‐use research, and the harm caused by inductive risks. The standard approach to these problems has been to extend two familiar principles from human subjects research regulations: a favorable risk‐benefit ratio and informed consent. We argue, however, that these moral principles will be difficult to satisfy in the context of widely distributed social risks about which affected parties may reasonably disagree. We propose that framing these risks as political rather than moral problems may offer another way. By borrowing lessons from political philosophy, we propose a framework that unifies our discussion of social risks and the possible solutions to them. 
  3. All known life is homochiral. DNA and RNA are made from “right-handed” nucleotides, and proteins are made from “left-handed” amino acids. Driven by curiosity and plausible applications, some researchers have begun work toward creating lifeforms composed entirely of mirror-image biological molecules. Such mirror organisms would constitute a radical departure from known life, and their creation warrants careful consideration. The capability to create mirror life is likely at least a decade away and would require large investments and major technical advances; we thus have an opportunity to consider and preempt risks before they are realized. Here, we draw on an in-depth analysis of current technical barriers, how they might be eroded by technological progress, and what we deem to be unprecedented and largely overlooked risks. We call for broader discussion among the global research community, policy-makers, research funders, industry, civil society, and the public to chart an appropriate path forward. 
  4. Distribution inference, sometimes called property inference, infers statistical properties about a training set from access to a model trained on that data. Distribution inference attacks can pose serious risks when models are trained on private data, but are difficult to distinguish from the intrinsic purpose of statistical machine learning—namely, to produce models that capture statistical properties about a distribution. Motivated by Yeom et al.’s membership inference framework, we propose a formal definition of distribution inference attacks general enough to describe a broad class of attacks distinguishing between possible training distributions. We show how our definition captures previous ratio-based inference attacks as well as new kinds of attack including revealing the average node degree or clustering coefficient of training graphs. To understand distribution inference risks, we introduce a metric that quantifies observed leakage by relating it to the leakage that would occur if samples from the training distribution were provided directly to the adversary. We report on a series of experiments across a range of different distributions using both novel black-box attacks and improved versions of the state-of-the-art white-box attacks. Our results show that inexpensive attacks are often as effective as expensive meta-classifier attacks, and that there are surprising asymmetries in the effectiveness of attacks. 
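The leakage metric described in this abstract compares an observed attack to a baseline adversary who receives samples from the training distribution directly. One way to read such a metric: find the number of direct samples whose best-achievable distinguishing accuracy matches the attack's accuracy. The sketch below simulates that idea under illustrative assumptions (Bernoulli candidate distributions and a sample-mean threshold distinguisher, neither of which is the paper's construction).

```python
import random

def direct_accuracy(n, r0=0.4, r1=0.6, trials=2000, seed=1):
    """Accuracy of distinguishing ratio r0 from r1 given n direct
    samples, using a midpoint threshold on the sample mean."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        b = rng.randint(0, 1)                 # which distribution is real
        r = r1 if b else r0
        mean = sum(rng.random() < r for _ in range(n)) / n
        guess = 1 if mean > (r0 + r1) / 2 else 0
        correct += guess == b
    return correct / trials

def effective_samples(attack_acc, max_n=200):
    """Smallest number of direct samples whose distinguishing accuracy
    matches an observed attack accuracy: a toy 'leakage' score."""
    for n in range(1, max_n + 1):
        if direct_accuracy(n) >= attack_acc:
            return n
    return max_n

print(effective_samples(0.9))
```

An attack that distinguishes the two distributions with 90% accuracy leaks roughly as much as handing the adversary a few dozen raw samples in this toy setting; a larger equivalent-sample count indicates more severe leakage through the model.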