Title: Machine Learning Security as a Source of Unfairness in Human-Robot Interaction
Machine learning models that sense human speech, body placement, and other key features are commonplace in human-robot interaction. However, deploying such models is not itself without risk. Research in machine learning security examines how such models can be exploited and the risks associated with these exploits. Unfortunately, the threat models produced by machine learning security research do not incorporate the rich sociotechnical underpinnings of the defenses they propose; as a result, efforts to improve the security of machine learning models may actually widen the performance gap across demographic groups, yielding risk mitigations that work better for one group than another. In this work, we outline why current approaches to machine learning security present DEI concerns for the human-robot interaction community and where there are open areas for collaboration.
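The disparity the abstract describes can be made concrete with a per-group evaluation. The sketch below is purely illustrative and not from the paper: the groups, models, and numbers are hypothetical placeholders. It compares the largest between-group accuracy gap of a baseline model against a "hardened" model whose defense happens to degrade one group more than another.

```python
# Hypothetical sketch: checking whether a security defense (e.g., adversarial
# training) widens the performance gap between demographic groups.
# All labels, groups, and predictions below are made-up placeholders.

def accuracy_by_group(y_true, y_pred, groups):
    """Return per-group accuracy for a set of predictions."""
    stats = {}
    for yt, yp, g in zip(y_true, y_pred, groups):
        correct, total = stats.get(g, (0, 0))
        stats[g] = (correct + (yt == yp), total + 1)
    return {g: c / t for g, (c, t) in stats.items()}

def fairness_gap(acc_by_group):
    """Largest difference in accuracy between any two groups."""
    accs = list(acc_by_group.values())
    return max(accs) - min(accs)

# Toy predictions before and after a hardening step (illustrative only).
y_true   = [1, 0, 1, 1, 0, 1, 0, 1]
groups   = ["A", "A", "A", "A", "B", "B", "B", "B"]
baseline = [1, 0, 1, 1, 0, 1, 0, 0]   # undefended model
hardened = [1, 0, 1, 1, 1, 0, 0, 0]   # defended model: group B degrades

gap_baseline = fairness_gap(accuracy_by_group(y_true, baseline, groups))
gap_hardened = fairness_gap(accuracy_by_group(y_true, hardened, groups))
```

In this toy setup, the defense leaves group A untouched while hurting group B, so the gap grows even though the system is nominally "more secure" — the kind of disaggregated check the abstract argues security threat models currently omit.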
Award ID(s):
2145642 2024878
PAR ID:
10426143
Author(s) / Creator(s):
Date Published:
Journal Name:
Human-Robot Interaction (HRI) Workshop on Inclusive HRI II: Equity and Diversity in Design, Application, Methods, and Community (DEI HRI)
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Feasible and developmentally appropriate sociotechnical approaches for protecting youth from online risks have become a paramount concern among human-computer interaction research communities. Therefore, we conducted 38 interviews with entrepreneurs, IT professionals, clinicians, educators, and researchers who currently work in the space of youth online safety to understand the different sociotechnical approaches they proposed to keep youth safe online, while overcoming key challenges associated with these approaches. We identified three approaches taken among these stakeholders, which included 1) leveraging artificial intelligence (AI)/machine learning to detect risks, 2) building security/safety tools, and 3) developing new forms of parental control software. The trade-offs between privacy and protection, as well as other tensions among different stakeholders (e.g., tensions toward the big-tech companies), arose as major challenges, followed by the subjective nature of risk, lack of necessary but proprietary data, and the costs of developing these technical solutions. To overcome the challenges, solutions such as building centralized and multi-disciplinary collaborations, creating sustainable business plans, prioritizing human-centered approaches, and leveraging state-of-the-art AI were suggested. Our contribution to the body of literature is providing evidence-based implications for the design of sociotechnical solutions to keep youth safe online.
  2. Computational approaches to detect the online risks that youth encounter have shown promise for protecting them online. However, a major trend among these approaches is the lack of a human-centered machine learning (HCML) perspective. It is necessary to move beyond the computational lens of the detection task to address the societal needs of such a vulnerable population. Therefore, I direct my attention in this dissertation to better understanding youths’ risk experiences prior to enhancing the development of risk detection algorithms by 1) examining youths’ (ages 13–17) public disclosures about sexual experiences and contextualizing these experiences based on the levels of consent (i.e., consensual, non-consensual, sexual abuse) and relationship types (i.e., stranger, dating/friend, family), 2) moving beyond sexual experiences to examine a broader array of risks within the private conversations of youth (N = 173) between 13 and 21, contextualizing the dynamics of youths' online and offline risks and connecting self-reports of risk experiences to digital trace data, and 3) building real-time machine learning models for risk detection by creating a contextualized framework. This dissertation provides a human-centered approach for improving automated real-time risk predictions that are derived from a contextualized understanding of the nuances of youths’ risk experiences.
  3. Harguess, Joshua D; Bastian, Nathaniel D; Pace, Teresa L (Ed.)
    Outsourcing computational tasks to the cloud offers numerous advantages, such as availability, scalability, and elasticity. These advantages are evident when outsourcing resource-demanding Machine Learning (ML) applications. However, cloud computing presents security challenges. For instance, allocating Virtual Machines (VMs) with varying security levels onto commonly shared servers creates cybersecurity and privacy risks. Researchers proposed several cryptographic methods to protect privacy, such as Multi-party Computation (MPC). Attackers unfortunately can still gain unauthorized access to users’ data if they successfully compromise a specific number of the participating MPC nodes. Cloud Service Providers (CSPs) can mitigate the risk of such attacks by distributing the MPC protocol over VMs allocated to separate physical servers (i.e., hypervisors). On the other hand, underutilizing cloud servers increases operational and resource costs, and worsens the overhead of MPC protocols. In this ongoing work, we address the security, communication and computation overheads, and performance limitations of MPC. We model this multi-objective optimization problem using several approaches, including but not limited to, zero-sum and non-zero-sum games. For example, we investigate Nash Equilibrium (NE) allocation strategies that reduce potential security risks, while minimizing response time and performance overhead, and/or maximizing resource usage. 
  4. Understanding human perceptions of robot performance is crucial for designing socially intelligent robots that can adapt to human expectations. Current approaches often rely on surveys, which can disrupt ongoing human–robot interactions. As an alternative, we explore predicting people’s perceptions of robot performance using non-verbal behavioral cues and machine learning techniques. We contribute the SEAN TOGETHER Dataset, consisting of observations of an interaction between a person and a mobile robot in Virtual Reality, together with perceptions of robot performance provided by users on a 5-point scale. We then analyze how well humans and supervised learning techniques can predict perceived robot performance based on different observation types (like facial expression and spatial behavior features). Our results suggest that facial expressions alone provide useful information, but in the navigation scenarios that we considered, reasoning about spatial features in context is critical for the prediction task. Also, supervised learning techniques outperformed humans’ predictions in most cases. Further, when predicting robot performance as a binary classification task on unseen users’ data, the F1-Score of machine learning models more than doubled that of predictions on a 5-point scale. This suggests good generalization capabilities, particularly in identifying performance directionality over exact ratings. Based on these findings, we conducted a real-world demonstration where a mobile robot uses a machine learning model to predict how a human who follows it perceives it. Finally, we discuss the implications of our results for implementing these supervised learning models in real-world navigation. Our work paves the way to automatically enhancing robot behavior based on observations of users and inferences about their perceptions of a robot.
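The binary reframing described in this abstract — thresholding 5-point perception ratings into coarse "good"/"bad" labels before scoring — can be sketched as follows. This is an illustrative reconstruction, not the paper's actual pipeline; the threshold, ratings, and predictions are assumed placeholders.

```python
# Hypothetical sketch of reframing 5-point perception ratings as a binary
# classification task and scoring it with F1. All numbers are made up.

def to_binary(ratings, threshold=3):
    """Map 5-point ratings to 1 (above threshold) or 0 (at/below it)."""
    return [1 if r > threshold else 0 for r in ratings]

def f1_score(y_true, y_pred):
    """F1 for the positive class: harmonic mean of precision and recall."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

true_ratings = [5, 4, 2, 1, 4, 2]     # users' actual perception ratings
pred_ratings = [4, 5, 1, 2, 3, 4]     # a model's predicted ratings

f1 = f1_score(to_binary(true_ratings), to_binary(pred_ratings))
```

Note how predictions that miss the exact rating (4 vs. 5) still count as correct once both fall on the same side of the threshold, which is why the binary task captures performance directionality more robustly than exact 5-point prediction.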
  5. Accurate real-time risk identification is vital to protecting social media users from online harm, which has driven research toward advancements in machine learning (ML). While strides have been made regarding the computational facets of algorithms for “real-time” risk detection, such research has not yet evaluated these advancements through a human-centered lens. To this end, we conducted a systematic literature review of 53 peer-reviewed articles on real-time risk detection on social media. Real-time detection was mainly operationalized as “early” detection after-the-fact based on pre-defined chunks of data and evaluated with standard performance metrics, such as timeliness. We identified several human-centered opportunities for advancing current algorithms, such as integrating human insight into feature selection, improving algorithms in light of human behavior, and incorporating human evaluations. This work serves as a critical call-to-action for the HCI and ML communities to work together to protect social media users before, during, and after exposure to risks.