Title: CONFIDANT: A Privacy Controller for Social Robots
As social robots become increasingly prevalent in day-to-day environments, they will participate in conversations and will need to appropriately manage the information shared with them. However, little is known about how robots might appropriately discern the sensitivity of information, which has major implications for human-robot trust. As a first step toward addressing part of this issue, we designed a privacy controller, CONFIDANT, for conversational social robots, capable of using contextual metadata (e.g., sentiment, relationships, topic) from conversations to model privacy boundaries. We then conducted two crowdsourced user studies. The first study (n = 174) focused on whether a variety of human-human interaction scenarios were perceived as either private/sensitive or non-private/non-sensitive. The findings from our first study were used to generate association rules. Our second study (n = 95) evaluated the effectiveness and accuracy of the privacy controller in human-robot interaction scenarios by comparing a robot that used our privacy controller against a baseline robot with no privacy controls. Our results demonstrate that the robot with the privacy controller outperforms the robot without it in privacy-awareness, trustworthiness, and social-awareness. We conclude that integrating privacy controllers into authentic human-robot conversations can allow for more trustworthy robots. This initial privacy controller will serve as a foundation for more complex solutions.
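To make the idea concrete, a minimal, hypothetical Python sketch of a rule-based privacy check over conversational metadata is shown below. The attribute names, rule format, and confidence values are illustrative assumptions only; CONFIDANT's actual rules were mined from the crowdsourced study data and are not reproduced here.

```python
# Hypothetical sketch of an association-rule privacy check over conversational
# metadata (sentiment, relationship, topic). Attributes, rules, and confidence
# values are placeholders, not the paper's mined rules.

from dataclasses import dataclass

@dataclass
class Utterance:
    topic: str          # e.g. "health", "finances", "weekend plans"
    sentiment: str      # e.g. "negative", "neutral", "positive"
    relationship: str   # relationship of the speaker to the person being discussed

# Rules of the form: antecedent (metadata conditions) -> "private", with a confidence score.
RULES = [
    ({"topic": "health", "sentiment": "negative"}, 0.92),
    ({"topic": "finances"}, 0.88),
    ({"relationship": "family", "sentiment": "negative"}, 0.75),
]

def is_private(utt: Utterance, threshold: float = 0.8) -> bool:
    """Return True if any sufficiently confident rule matches the utterance's metadata."""
    meta = {"topic": utt.topic, "sentiment": utt.sentiment, "relationship": utt.relationship}
    return any(
        confidence >= threshold and all(meta.get(k) == v for k, v in antecedent.items())
        for antecedent, confidence in RULES
    )

print(is_private(Utterance("health", "negative", "friend")))        # True: a rule fires
print(is_private(Utterance("weekend plans", "positive", "friend"))) # False: no rule fires
```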
Award ID(s):
1906854
NSF-PAR ID:
10384276
Date Published:
Journal Name:
Proceedings of the 2022 ACM/IEEE International Conference on Human-Robot Interaction
Page Range / eLocation ID:
205–214
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Mobile robots must navigate efficiently, reliably, and appropriately around people when acting in shared social environments. For robots to be accepted in such environments, we explore robot navigation tailored to the social context of each setting. Navigating through dynamic environments while considering only a collision-free path is a long-solved problem. In human-robot environments, the challenge is no longer about efficiently navigating from one point to another. Autonomously detecting the context and adapting to an appropriate social navigation strategy is vital for social robots’ long-term applicability in dense human environments. As complex social environments, museums are suitable for studying such behavior because they have many different navigation contexts in a small space. Our prior Socially-Aware Navigation model considered context classification, object detection, and pre-defined rules to define navigation behavior in more specific contexts, such as a hallway or queue. This work uses environmental context, object information, and more realistic interaction rules for complex social spaces. In the first part of the project, we convert real-world interactions into algorithmic rules for use in a robot’s navigation system. Moreover, we use context recognition, object detection, and scene data for context-appropriate rule selection. We introduce our methodology for studying social behaviors in complex contexts, present several analyses of our museum text corpus, and present the extracted social norms. Finally, we demonstrate applying some of these rules in simulated scenarios.
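As a rough illustration of the context-appropriate rule selection this abstract describes, the sketch below maps a recognized scene context and detected objects to a navigation rule set. The context labels, objects, and rule values are hypothetical placeholders, not the social norms extracted in the work.

```python
# Hypothetical context-to-rule mapping for socially-aware navigation.
# Contexts, objects, and rule values are illustrative only.

CONTEXT_RULES = {
    "hallway": {"keep_right": True, "max_speed": 0.8},    # speeds in m/s (placeholder values)
    "queue":   {"join_at_end": True, "max_speed": 0.3},
    "exhibit": {"avoid_blocking_view": True, "max_speed": 0.4},
}

def select_rules(scene_context: str, detected_objects: list[str]) -> dict:
    """Pick a rule set for the recognized context, falling back to a cautious default."""
    rules = dict(CONTEXT_RULES.get(scene_context, {"max_speed": 0.3}))
    # Refine rules with object information, e.g. slow down further near strollers or wheelchairs.
    if {"stroller", "wheelchair"} & set(detected_objects):
        rules["max_speed"] = min(rules["max_speed"], 0.2)
    return rules

print(select_rules("queue", ["person", "stroller"]))  # {'join_at_end': True, 'max_speed': 0.2}
```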
  2. While security technology can be nearly impenetrable, the people behind the computer screens are often easily manipulated, which makes the human factor the biggest threat to cybersecurity. This study examined whether college students disclosed private information about themselves, and what type of information they shared. The study utilized pretexting, in which attackers impersonate individuals in certain roles, a technique that often involves extensive research to ensure credibility. The goal of pretexting is to create situations where individuals feel safe releasing information that they otherwise might not. The pretexts used for this study were based on the natural inclination to help, where people tend to want to help those in need, and reciprocity, where people tend to return favors given to them. Participants (N=51) answered survey questions that they thought were for a good cause or that would result in a reward. This survey asked for increasingly sensitive information that could be used maliciously to gain access to identification, passwords, or security questions. Upon completing the survey, participants were debriefed on the true nature of the study and were interviewed about why they were willing to share information via the survey. Some of the most commonly skipped questions included “Student ID number” and “What is your mother’s maiden name?”. General themes identified from the interviews included the importance of similarities between the researcher and the subject, the researcher’s adherence to the character role, the subject’s awareness of question sensitivity, and the overall differences between online and offline disclosure. Findings suggest that college students are more likely to disclose private information if the attacker shares a similar trait with the target or adheres to the character role they are impersonating. Additionally, this study sheds light on the research limitations, emphasizes the relevance of the human factor in security and privacy, and offers recommendations for future research.
  3. Purpose Existing algorithms for predicting suicide risk rely solely on data from electronic health records, but such models could be improved through the incorporation of publicly available socioeconomic data – such as financial, legal, life event and sociodemographic data. The purpose of this study is to understand the complex ethical and privacy implications of incorporating sociodemographic data within the health context. This paper presents results from a survey exploring what the general public’s knowledge and concerns are about such publicly available data and the appropriateness of using it in suicide risk prediction algorithms. Design/methodology/approach A survey was developed to measure public opinion about privacy concerns with using socioeconomic data across different contexts. This paper presented respondents with multiple vignettes that described scenarios situated in medical, private business and social media contexts, and asked participants to rate their level of concern over the context and what factor contributed most to their level of concern. Specific to suicide prediction, this paper presented respondents with various data attributes that could potentially be used in the context of a suicide risk algorithm and asked participants to rate how concerned they would be if each attribute was used for this purpose. Findings The authors found considerable concern across the various contexts represented in their vignettes, with greatest concern in vignettes that focused on the use of personal information within the medical context. Specific to the question of incorporating socioeconomic data within suicide risk prediction models, the results of this study show a clear concern from all participants in data attributes related to income, crime and court records, and assets. Data about one’s household were also particularly concerning for respondents, suggesting that even if one might be comfortable with one’s own data being used for risk modeling, data about other household members is more problematic. Originality/value Previous studies on the privacy concerns that arise when integrating data pertaining to various contexts of people’s lives into algorithmic and related computational models have approached these questions from individual contexts. This study differs in that it captured the variation in privacy concerns across multiple contexts. Also, this study specifically assessed the ethical concerns related to a suicide prediction model, determined people’s awareness of the publicness of select data attributes, and identified which of these data attributes generated the most concern in such a context. To the best of the authors’ knowledge, this is the first study to pursue this question.
  4. Social robots have been used to support mental health. In this work, we explored their potential as community-based tools. Visualizing mood data patterns of a community with a social robot might help the community raise awareness of the emotions people feel and the life events that affect them. This could potentially lead to the adoption of suitable coping skills, enhancing the sense of belonging and support among community members. We present preliminary findings and ongoing plans for this human-robot interaction (HRI) research work on data visualizations supporting community mental health. In a two-day study, twelve participants recruited from a university community engaged with a robot displaying mood data. Given the feedback from the study, we improved the data visualization in the robot to increase the accessibility, universality, and usefulness of such visualizations. In the future, we plan on conducting studies with this improved version and deploying a social robot in a community setting.
  5. This paper proposes and evaluates the use of image classification for detailed, full-body human-robot tactile interaction. A camera positioned below a translucent robot skin captures shadows generated from human touch and infers social gestures from the captured images. This approach enables rich tactile interaction with robots without the need for the sensor arrays used in traditional social robot tactile skins. It also supports touch interaction with non-rigid robots, achieves high-resolution sensing for robots with surfaces of different sizes and shapes, and removes the requirement of direct contact with the robot. We demonstrate the idea with an inflatable robot and a stand-alone testing device, an algorithm for recognizing touch gestures from shadows that uses Densely Connected Convolutional Networks, and an algorithm for tracking positions of touch and hovering shadows. Our experiments show that the system can distinguish between six touch gestures under three lighting conditions with 87.5% to 96.0% accuracy, depending on the lighting, and can accurately track touch positions as well as infer motion activities in realistic interaction conditions. Additional applications for this method include interactive screens on inflatable robots and privacy-maintaining robots for the home.
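As a rough PyTorch illustration only, the sketch below shows one way a Densely Connected Convolutional Network could be set up to classify touch gestures from shadow images; the six gesture labels, input resolution, and training-from-scratch choice are assumptions, not the authors' configuration.

```python
# Hypothetical DenseNet-based shadow-gesture classifier (not the authors' code).

import torch
import torch.nn as nn
from torchvision import models  # torchvision >= 0.13 for the `weights` argument

GESTURES = ["pat", "poke", "stroke", "slap", "hold", "hover"]  # assumed label set

def build_model(num_classes: int = len(GESTURES)) -> nn.Module:
    """DenseNet-121 backbone with its classifier head resized to the gesture classes."""
    model = models.densenet121(weights=None)  # train from scratch on shadow images
    model.classifier = nn.Linear(model.classifier.in_features, num_classes)
    return model

model = build_model()
dummy_shadows = torch.randn(4, 3, 224, 224)   # four placeholder shadow images
logits = model(dummy_shadows)                 # shape: (4, 6)
predicted = logits.argmax(dim=1)              # predicted gesture index per image
```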