Title: Evaluation of Human Perceived Safety during HRC Task using Multiple Data Collection Methods
While human safety is always a concern in an environment with human-robot collaboration, this concern is magnified when the human and robot workspaces overlap. The overlap creates the potential for collisions, which reduces the safety of the system, and fear of such a collision can in turn reduce its productivity. This apprehensiveness is referred to as the human's perceived safety of the robot. We therefore designed a within-subject human-robot collaboration experiment in which a human and a robot work together on an assembly task. To evaluate perceived safety during this HRC task, we collected subjective data by questionnaire through two methods: during trial and after trial. The collected data were analyzed with non-parametric methods, using the Friedman and Wilcoxon statistical tests. The clearest relationship was found when changing only the sensitivity of the robot or all three behaviors of velocity, trajectory, and sensitivity. There is a positive, moderate linear relationship between the average safety reported in the during-trial data and the after-trial data.
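As an illustration of this kind of analysis, the sketch below (not the authors' code) applies the Friedman and Wilcoxon tests to hypothetical per-condition Likert ratings with SciPy and then correlates the average during-trial and after-trial safety scores; the participant counts, condition layout, and data are placeholders.

```python
import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon, pearsonr

# Hypothetical 7-point Likert safety ratings: 20 participants x 3 robot-behavior conditions.
rng = np.random.default_rng(0)
during_trial = rng.integers(1, 8, size=(20, 3)).astype(float)
after_trial = np.clip(during_trial + rng.integers(-1, 2, size=(20, 3)), 1, 7).astype(float)

# Friedman test: omnibus check for any difference across conditions (during-trial data).
stat, p = friedmanchisquare(*during_trial.T)

# Wilcoxon signed-rank test: pairwise follow-up between two conditions.
w, p_w = wilcoxon(during_trial[:, 0], during_trial[:, 1])

# Correlation between the average during-trial and after-trial safety ratings.
r, p_r = pearsonr(during_trial.mean(axis=1), after_trial.mean(axis=1))

print(f"Friedman: chi2={stat:.2f}, p={p:.3f}")
print(f"Wilcoxon (condition 1 vs 2): W={w:.1f}, p={p_w:.3f}")
print(f"during vs after (mean safety): r={r:.2f}, p={p_r:.3f}")
```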
Award ID(s):
2125362
PAR ID:
10343340
Author(s) / Creator(s):
Date Published:
Journal Name:
2022 17th Annual System of Systems Engineering Conference (SOSE)
Page Range / eLocation ID:
465 to 470
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Product disassembly is essential for remanufacturing operations and recovery of end-of-use devices. However, disassembly has often been performed manually, with significant safety issues for human workers. Recently, human-robot collaboration has become popular as a way to reduce the human workload and handle hazardous materials. However, due to the current limitations of robots, they are not fully capable of performing every disassembly task, so it is critical to determine whether a robot can accomplish a specific disassembly task. This study develops a disassembly score that represents how easy it is for robots to disassemble a component, considering the attributes of the component along with the robotic capability. Five factors, including component weight, shape, size, accessibility, and positioning, are considered when developing the disassembly score. Further, the relationship between the five factors and robotic capabilities, such as grabbing and placing, is discussed. The MaxViT (Multi-Axis Vision Transformer) model is used to determine component sizes through image processing of the XPS 8700 desktop, demonstrating the potential for automating disassembly score generation. Moreover, the proposed disassembly score is discussed in terms of determining the appropriate work setting for disassembly operations, under three main categories: human-robot collaboration (HRC), semi-HRC, and worker-only settings. A framework for calculating disassembly time, considering human-robot collaboration, is also proposed.
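    A minimal sketch of how such a score might be combined is shown below; the weights, the 0-1 factor scales, and the work-setting thresholds are illustrative assumptions, not the paper's actual formulation.

```python
# Hedged sketch: a weighted-sum disassembly score over the five factors named
# above. Weights, factor scales, and thresholds are hypothetical.
HYPOTHETICAL_WEIGHTS = {
    "weight": 0.25, "shape": 0.20, "size": 0.20,
    "accessibility": 0.20, "positioning": 0.15,
}

def disassembly_score(factors: dict) -> float:
    """Combine per-factor scores (0 = hard for the robot, 1 = easy) into one score."""
    return sum(HYPOTHETICAL_WEIGHTS[k] * factors[k] for k in HYPOTHETICAL_WEIGHTS)

def suggested_setting(score: float) -> str:
    """Map the score to a work setting; these thresholds are illustrative only."""
    if score >= 0.7:
        return "HRC (robot can handle most of the task)"
    if score >= 0.4:
        return "semi-HRC (robot assists, worker leads)"
    return "worker-only"

# Example: a hypothetical desktop side panel that is light and easy to access.
panel = {"weight": 0.9, "shape": 0.8, "size": 0.7, "accessibility": 0.9, "positioning": 0.6}
s = disassembly_score(panel)
print(f"score={s:.2f} -> {suggested_setting(s)}")
```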
  2. Objective

    This study aims to improve workers’ postures and thus reduce the risk of musculoskeletal disorders in human-robot collaboration by developing a novel model-free reinforcement learning method.

    Background

    Human-robot collaboration has been a flourishing work configuration in recent years. Yet, it could lead to work-related musculoskeletal disorders if the collaborative tasks result in awkward postures for workers.

    Methods

    The proposed approach follows two steps: first, a 3D human skeleton reconstruction method was adopted to calculate workers’ continuous awkward posture (CAP) score; second, an online gradient-based reinforcement learning algorithm was designed to dynamically improve workers’ CAP score by adjusting the positions and orientations of the robot end effector.
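    A minimal sketch of an online, gradient-based pose adjustment of this kind is shown below; the cap_score function, the finite-difference gradient estimate, and the step sizes are illustrative assumptions rather than the authors' algorithm.

```python
# Hedged sketch: nudge the robot end-effector pose to improve a worker's
# continuous awkward posture (CAP) score. cap_score() is a stand-in for the
# score a real system would compute from 3D skeleton reconstruction.
import numpy as np

def cap_score(pose: np.ndarray) -> float:
    """Hypothetical CAP score (higher = better posture) for an end-effector
    pose [x, y, z, roll, pitch, yaw]; the target pose here is illustrative."""
    target = np.array([0.4, 0.0, 1.1, 0.0, 0.0, 0.0])
    return -float(np.sum((pose - target) ** 2))

def online_gradient_step(pose: np.ndarray, step: float = 0.05, eps: float = 0.01) -> np.ndarray:
    """Estimate the CAP-score gradient by central finite differences and take
    one ascent step; a policy-gradient update could be substituted here."""
    grad = np.zeros_like(pose)
    for i in range(pose.size):
        d = np.zeros_like(pose)
        d[i] = eps
        grad[i] = (cap_score(pose + d) - cap_score(pose - d)) / (2 * eps)
    return pose + step * grad

pose = np.array([0.6, 0.2, 0.9, 0.1, 0.0, 0.2])
for _ in range(50):
    pose = online_gradient_step(pose)
print("adjusted pose:", np.round(pose, 3), "CAP:", round(cap_score(pose), 4))
```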

    Results

    In an empirical experiment, the proposed approach significantly improved the participants' CAP scores during a human-robot collaboration task compared with scenarios where the robot and participants worked together at a fixed position or at the individual's elbow height. The questionnaire outcomes also showed that the working posture resulting from the proposed approach was preferred by the participants.

    Conclusion

    The proposed model-free reinforcement learning method can learn the optimal worker postures without the need for specific biomechanical models. The data-driven nature of this method can make it adaptive to provide personalized optimal work posture.

    Application

    The proposed method can be applied to improve occupational safety in factories where robots are deployed. Specifically, the personalized robot working positions and orientations can proactively reduce exposure to awkward postures that increase the risk of musculoskeletal disorders. The algorithm can also reactively protect workers by reducing the workload on specific joints.

     
  3. Human-robot teaming is becoming increasingly common within manufacturing processes. A key aspect practitioners need to decide on when developing effective processes is the level of task interdependence between human and robot team members. Task interdependence refers to the extent to which one's behavior affects the performance of others in a team. In this work, we examine the effects of three levels of task interdependence (pooled, sequential, and reciprocal) in human-robot teaming on human workers' mental states, task performance, and perceptions of the robot. Participants worked with the robot on an assembly task while their heart rate variability was recorded. Results suggested that human workers in the reciprocal interdependence level experienced less stress and perceived the robot more as a collaborator than in the other two levels. Task interdependence did not affect perceived safety. Our findings highlight the importance of considering task structure in human-robot teaming and inform future research on, and industry practices for, human-robot task allocation.
  4. Introduction

    As mobile robots proliferate in communities, designers must consider the impacts these systems have on the users, onlookers, and places they encounter. It becomes increasingly necessary to study situations where humans and robots coexist in common spaces, even if they are not directly interacting. This dataset presents a multidisciplinary approach to studying human-robot encounters in an indoor apartment-like setting between participants and two mobile robots. Participants take questionnaires, wear sensors for physiological measures, and take part in a focus group after experiments finish. The dataset contains raw time series data from sensors and robots, and qualitative results from focus groups. The data can be used to analyze measures of human physiological response to varied encounter conditions, and to gain insights into human preferences and comfort during community encounters with mobile robots.

    Dataset Contents

    - A dictionary of terms found in the dataset, "Data-Dictionary.pdf".
    - Synchronized XDF files from every trial with raw data from electrodermal activity (EDA), electrocardiography (ECG), photoplethysmography (PPG), and seismocardiography (SCG). These synchronized files also contain robot pose data and microphone data.
    - Results from analysis of two important features found from heart rate variability (HRV) and EDA. Specifically, HRV_CMSEn and nsEDRfreq are computed for each participant over each trial. These results also include Robot Confidence, a classification score representing the confidence that the 80 physiological features considered originate from a subject in a robot encounter; the higher the score, the higher the confidence.
    - A vectormap of the environment used during testing ("AHG_vectormap.txt") and a csv with locations of participant seating within the map ("Participant-Seating-Coordinates.csv"). Each line of the vectormap represents two endpoints of a line: x1,y1,x2,y2. The coordinates of participant seating are x,y positions and rotation about the vertical axis in radians.
    - Anonymized videos captured using two static cameras placed in the environment, located in the living room and small room, respectively.
    - Animations visualized from XDF files that show participant location, robot behaviors, and additional characteristics like participant-robot line-of-sight and relative audio volume.
    - Quotes associated with themes taken from focus group data. These quotes demonstrate and justify the results of the thematic analysis. Raw text from focus groups is not included for privacy reasons.
    - Quantitative results from focus groups associated with factors influencing perceived safety. These results demonstrate the findings from deductive content analysis. The deductive codebook is also included.
    - Results from pre-experiment and between-trial questionnaires.
    - Copies of both questionnaires and the semi-structured focus group protocol.

    Human Subjects

    This dataset contains de-identified information for 24 total subjects over 13 experiment sessions. The population for the study is the students, faculty, and staff at the University of Texas at Austin. Of the 24 participants, 18 are students and 6 are staff at the university. Ages range from 19 to 48, and 10 males and 14 females participated. Published data has been de-identified in coordination with the university Institutional Review Board. All participants signed informed consent to participate in the study and for the distribution of this data.

    Access Restrictions

    Transcripts from focus groups are not published due to privacy concerns. Videos including participants are de-identified with overlays on the videos. All other data is labeled only by participant ID, which is not associated with any identifying characteristics.

    Experiment Design

    Robots: This study considers indoor encounters with two quadruped mobile robots, namely the Boston Dynamics Spot and the Unitree Go1. These mobile robots are capable of everyday movement tasks like inspection, search, or mapping, which may be common tasks for autonomous agents in university communities. The study focuses on the perceived safety of bystanders during encounters with these relevant platforms.

    Control Conditions and Experiment Session Layout: We control three variables in this study:
    - Participant seating: social (together in the living room) v. isolated (one in living room, other in small room)
    - Robots: together v. separate
    - Robot behavior: navigation v. search

    A visual representation of the three control variables is shown on the left in (a)-(d), including the robot behaviors and participant seating locations, shown as X's. Blue represents social seating and yellow represents isolated seating. (a) shows the single robot navigation path, (b) the two robot navigation paths, (c) the single robot search path, and (d) the two robot search paths. The order of behaviors and seating locations is randomized and then inserted into the experiment session as overviewed in (e). These experiments are designed to gain insights into human responses to encounters with robots. The first step is receiving consent from the participants, followed by a pre-experiment questionnaire that documents demographics, baseline stress information, and Big 5 personality traits. The nature video is repeated before and after the experimental session to establish a relaxed baseline physiological state. Experiments take place over 8 individual trials, which are defined by a subject seat arrangement, search or navigation behavior, and robots together or separate. After each of the 8 trials, participants take the between-trial questionnaire, a 7-point Likert scale questionnaire designed to assess perceived safety during the preceding trial. After experiments and sensor removal, participants take part in a focus group.

    Synchronized Data Acquisition: Data is synchronized from physiological sensors, environment microphones, and the robots using the architecture shown. The raw xdf files are named using the following convention:
    - Trials where participants sit together in the living room: [Session number]-[trial number]-social-[robots together or separate]-[search or navigation behavior].xdf
    - Trials where participants are isolated: [Session number]-[trial number]-isolated-[subject ID living room]-[subject ID small room]-[robots together or separate]-[search or navigation behavior].xdf

    Qualitative Data: Qualitative data is obtained from focus groups with participants after experiments. Typically two participants take part; however, two sessions included only one participant. The semi-structured focus group protocol can be found in the dataset. Two different research methods are applied to the focus group transcripts (the full transcripts are not provided for privacy reasons). First, we performed a qualitative content analysis using deductive codes found from an existing model of perceived safety during HRI (Akalin et al. 2023). The quantitative results from this analysis are reported as frequencies of references to the various factors of perceived safety. The codebook describing these factors is included in the dataset. Second, an inductive thematic analysis was performed on the data to identify emergent themes. The resulting themes and associated quotes taken from focus groups are also included.

    Data Organization

    Data is organized in separate folders, namely: animation-videos, anonymized-session-videos, focus-group-results, questionnaire-responses, research-materials, signal-analysis-results, and synchronized-xdf-data.

    Data Quality Statement

    In limited trials, participant EDA or ECG signals or robot pose information may be missing due to connectivity issues during data acquisition. Additionally, the questionnaires for Participant ID0 and ID1 are incomplete due to an error in the implementation of the Qualtrics survey instrument used.
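    A minimal parsing sketch for the text artifacts described above is given below. The vectormap line format follows the description in the dataset contents; the literal field values in the XDF file names and the example file name are assumptions, so the Data-Dictionary should be checked for exact spellings.

```python
import re
from pathlib import Path

def load_vectormap(path: str):
    """Read AHG_vectormap.txt: each non-empty line is 'x1,y1,x2,y2', one wall segment."""
    segments = []
    for line in Path(path).read_text().splitlines():
        if line.strip():
            x1, y1, x2, y2 = map(float, line.split(","))
            segments.append(((x1, y1), (x2, y2)))
    return segments

# Trial metadata encoded in the synchronized XDF file names (social-seating trials).
# The values "together"/"separate" and "search"/"navigation" are assumed spellings.
SOCIAL_PATTERN = re.compile(
    r"(?P<session>\d+)-(?P<trial>\d+)-social-(?P<robots>together|separate)"
    r"-(?P<behavior>search|navigation)\.xdf"
)

def parse_social_trial(filename: str):
    """Return session/trial/robots/behavior fields for a social-seating trial, or None."""
    m = SOCIAL_PATTERN.fullmatch(filename)
    return m.groupdict() if m else None

# Illustrative file name, not taken from the dataset.
print(parse_social_trial("03-2-social-together-navigation.xdf"))
```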
  5. With the prevalence of mental health problems today, designing human-robot interaction for mental health intervention is not only possible, but critical. The current experiment examined how three types of robot disclosure (emotional, technical, and by-proxy) affect robot perception and human disclosure behavior during a stress-sharing activity. Emotional robot disclosure resulted in the lowest perceived safety of the robot. Post-hoc analysis revealed that increased perceived stress predicted reduced human disclosure, user satisfaction, robot likability, and future robot use. Negative attitudes toward robots also predicted reduced intention for future robot use. This work informs the design of robot disclosure and shows how individual attributes, such as perceived stress, can impact human-robot interaction in a mental health context.