Title: ROSbag-based Multimodal Affective Dataset for Emotional and Cognitive States
This paper introduces a new ROSbag-based multimodal affective dataset for emotional and cognitive states, generated using the Robot Operating System (ROS). We utilized images and sounds from the International Affective Pictures System (IAPS) and the International Affective Digitized Sounds (IADS) to stimulate targeted emotions (happiness, sadness, anger, fear, surprise, disgust, and neutral), and a dual N-back game to stimulate different levels of cognitive workload. Thirty human subjects participated in the user study; their physiological data were collected using the latest commercial wearable sensors, behavioral data were collected using hardware devices such as cameras, and subjective assessments were carried out through questionnaires. All data were stored in single ROSbag files rather than in conventional Comma-Separated Values (CSV) files. This not only ensures synchronization of the signals and videos in the dataset, but also allows researchers to easily analyze and verify their algorithms by connecting directly to the dataset through ROS. The generated affective dataset consists of 1,602 ROSbag files, and its total size is about 787 GB. The dataset is made publicly available. We expect that our dataset can be a great resource for many researchers in the fields of affective computing, Human-Computer Interaction (HCI), and Human-Robot Interaction (HRI).
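Because the recordings are packaged as ROSbag files, they can be replayed or read programmatically with standard ROS tooling. The following is a minimal sketch, assuming a ROS 1 installation with the rosbag Python package; the bag file name and topic names are placeholders, since the dataset's actual topic layout is not listed on this page.

    # Minimal sketch: iterating over time-stamped messages in one bag file.
    # Assumes a ROS 1 environment with the `rosbag` Python package.
    # The file name and topic names are hypothetical placeholders.
    import rosbag

    BAG_PATH = "subject01_session01.bag"           # placeholder file name
    TOPICS = ["/physiological/heart_rate",         # placeholder topic names
              "/behavioral/camera/image_raw"]

    bag = rosbag.Bag(BAG_PATH)
    # Every message carries the timestamp at which it was recorded, so the
    # physiological signals and videos stay aligned on a common clock.
    for topic, msg, t in bag.read_messages(topics=TOPICS):
        print("%.3f  %s  %s" % (t.to_sec(), topic, type(msg).__name__))
    bag.close()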
Award ID(s):
1846221
PAR ID:
10212011
Author(s) / Creator(s):
Date Published:
Journal Name:
2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC)
Page Range / eLocation ID:
226 to 233
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Accurately measuring and understanding affective loads, such as cognitive and emotional loads, is crucial in human–robot interaction (HRI) research. Although established assessment tools exist for gauging working memory capability in psychology and cognitive neuroscience, few tools are available to specifically measure affective loads. To address this gap, we propose a practical stimulus tool for teleoperated human–robot teams. The tool consists of a customizable graphical user interface and subjective questionnaires for measuring affective loads. Through extensive user experiments, we validated that this tool can elicit different levels of affective load.
  2. We present the Human And Robot Multimodal Observations of Natural Interactive Collaboration (HARMONIC) dataset. This is a large multimodal dataset of human interactions with a robotic arm in a shared-autonomy setting designed to imitate assistive eating. The dataset provides human, robot, and environmental data views of 24 different people engaged in an assistive eating task with a 6-degree-of-freedom (6-DOF) robot arm. From each participant, we recorded video of both eyes, egocentric video from a head-mounted camera, joystick commands, electromyography from the forearm used to operate the joystick, third-person stereo video, and the joint positions of the 6-DOF robot arm. Also included are several features derived directly from these recordings, such as eye gaze projected onto the egocentric video, body pose, hand pose, and facial keypoints. These data streams were collected specifically because they have been shown to be closely related to human mental states and intention. This dataset could be of interest to researchers studying intention prediction, human mental state modeling, and shared autonomy. Data streams are provided in a variety of formats, such as video and human-readable CSV and YAML files.
  3. Lovable robots in movies regularly beep, chirp, and whirr, yet robots in the real world rarely deploy such sounds. Despite preliminary work supporting the perceptual and objective benefits of intentionally produced robot sound, relatively little research is ongoing in this area. In this paper, we systematically evaluate transformative robot sound across multiple robot archetypes and behaviors. We conducted a series of five online video-based surveys, each with N ≈ 100 participants, to better understand the effects of musician-designed transformative sounds on perceptions of personal, service, and industrial robots. Participants rated robots shown in videos with transformative sound as significantly happier, warmer, and more competent in all five studies, as more energetic in four studies, and as less discomforting in one study. Overall, the results confirmed that transformative sounds consistently improve subjective ratings but may convey affect contrary to the intent of affective robot behaviors. In future work, we will investigate the repeatability of these results through in-person studies and develop methods to automatically generate transformative robot sound. This work may benefit researchers and designers who aim to make robots more favorable to human users.
  4. In recent years, researchers have developed technology to analyze human facial expressions and other affective data at very high time resolution. This technology is enabling researchers to develop and study interactive robots that are increasingly sensitive to their human interaction partners' affective states. However, typical interaction planning models and algorithms operate on timescales that are frequently orders of magnitude larger than the timescales at which real-time affect data is sensed. To bridge this gap between the scales of sensor data collection and interaction modeling, affective data must be aggregated and interpreted over longer timescales. In this paper we clarify and formalize the computational task of affect interpretation in the context of an interactive educational game played by a human and a robot, during which facial expression data is sensed, interpreted, and used to predict the interaction partner's gameplay behavior. We compare different techniques for affect interpretation, used to generate sets of affective labels for an interactive modeling and inference task, and evaluate how the labels generated by each interpretation technique impact model training and inference. We show that incorporating a simple method of personalization into the affect interpretation process, namely dynamically calculating and applying a personalized threshold for determining affect feature labels over time, leads to a significant improvement in the quality of inference, comparable to the performance gains from other data pre-processing steps such as smoothing the data with a median filter; a minimal sketch of this personalized-thresholding idea appears after this list. We discuss the implications of these findings for future development of affect-aware interactive robots and propose guidelines for the use of affect interpretation methods in interactive scenarios.
  5. For mobile robots, mobile manipulators, and autonomous vehicles to safely navigate populous places such as streets and warehouses, human observers must be able to understand their navigation intent. One way to enable such understanding is by visualizing this intent through projections onto the surrounding environment. But despite the demonstrated effectiveness of such projections, no open codebase with an integrated hardware setup exists. In this work, we detail the empirical evidence for the effectiveness of these directional projections and share a robot-agnostic implementation, written in C++ using the widely used Robot Operating System (ROS) and rviz; a minimal publishing sketch in the same spirit appears at the end of this list. Additionally, we demonstrate a hardware configuration for deploying this software, using a Fetch robot, and briefly summarize a full-scale user study that motivates this configuration. The code, configuration files (roslaunch and rviz files), and documentation are freely available on GitHub at https://github.com/umhan35/arrow_projection.
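The personalization step described in item 4 above can be illustrated with a short sketch: a per-participant threshold is recomputed over a sliding window of recent affect-feature values, and each sample is labeled relative to that personal baseline. The window length, the running-mean baseline, and the binary labels are illustrative assumptions, not the authors' actual parameters.

    # Minimal sketch of per-participant adaptive thresholding (item 4 above).
    # Window size, the running-mean baseline, and the label names are
    # illustrative assumptions rather than the paper's exact method.
    from collections import deque

    def personalized_labels(feature_stream, window=30):
        history = deque(maxlen=window)      # recent samples for one participant
        labels = []
        for value in feature_stream:
            history.append(value)
            baseline = sum(history) / len(history)   # personal running baseline
            labels.append("high" if value > baseline else "low")
        return labels

    # The same raw value can receive different labels for different participants,
    # because each participant's baseline is computed from their own history.
    print(personalized_labels([0.2, 0.25, 0.9, 0.3, 0.95]))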
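Item 5's projections are published through ROS and rendered in rviz; the linked repository is implemented in C++, but the underlying idea can be sketched with a rospy publisher of a visualization_msgs/Marker arrow. The topic name, frame id, and arrow geometry below are assumptions for illustration and are not taken from the linked codebase.

    # Minimal sketch (not the repository's C++ implementation): publishing a
    # directional arrow as a visualization_msgs/Marker so rviz can render it.
    # Topic name, frame id, and geometry are illustrative assumptions; a running
    # roscore is assumed.
    import rospy
    from geometry_msgs.msg import Point
    from visualization_msgs.msg import Marker

    rospy.init_node("arrow_projection_sketch")
    pub = rospy.Publisher("/navigation_intent/arrow", Marker, queue_size=1)

    arrow = Marker()
    arrow.header.frame_id = "base_link"     # assumed robot base frame
    arrow.type = Marker.ARROW
    arrow.action = Marker.ADD
    arrow.pose.orientation.w = 1.0          # identity orientation
    arrow.scale.x, arrow.scale.y, arrow.scale.z = 0.05, 0.1, 0.1  # shaft/head sizes
    arrow.color.r, arrow.color.a = 1.0, 1.0                       # opaque red
    arrow.points = [Point(0.0, 0.0, 0.0), Point(1.0, 0.0, 0.0)]   # origin to intended direction

    rate = rospy.Rate(10)                   # republish at 10 Hz
    while not rospy.is_shutdown():
        arrow.header.stamp = rospy.Time.now()
        pub.publish(arrow)
        rate.sleep()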