

Title: Human mobile robot interaction in the retail environment
Abstract

As technology advances, Human-Robot Interaction (HRI) is increasingly used to boost overall system efficiency and productivity. However, allowing robots to operate in close proximity to humans inevitably places higher demands on precise human motion tracking and prediction. Datasets that capture both humans and robots operating in a shared space are receiving growing attention, as they can facilitate a variety of robotics and human-systems research. Yet datasets that track HRI during daily activities with rich information beyond video images remain rare. In this paper, we introduce a novel dataset that focuses on social navigation between humans and robots in a future-oriented Wholesale and Retail Trade (WRT) environment (https://uf-retail-cobot-dataset.github.io/). Eight participants performed tasks commonly undertaken by consumers and retail workers. More than 260 minutes of data were collected, including robot and human trajectories, human full-body motion capture, eye gaze directions, and other contextual information. Comprehensive descriptions of each category of data stream, as well as potential use cases, are included. Furthermore, analyses combining multiple data sources and future directions are discussed.
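
The abstract does not specify the dataset's on-disk format, so the following is only a minimal sketch of how the described data streams (here, a robot trajectory and human motion capture) might be time-aligned for joint analysis; the file names and column names are hypothetical placeholders, not the dataset's actual schema.

```python
# Minimal sketch (hypothetical file and column names): time-align a robot
# trajectory stream with a human motion-capture stream by nearest timestamp.
import pandas as pd

robot = pd.read_csv("robot_trajectory.csv")   # assumed columns: t, x, y, yaw
mocap = pd.read_csv("human_mocap.csv")        # assumed columns: t, pelvis_x, pelvis_y, ...

robot = robot.sort_values("t")
mocap = mocap.sort_values("t")

# Nearest-timestamp join with a 20 ms tolerance, pairing each motion-capture
# frame with the closest robot pose.
aligned = pd.merge_asof(mocap, robot, on="t", direction="nearest", tolerance=0.02)

# Example downstream use: human-robot separation distance over time.
aligned["dist"] = ((aligned["x"] - aligned["pelvis_x"]) ** 2 +
                   (aligned["y"] - aligned["pelvis_y"]) ** 2) ** 0.5
print(aligned[["t", "dist"]].head())
```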

 
Award ID(s):
2132936
NSF-PAR ID:
10378885
Author(s) / Creator(s):
Publisher / Repository:
Nature Publishing Group
Date Published:
Journal Name:
Scientific Data
Volume:
9
Issue:
1
ISSN:
2052-4463
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract

    Effective interactions between humans and robots are vital to achieving shared tasks in collaborative processes. Robots can utilize diverse communication channels to interact with humans, such as hearing, speech, sight, touch, and learning. Amidst the various means of interaction between humans and robots, our focus is on three emerging frontiers that significantly impact the future directions of human–robot interaction (HRI): (i) human–robot collaboration inspired by human–human collaboration, (ii) brain-computer interfaces, and (iii) emotionally intelligent perception. First, we explore advanced techniques for human–robot collaboration, covering a range of methods from compliance- and performance-based approaches to synergistic and learning-based strategies, including learning from demonstration, active learning, and learning from complex tasks. Then, we examine innovative uses of brain-computer interfaces for enhancing HRI, with a focus on applications in rehabilitation, communication, and brain-state and emotion recognition. Finally, we investigate emotional intelligence in robotics, focusing on translating human emotions to robots via facial expressions, body gestures, and eye-tracking for fluid, natural interactions. Recent developments in these emerging frontiers and their impact on HRI are detailed and discussed. We highlight contemporary trends and emerging advancements in the field. Ultimately, this paper underscores the necessity of a multimodal approach in developing systems capable of adaptive behavior and effective interaction between humans and robots, thus offering a thorough understanding of the diverse modalities essential for maximizing the potential of HRI.

     
  2. Abstract

    This paper introduces an innovative and streamlined design of a robot, resembling a bicycle, created to effectively inspect a wide range of ferromagnetic structures, even those with intricate shapes. The key highlight of this robot lies in its mechanical simplicity coupled with remarkable agility. The locomotion strategy hinges on the arrangement of two magnetic wheels in a configuration akin to a bicycle, augmented by two independent steering actuators. This configuration grants the robot the exceptional ability to move in multiple directions. Moreover, the robot employs a reciprocating mechanism that allows it to alter its shape, thereby surmounting obstacles effortlessly. An inherent trait of the robot is its innate adaptability to uneven and intricate surfaces on steel structures, facilitated by a dynamic joint. To underscore its practicality, the robot's application is demonstrated through the utilization of an ultrasonic sensor for gauging steel thickness, coupled with a pragmatic deployment mechanism. By integrating a defect detection model based on deep learning, the robot showcases its proficiency in automatically identifying and pinpointing areas of rust on steel surfaces. The paper undertakes a thorough analysis, encompassing robot kinematics, adhesive force, potential sliding and turn-over scenarios, and motor power requirements. These analyses collectively validate the stability and robustness of the proposed design. Notably, the theoretical calculations established in this study serve as a valuable blueprint for developing future robots tailored for climbing steel structures. To enhance its inspection capabilities, the robot is equipped with a camera that employs deep learning algorithms to detect rust visually. The paper substantiates its claims with empirical evidence, sharing results from extensive experiments and real-world deployments on diverse steel bridges, situated in both Nevada and Georgia. These tests comprehensively affirm the robot's proficiency in adhering to surfaces, navigating challenging terrains, and executing thorough inspections. A comprehensive visual representation of the robot's trials and field deployments is presented in videos accessible at the following links: https://youtu.be/Qdh1oz_oxiQ and https://youtu.be/vFFq79O49dM.
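
    As a rough illustration of the kind of static analysis the paper describes (adhesive force versus sliding and turn-over), the sketch below sizes the required magnetic adhesion for a two-wheeled climber on a vertical steel wall; all masses, friction coefficients, and geometry values are assumptions, not the paper's numbers.

    ```python
    # Rough static sizing sketch for a two-wheeled magnetic climber on a
    # vertical steel wall (illustrative only; parameter values are assumed).
    G = 9.81  # m/s^2

    def required_adhesion(mass_kg, mu, wheelbase_m, com_offset_m, safety=2.0):
        """Return (anti-slide, anti-turnover) adhesion requirements in newtons.

        Anti-slide:    total normal force N must satisfy mu * N >= m * g.
        Anti-turnover: gravity's moment about the lower wheel contact (m*g*h)
                       must be balanced by the upper wheel's adhesion acting
                       over the wheelbase (F_upper * L).
        """
        weight = mass_kg * G
        n_slide = weight / mu                          # total adhesion vs. sliding
        f_upper = weight * com_offset_m / wheelbase_m  # upper-wheel adhesion vs. peel-off
        return safety * n_slide, safety * f_upper

    slide, turnover = required_adhesion(mass_kg=4.0, mu=0.5,
                                        wheelbase_m=0.25, com_offset_m=0.08)
    print(f"anti-slide adhesion: {slide:.0f} N; anti-turnover (upper wheel): {turnover:.0f} N")
    ```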

     
  3. Abstract

    The visual modality is central to both reception and expression of human creativity. Creativity assessment paradigms, such as structured drawing tasks (Barbot, 2018), seek to characterize this key modality of creative ideation. However, visual creativity assessment paradigms often rely on cohorts of expert or naïve raters to gauge the level of creativity of the outputs. This comes at the cost of substantial human investment in both time and labor. To address these issues, recent work has leveraged the power of machine learning techniques to automatically extract creativity scores in the verbal domain (e.g., SemDis; Beaty & Johnson, 2021). Yet a comparably well-vetted solution for the assessment of visual creativity is missing. Here, we introduce AuDrA, an Automated Drawing Assessment platform to extract visual creativity scores from simple drawing productions. Using a collection of line drawings and human creativity ratings, we trained AuDrA and tested its generalizability to untrained drawing sets, raters, and tasks. Across four datasets, nearly 60 raters, and over 13,000 drawings, we found AuDrA scores to be highly correlated with human creativity ratings for new drawings on the same drawing task (r = .65 to .81; mean = .76). Importantly, correlations between AuDrA scores and human ratings surpassed those between drawings' elaboration (i.e., ink on the page) and human creativity ratings, suggesting that AuDrA is sensitive to features of drawings beyond simple degree of complexity. We discuss future directions and limitations, and link the trained AuDrA model and a tutorial (https://osf.io/kqn9v/) to enable researchers to efficiently assess new drawings.
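
    The model, datasets, and ratings are not reproduced here, so the following is a hedged sketch of the evaluation logic described above with synthetic stand-in data: automated scores are correlated with mean human ratings and compared against an elaboration (ink-on-page) baseline.

    ```python
    # Sketch of the evaluation idea (synthetic stand-in data, not AuDrA itself):
    # correlate automated scores with mean human ratings and compare against an
    # elaboration (amount-of-ink) baseline.
    import numpy as np
    from scipy.stats import pearsonr

    rng = np.random.default_rng(0)
    n = 200                                                    # hypothetical number of drawings
    human = rng.normal(size=n)                                 # stand-in mean human ratings
    model = 0.8 * human + rng.normal(scale=0.6, size=n)        # stand-in automated scores
    elaboration = 0.4 * human + rng.normal(scale=0.9, size=n)  # stand-in ink counts

    r_model, _ = pearsonr(model, human)
    r_baseline, _ = pearsonr(elaboration, human)
    print(f"automated scores vs. human ratings: r = {r_model:.2f}")
    print(f"elaboration vs. human ratings:      r = {r_baseline:.2f}")
    # The abstract's claim corresponds to r_model exceeding r_baseline on held-out drawings.
    ```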

     
  4. Abstract

    Robotically assisted painting is widely used for spray and dip applications. However, the use of robots to coat substrates with a roller applicator has not been systematically investigated. We showed, for the first time, a generic robot-arm-supported approach to painting engineering substrates using a roller that applies a constant force with accurate joint stepping, while retaining compliance and thus safety. We optimized the robot design so that it can coat a substrate using a roller with performance equivalent to that of a human applicator. To achieve this, we optimized the force, frequency of adjustment, and position-control parameters of the robotic design. A framework for autonomous coating is available at https://github.com/duyayun/Vision-and-force-control-automonous-painting-with-rollers; users are only required to provide the boundary coordinates of the surfaces to be coated. We found that robotically- and human-painted panels showed similar trends in dry film thickness, coating hardness, flexibility, impact resistance, and microscopic properties. Color profile analysis of the coated panels showed no significant difference in color scheme, which is acceptable for architectural paints. Overall, this work shows the potential of a robot-assisted coating strategy using a roller applicator. This could be a viable option for hazardous-area coating, high-altitude architectural painting, germ sanitization, and accelerated household applications.
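
    The paper's control parameters and robot interfaces are not given in the abstract, so the following is only an illustrative sketch of a constant-force roller loop: a proportional-integral controller that nudges the roller along the surface normal to hold a target contact force. The gains, update rate, and the read_force()/move_normal() functions are hypothetical.

    ```python
    # Illustrative constant-force roller loop (hypothetical gains and interfaces).
    import time

    TARGET_FORCE_N = 15.0    # assumed contact-force setpoint
    KP, KI = 5e-4, 1e-4      # assumed gains (m per N, m per N*s)
    DT = 0.02                # assumed 50 Hz adjustment frequency

    def read_force():
        """Placeholder for a wrist force/torque sensor reading (N)."""
        return 14.2

    def move_normal(delta_m):
        """Placeholder for a small Cartesian correction along the surface normal."""
        pass

    integral = 0.0
    for _ in range(250):     # run the loop for roughly 5 s
        error = TARGET_FORCE_N - read_force()
        integral += error * DT
        # Push toward the surface when the force is low, back off when it is high.
        move_normal(KP * error + KI * integral)
        time.sleep(DT)
    ```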

     
  5. Abstract
    Mobile robots are increasingly populating homes, hospitals, shopping malls, factory floors, and other human environments. Human society has social norms that people mutually accept; obeying these norms is an essential signal that someone is participating socially with respect to the rest of the population. For robots to be socially compatible with humans, it is crucial that they obey these social norms. In prior work, we demonstrated a Socially-Aware Navigation (SAN) planner, based on Pareto Concavity Elimination Transformation (PaCcET), in a hallway scenario, optimizing two objectives so that the robot does not invade people's personal space. This article extends our PaCcET-based SAN planner to multiple scenarios with more than two objectives. We modified the Robot Operating System (ROS) navigation stack to include PaCcET in the local planning task. We show that our approach can accommodate multiple Human-Robot Interaction (HRI) scenarios. Using the proposed approach, we achieved successful HRI in multiple scenarios such as hallway interactions, an art gallery, waiting in a queue, and interacting with a group. We implemented our method on a simulated PR2 robot in a 2D simulator (Stage) and on a Pioneer 3-DX mobile robot in the real world to validate all the scenarios. A comprehensive set of experiments shows that our approach can handle multiple interaction scenarios on both holonomic and non-holonomic robots; hence, it can be a viable option for Unified Socially-Aware Navigation (USAN).
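
    To make the multi-objective trade-off concrete, the sketch below scores candidate local trajectories on goal progress and on intrusion into people's personal space, the latter modeled as a Gaussian cost around each person. This is not an implementation of PaCcET; a fixed weighted sum stands in for the Pareto-based arbitration, and all values are illustrative.

    ```python
    # Two-objective trajectory scoring sketch (illustrative; not PaCcET).
    import math

    def personal_space_cost(traj, people, sigma=0.8):
        """Sum of Gaussian intrusion penalties over trajectory points and people."""
        cost = 0.0
        for (x, y) in traj:
            for (px, py) in people:
                d2 = (x - px) ** 2 + (y - py) ** 2
                cost += math.exp(-d2 / (2 * sigma ** 2))
        return cost

    def progress_cost(traj, goal):
        """Remaining distance to the goal from the trajectory endpoint."""
        gx, gy = goal
        x, y = traj[-1]
        return math.hypot(gx - x, gy - y)

    def score(traj, people, goal, w_social=1.5):
        # A fixed weighted sum stands in here for the Pareto-based arbitration.
        return progress_cost(traj, goal) + w_social * personal_space_cost(traj, people)

    candidates = [[(0.5, 0.0), (1.0, 0.0)],   # heads straight toward a person
                  [(0.4, 0.3), (0.9, 0.6)]]   # detours around them
    people, goal = [(1.2, 0.0)], (3.0, 0.0)
    best = min(candidates, key=lambda t: score(t, people, goal))
    print("selected trajectory:", best)
    ```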