Title: System Integration of a Tour Guide Robot
Visitors to cultural and historical sites benefit greatly from a tour guide who can explain the significance of each location, so a human guide is typically needed to lead groups of visitors. However, human tour guides are subject to fatigue, distraction, and the strain of repetitive tasks. A robot avoids these problems and can provide tours consistently until its battery is drained. This work introduces a tour-guide robot that navigates autonomously on a known map of a given place while interacting with people. The environment is equipped with artificial landmarks, each providing information about its region. An animated avatar is displayed on the robot's screen, and IBM Watson provides the speech-recognition and text-to-speech services for human-robot interaction. The experimental results show that the robot takes an average of 10000 seconds to provide a tour. The TEB and DWA local planners are compared by allowing the robot to autonomously maneuver the environment for 9 trials, which is tabulated in Section V.
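A planner comparison like the one tabulated in Section V can be summarized with a short script. The per-trial times below are hypothetical placeholders, not the paper's data; they only illustrate how mean and spread per planner might be computed from the 9 trials.

```python
from statistics import mean, stdev

# Hypothetical per-trial tour completion times (seconds) for the two
# ROS local planners; the actual 9-trial results are in Section V.
trials = {
    "DWA": [612, 598, 640, 605, 622, 631, 590, 615, 608],
    "TEB": [570, 585, 560, 592, 575, 568, 581, 577, 566],
}

def summarize(times):
    """Return (mean, sample standard deviation) for one planner's trials."""
    return mean(times), stdev(times)

for planner, times in trials.items():
    m, s = summarize(times)
    print(f"{planner}: mean {m:.1f} s, stdev {s:.1f} s over {len(times)} trials")
```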
Award ID(s):
2125362
PAR ID:
10343333
Author(s) / Creator(s):
Date Published:
Journal Name:
2022 17th Annual System of Systems Engineering Conference (SOSE)
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Robotic telepresence enables users to navigate and experience remote environments. However, effective navigation and situational awareness depend on users’ prior knowledge of the environment, limiting the usefulness of these systems for exploring unfamiliar places. We explore how integrating location-aware, LLM-based narrative capabilities into a mobile robot can support remote exploration. We developed a prototype system, called NarraGuide, that provides narrative guidance for users to explore and learn about a remote place through a dialogue-based interface. We deployed our prototype in a geology museum, where remote participants (n = 20) used the robot to tour the museum. Our findings reveal how users perceived the robot’s role, engaged in dialogue during the tour, and expressed preferences regarding bystander encounters. Our work demonstrates the potential of LLM-enabled robotic capabilities to deliver location-aware narrative guidance and enrich the experience of exploring remote environments.
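The location-aware narration described above can be sketched as choosing the nearest exhibit to the robot's pose and conditioning the language-model prompt on it. The exhibit names, coordinates, and prompt wording below are illustrative assumptions, not details of the NarraGuide system.

```python
import math

# Hypothetical exhibit map for a museum tour (names and poses invented).
EXHIBITS = {
    "Meteorite Hall": (2.0, 3.5),
    "Mineral Gallery": (8.0, 1.0),
    "Fossil Wing": (5.5, 7.2),
}

def nearest_exhibit(x, y):
    """Pick the exhibit closest to the robot's current (x, y) pose."""
    return min(EXHIBITS, key=lambda name: math.dist((x, y), EXHIBITS[name]))

def narration_prompt(x, y, question):
    """Build a location-conditioned prompt for the dialogue interface."""
    place = nearest_exhibit(x, y)
    return (f"You are a museum tour guide. The visitor is near the {place}. "
            f"Answer their question in that context: {question}")
```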
  2. In this paper, we propose a novel method for autonomously seeking out sparsely distributed targets in an unknown underwater environment. Our Sparse Adaptive Search and Sample (SASS) algorithm mixes low-altitude observations of discrete targets with high-altitude observations of the surrounding substrates. By combining prior information about the distribution of targets across substrate types with belief modelling over those substrates in the environment, high-altitude observations provide information that allows SASS to quickly guide the robot to areas with high target densities. A maximally informative path is constructed autonomously online using Monte Carlo Tree Search with a novel acquisition function that guides the search to maximise observations of unique targets. We demonstrate our approach in a set of simulated trials using a novel generative species model. SASS consistently outperforms the canonical boustrophedon planner by up to 36% in seeking out unique targets in the first 75-90% of the time a boustrophedon survey takes. Additionally, we verify the performance of SASS on two real-world coral reef datasets.
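SASS builds its path online with Monte Carlo Tree Search under a novel acquisition function. That acquisition function is not reproduced here; the sketch below shows only the generic UCB1 selection rule at the heart of MCTS, which trades off exploiting high-value branches against exploring rarely visited ones.

```python
import math

def ucb1(value_sum, visits, parent_visits, c=1.41):
    """Upper-confidence score used to pick which branch of the search
    tree to expand next: an exploitation term plus an exploration bonus."""
    if visits == 0:
        return math.inf  # always try an unvisited action first
    return value_sum / visits + c * math.sqrt(math.log(parent_visits) / visits)

def select(children, parent_visits):
    """children: list of (value_sum, visits) pairs; return the index of
    the child with the highest UCB1 score."""
    scores = [ucb1(v, n, parent_visits) for v, n in children]
    return scores.index(max(scores))
```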
  3. This paper presents a novel architecture for a Unified Planner for Socially-aware Navigation (UP-SAN) and explains the need for it in Socially Assistive Robotics (SAR) applications. Our approach emphasizes interpersonal distance and how spatial communication can be used to build a unified planner for a human-robot collaborative environment. Socially-Aware Navigation (SAN) is vital for making humans feel comfortable and safe around robots; HRI studies have shown that the importance of SAN transcends safety and comfort. SAN plays a crucial role in the perceived intelligence, sociability, and social capacity of the robot, thereby increasing the acceptance of robots in public places. Human environments are very dynamic and pose serious social challenges to robots intended for human interaction. For robots to cope with the changing dynamics of a situation, they need to infer intent and detect changes in the interaction context. SAN has gained immense interest in the social robotics community; to the best of our knowledge, however, there is no planner that can adapt to different interaction contexts spontaneously after autonomously sensing that context. Most recent efforts involve social path planning for a single context. In this work, we propose a novel approach for a Unified Planner for SAN that can plan and execute trajectories that are human-friendly for an autonomously sensed interaction context. Our approach augments the navigation stack of the Robot Operating System (ROS) using machine learning and optimization tools: we modified the ROS navigation stack with a machine learning-based context classifier and a PaCcET-based local planner to achieve the goals of UP-SAN. We discuss our preliminary results and concrete plans for putting the pieces together to achieve UP-SAN.
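One way to picture the context adaptation a unified SAN planner needs is a mapping from the classifier's sensed interaction context to local-planner parameters. The context labels and values below are illustrative assumptions, not parameters from the UP-SAN paper.

```python
# Hypothetical mapping from a sensed interaction context to local-planner
# parameters (personal-distance margin in meters, max velocity in m/s).
CONTEXT_PARAMS = {
    "hallway_passing": {"min_personal_distance": 0.5, "max_vel": 0.6},
    "joining_group":   {"min_personal_distance": 1.2, "max_vel": 0.3},
    "art_gallery":     {"min_personal_distance": 1.5, "max_vel": 0.4},
}

def planner_params(context, default=None):
    """Return the parameter set a unified planner might hand to its local
    planner once the context classifier has labeled the scene; fall back
    to conservative defaults for unrecognized contexts."""
    if default is None:
        default = {"min_personal_distance": 1.0, "max_vel": 0.5}
    return CONTEXT_PARAMS.get(context, default)
```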
  4. The importance of communication in many multirobot information-gathering tasks requires the availability of reliable communication maps. These provide estimates of the radio signal strength and can be used to predict the presence of communication links between different locations of the environment. In the problem we consider, a team of mobile robots has to build such maps autonomously in a robot-to-robot communication setting. The solution we propose models the signal's distribution with a Gaussian Process and exploits different online sensing strategies to coordinate and guide the robots during their data acquisition. Our methods yield useful operational insights both in simulation and on real TurtleBot 2 platforms.
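A minimal sketch of the Gaussian Process posterior mean used to predict signal strength at an unvisited location, assuming a squared-exponential kernel and hypothetical 1-D positions with RSSI readings; the paper's kernel, hyperparameters, and dimensionality may differ.

```python
import math

def rbf(x1, x2, length=5.0, var=1.0):
    """Squared-exponential (RBF) covariance between two 1-D positions."""
    return var * math.exp(-((x1 - x2) ** 2) / (2 * length ** 2))

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def gp_mean(train_x, train_y, query_x, noise=1e-6):
    """GP posterior mean at query_x: k_*^T (K + noise*I)^{-1} y."""
    K = [[rbf(a, b) + (noise if i == j else 0.0)
          for j, b in enumerate(train_x)] for i, a in enumerate(train_x)]
    alpha = solve(K, train_y)
    return sum(rbf(query_x, xi) * ai for xi, ai in zip(train_x, alpha))
```

At a training location the prediction essentially reproduces the measurement; between measurements it interpolates smoothly, which is what makes the GP useful for predicting link presence at unvisited spots.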
  5. We build a system that enables any human to control a robot hand and arm, simply by demonstrating motions with their own hand. The robot observes the human operator via a single RGB camera and imitates their actions in real-time. Human hands and robot hands differ in shape, size, and joint structure, and performing this translation from a single uncalibrated camera is a highly underconstrained problem. Moreover, the retargeted trajectories must effectively execute tasks on a physical robot, which requires them to be temporally smooth and free of self-collisions. Our key insight is that while paired human-robot correspondence data is expensive to collect, the internet contains a massive corpus of rich and diverse human hand videos. We leverage this data to train a system that understands human hands and retargets a human video stream into a robot hand-arm trajectory that is smooth, swift, safe, and semantically similar to the guiding demonstration. We demonstrate that it enables previously untrained people to teleoperate a robot on various dexterous manipulation tasks. Our low-cost, glove-free, marker-free remote teleoperation system makes robot teaching more accessible and we hope that it can aid robots that learn to act autonomously in the real world. 
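The temporal smoothness the retargeted trajectory requires can be illustrated with a simple exponential moving average over per-frame joint targets; this is a generic sketch under that assumption, not the system's actual filter.

```python
def smooth_trajectory(waypoints, alpha=0.3):
    """Exponentially smooth a stream of joint-angle tuples produced from
    successive video frames, damping frame-to-frame jitter.
    alpha close to 1 tracks the raw input; close to 0 smooths heavily."""
    if not waypoints:
        return []
    smoothed = [waypoints[0]]
    for wp in waypoints[1:]:
        prev = smoothed[-1]
        smoothed.append(tuple(alpha * w + (1 - alpha) * p
                              for w, p in zip(wp, prev)))
    return smoothed
```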