Telepresence technology enables users to be virtually present in another location in real time through video streaming. Mounting the video link on a remotely driven mobile base extends this interaction, forming what is called a telepresence robot. These machines connect individuals with restricted mobility and increase social interaction, collaboration, and active participation. However, operating and navigating such robots is challenging for users who have little knowledge of the remote environment and no map of it. Avoiding obstacles through a narrow camera view under manual remote operation is a cumbersome task. Moreover, users lose their sense of immersion while they are busy maneuvering via the real-time video feed, which reduces their capacity to handle other tasks. This demo presents a virtual reality telepresence robot that maps its surroundings and drives autonomously at the same time. Leveraging a 2D Lidar sensor, we generate two-dimensional occupancy grid maps via SLAM and provide assisted navigation, reducing the onerous task of avoiding obstacles. The attitude of the camera-equipped robotic head is controlled remotely via a virtual reality headset. Remote users can thus gain a visceral understanding of the environment while teleoperating the robot.
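As a rough illustration of the occupancy-grid mapping such a demo relies on, here is a minimal sketch that updates a log-odds grid from a single 2D Lidar scan at a known pose. The grid size, resolution, and log-odds increments are assumed values, and a full SLAM pipeline would also estimate the pose rather than take it as given.

```python
# Minimal 2D occupancy-grid update from one Lidar scan at a known pose.
# GRID, RES, L_FREE, L_OCC are assumed tuning values, not from the demo.
import numpy as np

GRID = 200                   # cells per side (hypothetical)
RES = 0.05                   # metres per cell (hypothetical)
L_FREE, L_OCC = -0.4, 0.85   # log-odds increments (assumed)

log_odds = np.zeros((GRID, GRID))

def to_cell(x, y):
    """World coordinates (m) -> grid indices, map centred on origin."""
    return int(x / RES) + GRID // 2, int(y / RES) + GRID // 2

def update(pose, ranges, angles, max_range=8.0):
    px, py, ptheta = pose
    for r, a in zip(ranges, angles):
        if not np.isfinite(r) or r >= max_range:
            continue
        # Beam endpoint in the world frame.
        ex = px + r * np.cos(ptheta + a)
        ey = py + r * np.sin(ptheta + a)
        # Cells along the beam (before the hit) become more likely free.
        n = int(r / RES)
        for i in range(n):
            t = i / max(n, 1)
            cx, cy = to_cell(px + t * (ex - px), py + t * (ey - py))
            if 0 <= cx < GRID and 0 <= cy < GRID:
                log_odds[cx, cy] += L_FREE
        # The cell containing the hit becomes more likely occupied.
        cx, cy = to_cell(ex, ey)
        if 0 <= cx < GRID and 0 <= cy < GRID:
            log_odds[cx, cy] += L_OCC
```

Calling update(pose, ranges, angles) once per scan accumulates evidence; a cell's occupancy probability is recovered as 1 / (1 + exp(-log_odds)).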
Efficient Autonomous Navigation for GPS-Free Mobile Robots: A VFH-Based Approach Integrated With ROS-Based SLAM
Abstract: Simultaneous Localization and Mapping (SLAM) is an autonomous localization technique for mobile robots that operate without GPS. Since autonomous localization relies on pre-existing maps, using SLAM with the Robot Operating System (ROS) requires that a map of the surroundings be created first; a controller can then use this initial map. The first mapping procedure is usually carried out manually, with human intervention: the operator is responsible for avoiding obstacles and moving the robot through every section of the space to build a complete map of the environment. Done manually, the mapping process is time-consuming and often not feasible. To remove this constraint, that is, to construct a map of the environment autonomously, without human involvement, while avoiding obstacles, this study implements the Vector Field Histogram (VFH) technique and integrates it with SLAM. VFH is a real-time motion-planning approach in robotics that uses a statistical representation of the robot's surroundings, known as the histogram grid, and places a strong emphasis on handling modeling errors and sensor uncertainty. Using range-sensor values, the VFH algorithm determines the robot's obstacle-free driving directions. Beyond its real-time obstacle-avoidance function, the VFH method is extended in this study to work with SLAM, creating maps and reducing localization complexity. While generating maps, the VFH approach uses a two-step data-reduction procedure to calculate the appropriate vehicle control commands. In the first stage, the robot's momentary location is used to reduce the histogram grid to a one-dimensional polar histogram, where the value of each sector represents the polar obstacle density in that direction. In the second stage, the algorithm identifies all polar-histogram sectors with a low polar obstacle density and steers the robot toward the most appropriate one. Further algorithms, such as Rapidly-exploring Random Tree (RRT) and A*, can then plan autonomous paths on the map provided by VFH. To put the concept into practice, MATLAB and ROS are used together to map the environment and localize the robot autonomously and simultaneously; the combination offers many advantages because of the extensive feature sets of both tools and their ability to integrate with each other. Finally, a simulation and a physical robot are used to analyze and validate the study's findings.
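To make the two-step reduction concrete, below is a minimal Python sketch of it. The sector count, threshold, and the a/b weighting constants are tuning assumptions rather than values from the paper, and the histogram grid is taken as already populated with certainty values.

```python
# Minimal sketch of VFH's two-step data reduction.
# N_SECTORS, THRESHOLD, and the a/b weights are assumed tuning values.
import numpy as np

N_SECTORS = 72        # 5-degree sectors (assumed)
THRESHOLD = 2.0       # polar-density cutoff (assumed)

def polar_histogram(grid, robot_cell, cell_size=0.1, a=1.0, b=0.25):
    """Stage 1: reduce the 2D histogram grid around the robot's
    momentary location to a 1D polar obstacle-density histogram."""
    h = np.zeros(N_SECTORS)
    rx, ry = robot_cell
    width = 2 * np.pi / N_SECTORS
    for (ix, iy), certainty in np.ndenumerate(grid):
        if certainty == 0 or (ix, iy) == (rx, ry):
            continue
        dx, dy = (ix - rx) * cell_size, (iy - ry) * cell_size
        beta = np.arctan2(dy, dx) % (2 * np.pi)    # direction of cell
        d = np.hypot(dx, dy)                        # distance to cell
        m = certainty**2 * max(a - b * d, 0.0)      # obstacle weight
        h[int(beta / width) % N_SECTORS] += m
    return h

def steer(h, target_dir):
    """Stage 2: among sectors whose polar density is below the
    threshold, choose the one closest to the target direction."""
    width = 2 * np.pi / N_SECTORS
    free = [k for k in range(N_SECTORS) if h[k] < THRESHOLD]
    if not free:
        return None                                 # no safe heading
    def angular_gap(k):
        diff = abs((k + 0.5) * width - target_dir) % (2 * np.pi)
        return min(diff, 2 * np.pi - diff)
    return (min(free, key=angular_gap) + 0.5) * width
```

Smoothing the histogram and widening free sectors by the robot's radius, as the full VFH method does, are omitted here for brevity.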
- Award ID(s):
- 2133630
- PAR ID:
- 10646226
- Publisher / Repository:
- American Society of Mechanical Engineers
- Date Published:
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
This paper presents two methods, jtop (a GUI version of tegrastats) and Nsight Systems, to profile NVIDIA Jetson embedded GPU devices on a model race car, a great platform for prototyping and field-testing autonomous driving algorithms. The two profilers analyze the power consumption, CPU/GPU utilization, and run time of CUDA C threads of the Jetson TX2 in five different working modes. The performance differences among the five modes are demonstrated using three example programs: vector add in C and CUDA C, a simple ROS (Robot Operating System) package implementing a wall-follow algorithm in Python, and a complex ROS package implementing a particle-filter algorithm for SLAM (Simultaneous Localization and Mapping). The results show that the tools are effective means for selecting the operating mode of embedded GPU devices.
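For reference, the jetson-stats package that provides jtop also exposes a small Python API for logging the same readings; the exact keys in its stats dictionary vary by Jetson model and package version, so the ones below are assumptions.

```python
# Minimal sketch of polling utilization and power via jetson-stats,
# the package behind jtop. The keys "CPU1", "GPU", and "Power TOT"
# differ across Jetson models/versions and are assumed here.
from jtop import jtop

with jtop() as jetson:
    while jetson.ok():                 # blocks until the next sample
        stats = jetson.stats           # flat dict of current readings
        print(stats.get("CPU1"),       # per-core utilization (%)
              stats.get("GPU"),        # GPU utilization (%)
              stats.get("Power TOT"))  # total power draw, if present
```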
-
In this paper, we introduce the design and implementation of a low-cost, small-scale autonomous vehicle equipped with an onboard computer, a camera, a Lidar, and other accessories. We implement various autonomous-driving modules, including mapping and localization, object detection, obstacle avoidance, and path planning. To better test the system, we focus on the autonomous parking scenario, in which the vehicle moves autonomously from an appointed start point to the desired parking spot by following a path planned with the hybrid A* algorithm. The vehicle detects objects, avoids obstacles along its path, and achieves autonomous parking.
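Hybrid A* itself searches over continuous (x, y, heading) states with motion primitives, which is too long to sketch here; the grid-based A* below shows only the underlying best-first search loop. The grid encoding and unit step costs are hypothetical.

```python
# Minimal grid A* sketch (a simplified stand-in for hybrid A*).
import heapq, itertools

def a_star(grid, start, goal):
    """grid[r][c] == 1 marks an obstacle; returns a list of cells."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # admissible
    tie = itertools.count()        # tie-breaker so heap tuples compare
    open_set = [(h(start), 0, next(tie), start, None)]
    came_from, g_best = {}, {start: 0}
    while open_set:
        _, g, _, cur, parent = heapq.heappop(open_set)
        if cur in came_from:
            continue               # already expanded with a better cost
        came_from[cur] = parent
        if cur == goal:            # reconstruct path back to start
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        r, c = cur
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_best.get(nxt, float("inf")):
                    g_best[nxt] = ng
                    heapq.heappush(open_set,
                                   (ng + h(nxt), ng, next(tie), nxt, cur))
    return None                    # no path found

# Example: a_star([[0,0,0],[1,1,0],[0,0,0]], (0,0), (2,0))
# returns [(0,0), (0,1), (0,2), (1,2), (2,2), (2,1), (2,0)]
```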
-
This paper describes two exemplary projects on physical ROS-compatible robots (i.e., the Turtlebot3 Burger and Waffle Pi) for an undergraduate robotics course, aiming to foster students' problem-solving skills through project-based learning. The context of the study is a senior-level technical elective course in the Department of Computer Engineering Technology at a primarily undergraduate teaching institution. Earlier courses in the CET curriculum prepare students with programming skills in several commonly used languages, including Python, C/C++, Java, and MATLAB. Students' proficiency in programming and hands-on skills makes it possible to implement advanced robotic control algorithms in this robotics course, which has a 3-hour companion lab session each week. The Robot Operating System (ROS) is an open-source framework that helps developers build and reuse code across robotic applications. Though ROS is mainly used as a research platform, instructors in higher education are bringing it, and its recent release ROS 2, into their classrooms. Our earlier work controlled a simulated robot via ROS in a virtual environment on the MATLAB-ROS-Gazebo platform; this paper describes its counterpart, utilizing physical ROS-compatible autonomous ground robots on the MATLAB-ROS2-Turtlebot3 platform. The two exemplary projects presented here cover sensing, perception, and control, which are essential to any robotic application: sensing via the robot's onboard 2D laser sensor, perception involving pattern classification and recognition, and control shown via path planning. We believe the physical MATLAB-ROS2-Turtlebot3 platform will enhance robotics education by exposing students to realistic situations, and it will provide opportunities for educators and students to explore AI-facilitated solutions when tackling everyday problems.
-
Rectilinear forms of snake-like robotic locomotion are anticipated to be advantageous in the obstacle-strewn scenarios characterizing urban disaster zones, subterranean collapses, and other natural environments. The elongated, laterally narrow footprint associated with these motion strategies is well suited to traversing confined spaces and narrow pathways. Navigation and path planning in the absence of global sensing, however, remain a pivotal challenge to be addressed before these robotic mechanisms can be deployed in practice, and several challenges related to visual processing and localization must be resolved to enable navigation. As a first pass in this direction, we mount a wireless, monocular color camera on the head of a robotic snake. Visual odometry and mapping from ORB-SLAM permit self-localization in planar, obstacle-strewn environments. Ground-plane traversability segmentation in conjunction with perception-space collision detection permits path planning for navigation. A previously presented dynamical reduction of rectilinear snake locomotion to a non-holonomic kinematic vehicle informs both SLAM and planning; the simplified motion model is then applied to track planned trajectories through an obstacle configuration. This navigational framework enables a snake-like robotic platform to autonomously navigate and traverse unknown scenarios with only monocular vision.
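As an illustration of the kind of non-holonomic kinematic vehicle model such a reduction yields, here is a minimal unicycle sketch with a proportional heading controller; the snake-specific parameters and the controller gain are assumptions, since the abstract does not give them.

```python
# Minimal non-holonomic kinematic vehicle (unicycle) sketch.
# Speed v, gain k, and timestep dt are assumed values.
import math

def unicycle_step(x, y, theta, v, omega, dt):
    """Advance pose (x, y, theta) under forward speed v and turn rate
    omega for dt seconds; the no-side-slip constraint (motion only
    along the heading) is the non-holonomic part."""
    return (x + v * math.cos(theta) * dt,
            y + v * math.sin(theta) * dt,
            theta + omega * dt)

def track(pose, target_heading, v=0.1, k=1.5, dt=0.05):
    """One control step: steer toward a planned heading with a
    proportional law on the wrapped heading error."""
    x, y, theta = pose
    err = math.atan2(math.sin(target_heading - theta),
                     math.cos(target_heading - theta))
    return unicycle_step(x, y, theta, v, k * err, dt)
```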

