Title: Telepresence Robot with Autonomous Navigation and Virtual Reality: Demo Abstract
Telepresence technology enables users to be virtually present in another location through live video streaming. This interaction is further enhanced by mobility: the user drives the platform remotely, forming what is called a telepresence robot. These machines connect individuals with restricted mobility and increase social interaction, collaboration, and active participation. However, operating and navigating such robots is challenging for users who have little knowledge of the remote environment and no map of it. Avoiding obstacles through a narrow camera view under manual remote operation is a cumbersome task. Moreover, users lose the sense of immersion while they are busy maneuvering via the real-time video feed, which reduces their capacity to handle other tasks. This demo presents a virtual reality robot that maps its surroundings and drives autonomously at the same time. Leveraging a 2D lidar sensor, we generate two-dimensional occupancy grid maps via SLAM and provide assisted navigation that relieves the onerous task of avoiding obstacles. The attitude of the robotic head, which carries the camera, is controlled remotely through a virtual reality headset. Remote users gain a visceral understanding of the environment while teleoperating the robot.
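As a rough illustration of the mapping step, the sketch below integrates a single 2D lidar scan into a log-odds occupancy grid, assuming the robot pose has already been estimated by the SLAM front end. The grid size, resolution, and sensor-model constants are illustrative placeholders, not values from the demo.

```python
import numpy as np

# Minimal log-odds occupancy grid update for one 2D lidar scan.
# Assumes the robot pose comes from the SLAM front end; grid size,
# resolution, and sensor-model constants are illustrative.

GRID_SIZE = 400              # cells per side
RESOLUTION = 0.05            # metres per cell
L_OCC, L_FREE = 0.85, -0.4   # log-odds increments (assumed sensor model)

log_odds = np.zeros((GRID_SIZE, GRID_SIZE))

def world_to_cell(x, y):
    """Map world coordinates (metres) to grid indices, origin at centre."""
    return (int(x / RESOLUTION) + GRID_SIZE // 2,
            int(y / RESOLUTION) + GRID_SIZE // 2)

def update_grid(pose, ranges, angles, max_range=8.0):
    """Integrate one scan: mark cells along each beam free, endpoint occupied."""
    px, py, ptheta = pose
    for r, a in zip(ranges, angles):
        r = min(r, max_range)
        # Ray-trace in small steps; a real system would use Bresenham's line.
        for s in range(int(r / RESOLUTION)):
            fx = px + (s * RESOLUTION) * np.cos(ptheta + a)
            fy = py + (s * RESOLUTION) * np.sin(ptheta + a)
            i, j = world_to_cell(fx, fy)
            log_odds[i, j] += L_FREE
        if r < max_range:  # genuine hit, not a max-range miss
            i, j = world_to_cell(px + r * np.cos(ptheta + a),
                                 py + r * np.sin(ptheta + a))
            log_odds[i, j] += L_OCC

# Occupancy probability for the assisted-navigation planner:
# p = 1 - 1 / (1 + np.exp(log_odds))
```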
Award ID(s):
1637371
NSF-PAR ID:
10092493
Author(s) / Creator(s):
Date Published:
Journal Name:
SenSys '16 Proceedings of the 14th ACM Conference on Embedded Network Sensor Systems
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract

    Augmented reality (AR) enhances the user’s perception of the real environment by superimposing virtual images generated by computers. These virtual images provide additional visual information that complements the real-world view. AR systems are rapidly gaining popularity in various manufacturing fields such as training, maintenance, assembly, and robot programming. In some AR applications, it is crucial for the virtual environment to be precisely aligned with the physical environment so that human users can accurately perceive the virtual augmentation in conjunction with their real surroundings. The process of achieving this accurate alignment is known as calibration. In some robotics applications using AR, we observed instances of misalignment in the visual representation within the designated workspace. This misalignment can potentially impact the accuracy of the robot’s operations during the task. Building on previous research on AR-assisted robot programming systems, this work investigates the sources of misalignment errors and presents a simple and efficient calibration procedure to reduce misalignment in general video see-through AR systems. To accurately superimpose virtual information onto the real environment, it is necessary to identify the sources and propagation of errors. In this work, we outline the linear transformation and projection of each point from the virtual world space to the virtual screen coordinates. An offline calibration method is introduced to determine the offset matrix from the head-mounted display (HMD) to the camera, and experiments are conducted to validate the improvement achieved through the calibration process.
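    As a minimal sketch of the transform chain outlined above, the code below projects a virtual world point through an assumed HMD pose and HMD-to-camera offset matrix into pixel coordinates. The intrinsics and all names are hypothetical; the calibration described in the abstract estimates the offset matrix that this sketch takes as an input.

```python
import numpy as np

# Illustrative transform chain for a video see-through AR system:
# world point -> HMD frame -> camera frame -> image pixel.
# T_hmd_world comes from tracking; T_cam_hmd is the offset matrix an
# offline calibration would estimate. All values are placeholders.

K = np.array([[800.0,   0.0, 320.0],   # assumed camera intrinsics
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def project(p_world, T_hmd_world, T_cam_hmd):
    """Return pixel (u, v) for a 3D world point via the calibrated chain."""
    p = np.append(p_world, 1.0)            # homogeneous coordinates
    p_cam = T_cam_hmd @ T_hmd_world @ p    # world -> HMD -> camera (4x4 each)
    uvw = K @ p_cam[:3]                    # pinhole projection
    return uvw[:2] / uvw[2]

# Misalignment appears when T_cam_hmd is wrong: the same world point
# projects to a shifted pixel, so the virtual overlay drifts off its
# physical anchor. Calibration minimises that reprojection error.
```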

     
  2. Virtual Reality (VR) telepresence platforms are being challenged to support live performances, sporting events, and conferences with thousands of users across seamless virtual worlds. Current systems have struggled to meet these demands, which has led to high-profile performance events with groups of users isolated in parallel sessions. The core difference in scaling VR environments compared to classic 2D video content delivery comes from the dynamic peer-to-peer spatial dependence of communication. Users have many pair-wise interactions that grow and shrink as they explore spaces. In this paper, we discuss the challenges of VR scaling and present an architecture that supports hundreds of users with spatial audio and video in a single virtual environment. We leverage the property of spatial locality with two key optimizations: (1) a Quality of Service (QoS) scheme that prioritizes audio and video traffic based on users' locality, and (2) a resource manager that allocates client connections across multiple servers based on user proximity within the virtual world. Through real-world deployments and extensive evaluations under real and simulated environments, we demonstrate the scalability of our platform while showing improved QoS compared with existing approaches.
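    A minimal sketch of the spatial-locality idea described above: peers are assigned traffic tiers by virtual distance, and clients are routed to the nearest server region. The tier names, radii, and data layout are assumptions for illustration, not the paper's actual policy.

```python
import math

# Sketch of spatial-locality optimizations: spend the QoS budget on the
# nearest peers, and co-locate nearby users on the same server.
# Thresholds and tier names are illustrative.

def qos_tiers(me, peers, hi_radius=5.0, lo_radius=20.0):
    """Assign each peer a traffic tier based on distance to `me`."""
    tiers = {}
    for pid, pos in peers.items():
        d = math.dist(me, pos)
        if d <= hi_radius:
            tiers[pid] = "full"        # full-rate audio + video
        elif d <= lo_radius:
            tiers[pid] = "audio_only"  # attenuated spatial audio
        else:
            tiers[pid] = "drop"        # outside interaction range
    return tiers

def assign_server(user_pos, server_regions):
    """Route a client to the server whose region centre is closest, so
    co-located users share a server and cross-server traffic shrinks."""
    return min(server_regions,
               key=lambda s: math.dist(user_pos, server_regions[s]))

# usage: tiers = qos_tiers((0.0, 0.0),
#                          {"a": (2.0, 1.0), "b": (12.0, 3.0), "c": (40.0, 0.0)})
```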
  3. Abstract

    Objective. When multitasking, we must dynamically reorient our attention between different tasks. Attention reorienting is thought to arise through interactions of physiological arousal and brain-wide network dynamics. In this study, we investigated the relationship between pupil-linked arousal and electroencephalography (EEG) brain dynamics in a multitask driving paradigm conducted in virtual reality. We hypothesized that there would be an interaction between arousal and EEG dynamics and that this interaction would correlate with multitasking performance.

    Approach. We collected EEG and eye tracking data while subjects drove a motorcycle through a simulated city environment, with instructions to count the number of target images they observed while avoiding crashing into a lead vehicle. The paradigm required the subjects to continuously reorient their attention between the two tasks. Subjects performed the paradigm under two conditions, one more difficult than the other.

    Main results. We found that task difficulty did not strongly correlate with pupil-linked arousal, and overall task performance increased as arousal level increased. A single-trial analysis revealed several interesting relationships between pupil-linked arousal and task-relevant EEG dynamics. Employing exact low-resolution electromagnetic tomography, we found that higher pupil-linked arousal led to greater EEG oscillatory activity, especially in regions associated with the dorsal attention network and the ventral attention network (VAN). Consistent with our hypothesis, we found a relationship between EEG functional connectivity and pupil-linked arousal as a function of multitasking performance. Specifically, functional connectivity between regions in the salience network (SN) and the VAN decreased as pupil-linked arousal increased, suggesting that improved multitasking performance at high arousal levels may be due to a down-regulation in coupling between the VAN and the SN. Our results suggest that when multitasking, the brain rebalances arousal-based reorienting so that individual task demands can be met without prematurely reorienting to competing tasks.
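    As a hedged sketch of one plausible single-trial step, the code below correlates a per-trial pupil-diameter proxy for arousal with EEG band power across trials. The band limits, sampling rate, and arousal proxy are common choices assumed here, not necessarily the exact pipeline used in the study.

```python
import numpy as np
from scipy.signal import welch
from scipy.stats import spearmanr

# Correlate pupil-linked arousal with EEG oscillatory power across
# trials. Constants below are assumptions for illustration.

FS = 250  # EEG sampling rate in Hz (assumed)

def band_power(epoch, lo, hi):
    """Mean power of one single-channel EEG epoch in [lo, hi] Hz via Welch."""
    f, pxx = welch(epoch, fs=FS, nperseg=FS)
    return pxx[(f >= lo) & (f <= hi)].mean()

def arousal_power_corr(eeg_epochs, pupil_epochs, lo=8.0, hi=13.0):
    """Spearman correlation between per-trial pupil size and band power.

    eeg_epochs:   (n_trials, n_samples) single-channel EEG
    pupil_epochs: (n_trials, n_samples) pupil diameter traces
    """
    power = np.array([band_power(ep, lo, hi) for ep in eeg_epochs])
    arousal = pupil_epochs.mean(axis=1)   # trial-mean pupil diameter
    rho, p = spearmanr(arousal, power)
    return rho, p
```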

     
  4. Online classes are typically conducted using video conferencing software such as Zoom, Microsoft Teams, and Google Meet. Research has identified drawbacks of online learning, such as “Zoom fatigue”, characterized by distraction and lack of engagement. This study presents the CUNY Affective and Responsive Virtual Environment (CARVE) Hub, a novel virtual reality hub that uses a facial emotion classification model to generate emojis for affective and informal responsive interaction in a 3D virtual classroom setting. A web-based machine learning model is employed for facial emotion classification, enabling students to communicate four basic emotions live, through automated web camera capture, in a virtual classroom without activating their cameras. The experiment is conducted in undergraduate classes on both Zoom and CARVE, and the results of a survey indicate that students perceive interactions in the proposed virtual classroom more positively than those in Zoom. Correlations between automated emojis and interactions are also observed. This study discusses potential explanations for the improved interactions, including a decrease in pressure on students when they are not showing their faces. In addition, video panels in traditional remote classrooms may be useful for communication but not for interaction. Students favor features in virtual reality such as spatial audio and the ability to move around, with collaboration identified as the most helpful feature.
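    A minimal sketch of the emoji-relay idea: a frame is classified locally, only the discrete emotion label leaves the client, and an emoji is rendered on the student's avatar. The classifier interface, confidence threshold, and label set are assumptions; CARVE's actual web-based model may differ.

```python
# Relay a discrete emotion label instead of the video frame itself,
# so students can react without activating their cameras for others.
# The classifier is a stand-in supplied by the caller.

from dataclasses import dataclass
from typing import Callable

EMOJI = {           # four basic emotions, as in the study
    "happy": "😊",
    "sad": "😢",
    "surprised": "😮",
    "angry": "😠",
}

@dataclass
class EmojiRelay:
    classify: Callable   # frame -> (label, confidence), e.g. a CNN wrapper
    min_conf: float = 0.6

    def update(self, frame):
        """Return an emoji for the avatar, or None if uncertain.
        Only the label leaves the client, never the video frame."""
        label, conf = self.classify(frame)
        if conf < self.min_conf or label not in EMOJI:
            return None
        return EMOJI[label]

# usage: relay = EmojiRelay(classify=my_model.predict_frame)
#        emoji = relay.update(webcam_frame)   # attach to student avatar
```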
  5. COVID-19 has altered the landscape of teaching and learning. For those in in-service teacher education, workshops have been suspended, causing programs to adapt their professional development to a virtual space to avoid indefinite postponement or cancellation. This paradigm shift in the way we conduct learning experiences creates several logistical and pedagogical challenges, but it also presents an important opportunity to conduct research about how learning happens in these new environments. This paper describes the approach we took to conduct research in a series of virtual workshops aimed at teaching rural elementary teachers about engineering practices and how to teach a unit from an engineering curriculum. Our work explores how engineering concepts and practices are socially constructed through interactions with teachers, students, and artifacts. This approach, called interactional ethnography, has been used by the authors and others to learn about engineering teaching and learning in precollege classrooms. The approach relies on collecting data during instruction, such as video and audio recordings, interviews, and artifacts such as journal entries and photos of physical designs. Findings are triangulated by analyzing these data sources. This methodology was going to be applied in an in-person engineering education workshop for rural elementary teachers; however, the pandemic forced us to conduct the workshops remotely. Teachers, working in pairs, were sent workshop supplies and worked together during the training series, which took place over Zoom across four days, with four-hour sessions. The paper describes how we collected video and audio of teachers and the facilitators, both in the whole group and in breakout rooms. Class materials and submissions of photos and evaluations were managed using Google Classroom. Teachers took photos of their work, scanned written materials, and submitted them all by email. Slide decks were shared by the users, and their group responses were collected in real time. Workshop evaluations were collected after each meeting using Google Forms. Evaluation data suggest that the teachers were engaged by the experience, learned significantly about engineering concepts and the knowledge-producing practices of engineers, and feel confident about applying engineering activities in their classrooms. This methodology should be of interest to the membership for three distinct reasons. First, remote instruction is a reality in the near term but will likely persist in some form. Although many of us prefer to teach in person, remote learning allows us to reach many more participants, including those living in remote and rural areas who cannot easily attend in-person sessions with engineering educators, so it benefits the field to learn how to teach effectively in this way. Second, it describes an emerging approach to engineering education research. Interactional ethnography has been applied in precollege classrooms, but this paper demonstrates how it can also be used in teacher professional development contexts. Third, based on our application of interactional ethnography to an education setting, readers will learn specifically how to use online collaborative software and how to collect and organize data sources for research purposes.