

Title: Ani-Bot: A Mixed-Reality Modular Robotics System
We present Ani-Bot, a modular robotics system that allows users to construct Do-It-Yourself (DIY) robots and interact with them through a mixed-reality approach. Ani-Bot enables a novel user experience by embedding Mixed-Reality Interaction (MRI) in the three phases of interacting with a modular construction kit: Creation, Tweaking, and Usage. In this paper, we first present the system design that allows users to perform MRI as soon as they finish assembling the robot. We then discuss the augmentations MRI offers in each of the three phases.
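The three-phase lifecycle named in the abstract suggests a simple state model. The sketch below is a hypothetical illustration of that structure, not code from the paper; every class and method name is an assumption.

```python
# Hypothetical sketch of the Creation / Tweaking / Usage lifecycle described
# in the abstract; names and structure are illustrative, not from the paper.
from enum import Enum, auto


class Phase(Enum):
    CREATION = auto()   # assembling modules from the construction kit
    TWEAKING = auto()   # adjusting the assembled robot's behavior
    USAGE = auto()      # operating the finished robot through MR input


class AniBotSession:
    def __init__(self):
        self.modules: list[str] = []
        self.phase = Phase.CREATION

    def on_module_attached(self, module_id: str) -> None:
        # Each attach event updates the robot model kept by the MR view.
        self.modules.append(module_id)

    def finish_assembly(self) -> None:
        # The abstract stresses that MRI is available as soon as assembly ends.
        self.phase = Phase.TWEAKING

    def start_usage(self) -> None:
        self.phase = Phase.USAGE
```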
Award ID(s):
1637961
PAR ID:
10068526
Author(s) / Creator(s):
Date Published:
Journal Name:
ACM Symposium on User Interface Software & Technology (UIST)
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Ani-Bot is a modular robotics system that allows users to control their DIY robots using Mixed-Reality Interaction (MRI). The system takes advantage of MRI to let users visually program the robot through the augmented view of a Head-Mounted Display (HMD). In this paper, we first explain the design of the Mixed-Reality (MR) ready modular robotics system, which allows users to perform MRI as soon as they finish assembling the robot. Then, we elaborate on the augmentations provided by the MR system in the three primary phases of a construction kit's lifecycle: Creation, Tweaking, and Usage. Finally, we demonstrate Ani-Bot with four application examples and evaluate the system with a two-session user study. The results of our evaluation indicate that Ani-Bot successfully embeds MRI into the lifecycle (Creation, Tweaking, Usage) of DIY robotics and shows strong potential for delivering an enhanced user experience.
  2. The growth in remote and hybrid work has resulted in increased demand for collaborative videoconferencing experiences that offer a more seamless and immersive transition between virtual and physical environments. The Mixed Reality Passthrough Window (MRPW) addresses this demand by introducing a new paradigm for integrating augmented/mixed reality into laptop design. The design is characterized by two screens situated back to back, with two mounted cameras facing in opposite directions. This creates the effect of looking through a window, upon which virtual content can be augmented and overlaid. The configuration allows local users sitting around the laptop to more easily interact with remote users, who appear on both sides of the Mixed Reality Passthrough Window, giving the sense that all users share the same space in the round. Additionally, these features create affordances for the outward-facing screen to serve as a site for presentations (e.g., slide decks) and other sharable content. A minimal camera-to-display routing sketch appears after this list.
  3. We present a volumetric communication system designed for remote assistance with procedural tasks. The system allows a remote expert to visually guide a local operator. The two parties share a view that is spatially identical, but for the local operator it is of the physical object on which they operate, while for the remote expert the object is presented as a mixed-reality “hologram”. Guidance is provided by voice, gestures, and annotations performed directly on the object of interest or its hologram. At each end of the communication, the shared spatial content is visualized using mixed-reality glasses. A sketch of this shared-frame annotation scheme appears after this list.
  4. Though virtual reality (VR) has reached a certain level of maturity in recent years, the general public, especially the population of the blind and visually impaired (BVI), still cannot enjoy the benefits it provides. Current VR accessibility applications have been developed either on expensive head-mounted displays or with extra accessories and mechanisms, which are either not accessible or inconvenient for BVI individuals. In this paper, we present a mobile VR app that enables BVI users to access a virtual environment on an iPhone in order to build their skills of perception and recognition of the virtual environment and the virtual objects in it. The app uses the iPhone on a selfie stick to simulate a long cane in VR, and applies Augmented Reality (AR) techniques to track the iPhone's real-time poses in an empty space of the real world, which are then synchronized to the long cane in the VR environment. Due to this use of mixed reality (the integration of VR and AR), we call it the Mixed Reality cane (MR Cane), which provides BVI users with auditory and vibrotactile feedback whenever the virtual cane comes in contact with objects in VR. Thus, the MR Cane allows BVI individuals to interact with the virtual objects and identify approximate sizes and locations of the objects in the virtual environment. We performed preliminary user studies with blindfolded participants to investigate the effectiveness of the proposed mobile approach, and the results indicate that the MR Cane could be effective in helping BVI individuals understand interaction with virtual objects and explore 3D virtual environments. The MR Cane concept can be extended to new applications in navigation, training, and entertainment for BVI individuals without significant additional effort. A minimal contact-to-feedback sketch appears after this list.
  5. Virtual content instability caused by device pose tracking error remains a prevalent issue in markerless augmented reality (AR), especially on smartphones and tablets. However, when examining environments that will host AR experiences, it is challenging to determine where those instability artifacts will occur; we rarely have access to ground-truth pose to measure pose error, and even if pose error is available, traditional visualizations do not connect that data with the real environment, limiting their usefulness. To address these issues we present SiTAR (Situated Trajectory Analysis for Augmented Reality), the first situated trajectory analysis system for AR that incorporates estimates of pose tracking error. We start by developing the first uncertainty-based pose error estimation method for visual-inertial simultaneous localization and mapping (VI-SLAM), which allows us to obtain pose error estimates without ground truth; we achieve an average accuracy of up to 96.1% and an average F1 score of up to 0.77 in our evaluations on four VI-SLAM datasets. Next, we present our SiTAR system, implemented for ARCore devices, which combines a backend that supplies uncertainty-based pose error estimates with a frontend that generates situated trajectory visualizations. Finally, we evaluate the efficacy of SiTAR in realistic conditions by testing three visualization techniques in an in-the-wild study with 15 users and 13 diverse environments; this study reveals the impact that both environment scale and the surface properties present can have on user experience and task performance. A sketch of the covariance-based labeling idea appears after this list.
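For the Mixed Reality Passthrough Window (item 2), the core routing is that each camera feeds the display facing the opposite direction, with virtual content composited onto both feeds. The sketch below rests on that assumption; the compositing function and frame layout are illustrative, not the paper's implementation.

```python
# Illustrative passthrough routing for a dual-screen, dual-camera window
# (assumed design, not the paper's implementation).
import numpy as np


def composite(frame: np.ndarray, overlay: np.ndarray, alpha: np.ndarray) -> np.ndarray:
    """Alpha-blend virtual content onto a camera frame (all HxWx3 arrays)."""
    return (alpha * overlay + (1.0 - alpha) * frame).astype(frame.dtype)


def render_window(front_cam: np.ndarray, rear_cam: np.ndarray,
                  overlay: np.ndarray, alpha: np.ndarray):
    # Each screen shows the view "through" the laptop: the rear camera
    # drives the inward-facing screen and the front camera drives the
    # outward-facing one, so both sides see the same augmented space.
    inward = composite(rear_cam, overlay, alpha)
    outward = composite(front_cam, overlay, alpha)
    return inward, outward
```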
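For the volumetric remote-assistance system (item 3), the key property is that annotations live in the object's own coordinate frame, so each end renders them against its local view, whether that is the physical object or its hologram. The sketch below assumes that representation; the names are hypothetical.

```python
# Hedged sketch: annotations stored once in object coordinates, rendered per
# end with that end's own object-to-world transform (assumed design).
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

Point = Tuple[float, float, float]


@dataclass
class SharedScene:
    annotations: List[Point] = field(default_factory=list)

    def add_annotation(self, p_object: Point) -> None:
        # Stored in the object's frame and broadcast to both ends.
        self.annotations.append(p_object)

    def render_positions(self, object_to_world: Callable[[Point], Point]) -> List[Point]:
        # The local operator passes the tracked physical object's pose; the
        # remote expert passes the hologram's pose. Both therefore see the
        # annotations at spatially identical positions on the object.
        return [object_to_world(p) for p in self.annotations]
```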
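For MR Cane (item 4), the loop the abstract describes is: track the phone's pose with AR, mirror it onto the virtual cane, and trigger vibrotactile and auditory feedback when the cane tip touches a virtual object. Here is a minimal sketch of the contact-to-feedback mapping only; the `Contact` type and the saturation threshold are assumptions, not values from the paper.

```python
# Hypothetical contact-to-feedback mapping for a virtual long cane.
from dataclasses import dataclass
from typing import Optional, Tuple


@dataclass
class Contact:
    depth: float     # penetration depth of the cane tip, in metres
    material: str    # used to select a contact sound


def feedback_for_contact(hit: Optional[Contact]) -> Tuple[float, Optional[str]]:
    """Return (vibration strength in [0, 1], sound id or None).

    Deeper contact produces stronger vibration, matching the abstract's
    vibrotactile feedback; the 5 cm saturation point is an illustrative
    choice, not from the paper.
    """
    if hit is None:
        return 0.0, None
    strength = min(1.0, hit.depth / 0.05)
    return strength, f"tap_{hit.material}"
```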
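For SiTAR (item 5), the abstract describes estimating pose error from VI-SLAM uncertainty without ground truth. The paper's actual estimator is more involved; the sketch below only illustrates the general idea of binning poses by a covariance-derived uncertainty score so a frontend could color trajectory segments, with thresholds that are purely illustrative.

```python
# Illustrative only (not SiTAR's actual method): label each pose by the
# trace of its position covariance for a situated visualization frontend.
from typing import Iterable, List

import numpy as np


def classify_poses(position_covs: Iterable[np.ndarray],
                   low: float = 1e-4, high: float = 1e-3) -> List[str]:
    """position_covs: 3x3 position-covariance matrices from the VI-SLAM
    backend. The low/high thresholds are assumed values for this sketch."""
    labels = []
    for cov in position_covs:
        score = float(np.trace(cov))
        labels.append("low" if score < low
                      else "medium" if score < high
                      else "high")
    return labels
```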