

Title: Adaptive AR visual output security using reinforcement learning trained policies: demo abstract
Augmented reality (AR) technologies have seen significant improvement in recent years, with several consumer and commercial solutions being developed. New security challenges arise as AR becomes increasingly ubiquitous. Previous work has proposed techniques for securing the output of AR devices and used reinforcement learning (RL) to train security policies, which can be difficult to define manually. However, whether such systems and policies can be deployed on a physical AR device without degrading performance remained an open question. We develop a visual output security application using an RL-trained policy and deploy it on a Magic Leap One head-mounted AR device. The demonstration illustrates that RL-based visual output security systems are feasible.
Award ID(s):
1908051 1903136
NSF-PAR ID:
10195332
Author(s) / Creator(s):
; ;
Date Published:
Journal Name:
ACM Conference on Embedded Networked Sensor Systems
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
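The demo abstract above gives no implementation details, but the core idea of applying an RL-trained output-security policy once per rendered frame can be pictured roughly as follows. Everything in this sketch is an assumption for illustration: the feature vector, the action set, and the linear policy stand in for whatever the trained policy actually consumes and produces on the Magic Leap One.

```python
# Hypothetical sketch: applying an RL-trained output-security policy per frame.
# The feature set, action set, and weights are illustrative assumptions, not
# the authors' Magic Leap One implementation.
import numpy as np

ACTIONS = ["keep", "make_transparent", "shrink", "hide"]

def features(obj, hazard_overlap):
    """Per-object features the policy conditions on (assumed)."""
    return np.array([
        hazard_overlap,          # fraction of a safety-critical region covered
        obj["area"],             # normalized screen area of the virtual object
        obj["dist_from_gaze"],   # normalized distance from the gaze center
        1.0,                     # bias term
    ])

class LearnedPolicy:
    """Stand-in for a policy trained offline with RL and loaded on device."""
    def __init__(self, weights):
        self.W = weights         # shape: (num_actions, num_features)

    def act(self, x):
        return ACTIONS[int(np.argmax(self.W @ x))]

# Placeholder weights; a real system would load parameters produced by training.
policy = LearnedPolicy(np.random.default_rng(0).normal(size=(len(ACTIONS), 4)))

def enforce(frame_objects, hazard_overlaps):
    """Run the policy on every virtual object before it is rendered."""
    return {obj["id"]: policy.act(features(obj, overlap))
            for obj, overlap in zip(frame_objects, hazard_overlaps)}

objects = [
    {"id": "nav_arrow", "area": 0.05, "dist_from_gaze": 0.4},
    {"id": "large_ad",  "area": 0.60, "dist_from_gaze": 0.1},
]
print(enforce(objects, hazard_overlaps=[0.0, 0.8]))
```

In a deployed system the weights would come from offline training, and the chosen action would be applied by the rendering layer before the frame reaches the display.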
More Like this
  1. Augmented reality (AR) technologies, such as Microsoft’s HoloLens head-mounted display and AR-enabled car windshields, are rapidly emerging. AR applications provide users with immersive virtual experiences by capturing input from a user’s surroundings and overlaying virtual output on the user’s perception of the real world. These applications enable users to interact with and perceive virtual content in fundamentally new ways. However, the immersive nature of AR applications raises serious security and privacy concerns. Prior work has focused primarily on input privacy risks stemming from applications with unrestricted access to sensor data. However, the risks associated with malicious or buggy AR output remain largely unexplored. For example, an AR windshield application could intentionally or accidentally obscure oncoming vehicles or safety-critical output of other AR applications. In this work, we address the fundamental challenge of securing AR output in the face of malicious or buggy applications. We design, prototype, and evaluate Arya, an AR platform that controls application output according to policies specified in a constrained yet expressive policy framework. In doing so, we identify and overcome numerous challenges in securing AR output. 
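Arya's policy language is only characterized above as "constrained yet expressive." As one way to picture such a policy, the hypothetical sketch below encodes a single condition/action rule (dim virtual content that covers a detected pedestrian); the field names, the rule, and the dimming action are illustrative assumptions, not Arya's actual policy framework.

```python
# Hypothetical sketch of an output policy as a condition/action rule, loosely in
# the spirit of a constrained policy framework; fields and the rule itself are
# illustrative assumptions, not Arya's policy language.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class ArObject:
    name: str
    alpha: float          # 1.0 = fully opaque
    bbox: tuple           # (x, y, w, h) in normalized screen coordinates

def overlaps(a: tuple, b: tuple) -> bool:
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

@dataclass
class Policy:
    condition: Callable[[ArObject, Dict], bool]   # when does the policy fire?
    action: Callable[[ArObject], None]            # constrained output change

# Example rule: if a virtual object covers a detected pedestrian, make it
# nearly transparent instead of letting it block the user's view.
def covers_pedestrian(obj: ArObject, world: Dict) -> bool:
    return any(overlaps(obj.bbox, p) for p in world.get("pedestrian_boxes", []))

def dim(obj: ArObject) -> None:
    obj.alpha = min(obj.alpha, 0.2)

policies: List[Policy] = [Policy(covers_pedestrian, dim)]

def enforce(objects: List[ArObject], world: Dict) -> None:
    for obj in objects:
        for policy in policies:
            if policy.condition(obj, world):
                policy.action(obj)

scene = [ArObject("ad_banner", 1.0, (0.4, 0.4, 0.3, 0.3))]
enforce(scene, {"pedestrian_boxes": [(0.5, 0.5, 0.1, 0.2)]})
print(scene[0].alpha)   # 0.2 -> the banner was dimmed, not removed
```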
  2. Abstract

    Since the modern concepts of virtual and augmented reality were first introduced in the 1960s, the field has strived to develop technologies for an immersive user experience in a fully or partially virtual environment. Despite great progress in visual and auditory technologies, haptics has seen much slower technological advances. The challenge arises because skin has densely packed mechanoreceptors distributed over a very large area with complex topography; devising an apparatus as targeted as an audio speaker or television for the localized sensory input of an ear canal or iris is more difficult. Furthermore, the soft and sensitive nature of the skin makes it difficult to apply solid-state electronic solutions that can address large areas without causing discomfort. The maturing field of soft robotics offers potential solutions to this challenge. In this article, the definition and history of virtual reality (VR) and augmented reality (AR) are first reviewed. Then an overview of haptic output and input technologies is presented, opportunities for soft robotics are identified, and mechanisms of intrinsically soft actuators and sensors are introduced. Finally, soft haptic output and input devices are reviewed, with categorization by device form, and examples of soft haptic devices in VR/AR environments are presented.
  3. Koyejo, S.; Mohamed, S.; Agarwal, A.; Belgrave, D.; Cho, K.; Oh, A. (Eds.)
    Recursion is the fundamental paradigm to finitely describe potentially infinite objects. As state-of-the-art reinforcement learning (RL) algorithms cannot directly reason about recursion, they must rely on the practitioner's ingenuity in designing a suitable "flat" representation of the environment. The resulting manual feature constructions and approximations are cumbersome and error-prone; their lack of transparency hampers scalability. To overcome these challenges, we develop RL algorithms capable of computing optimal policies in environments described as a collection of Markov decision processes (MDPs) that can recursively invoke one another. Each constituent MDP is characterized by several entry and exit points that correspond to input and output values of these invocations. These recursive MDPs (or RMDPs) are expressively equivalent to probabilistic pushdown systems (with call-stack playing the role of the pushdown stack), and can model probabilistic programs with recursive procedural calls. We introduce Recursive Q-learning---a model-free RL algorithm for RMDPs---and prove that it converges for finite, single-exit and deterministic multi-exit RMDPs under mild assumptions. 
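To make the recursive structure concrete, the toy sketch below simulates one episode of a two-component RMDP with an explicit call stack: a box edge pushes a return location and jumps to the callee's entry, and reaching an exit pops the stack. The components, rewards, and uniform action choice are invented for illustration; this shows only the call-stack semantics, not the paper's Recursive Q-learning algorithm.

```python
# Toy sketch of recursive MDP (RMDP) call-stack semantics: box edges invoke
# another component MDP, and an explicit stack records where to resume on exit.
# The two components and their rewards are invented for illustration only.
import random

# Each component has an entry node, an exit node, and per-node outgoing edges.
# An edge is ("step", next_node, reward) or ("call", callee, return_node).
RMDP = {
    "main": {
        "entry": "m0",
        "exit": "m_exit",
        "edges": {
            "m0": [("call", "sub", "m1")],        # invoke component "sub"
            "m1": [("step", "m_exit", 1.0)],
        },
    },
    "sub": {
        "entry": "s0",
        "exit": "s_exit",
        "edges": {
            # Either recurse into "sub" again or exit immediately.
            "s0": [("call", "sub", "s1"), ("step", "s_exit", 0.0)],
            "s1": [("step", "s_exit", 0.5)],
        },
    },
}

def rollout(rmdp, root="main", max_steps=100, seed=0):
    rng = random.Random(seed)
    stack = []                                 # frames of (component, return_node)
    comp, node, total = root, rmdp[root]["entry"], 0.0
    for _ in range(max_steps):
        if node == rmdp[comp]["exit"]:
            if not stack:                      # root component finished
                return total
            comp, node = stack.pop()           # resume the caller
            continue
        kind, a, b = rng.choice(rmdp[comp]["edges"][node])
        if kind == "call":
            stack.append((comp, b))            # remember where to resume
            comp, node = a, rmdp[a]["entry"]   # jump to the callee's entry
        else:
            node, total = a, total + b
    return total

print(rollout(RMDP))
```

A flat Q-learner would have to operate on the unbounded stack-augmented state; the paper's contribution is an update that instead exploits the component structure directly.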
  4. Abstract Augmented reality (AR) devices, such as smart glasses, enable users to see both the real world and virtual images simultaneously, contributing to an immersive experience in interactions and visualization. Recently, to reduce the size and weight of smart glasses, waveguides incorporating holographic optical elements in the form of advanced grating structures have been utilized to provide lightweight alternatives to bulky helmet-type headsets. However, current waveguide displays often have limited display resolution, efficiency, and field-of-view, and require complex multi-step fabrication processes with lower yield. In addition, current AR displays often exhibit vergence-accommodation conflict in the augmented and virtual images, resulting in visual fatigue and eye strain. Here we report metasurface optical elements designed and experimentally implemented as a platform solution to overcome these limitations. Through careful dispersion control in the excited propagation and diffraction modes, we design and implement our high-resolution full-color prototype via a combination of analytical–numerical simulations, nanofabrication, and device measurements. With metasurface control of the light propagation, our prototype device achieves a 1080-pixel resolution, a field-of-view of more than 40°, and an overall input–output efficiency of more than 1%, and it addresses the vergence-accommodation conflict through our focal-free implementation. Furthermore, our AR waveguide is achieved in a single metasurface-waveguide layer, aiding scalability and process yield control.
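The "dispersion control" mentioned above works against the standard grating momentum-matching relation, reproduced below as textbook background (not the paper's design equations): an in-coupler of period \(\Lambda\) diffracts each wavelength \(\lambda\) to a different angle, and the diffracted order must exceed the critical angle to stay guided, which is why full-color operation in a single layer requires careful dispersion engineering.

```latex
% Grating in-coupling (order m) and the total-internal-reflection condition that
% keeps the diffracted beam guided inside a waveguide of index n_wg.
% Standard background relations, not the paper's specific metasurface design.
\[
  n_{\mathrm{wg}} \sin\theta_m
    = n_{\mathrm{air}} \sin\theta_{\mathrm{in}} + m\,\frac{\lambda}{\Lambda},
  \qquad
  \sin\theta_m > \frac{n_{\mathrm{air}}}{n_{\mathrm{wg}}}
\]
```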
  5. Current collaborative augmented reality (AR) systems establish a common localization coordinate frame among users by exchanging and comparing maps comprised of feature points. However, relative positioning through map sharing struggles in dynamic or feature-sparse environments. It also requires that users exchange identical regions of the map, which may not be possible if they are separated by walls or facing different directions. In this paper, we present Cappella (like its musical inspiration, Cappella utilizes collaboration among agents to forgo the need for instrumentation), an infrastructure-free 6-degrees-of-freedom (6DOF) positioning system for multi-user AR applications that uses motion estimates and range measurements between users to establish an accurate relative coordinate system. Cappella uses visual-inertial odometry (VIO) in conjunction with ultra-wideband (UWB) ranging radios to estimate the relative position of each device in an ad hoc manner. The system leverages a collaborative particle filtering formulation that operates on sporadic messages exchanged between nearby users. Unlike visual landmark sharing approaches, this allows for collaborative AR sessions even if users do not share the same field of view, or if the environment is too dynamic for feature matching to be reliable. We show that not only is it possible to perform collaborative positioning without infrastructure or global coordinates, but that our approach provides nearly the same level of accuracy as fixed-infrastructure approaches for AR teaming applications. Cappella consists of open-source UWB firmware and a reference mobile phone application that can display the location of team members in real time using mobile AR. We evaluate Cappella across multiple buildings under a wide variety of conditions, including a contiguous 30,000 square foot region spanning multiple floors, and find that it achieves a median 3D geometric error of less than 1 meter.
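As a rough illustration of the collaborative formulation, the sketch below maintains a particle filter over a peer's relative 2D position: particles are propagated with the displacement the peer reports from its VIO and reweighted by how well the predicted inter-device distance matches a UWB range measurement. The noise parameters and the 2D simplification are assumptions made here for brevity; Cappella itself estimates full 6DOF relative poses.

```python
# Rough sketch of range-aided collaborative positioning: a particle filter over a
# peer's relative 2D position, driven by the peer's VIO displacement and corrected
# by UWB range measurements. Noise levels and the 2D simplification are assumptions.
import numpy as np

rng = np.random.default_rng(1)
N = 2000
particles = rng.uniform(-10.0, 10.0, size=(N, 2))   # candidate peer positions (m)
weights = np.full(N, 1.0 / N)

VIO_SIGMA = 0.05     # motion-propagation noise per update (m), assumed
UWB_SIGMA = 0.30     # UWB range-measurement noise (m), assumed

def predict(particles, peer_vio_delta):
    """Move every particle by the displacement the peer reports from its VIO."""
    return particles + peer_vio_delta + rng.normal(0.0, VIO_SIGMA, particles.shape)

def update(particles, weights, uwb_range, my_position):
    """Reweight particles by agreement between predicted and measured range."""
    predicted = np.linalg.norm(particles - my_position, axis=1)
    weights = weights * np.exp(-0.5 * ((predicted - uwb_range) / UWB_SIGMA) ** 2)
    weights /= weights.sum()
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < N / 2:
        idx = rng.choice(N, size=N, p=weights)
        particles, weights = particles[idx], np.full(N, 1.0 / N)
    return particles, weights

# One simulated exchange: the peer walked 1 m along +x; our UWB radio measured 4.1 m.
particles = predict(particles, peer_vio_delta=np.array([1.0, 0.0]))
particles, weights = update(particles, weights, uwb_range=4.1,
                            my_position=np.array([0.0, 0.0]))
print("estimated peer position:", np.average(particles, weights=weights, axis=0))
```

A single range constrains the peer to a circle, so in practice repeated exchanges while both users move, plus the peers' own motion estimates, are what make the relative pose converge.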