

Search for: All records

Award ID contains: 1839971


  1. Augmented/virtual reality and video-based media play a vital role in the digital learning revolution for training novices in spatial tasks. However, creating content for these different media requires expertise in several fields. We present EditAR, a unified authoring and editing environment for creating AR, VR, and video content from a single demonstration. EditAR captures the user's interaction within an environment and creates a digital twin, enabling users without programming backgrounds to develop content. We conducted formative interviews with both subject and media experts to design the system, and the resulting prototype was developed and reviewed by experts. We also performed a user study comparing traditional video creation with 2D video creation from 3D recordings via a 3D editor that uses freehand interaction for in-headset editing. Users took one-fifth of the time to record instructions and preferred EditAR, giving it significantly higher usability scores.
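     The core idea of re-targeting one recorded demonstration to several media can be illustrated with a small sketch: interactions are stored as a timestamped event log on the digital twin and re-serialized per output medium. The event fields and export targets below are hypothetical stand-ins, not EditAR's actual data model.

```python
# Hypothetical sketch of a single-demonstration recording replayed as AR, VR,
# or 2D-video content; names are illustrative, not EditAR's API.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class InteractionEvent:
    timestamp: float                      # seconds since recording start
    object_id: str                        # entity in the captured digital twin
    pose: Tuple[float, float, float]      # simplified: object position in metres
    action: str                           # e.g. "grasp", "move", "release"

@dataclass
class Demonstration:
    events: List[InteractionEvent] = field(default_factory=list)

    def record(self, event: InteractionEvent) -> None:
        self.events.append(event)

    def export(self, target: str) -> List[dict]:
        # One recording, three outputs: the same event stream is re-serialized
        # per medium (AR overlay cues, VR replay, or 2D video keyframes).
        assert target in {"ar", "vr", "video"}
        return [{"t": e.timestamp, "obj": e.object_id,
                 "pose": e.pose, "action": e.action, "medium": target}
                for e in self.events]

demo = Demonstration()
demo.record(InteractionEvent(0.0, "wrench", (0.1, 0.9, 0.4), "grasp"))
demo.record(InteractionEvent(2.5, "wrench", (0.3, 1.0, 0.4), "move"))
print(demo.export("video")[0])
```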
  2. Over the past decade, augmented reality (AR) developers have explored a variety of approaches to allow users to interact with the information displayed on smart glasses and head-mounted displays (HMDs). Current interaction modalities such as mid-air gestures, voice commands, or hand-held controllers provide a limited range of interactions with the virtual content. These modalities can also be exhausting, uncomfortable, obtrusive, and socially awkward. There is a need for comfortable interaction techniques for smart glasses and HMDs that do not demand visual attention. This paper presents StretchAR, wearable straps that exploit touch and stretch as input modalities for interacting with the virtual content displayed on smart glasses. StretchAR straps are thin, lightweight, and can be attached to existing garments to enhance users' interactions in AR. The straps can withstand strains up to 190% while remaining sensitive to touch inputs, and they allow these inputs to be combined effectively to interact with content displayed through AR widgets, maps, menus, social media, and Internet of Things (IoT) devices. Furthermore, we conducted a user study with 15 participants to determine the implications of using StretchAR as an input modality at four different body locations (head, chest, forearm, and wrist). This study reveals that StretchAR can serve as an efficient and convenient input modality for smart glasses with 96% accuracy. Additionally, we provide a collection of 28 interactions enabled by the simultaneous touch-stretch capabilities of StretchAR. Finally, we provide guidelines for the design, fabrication, placement, and possible applications of StretchAR as an interaction modality for AR content displayed on smart glasses.
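     As a rough illustration of how touch and stretch might combine into a single input event, the sketch below maps a touch zone plus a discretized strain level to a widget command. The thresholds, zone names, and commands are assumptions for illustration, not StretchAR's firmware or its published interaction set.

```python
# Illustrative sketch of combining touch and stretch into one input event.
# Thresholds, zone names, and commands are assumptions, not StretchAR's design.
STRETCH_THRESHOLDS = [(0.10, "light"), (0.50, "medium"), (1.90, "strong")]  # strain ratios

def classify_stretch(strain: float) -> str:
    """Map a measured strain (0.0 = relaxed) to a discrete stretch level."""
    for limit, label in STRETCH_THRESHOLDS:
        if strain <= limit:
            return label
    return "strong"

# Each (touch zone, stretch level) pair selects one widget action.
COMMAND_MAP = {
    ("upper", "light"):  "scroll_menu",
    ("upper", "medium"): "select_item",
    ("lower", "light"):  "zoom_map",
    ("lower", "strong"): "dismiss_notification",
}

def handle_input(touch_zone: str, strain: float) -> str:
    level = classify_stretch(strain)
    return COMMAND_MAP.get((touch_zone, level), "no_op")

print(handle_input("upper", 0.35))   # -> "select_item"
print(handle_input("lower", 0.05))   # -> "zoom_map"
```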
  3. Human skeleton-based action recognition offers a valuable means to understand the intricacies of human behavior because it can handle the complex relationships between physical constraints and intention. Although several studies have focused on encoding the skeleton, less attention has been paid to embedding this information into latent representations of human action. We propose InfoGCN, a learning framework for action recognition that combines a novel learning objective and a novel encoding method. First, we design an information bottleneck-based learning objective to guide the model toward learning informative but compact latent representations. To provide discriminative information for classifying actions, we introduce attention-based graph convolution that captures the context-dependent intrinsic topology of human action. In addition, we present a multi-modal representation of the skeleton using the relative positions of joints, designed to provide complementary spatial information. InfoGCN (code available at github.com/stnoahl/infogcn) surpasses the known state of the art on multiple skeleton-based action recognition benchmarks, with accuracies of 93.0% on the NTU RGB+D 60 cross-subject split, 89.8% on the NTU RGB+D 120 cross-subject split, and 97.0% on NW-UCLA.
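     The multi-modal skeleton representation can be sketched in a few lines: subtract an anchor joint from every joint so the model sees relative, translation-invariant coordinates alongside the raw positions. The anchor choice and array layout below are illustrative, not InfoGCN's exact preprocessing.

```python
# Minimal sketch of a relative-position skeleton encoding, assuming a
# (frames, joints, 3) array; anchor choice and layout are illustrative.
import numpy as np

def relative_joint_positions(skeleton: np.ndarray, anchor: int = 0) -> np.ndarray:
    """Subtract an anchor joint (e.g. the spine) from every joint per frame,
    giving translation-invariant coordinates that complement raw positions."""
    return skeleton - skeleton[:, anchor:anchor + 1, :]

frames, joints = 64, 25                      # e.g. NTU RGB+D skeletons have 25 joints
skeleton = np.random.randn(frames, joints, 3).astype(np.float32)
rel = relative_joint_positions(skeleton, anchor=1)
multimodal = np.concatenate([skeleton, rel], axis=-1)   # (64, 25, 6)
print(multimodal.shape)
```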
  4. The US manufacturing industry is currently facing a welding workforce shortage, largely due to the inadequacy of widespread welding training. To address this challenge, we present a Virtual Reality (VR)-based training system aimed at transforming state-of-the-art welding simulations and in-person instruction into a widely accessible and engaging platform. We applied backward design principles to design a low-cost welding simulator in the form of modularized units, in active consultation with welding training experts. Using a minimum viable prototype, we conducted a user study with 24 novices to test the system's usability. Our findings show (1) greater effectiveness of the system in transferring skills to real-world environments compared to accessible video-based alternatives, and (2) that visuo-haptic guidance during virtual welding enhances performance and provides a realistic learning experience. With this solution, we expect inexperienced users to achieve competency faster and be better prepared to enter actual work environments.
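     A minimal sketch of how visuo-haptic guidance could be driven from tracked torch motion is shown below; the target values, gains, and vibration mapping are assumptions for illustration rather than the system's actual calibration.

```python
# Hedged sketch of guidance driven by torch tracking; targets and gains
# are assumptions, not the welding simulator's parameters.
def guidance_cue(travel_speed_mm_s: float, work_angle_deg: float,
                 target_speed: float = 4.0, target_angle: float = 15.0) -> dict:
    """Return a visual hint and a haptic intensity proportional to the error."""
    speed_err = travel_speed_mm_s - target_speed
    angle_err = work_angle_deg - target_angle
    haptic = min(1.0, 0.1 * abs(speed_err) + 0.02 * abs(angle_err))  # 0..1 motor duty
    hint = "slow down" if speed_err > 0.5 else "speed up" if speed_err < -0.5 else "hold"
    return {"visual_hint": hint, "haptic_intensity": round(haptic, 2)}

print(guidance_cue(travel_speed_mm_s=6.5, work_angle_deg=22.0))
# -> {'visual_hint': 'slow down', 'haptic_intensity': 0.39}
```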
  5. Augmented Reality (AR) experiences tightly associate virtual content with environmental entities. However, the dissimilarity of different environments limits how adaptively AR content can behave under large-scale deployment. We propose ScalAR, an integrated workflow that enables designers to author semantically adaptive AR experiences in Virtual Reality (VR). First, potential AR consumers collect local scenes with a semantic understanding technique. ScalAR then synthesizes numerous similar scenes. In VR, a designer authors the AR content's semantic associations and validates the design while immersed in the provided scenes. We adopt a decision-tree-based algorithm to fit the designer's demonstrations as a semantic adaptation model for deploying the authored AR experience in a physical scene. We further showcase two application scenarios authored with ScalAR and conduct a two-session user study in which the quantitative results demonstrate the accuracy of the AR content rendering and the qualitative results show the usability of ScalAR.
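     The demonstration-to-rule step can be pictured as fitting a small decision tree over per-surface scene features so that new rooms can be queried for valid placements. The feature set, labels, and thresholds below are illustrative assumptions, not ScalAR's actual schema.

```python
# Sketch of fitting a decision tree on scene features to predict whether a
# surface should receive the authored AR content; features are illustrative.
from sklearn.tree import DecisionTreeClassifier

# Per-surface features from the designer's demonstrations:
# [semantic class id, surface area (m^2), height above floor (m)]
X = [[0, 1.20, 0.75],   # table, used in demo
     [0, 0.30, 0.75],   # small table, skipped
     [1, 2.00, 0.00],   # floor, skipped
     [2, 0.60, 1.10],   # shelf, used in demo
     [2, 0.15, 1.80]]   # narrow shelf, skipped
y = [1, 0, 0, 1, 0]     # 1 = designer placed content here

model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Deploying in a new room: query each candidate surface.
new_surfaces = [[0, 0.95, 0.72], [1, 3.50, 0.00]]
print(model.predict(new_surfaces))   # e.g. [1 0]
```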
  6. Augmented reality (AR) is a unique, hands-on tool for delivering information. However, its educational value has so far been demonstrated mainly empirically. In this paper, we present a modeling approach to provide users with mastery of a skill, using AR learning content to implement an educational curriculum. We illustrate the potential of this approach by applying it to an important but pervasively misunderstood area of STEM learning, electrical circuitry. Unlike previous cognitive assessment models, we break the area down into microskills, the smallest segmentation of this knowledge, and define concrete learning outcomes for each. This model empowers the user to perform a variety of tasks that are conducive to acquiring the skill. We also provide a classification of microskills and describe how to design them in an AR environment. Our results demonstrate that aligning AR technology to specific learning objectives paves the way for high-quality assessment, teaching, and learning.
  7. Freehand gesture is an essential input modality for modern Augmented Reality (AR) user experiences. However, developing AR applications with customized hand interactions remains a challenge for end-users. We therefore propose GesturAR, an end-to-end authoring tool that enables users to create in-situ freehand AR applications through embodied demonstration and visual programming. During authoring, users can intuitively demonstrate the customized gesture inputs while referring to the spatial and temporal context. Based on a taxonomy of gestures in AR, we propose a hand interaction model that maps gesture inputs to the reactions of the AR content. Users can thus author comprehensive freehand applications using trigger-action visual programming and instantly experience the results in AR. Further, we demonstrate multiple application scenarios enabled by GesturAR, such as interactive virtual objects, robots, and avatars; room-level interactive AR spaces; and embodied AR presentations. Finally, we evaluate the performance and usability of GesturAR through a user study.
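     A trigger-action interaction model of this kind can be sketched as a lookup from (gesture, target) pairs to authored reactions; the class and rule names below are illustrative, not GesturAR's internal representation.

```python
# Minimal sketch of a trigger-action model: a recognized gesture on a target
# object fires an authored reaction. Names are illustrative only.
from dataclasses import dataclass
from typing import Callable, Dict, Tuple

@dataclass
class Rule:
    gesture: str          # demonstrated gesture label, e.g. "pinch"
    target: str           # AR content the gesture applies to
    reaction: Callable[[], str]

class InteractionModel:
    def __init__(self) -> None:
        self.rules: Dict[Tuple[str, str], Rule] = {}

    def author(self, rule: Rule) -> None:
        """Add a rule during in-situ authoring (demonstration + visual programming)."""
        self.rules[(rule.gesture, rule.target)] = rule

    def dispatch(self, gesture: str, target: str) -> str:
        rule = self.rules.get((gesture, target))
        return rule.reaction() if rule else "no reaction authored"

model = InteractionModel()
model.author(Rule("pinch", "virtual_lamp", lambda: "toggle lamp"))
model.author(Rule("swipe", "avatar", lambda: "wave back"))
print(model.dispatch("pinch", "virtual_lamp"))   # -> toggle lamp
```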
  8. Augmented reality (AR) is an efficient form of delivering spatial information and has great potential for training workers. However, AR is still not widely used for such scenarios due to the technical skills and expertise required to create interactive AR content. We developed ProcessAR, an AR-based system for developing 2D/3D content that captures subject matter experts' (SMEs) environment-object interactions in situ. The design space for ProcessAR was identified through formative interviews with AR programming experts and SMEs, alongside a comparative design study with SMEs and novice users. To enable smooth workflows, ProcessAR locates and identifies tools and objects in the workspace through computer vision when the author looks at them. We also explored additional features such as embedding 2D videos with detected objects and user-adaptive triggers. A final user evaluation comparing ProcessAR with a baseline AR authoring environment showed that, according to our qualitative questionnaire, users preferred ProcessAR.
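     The look-to-identify step can be approximated by intersecting the author's gaze point with object-detector output for the current headset view; the detection format and confidence threshold below are assumptions, not ProcessAR's actual pipeline.

```python
# Sketch of "identify the tool the author looks at", assuming a detector
# already returns labeled boxes for the headset view; interface is illustrative.
from typing import List, Optional, Tuple

Detection = Tuple[str, float, Tuple[int, int, int, int]]  # label, confidence, (x1, y1, x2, y2)

def object_under_gaze(detections: List[Detection],
                      gaze_xy: Tuple[int, int],
                      min_conf: float = 0.6) -> Optional[str]:
    """Return the highest-confidence detection whose box contains the gaze point."""
    gx, gy = gaze_xy
    hits = [(conf, label) for label, conf, (x1, y1, x2, y2) in detections
            if conf >= min_conf and x1 <= gx <= x2 and y1 <= gy <= y2]
    return max(hits)[1] if hits else None

frame = [("wrench", 0.91, (100, 200, 260, 320)),
         ("screwdriver", 0.74, (400, 180, 520, 300))]
print(object_under_gaze(frame, gaze_xy=(150, 250)))   # -> wrench
```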
  9. Current hand wearables have limited customizability; they fit an individual's hand loosely and lack comfort. The main barrier to customizing hand wearables is the geometric complexity and size variation of hands. Moreover, users may be looking for different functions: some may only want to detect the hand's motion or orientation, while others may be interested in tracking their vital signs. Current wearables usually bundle multiple functions and are designed for a universal user, with little or no customization. There are no specialized tools that facilitate the creation of customized hand wearables for varying hand sizes while providing different functionalities. We envision an emerging generation of customizable hand wearables that supports hand differences and promotes hand exploration with additional functionality. We introduce FabHandWear, a novel system that allows end-to-end design and fabrication of customized, functional, self-contained hand wearables. FabHandWear is designed to work with off-the-shelf electronics, with the ability to connect them automatically and generate a printable pattern for fabrication. We validate the system with illustrative applications, a durability test, and an empirical user evaluation. Overall, FabHandWear offers the freedom to create customized, functional, and manufacturable hand wearables.
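     The connect-and-print idea can be mocked up as placing pads in 2D, wiring named nets with straight traces, and emitting a printable SVG pattern; this is an assumption-level sketch, not FabHandWear's actual generator or routing logic.

```python
# Illustrative mock-up: place component pads on a 2D hand pattern, connect
# named nets with straight traces, and write an SVG for printing.
components = {                      # pad name -> (x, y) in millimetres on the pattern
    "mcu.vcc": (10, 10), "mcu.sig": (10, 14),
    "sensor.vcc": (42, 26), "sensor.out": (42, 30),
}
nets = [("mcu.vcc", "sensor.vcc"), ("mcu.sig", "sensor.out")]

def to_svg(components, nets, width=60, height=40):
    lines = [f'<svg xmlns="http://www.w3.org/2000/svg" width="{width}mm" height="{height}mm">']
    for a, b in nets:                            # straight-line traces between pads
        (x1, y1), (x2, y2) = components[a], components[b]
        lines.append(f'<line x1="{x1}" y1="{y1}" x2="{x2}" y2="{y2}" stroke="black"/>')
    for name, (x, y) in components.items():     # pads drawn as small circles
        lines.append(f'<circle cx="{x}" cy="{y}" r="1.5" fill="black"/>')
    lines.append("</svg>")
    return "\n".join(lines)

print(to_svg(components, nets))
```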