
Search for: All records

Award ID contains: 1839971


  1. Augmented reality (AR) is a unique, hands-on tool for delivering information. However, its educational value has so far been demonstrated mainly empirically. In this paper, we present a modeling approach that provides users with mastery of a skill, using AR learning content to implement an educational curriculum. We illustrate the potential of this approach by applying it to an important but pervasively misunderstood area of STEM learning: electrical circuitry. Unlike previous cognitive assessment models, we break the area down into microskills, the smallest segments of this knowledge, each with concrete learning outcomes. This model empowers the user to perform a variety of tasks that are conducive to acquiring the skill. We also provide a classification of microskills and describe how to design them in an AR environment. Our results demonstrate that aligning AR technology with specific learning objectives paves the way for high-quality assessment, teaching, and learning.
    Free, publicly-accessible full text available December 1, 2022
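    For illustration only, a minimal Python sketch (not from the paper) of how microskills and their learning outcomes might be modeled in code; the skill names, outcomes, and AR task descriptions below are invented examples, not the paper's taxonomy.

      # Hypothetical microskill model for electrical circuitry (illustrative only).
      from dataclasses import dataclass

      @dataclass
      class Microskill:
          name: str               # smallest teachable unit of the skill
          learning_outcome: str   # concrete, assessable outcome
          ar_task: str            # how the microskill is practiced in AR

      curriculum = [
          Microskill("identify_component",
                     "Name resistors, batteries, and LEDs from their symbols",
                     "tap the virtual component that matches a schematic symbol"),
          Microskill("trace_current_path",
                     "Trace a closed loop from the positive to the negative terminal",
                     "drag a virtual probe along the circuit until the loop closes"),
          Microskill("apply_series_rule",
                     "Predict brightness change when a resistor is added in series",
                     "place a virtual resistor and compare predicted vs. simulated brightness"),
      ]

      # A learner 'masters' the skill once every microskill's outcome is demonstrated.
      def mastered(completed: set) -> bool:
          return all(m.name in completed for m in curriculum)

      print(mastered({"identify_component"}))          # False
      print(mastered({m.name for m in curriculum}))    # True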
  2. Augmented reality (AR) is an efficient form of delivering spatial information and has great potential for training workers. However, AR is still not widely used for such scenarios due to the technical skills and expertise required to create interactive AR content. We developed ProcessAR, an AR-based system for developing 2D/3D content that captures subject matter experts' (SMEs') environment-object interactions in situ. The design space for ProcessAR was identified from formative interviews with AR programming experts and SMEs, alongside a comparative design study with SMEs and novice users. To enable smooth workflows, ProcessAR locates and identifies different tools/objects through computer vision within the workspace when the author looks at them. We also explored additional features such as embedding 2D videos with detected objects and user-adaptive triggers. A final user evaluation comparing ProcessAR with a baseline AR authoring environment showed that, according to our qualitative questionnaire, users preferred ProcessAR.
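    A hedged sketch of the gaze-triggered lookup described above, assuming tools occupy known 2D regions in the camera frame; the region table and containment test are illustrative stand-ins for ProcessAR's actual computer-vision pipeline.

      # (x_min, y_min, x_max, y_max) regions of tools in the current camera frame
      # (hypothetical values; a real system would detect these with a vision model).
      TOOL_REGIONS = {
          "torque_wrench": (100, 200, 260, 340),
          "multimeter":    (400, 120, 520, 300),
      }

      def identify_tool(gaze_xy):
          """Return the tool whose region contains the gaze point, if any."""
          gx, gy = gaze_xy
          for name, (x0, y0, x1, y1) in TOOL_REGIONS.items():
              if x0 <= gx <= x1 and y0 <= gy <= y1:
                  return name
          return None

      # When a tool is recognized, the author can attach 2D/3D content to it.
      attached_content = {}

      def attach_video(gaze_xy, video_uri):
          tool = identify_tool(gaze_xy)
          if tool is not None:
              attached_content.setdefault(tool, []).append(video_uri)
          return tool

      print(attach_video((450, 200), "clip_probe_usage.mp4"))   # multimeter
      print(attached_content)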
  3. Current hand wearables have limited customizability; they fit an individual's hand loosely and lack comfort. The main barrier to customizing hand wearables is the geometric complexity and size variation of hands. Moreover, users may be looking for different functions: some may only want to detect the hand's motion or orientation, while others may be interested in tracking their vital signs. Current wearables usually bundle multiple functions and are designed for a universal user, with little or no customization. There are no specialized tools that facilitate the creation of customized hand wearables for varying hand sizes and provide different functionalities. We envision an emerging generation of customizable hand wearables that supports hand differences and promotes hand exploration with additional functionality. We introduce FabHandWear, a novel system that allows end-to-end design and fabrication of customized, functional, self-contained hand wearables. FabHandWear is designed to work with off-the-shelf electronics, with the ability to connect them automatically and generate a printable pattern for fabrication. We validate our system with illustrative applications, a durability test, and an empirical user evaluation. Overall, FabHandWear offers the freedom to create customized, functional, and manufacturable hand wearables.
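    A minimal sketch, assuming a pattern defined by component anchor points on a reference hand that is uniformly scaled to a measured palm width; the measurement, anchor names, and scaling rule are hypothetical, not FabHandWear's actual pattern-generation or auto-routing algorithm.

      # Illustrative pattern scaling for a customized hand wearable.
      REFERENCE_PALM_WIDTH_MM = 85.0

      # Anchor points (x, y) in mm where off-the-shelf components sit on the reference hand.
      REFERENCE_ANCHORS = {
          "imu":          (40.0, 60.0),
          "pulse_sensor": (20.0, 15.0),
          "controller":   (55.0, 10.0),
      }

      def scale_pattern(palm_width_mm: float) -> dict:
          """Scale component anchors to the measured palm width."""
          s = palm_width_mm / REFERENCE_PALM_WIDTH_MM
          return {name: (x * s, y * s) for name, (x, y) in REFERENCE_ANCHORS.items()}

      print(scale_pattern(92.0))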
  4. Real-time communication and control are essential parts of a Cyber-Physical System (CPS), used to optimize performance and reliability. To gain a sustainable competitive advantage with Automation 5.0, as needed in Work-of-the-Future, this article addresses the concept of real-time communication and control in an agricultural work setting, along with a newly designed Cyber Collaborative Protocol, called CCP-RTC2. The developed protocol aims to minimize information delay and maximize JIN (Just-In-Need) information sharing, to enable collaborative decisions among system agents. Two experiments are conducted to compare the designed protocol's performance in an agricultural CPS against the current non-CPS practice. The results demonstrate that CCP-RTC2 is superior to current practice in terms of information sharing in a normal operation scenario. When the system receives an unplanned request, CCP-RTC2 can integrate the request into the original work plan while minimizing the system's objective function (lower is better). Hence, the system exhibits smaller information delays as well as more timely information sharing with the system agents that need it.
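    A toy sketch of how an information-delay objective could be compared between push-style notification (CCP-RTC2-like) and periodic polling (non-CPS practice); the delay model, objective function, and numbers are illustrative assumptions, not the article's formulation or results.

      # Each status update has an event time and a time by which an agent needs it.
      def information_delay(event_time, needed_by, notify_time):
          """Delay since the event plus lateness past the moment the agent needs it."""
          return (notify_time - event_time) + max(0.0, notify_time - needed_by)

      def objective(events, notify_times):
          # Lower is better, mirroring the 'minimize the objective function' idea above.
          return sum(information_delay(t, nb, nt)
                     for (t, nb), nt in zip(events, notify_times))

      # (event_time, needed_by) for three status updates, in minutes.
      events = [(0.0, 2.0), (5.0, 6.0), (9.0, 9.5)]

      push_notify = [t + 0.1 for t, _ in events]             # near-real-time push
      poll_notify = [((t // 5) + 1) * 5 for t, _ in events]  # 5-minute polling cycle

      print("push objective:", objective(events, push_notify))
      print("poll objective:", objective(events, poll_notify))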
  5. There is an increasing trend of Virtual Reality (VR) applications in education, entertainment, and industry. Many of them use real-world tools, environments, and interactions as the basis for creation. However, creating such applications is tedious, fragmented, and requires expertise in authoring VR with programming and 3D-modelling software. This hinders VR adoption by decoupling subject matter experts from the actual authoring process while increasing cost and time. We present VRFromX, an in-situ Do-It-Yourself (DIY) platform for content creation in VR that allows users to create interactive virtual experiences. Using our system, users can select region(s) of interest (ROI) in a scanned point cloud, or sketch in mid-air using a brush tool, to retrieve virtual models and then attach behavioral properties to them. We ran an exploratory study to evaluate the usability of VRFromX, and the results demonstrate the feasibility of the framework as an authoring tool. Finally, we implemented three possible use cases to showcase potential applications.
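    A minimal sketch of ROI selection in a scanned point cloud, assuming an axis-aligned box swept out by the user; the size-based retrieval heuristic and model catalog are purely illustrative, not VRFromX's matching method.

      import numpy as np

      rng = np.random.default_rng(0)
      scan = rng.uniform(0.0, 3.0, size=(10_000, 3))   # fake scanned point cloud (metres)

      def crop_roi(points, box_min, box_max):
          """Keep only the points inside the axis-aligned ROI box."""
          mask = np.all((points >= box_min) & (points <= box_max), axis=1)
          return points[mask]

      roi = crop_roi(scan, box_min=(1.0, 1.0, 0.0), box_max=(1.4, 1.4, 0.9))

      # Hypothetical retrieval: pick the virtual model whose bounding-box diagonal
      # is closest to the ROI's diagonal.
      catalog = {"valve": 0.35, "drill": 0.55, "cabinet": 1.6}
      diag = np.linalg.norm(roi.max(axis=0) - roi.min(axis=0)) if len(roi) else 0.0
      best = min(catalog, key=lambda k: abs(catalog[k] - diag))
      print(len(roi), "points in ROI, retrieved model:", best)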
  6. Modern manufacturing processes are in a state of flux as they adapt to increasing demand for flexible and self-configuring production. This poses challenges for training workers to rapidly master new machine operations and processes, i.e., machine tasks. Conventional in-person training is effective but requires expert time and effort for each worker trained, and it does not scale. Recorded tutorials, such as video-based or augmented reality (AR) tutorials, permit more efficient scaling. However, unlike in-person tutoring, existing recorded tutorials cannot adapt to workers' diverse experiences and learning behaviors. We present AdapTutAR, an adaptive task-tutoring system that enables experts to record machine task tutorials via embodied demonstration and trains learners with different AR tutoring content adapted to each user's characteristics. The adaptation is achieved by continually monitoring learners' tutorial-following status and adjusting the tutoring content on-the-fly and in-situ. The results of our user study demonstrate that our adaptive system is more effective and preferable than the non-adaptive one.
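    A toy sketch of such an adaptation loop, assuming tutorial-following status is reduced to per-step completion time and error count; the three content levels and the thresholds are invented for illustration, not AdapTutAR's actual policy.

      CONTENT_LEVELS = ["text_hint", "static_overlay", "embodied_demo_replay"]

      def choose_level(step_time_s, errors, expected_time_s):
          """Escalate the richness of AR content when the learner struggles."""
          if errors == 0 and step_time_s <= expected_time_s:
              return CONTENT_LEVELS[0]      # learner is on track: keep it light
          if errors <= 1 and step_time_s <= 2 * expected_time_s:
              return CONTENT_LEVELS[1]      # mild struggle: show a static overlay
          return CONTENT_LEVELS[2]          # clear struggle: replay the expert demo

      # Simulated tutorial-following status for three consecutive steps.
      for status in [(12.0, 0), (25.0, 1), (70.0, 2)]:
          print(choose_level(*status, expected_time_s=15.0))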
  7. With increasing automation, the 'human' element in industrial systems is gradually being reduced, often for the sake of standardization. Complete automation, however, might not be optimal in complex, uncertain environments because of the dynamic and unstructured nature of interactions. Leveraging human perception and cognition can prove fruitful in making automated systems robust and sustainable. "Human-in-the-loop" (HITL) systems incorporate meaningful human interactions into the workflow. Agricultural Robotic Systems (ARS), developed for the timely detection and prevention of diseases in agricultural crops, are an example of cyber-physical systems where HITL augmentation can provide improved detection capabilities and system performance. Humans can apply their domain knowledge and diagnostic skills to fill the knowledge gaps present in agricultural robotics and make it more resilient to variability. Owing to the multi-agent nature of ARS, HUB-CI, a collaborative platform for optimizing interactions between agents, is emulated to direct workflow logic. The challenge remains in designing and integrating human roles and tasks into the automated loop. This article explains the development of a HITL simulation for ARS by first realistically modeling human agents and then exploring two modes by which they can be integrated into the loop: Sequential and Shared Integration. System performance metrics such as cost, number of tasks, and classification accuracy are measured and compared across collaboration protocols. The results show statistically significant advantages of HUB-CI protocols over traditional protocols for each integration mode, and the competitive factors of both modes are discussed. Strengthening human modeling and expanding the range of human activities within the loop can help improve the practicality and accuracy of the simulation in replicating a HITL-ARS.
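    A simplified, speculative sketch of comparing collaboration protocols in such a simulation, assuming the robot screens every plant and a human reviews only the robot's uncertain detections; the cost and accuracy parameters, and the way the two integration modes differ here, are assumptions for illustration, not results or models from the article.

      def simulate(n_plants, robot_acc, human_acc, uncertain_rate,
                   robot_cost, human_cost):
          """Return (total_cost, tasks_performed, expected_accuracy)."""
          human_tasks = n_plants * uncertain_rate       # only uncertain cases escalate
          cost = n_plants * robot_cost + human_tasks * human_cost
          # Certain cases keep robot accuracy; uncertain ones get human accuracy.
          acc = (1 - uncertain_rate) * robot_acc + uncertain_rate * human_acc
          return cost, n_plants + human_tasks, acc

      # Hypothetical parameters: sequential integration escalates more cases to the
      # human than shared integration, at the same per-task costs.
      print("sequential:", simulate(1000, 0.85, 0.95, 0.30, 0.2, 1.5))
      print("shared:    ", simulate(1000, 0.85, 0.95, 0.15, 0.2, 1.5))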