Search for: All records

Creators/Authors contains: "LiKamWa, Robert"

Note: Clicking a Digital Object Identifier (DOI) link takes you to an external site maintained by the publisher. Some full-text articles may not be available free of charge during the publisher's embargo period.

Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.

  1. We present IMPS: Immersive Media Player System, a tightly synchronized 360° media player that leverages Android VR headsets to deliver immersive educational experiences. Designed for classroom deployment, IMPS allows instructors to manage synchronized playback for up to 50 headsets from a tablet interface. The system’s synchronization algorithm ensures lockstep playback across devices within 10 ms, addressing the audio and video desynchronization issues of previous systems. IMPS has been deployed by the Act One non-profit to deliver VR content to Title I schools in Arizona and is also used at Arizona State University for synchronized playback of 360° media in educational settings.
    Free, publicly-accessible full text available February 26, 2026
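    To make the 10 ms claim concrete, here is a minimal Python sketch of one way lockstep starts can be coordinated: an NTP-style clock-offset estimate against the controller, followed by a scheduled start. The message format, port, and spin-wait tolerance are illustrative assumptions, not the published IMPS protocol.

        import socket
        import struct
        import time

        SYNC_PORT = 9999  # hypothetical control port on the instructor tablet

        def estimate_offset(sock, server_addr, rounds=8):
            """NTP-style offset estimate: timestamp a ping, read back the
            controller's clock, and correct by half the round-trip time."""
            best = None
            for _ in range(rounds):
                t0 = time.monotonic()
                sock.sendto(b"ping", server_addr)
                data, _ = sock.recvfrom(64)
                t1 = time.monotonic()
                server_t = struct.unpack("!d", data)[0]  # controller clock, 8-byte float
                rtt = t1 - t0
                offset = server_t + rtt / 2 - t1
                if best is None or rtt < best[0]:
                    best = (rtt, offset)  # lowest-RTT sample bounds the error best
            return best[1]

        def wait_for_start(offset, start_at_server_time):
            """Sleep until just before the shared start instant, then spin
            for sub-millisecond alignment across headsets."""
            target = start_at_server_time - offset
            while time.monotonic() < target - 0.005:
                time.sleep(0.001)
            while time.monotonic() < target:
                pass

    Keeping only the lowest-round-trip sample is a standard way to hold the offset error well under a 10 ms budget on a local Wi-Fi network.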
  2. AutoCalNet enables continuous real-time calibration of mobile 3D cameras by decoupling calibration from content streaming. It leverages a scalable device-edge-cloud network to minimize bandwidth, manage latency, and maintain high precision in calibration data, prioritizing trackable regions and feature points that will facilitate spatiotemporal tracking. This approach provides a flexible, efficient solution for networked camera systems without being constrained by content-specific requirements. 
    Free, publicly-accessible full text available February 26, 2026
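    The "prioritize trackable regions and feature points" idea can be sketched as a ranking problem under a link budget. The Feature fields, scoring heuristic, and byte budget below are illustrative assumptions, not AutoCalNet's actual design.

        from dataclasses import dataclass

        @dataclass
        class Feature:
            x: float
            y: float
            corner_strength: float  # e.g., Shi-Tomasi response
            track_age: int          # frames this feature has survived

        def trackability(f: Feature) -> float:
            """Favor strong corners that have already been tracked for a while."""
            return f.corner_strength * (1.0 + 0.1 * f.track_age)

        def select_for_upload(features, byte_budget, bytes_per_feature=16):
            """Send only the most trackable features to the edge/cloud
            calibration service, staying within the per-frame link budget."""
            ranked = sorted(features, key=trackability, reverse=True)
            return ranked[: byte_budget // bytes_per_feature]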
  3. Free, publicly-accessible full text available February 26, 2026
  4. Smart IoT speakers, while connected over a network, currently only produce sounds that come directly from the individual devices. We envision a future where smart speakers collaboratively produce a fabric of spatial audio, capable of perceptually placing sound at a range of locations in physical space. This could provide audio cues in homes, offices, and public spaces that are flexibly linked to various positions. The perception of spatialized audio relies on binaural cues, especially the time difference and level difference of incident sound at a user’s left and right ears. Traditional stereo speakers cannot create this spatialization for a user when playing binaural audio due to auditory crosstalk, as each ear hears a combination of both speaker outputs. We present Xblock, a novel time-domain pose-adaptive crosstalk cancellation technique that creates a spatial audio perception over a pair of speakers using knowledge of the user’s head pose and the speaker positions. We build a prototype smart speaker IoT system empowered by Xblock, explore its effectiveness through signal analysis, and discuss planned perceptual user studies and future work.
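    For intuition, crosstalk cancellation can be viewed as inverting a 2×2 acoustic transfer matrix so each binaural channel reaches only its intended ear. Xblock itself works in the time domain and adapts to head pose; the sketch below instead inverts per-frequency-bin matrices under simplifying free-field delay and 1/r gain assumptions.

        import numpy as np

        def path(speaker_pos, ear_pos, c=343.0):
            """Free-field delay (s) and 1/r gain from a speaker to an ear."""
            d = np.linalg.norm(np.asarray(speaker_pos) - np.asarray(ear_pos))
            return d / c, 1.0 / max(d, 1e-3)

        def cancellation_filters(speakers, ears, freqs):
            """Per frequency bin, invert H[e, s] = gain * exp(-j*2*pi*f*delay)
            so the left channel reaches only the left ear, and likewise right."""
            H = np.zeros((len(freqs), 2, 2), dtype=complex)
            for e, ear in enumerate(ears):
                for s, spk in enumerate(speakers):
                    delay, gain = path(spk, ear)
                    H[:, e, s] = gain * np.exp(-2j * np.pi * freqs * delay)
            # Pseudo-inverse regularizes bins where the matrix is ill-conditioned.
            return np.stack([np.linalg.pinv(H[k]) for k in range(len(freqs))])

    Applying these filters to the binaural signal's spectrum before playback pre-cancels the cross paths; a pose tracker would recompute them as the head moves.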
  5. Researchers, educators, and multimedia designers need to better understand how mixing physical tangible objects with virtual experiences affects learning and science identity. In this novel study, a 3D-printed tangible that is an accurate facsimile of the sort of expensive glassware that chemists use in real laboratories was tethered to a laptop with a digitized lesson. As interactive educational content is increasingly placed online, it is important to understand the educational boundary conditions associated with passive haptics and 3D-printed manipulables. Cost-effective printed objects would be particularly welcome in rural and low socioeconomic status (SES) classrooms. A Mixed Reality (MR) experience was created that used a physical 3D-printed haptic burette to control a computer-based chemistry titration experiment. This randomized controlled trial with 136 college students had two conditions: 1) low-embodied control (using keyboard arrows), and 2) high-embodied experimental (physically turning a valve/stopcock on the 3D-printed burette). Although both groups displayed similar significant gains on the declarative knowledge test, deeper analyses revealed nuanced Aptitude by Treatment Interactions (ATIs). These interactions favored the high-embodied experimental group that used the MR device, both for titration-specific posttest knowledge questions and for science efficacy and science identity. Students with higher prior science knowledge displayed higher titration knowledge scores after using the experimental 3D-printed haptic device. A multi-modal linguistic and gesture analysis revealed that during recall the experimental participants used the stopcock-turning gesture significantly more often, and their recalls produced a significantly different Epistemic Network Analysis (ENA). ENA is a type of 2D projection of the recall data; stronger connections were seen in the high-embodied group, mainly centering on the key hand-turning gesture. Instructors and designers should consider the multi-modal and multi-dimensional nature of the user interface, and how the addition of another sensory-based learning signal (haptics) might differentially affect lower prior knowledge students. One hypothesis is that haptically manipulating novel devices during learning may create more cognitive load. For low prior knowledge students, it may be advantageous to begin learning content on a more ubiquitous interface (e.g., keyboard) before moving to more novel, multi-modal MR devices/interfaces.
  6. Energy-efficient visual sensing is of paramount importance for battery-backed, low-power IoT and mobile applications. Unfortunately, modern image sensors still consume hundreds of milliwatts, mainly due to analog readout: current systems always supply a fixed voltage to the sensor’s analog circuitry, leading to high power profiles. In this work, we propose to aggressively scale the analog voltage supplied to the camera as a means to significantly reduce sensor power consumption. To that end, we characterize the power and fidelity implications of analog voltage scaling on three off-the-shelf image sensors. Our characterization reveals that analog voltage scaling reduces sensor power but also degrades image quality, and the degradation situationally affects the task accuracy of vision applications. We develop a visual streaming pipeline that allows application developers to dynamically adapt sensor voltage on a frame-by-frame basis, along with a voltage controller that programmatically generates the desired sensor voltage based on application requests. We integrate the voltage controller into an existing RPi-based video streaming IoT pipeline and add runtime support for flexible voltage specification by vision applications. Evaluating the system over a wide range of voltage scaling policies on popular vision tasks reveals that Squint imaging can deliver up to 73% sensor power savings while maintaining reasonable task fidelity. Our artifacts are available at: https://gitlab.com/squint1/squint-ae-public
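    On top of such a pipeline, a per-frame voltage policy might look like the following sketch; the voltage bounds, step size, and confidence target are assumptions for illustration, not values from the paper.

        class VoltagePolicy:
            """Frame-by-frame analog-voltage policy: scale down while the
            vision task stays accurate, back off when accuracy degrades."""

            def __init__(self, v_min=2.5, v_max=3.3, step=0.05):
                self.v = v_max
                self.v_min, self.v_max, self.step = v_min, v_max, step

            def next_voltage(self, task_confidence, target=0.8):
                if task_confidence > target:
                    self.v = max(self.v_min, self.v - self.step)  # save power
                else:
                    self.v = min(self.v_max, self.v + self.step)  # restore fidelity
                return self.v

        # Per frame: request this voltage from the controller before the
        # next exposure.
        policy = VoltagePolicy()
        print(policy.next_voltage(0.9))  # 3.25 V: task is confident, step down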
  7. What we feel when handling liquids in vessels produces unmistakably fluid tactile sensations, stimulating essential perceptions in home, laboratory, and industrial contexts. Feeling fluid interactions from virtual fluids would similarly enrich experiences in virtual reality. We introduce Geppetteau, a novel string-driven weight-shifting mechanism capable of providing perceivable tactile sensations of handling virtual liquids within a variety of vessel shapes, widening the range of augmentable shapes beyond existing state-of-the-art mechanical systems. In this work, Geppetteau is integrated into conical, spherical, cylindrical, and cuboid vessels; variations of these shapes are commonly used as fluid containers in day-to-day life. We studied the effectiveness of Geppetteau in simulating fine- and coarse-grained tactile sensations of virtual liquids across three user studies. Participants found Geppetteau successful in providing congruent physical sensations of handling virtual liquids across a variety of physical vessel shapes and virtual liquid volumes and viscosities.
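    One way to picture the control loop: estimate where the virtual liquid's mass currently sits, then pay out the strings to move the physical weight there. The anchor layout and center-of-mass heuristic below are toy assumptions, not Geppetteau's actual mechanism model.

        import numpy as np

        # Hypothetical tie-off points of three strings inside the vessel (meters).
        ANCHORS = np.array([[0.0, 0.0, 1.0],
                            [1.0, 0.0, 1.0],
                            [0.5, 1.0, 1.0]])

        def liquid_com(fill_fraction, tilt_rad, vessel_height=1.0):
            """Crude estimate: liquid pools toward the low side as the vessel
            tilts, and sits lower when the vessel is less full."""
            z = fill_fraction * vessel_height / 2.0
            lateral = 0.5 + 0.4 * np.sin(tilt_rad) * (1.0 - fill_fraction)
            return np.array([lateral, 0.5, z])

        def string_lengths(weight_target):
            """Each winch pays out the straight-line distance from its anchor
            to the commanded weight position."""
            return np.linalg.norm(ANCHORS - weight_target, axis=1)

        # Per frame: read tilt from the tracker, update the weight position.
        lengths = string_lengths(liquid_com(fill_fraction=0.4, tilt_rad=0.3))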
  8. Augmented Reality (AR) is becoming more readily available as AR-capable smartphones and tablets grow in popularity. With this rapid growth, augmented reality offers an opportunity to facilitate online chemistry education. To that end, we developed a boiling-water experiment that demonstrates the effects of heat capacity and creates an interactive lab experiment for online learning. Our work-in-progress paper explores how augmented reality can improve the learning process and better exhibit chemistry lab concepts.
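    The chemistry such a lesson visualizes reduces to the heat-capacity relation Q = mcΔT; a small worked example with illustrative numbers (not values from the study):

        # Q = m * c * dT: energy to heat a sample; divide by heater power for time.
        def heating_time_s(mass_kg, c_j_per_kg_k, delta_t_k, power_w):
            """Seconds for an ideal heater to raise the sample by delta_t_k."""
            return mass_kg * c_j_per_kg_k * delta_t_k / power_w

        # 0.5 kg of water (c = 4186 J/(kg·K)) from 20 °C to boiling at 100 °C
        # on an ideal 1 kW burner:
        print(heating_time_s(0.5, 4186, 80, 1000))  # ~167 s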
  9. High spatiotemporal resolution can offer high precision for vision applications, which is particularly useful for capturing the nuances of visual features, such as for augmented reality. Unfortunately, capturing and processing high-spatiotemporal-resolution visual frames generates energy-expensive memory traffic. Low-resolution frames, on the other hand, reduce pixel memory throughput, but also reduce the opportunities for high-precision visual sensing. However, our intuition is that not all parts of the scene need to be captured at a uniform resolution. Selectively and opportunistically reducing resolution for different regions of image frames can yield high-precision visual computing at energy-efficient memory data rates. To this end, we develop a visual sensing pipeline architecture that allows application developers to dynamically adapt the spatial resolution and update rate of different “rhythmic pixel regions” in the scene. Our system ingests pixel streams from commercial image sensors with their standard raster-scan pixel read-out patterns, but encodes only the relevant pixels prior to storing them in memory. We also present streaming hardware that decodes the stored rhythmic pixel region stream into traditional frame-based representations for standard computer vision algorithms, and we integrate the encoding and decoding hardware modules into existing video pipelines. On top of this, we develop runtime support allowing developers to flexibly specify region labels. Evaluating our system on a Xilinx FPGA platform over three vision workloads shows a 43–64% reduction in interface traffic and memory footprint, while providing controllable task accuracy.
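    The encoding stage can be sketched as filtering a raster scan against per-region spatial strides and temporal update intervals. The Region descriptor below is a simplified assumption for illustration, not the paper's hardware interface.

        from dataclasses import dataclass

        @dataclass
        class Region:
            x0: int
            y0: int
            x1: int
            y1: int
            stride: int          # spatial subsampling (1 = full resolution)
            frame_interval: int  # temporal rate (1 = refresh every frame)

        def encode_frame(frame, regions, frame_idx):
            """Emit (x, y, value) only for pixels covered by a region that is
            active this frame, mirroring the encode-before-memory stage."""
            out = []
            for r in regions:
                if frame_idx % r.frame_interval != 0:
                    continue  # region not refreshed on this frame
                for y in range(r.y0, r.y1, r.stride):
                    for x in range(r.x0, r.x1, r.stride):
                        out.append((x, y, frame[y][x]))
            return out

    A downstream decoder would scatter these sparse (x, y, value) tuples back into dense frames, holding stale pixels from earlier refreshes, before handing them to standard vision algorithms.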