Search results: All records where Award ID contains 1942844

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).


  1. We present IMPS (Immersive Media Player System), a tightly synchronized 360° media player that leverages Android VR headsets to deliver immersive educational experiences. Designed for classroom deployment, IMPS lets instructors manage synchronized playback for up to 50 headsets from a tablet interface. The system's synchronization algorithm keeps playback across devices in lockstep to within 10 ms, addressing the audio and video desynchronization issues of previous systems. IMPS has been deployed by the Act One non-profit to deliver VR content to Title I schools in Arizona and is also used at Arizona State University for synchronized playback of 360° media in educational settings.
    Free, publicly-accessible full text available February 26, 2026
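The abstract does not detail the synchronization algorithm. A minimal sketch of one standard building block such a scheme could use — Cristian/NTP-style clock-offset estimation followed by a scheduled start time on each headset — is shown below. All function names and values are illustrative assumptions, not the actual IMPS implementation:

```python
def estimate_offset(t_send, t_server, t_recv):
    """Estimate the clock offset between a headset and the controller
    (Cristian/NTP-style), assuming roughly symmetric network delay.

    t_send   -- headset clock when the request left
    t_server -- controller clock when it replied
    t_recv   -- headset clock when the reply arrived
    """
    rtt = t_recv - t_send
    # The controller's timestamp corresponds approximately to the
    # midpoint of the round trip on the headset's clock.
    return t_server - (t_send + rtt / 2)


def scheduled_start(server_start, offset):
    """Map a controller-announced start time into the local clock,
    so all headsets begin playback at the same physical instant."""
    return server_start - offset
```

With a sub-millisecond round trip on a classroom LAN, the midpoint approximation error stays well inside a 10 ms lockstep budget; repeating the exchange and taking the sample with the smallest RTT tightens it further.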
  2. Free, publicly-accessible full text available February 26, 2026
  3. AutoCalNet enables continuous real-time calibration of mobile 3D cameras by decoupling calibration from content streaming. It leverages a scalable device-edge-cloud network to minimize bandwidth, manage latency, and maintain high precision in calibration data, prioritizing trackable regions and feature points that will facilitate spatiotemporal tracking. This approach provides a flexible, efficient solution for networked camera systems without being constrained by content-specific requirements. 
    Free, publicly-accessible full text available February 26, 2026
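As a rough illustration of the "prioritize trackable regions and feature points under a bandwidth budget" idea described above — the scoring, record size, and API here are assumptions for the sketch, not AutoCalNet's actual design:

```python
def select_points(points, budget_bytes, bytes_per_point=16):
    """Pick the most trackable feature points that fit a bandwidth budget.

    points          -- list of (point_id, trackability_score) pairs
    budget_bytes    -- per-update calibration bandwidth budget
    bytes_per_point -- assumed wire size of one encoded feature point
    """
    max_points = budget_bytes // bytes_per_point
    # Rank by trackability so the points most useful for
    # spatiotemporal tracking are transmitted first.
    ranked = sorted(points, key=lambda p: p[1], reverse=True)
    return [pid for pid, _ in ranked[:max_points]]
```

For example, with a 32-byte budget and three candidate points, only the two highest-scoring points would be sent in that calibration update.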
  4.
    High spatiotemporal resolution offers high precision for vision applications, which is particularly useful for capturing the nuances of visual features, such as in augmented reality. Unfortunately, capturing and processing high-spatiotemporal-resolution frames generates energy-expensive memory traffic. Low-resolution frames, on the other hand, reduce pixel memory throughput, but also reduce the opportunities for high-precision visual sensing. Our intuition, however, is that not all parts of the scene need to be captured at a uniform resolution. Selectively and opportunistically reducing resolution for different regions of image frames can yield high-precision visual computing at energy-efficient memory data rates. To this end, we develop a visual sensing pipeline architecture that allows application developers to dynamically adapt the spatial resolution and update rate of different “rhythmic pixel regions” in the scene. We develop a system that ingests pixel streams from commercial image sensors with their standard raster-scan pixel read-out patterns, but encodes only the relevant pixels prior to storing them in memory. We also present streaming hardware to decode the stored rhythmic pixel region stream into traditional frame-based representations that feed into standard computer vision algorithms. We integrate our encoding and decoding hardware modules into existing video pipelines and, on top of this, develop runtime support that allows developers to flexibly specify the region labels. Evaluating our system on a Xilinx FPGA platform over three vision workloads shows a 43–64% reduction in interface traffic and memory footprint while providing controllable task accuracy.
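To make the rhythmic-pixel-region idea concrete, here is a software sketch of per-region subsampled encoding and frame-based decoding. The paper describes hardware modules; this region format (bounding box plus a spatial stride) and the sparse (x, y, value) stream are simplifying assumptions for illustration:

```python
def encode_regions(frame, regions):
    """Encode only the pixels inside labeled regions, subsampled per region.

    frame   -- 2D list of pixel values, indexed as frame[y][x]
    regions -- list of dicts with inclusive bounds x0, y0, x1, y1 and a
               'stride' giving that region's spatial subsampling factor
    Returns a sparse stream of (x, y, value) records.
    """
    stream = []
    for r in regions:
        for y in range(r["y0"], r["y1"] + 1, r["stride"]):
            for x in range(r["x0"], r["x1"] + 1, r["stride"]):
                stream.append((x, y, frame[y][x]))
    return stream


def decode_to_frame(stream, width, height, fill=0):
    """Rebuild a dense frame for standard vision algorithms,
    leaving unencoded pixels at the `fill` value."""
    frame = [[fill] * width for _ in range(height)]
    for x, y, v in stream:
        frame[y][x] = v
    return frame
```

A coarse region with stride 2 stores only a quarter of its pixels, which mirrors how selectively lowering resolution per region cuts memory traffic while high-priority regions keep full detail.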