Title: Underwater Dome-Port Camera Calibration: Modeling of Refraction and Offset through N-Sphere Camera Model
Award ID(s):
2024541 1919647 2333604
PAR ID:
10546812
Author(s) / Creator(s):
Publisher / Repository:
IEEE
Date Published:
ISBN:
979-8-3503-8457-4
Page Range / eLocation ID:
6110 to 6117
Format(s):
Medium: X
Location:
Yokohama, Japan
Sponsoring Org:
National Science Foundation
More Like this
  1.
    Cameras are deployed at scale with the purpose of searching for and tracking objects of interest (e.g., a suspected person) through the camera network on live videos. Such cross-camera analytics is data- and compute-intensive, and its costs grow with the number of cameras and with time. We present Spatula, a cost-efficient system that enables scaling cross-camera analytics on edge compute boxes to large camera networks by leveraging the spatial and temporal cross-camera correlations. While such correlations have been used in the computer vision community, Spatula uses them to drastically reduce communication and computation costs by pruning the search space of a query identity (e.g., ignoring frames not correlated with the query identity's current position). Spatula provides the first system substrate on which cross-camera analytics applications can be built to efficiently harness the cross-camera correlations that are abundant in large camera deployments. Spatula reduces compute load by 8.3× on an 8-camera dataset, and by 23×–86× on two datasets with hundreds of cameras (simulated from real vehicle/pedestrian traces). We have also implemented Spatula on a testbed of 5 AWS DeepLens cameras.
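    The pruning idea described above can be sketched as follows. This is a minimal illustration, not Spatula's implementation: the transition table, function names, and toy traces are all hypothetical, and the abstract's actual correlations are both spatial and temporal.

    ```python
    # Hypothetical sketch of correlation-based search-space pruning: from
    # historical traces, record which cameras an identity has moved between,
    # then search only the cameras correlated with its current position.
    from collections import defaultdict

    def build_correlations(traces):
        """Collect camera-to-camera transitions from (camera, next_camera) pairs."""
        corr = defaultdict(set)
        for cam, next_cam in traces:
            corr[cam].add(next_cam)
        return corr

    def cameras_to_search(corr, current_cam, all_cams):
        """Prune the search space to correlated cameras (plus the current one)."""
        candidates = corr.get(current_cam, set()) | {current_cam}
        return sorted(candidates & set(all_cams))

    traces = [("A", "B"), ("A", "C"), ("B", "D"), ("C", "A")]
    corr = build_correlations(traces)
    # Instead of querying all 5 cameras, only the correlated subset is searched.
    print(cameras_to_search(corr, "A", ["A", "B", "C", "D", "E"]))
    ```

    The cost saving comes from the size of the pruned set relative to the full deployment: frames from uncorrelated cameras are never decoded or run through the re-identification model.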
  2. Multi-camera systems are essential in movies, live broadcasts, and other media. The selection of the appropriate camera at every moment has a decisive impact on production quality and audience preference. Learning-based multi-camera view recommendation frameworks have been explored to assist professionals in decision making. This work explores how two standard cinematography practices can be incorporated into the learning pipeline: (1) not staying on the same camera for too long and (2) introducing a scene from a wider shot and gradually progressing to narrower ones. To this end, we incorporate (1) the duration of the displaying camera and (2) the camera identity as temporal and camera embeddings in a transformer architecture, thereby implicitly guiding the model to learn the two practices from professionally labeled data. Experiments show that the proposed framework outperforms the baseline by 14.68% in six-way classification accuracy. Ablation studies on different approaches to embedding the temporal and camera information further verify the efficacy of the framework.
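    A minimal sketch of the embedding idea, under assumptions not stated in the abstract (dimensions, bucketing, and additive combination are all illustrative): a learned vector for the camera identity and one for how long that camera has been on screen are added to each frame feature before it enters the transformer.

    ```python
    # Illustrative only: duration (temporal) and camera-identity embeddings
    # added to a frame feature, as the abstract's pipeline describes at a
    # high level. Table sizes and the additive scheme are assumptions.
    import random

    EMB_DIM = 8
    random.seed(0)

    def make_table(n):
        """A stand-in for a learned embedding table: n random EMB_DIM vectors."""
        return [[random.gauss(0, 1) for _ in range(EMB_DIM)] for _ in range(n)]

    camera_emb = make_table(6)     # one vector per camera (six-way selection)
    duration_emb = make_table(32)  # one vector per on-screen-duration bucket

    def embed(frame_feat, camera_id, frames_on_camera):
        bucket = min(frames_on_camera, 31)  # clamp long stays to the last bucket
        return [f + c + d for f, c, d in
                zip(frame_feat, camera_emb[camera_id], duration_emb[bucket])]

    token = embed([0.0] * EMB_DIM, camera_id=2, frames_on_camera=40)
    print(len(token))  # the token keeps the model dimension
    ```

    Feeding the duration signal in this way lets the model penalize lingering on one camera without any hand-written rule.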
  3. In this work, we tackle the problem of active camera localization, which controls the camera movements actively to achieve an accurate camera pose. Past solutions are mostly based on Markov Localization, which reduces the position-wise camera uncertainty for localization. These approaches localize the camera in a discrete pose space and are agnostic to the localization-driven scene property, which restricts the camera pose accuracy to a coarse scale. We propose to overcome these limitations via a novel active camera localization algorithm composed of a passive and an active localization module. The former optimizes the camera pose in the continuous pose space by establishing point-wise camera-world correspondences. The latter explicitly models the scene and camera uncertainty components to plan the right path for accurate camera pose estimation. We validate our algorithm on challenging localization scenarios from both synthetic and scanned real-world indoor scenes. Experimental results demonstrate that our algorithm outperforms both the state-of-the-art Markov Localization based approach and other compared approaches on fine-scale camera pose accuracy.
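    The "passive" half of the idea, continuous pose refinement from point-wise camera-world correspondences, can be reduced to a toy case. This 2-D, translation-only least-squares example is purely illustrative and is not the paper's formulation:

    ```python
    # Toy sketch: refine a continuous camera position from point-wise
    # correspondences. For pure translation, the least-squares offset t
    # minimizing sum ||w - (o + t)||^2 is the mean per-point difference.

    def refine_translation(world_pts, observed_pts):
        """Closed-form least-squares translation between matched 2-D points."""
        n = len(world_pts)
        tx = sum(w[0] - o[0] for w, o in zip(world_pts, observed_pts)) / n
        ty = sum(w[1] - o[1] for w, o in zip(world_pts, observed_pts)) / n
        return (tx, ty)

    world = [(1.0, 2.0), (3.0, 4.0), (5.0, 0.0)]
    obs = [(0.0, 1.0), (2.0, 3.0), (4.0, -1.0)]  # every point shifted by (-1, -1)
    print(refine_translation(world, obs))  # (1.0, 1.0)
    ```

    The contrast with Markov Localization is that the answer lives in a continuous space: nothing restricts the recovered pose to a grid of discrete cells.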