Title: Method for minimizing lens breathing with one moving group
Lens breathing in movie cameras is the change in the overall content of a scene as subjects located at different depths are brought into focus. This paper presents a method for minimizing lens breathing, i.e., the change in angular field of view, while maintaining perspective by moving only one lens group. To maintain perspective, the stop is placed in a fixed position such that no elements between the scene and the stop move, fixing the entrance pupil in one location relative to the object fields. The result is perspective invariance while refocusing the lens. Using paraxial optics, we solve for the moving group's position to focus on every object position and eliminate breathing between the minimum and maximum object distances. We investigate the solution space for optical systems with two positive groups or a positive and a negative group (i.e., retrofocus and telephoto systems). We explain how to apply this paraxial solution to existing systems to minimize breathing. Results for two systems altered using this method are presented: breathing improved by two orders of magnitude in both cases, and the performance specifications of the initial systems were still met.
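To illustrate the kind of paraxial calculation involved, the sketch below models a simplified system of two thin-lens groups: a fixed front group holding the stop and a single moving rear group, with a fixed image plane. For each object distance it solves for the rear-group position that brings the axial object point into focus, then traces a chief ray through the fixed stop; the variation of the chief-ray image height across focus positions is the breathing. The group powers, track length, and ray parameters are invented for illustration, and scipy's brentq root finder stands in for the paper's closed-form paraxial solution, which additionally constrains the field mapping to be equal at the minimum and maximum object distances.

```python
from scipy.optimize import brentq

# Hypothetical two-thin-lens paraxial model (illustrative values, not from the paper).
# Group 1 (with the stop) is fixed at z = 0; group 2 moves between it and a
# fixed image plane at z = L.
PHI1, PHI2 = 1 / 100.0, 1 / 60.0   # group powers (1/mm), assumed
L = 90.0                            # fixed track from group 1 to image plane (mm)

def trace(y, u, s_obj, z2):
    """Trace a paraxial ray (height y, angle u) from the object to the image plane."""
    y = y + u * s_obj          # transfer: object -> group 1
    u = u - y * PHI1           # refract at group 1
    y = y + u * z2             # transfer: group 1 -> group 2
    u = u - y * PHI2           # refract at group 2
    y = y + u * (L - z2)       # transfer: group 2 -> image plane
    return y, u

def focus_error(z2, s_obj):
    """Marginal-ray height at the image plane; zero when in focus."""
    y_img, _ = trace(0.0, 0.01, s_obj, z2)   # ray from an axial object point
    return y_img

def solve_focus(s_obj):
    """Position of the moving group that focuses an object at distance s_obj."""
    return brentq(focus_error, 1.0, L - 1.0, args=(s_obj,))

for s in (500.0, 1000.0, 5000.0):
    z2 = solve_focus(s)
    # Chief ray aimed at the fixed stop measures the field mapping; the change
    # in its image height between object distances is the breathing.
    y_img, _ = trace(10.0, -10.0 / s, s, z2)   # object point 10 mm off axis
    print(f"s = {s:6.0f} mm  z2 = {z2:6.2f} mm  chief-ray image height = {y_img:.4f} mm")
```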
Award ID(s):
1822049; 1822026
PAR ID:
10531214
Author(s) / Creator(s):
; ;
Publisher / Repository:
Optical Society of America
Date Published:
Journal Name:
Optics Express
Volume:
30
Issue:
11
ISSN:
1094-4087; OPEXFF
Format(s):
Medium: X
Size(s):
Article No. 19494
Sponsoring Org:
National Science Foundation
More Like this
  1. Integral imaging has proven useful for three-dimensional (3D) object visualization in adverse environmental conditions such as partial occlusion and low light. This paper considers the problem of 3D object tracking. Two-dimensional (2D) object tracking within a scene is an active research area. Several recent algorithms use object detection methods to obtain 2D bounding boxes around objects of interest in each frame. Then, one bounding box can be selected out of many for each object of interest using motion prediction algorithms. Many of these algorithms rely on images obtained using traditional 2D imaging systems. A growing literature demonstrates the advantage of using 3D integral imaging instead of traditional 2D imaging for object detection and visualization in adverse environmental conditions. Integral imaging’s depth sectioning ability has also proven beneficial for object detection and visualization. Integral imaging captures an object’s depth in addition to its 2D spatial position in each frame. A recent study uses integral imaging for the 3D reconstruction of the scene for object classification, and utilizes the mutual information between the object’s bounding box in this 3D reconstructed scene and the 2D central perspective to achieve passive depth estimation. We build on this method by using Bayesian optimization to track the object’s depth in as few 3D reconstructions as possible. We study the performance of our approach on laboratory scenes with occluded objects moving in 3D and show that the proposed approach outperforms 2D object tracking. In our experimental setup, mutual-information-based depth estimation with Bayesian optimization achieves depth tracking with as few as two 3D reconstructions per frame, which corresponds to the theoretical minimum number of 3D reconstructions required for depth estimation. To the best of our knowledge, this is the first report on 3D object tracking using the proposed approach.
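As a rough sketch of the depth-scoring step described in this abstract, the code below computes a histogram-based mutual information between the object's bounding-box patch in the 2D central perspective and the corresponding patch in a 3D reconstruction at a candidate depth, then searches the depth range for the maximum. Here reconstruct_at_depth is a hypothetical callback standing in for the integral-imaging reconstruction, and scipy's bounded scalar minimizer stands in for the Gaussian-process Bayesian optimizer described in the study.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def mutual_information(a, b, bins=32):
    """Histogram-based mutual information between two equal-sized image patches."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of b
    nz = pxy > 0                          # skip empty bins to avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def estimate_depth(central_patch, reconstruct_at_depth, z_min, z_max, max_evals=8):
    """Depth whose 3D reconstruction shares the most information with the
    2D central-perspective patch. reconstruct_at_depth(z) is assumed to
    return the object's patch reconstructed at depth z."""
    score = lambda z: -mutual_information(central_patch, reconstruct_at_depth(z))
    res = minimize_scalar(score, bounds=(z_min, z_max), method="bounded",
                          options={"maxiter": max_evals})  # few reconstructions
    return res.x
```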
  2. We demonstrate how a simple 1D flat lens can be utilized not only to focus light but also to generate non-paraxial accelerating beams. We further report how the illumination-angle and wavelength degrees of freedom allow dynamic transition between these two functionalities.
  3. Conventional rendering techniques are primarily designed and optimized for single-frame rendering. In practical applications, such as scene editing and animation rendering, users frequently encounter scenes where only a small portion is modified between consecutive frames. In this paper, we develop a novel approach to incremental re-rendering of scenes with dynamic objects, where only a small part of a scene moves from one frame to the next. We formulate the difference (or residual) in the image between two frames as a (correlated) light-transport integral which we call the residual path integral. Efficient numerical solution of this integral then involves (1) devising importance sampling strategies to focus on paths with non-zero residual-transport contributions and (2) choosing appropriate mappings between the native path spaces of the two frames. We introduce a set of path importance sampling strategies that trace from the moving object(s), which are the sources of residual energy. We explore path mapping strategies that generalize those from gradient-domain path tracing to our importance sampling techniques, specialized for dynamic scenes. Additionally, our formulation can be applied to material editing as a simpler special case. We demonstrate speed-ups over previous correlated sampling of path differences and over rendering the new frame independently. Our formulation brings new insights into the re-rendering problem and paves the way for devising new types of sampling techniques and path mappings with different trade-offs.
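The variance advantage of sampling the residual directly can be seen in a toy one-dimensional analogue of the residual path integral, sketched below: two integrands differ only on a small sub-interval (the "moving object"), the identity map plays the role of the path mapping between the two frames' path spaces, and restricting samples to the changed region plays the role of tracing from the moving object. The integrands and interval are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two "frames": integrands over [0, 1] that differ only on [0.4, 0.45],
# standing in for a scene in which one small object has moved.
f_old = lambda x: np.sin(3 * x) ** 2
f_new = lambda x: np.sin(3 * x) ** 2 + 0.5 * ((0.4 < x) & (x < 0.45))

N = 10_000
exact_residual = 0.5 * 0.05   # integral of (f_new - f_old) = 0.025

# (a) Independent rendering of each frame, then subtraction: noise from the
# whole integrand leaks into the difference.
res_indep = f_new(rng.random(N)).mean() - f_old(rng.random(N)).mean()

# (b) Correlated sampling with an identity path mapping: shared transport
# cancels exactly, so only the residual region contributes noise.
x = rng.random(N)
res_corr = (f_new(x) - f_old(x)).mean()

# (c) Importance sampling the residual: draw only "paths" that touch the
# moving object; dividing by the pdf (1/0.05) of the restricted sampler
# is the same as multiplying by the interval width 0.05.
u = 0.4 + 0.05 * rng.random(N)
res_is = 0.05 * (f_new(u) - f_old(u)).mean()

print(exact_residual, res_indep, res_corr, res_is)
```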
  4. In this letter, we introduce the idea of AquaFuse, a physics-based method for synthesizing waterbody properties in underwater imagery. We formulate a closed-form solution for waterbody fusion that facilitates realistic data augmentation and geometrically consistent underwater scene rendering. AquaFuse leverages the physical characteristics of light propagation underwater to synthesize the waterbody from one scene to the object contents of another. Unlike data-driven style transfer methods, AquaFuse preserves the depth consistency and object geometry in an input scene. We validate this unique feature by comprehensive experiments over diverse sets of underwater scenes. We find that the AquaFused images preserve over 94% depth consistency and 90–95% structural similarity of the input scenes. We also demonstrate that it generates accurate 3D view synthesis by preserving object geometry while adapting to the inherent waterbody fusion process. AquaFuse opens up a new research direction in data augmentation by geometry-preserving style transfer for underwater imaging and robot vision.
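The abstract does not spell out the closed form, but the standard underwater image formation model it builds on can be sketched as follows: a restored scene J is re-immersed in a target waterbody by attenuating it with per-channel coefficients and adding depth-dependent backscatter toward a veiling light. The parameter values below are invented, a single attenuation coefficient per channel is shared between the direct and backscatter terms for simplicity, and in AquaFuse these quantities would instead be derived from a reference scene.

```python
import numpy as np

def apply_waterbody(J, depth, beta, B_inf):
    """Synthesize an underwater image from a restored scene J (H x W x 3, in [0, 1])
    and per-pixel depth (H x W, metres) using the standard underwater image
    formation model: I = J * exp(-beta * z) + B_inf * (1 - exp(-beta * z))."""
    z = depth[..., None]                          # broadcast depth over channels
    transmission = np.exp(-beta[None, None, :] * z)
    return J * transmission + B_inf[None, None, :] * (1.0 - transmission)

# Example: transplant a greenish waterbody onto another scene's contents.
# beta and B_inf per RGB channel are assumed values, not estimates from data.
H, W = 4, 4
J = np.random.default_rng(1).random((H, W, 3))    # stand-in restored scene
depth = np.full((H, W), 2.5)                      # stand-in depth map (m)
beta = np.array([0.8, 0.35, 0.45])                # red attenuates fastest
B_inf = np.array([0.05, 0.35, 0.30])              # green-tinted veiling light
I = apply_waterbody(J, depth, beta, B_inf)
```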