

Title: Method for minimizing lens breathing with one moving group
Lens breathing in movie cameras is the change in the overall content of a scene while bringing subjects located at different depths into focus. This paper presents a method for minimizing lens breathing, i.e., the change in angular field of view, while maintaining perspective by moving only one lens group. To maintain perspective, the stop is placed in a fixed position, with no elements between the scene and the stop allowed to move, thus fixing the entrance pupil in one location relative to the object fields. The result is perspective invariance while refocusing the lens. Using paraxial optics, we solve for the moving group's position to focus on every object position and eliminate breathing between the minimum and maximum object distances. We investigate the solution space for optical systems with two positive groups or a positive and a negative group (i.e., retrofocus and telephoto systems). We explain how to apply this paraxial solution to existing systems to minimize breathing. The results for two systems altered using this method are presented. Breathing improved by two orders of magnitude in both cases, and performance specifications were still met when compared to the initial systems.
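To make the paraxial idea above concrete, the sketch below traces a two-group thin-lens system with the stop fixed at the front group (so the entrance pupil never moves), solves the quadratic imaging condition for where the second group must sit to focus each object distance onto a fixed image plane, and reports the object-space half field angle whose variation is the breathing. All focal lengths, the track length, and the sensor half-height are illustrative assumptions, not the paper's design data, and this toy system is not optimized to remove breathing; it only shows how the quantity being minimized is computed.

```python
# Minimal paraxial thin-lens sketch (not the paper's design data): two groups,
# the stop fixed at group 1 so the entrance pupil stays put, and group 2 moving
# along the axis to refocus onto a fixed image plane.  All values are
# illustrative placeholders in millimetres.
import numpy as np

F1, F2 = 60.0, -120.0      # group focal lengths (positive/negative, telephoto-like), assumed
TRACK = 80.0               # distance from group 1 (z = 0) to the fixed image plane (z = TRACK)
SENSOR_HALF = 12.0         # sensor half-height used to measure the field angle

def refocus_group2(obj_dist):
    """Axial position d of group 2 that images an object `obj_dist` in front of
    group 1 onto the fixed plane z = TRACK."""
    v1 = 1.0 / (1.0 / F1 - 1.0 / obj_dist)             # image distance after group 1
    # Imaging condition for group 2,  1/(TRACK - d) + 1/(d - v1) = 1/F2,
    # rearranges to  d^2 - (TRACK + v1) d + TRACK*v1 + F2*(TRACK - v1) = 0.
    roots = np.roots([1.0, -(TRACK + v1), TRACK * v1 + F2 * (TRACK - v1)])
    valid = [r.real for r in roots if abs(r.imag) < 1e-9 and 0.0 < r.real < TRACK]
    if not valid:
        raise ValueError(f"no physical focus solution for an object at {obj_dist} mm")
    return min(valid), v1                               # pick the smaller physical root

def half_field_angle(obj_dist):
    """Object-space half field angle subtended at the fixed entrance pupil by the
    object height that just fills the sensor half-height (its change is breathing)."""
    d, v1 = refocus_group2(obj_dist)
    m1 = -v1 / obj_dist
    m2 = -(TRACK - d) / (d - v1)
    obj_half_height = SENSOR_HALF / abs(m1 * m2)        # via total transverse magnification
    return np.degrees(np.arctan2(obj_half_height, obj_dist)), d

if __name__ == "__main__":
    for L in (600.0, 1200.0, 3000.0, 10000.0):          # object distances, mm
        angle, d = half_field_angle(L)
        print(f"object {L:7.0f} mm -> group 2 at z = {d:6.2f} mm, half field angle = {angle:6.3f} deg")
```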
Award ID(s): 1822049, 1822026
NSF-PAR ID: 10429710
Author(s) / Creator(s): ; ;
Date Published:
Journal Name: Optics Express
Volume: 30
Issue: 11
ISSN: 1094-4087
Page Range / eLocation ID: 19494
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. The role of perceptual organization in motion analysis has heretofore been minimal. In this work we demonstrate that the use of perceptual organization principles of temporal coherence (common fate) and spatial proximity can result in a robust motion segmentation algorithm that is able to handle drastic illumination changes, occlusion events, and multiple moving objects, without the use of object models. The adopted algorithm does not employ traditional frame-by-frame motion analysis, but rather treats the image sequence as a single 3D spatio-temporal block of data. We describe motion using spatio-temporal surfaces, which we, in turn, describe as compositions of finite planar patches. These planar patches, referred to as temporal envelopes, capture the local nature of the motions. We detect these temporal envelopes using 3D edge detection followed by a Hough transform, and represent them with convex hulls. We present a graph-based method to group these temporal envelopes arising from one object based on Gestalt organizational principles. A probabilistic Bayesian network quantifies the saliencies of the relationships between temporal envelopes. We present results on sequences with multiple moving persons, significant occlusions, and scene illumination changes.
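As a rough illustration of the spatio-temporal-block viewpoint in the abstract above, the sketch below treats a short synthetic sequence as a single (t, y, x) volume, marks 3D edge voxels, and wraps them in a convex hull as a stand-in for a temporal envelope. It assumes NumPy/SciPy, uses a made-up drifting square as data, and omits the Hough-transform plane fitting and the Bayesian-network grouping of the actual method.

```python
# Rough sketch of the spatio-temporal-block idea: treat a short sequence as one
# (t, y, x) volume, mark 3D edge voxels, and wrap them in a convex hull as a
# stand-in for a temporal envelope.  The Hough-transform plane fitting and the
# Bayesian-network grouping of the actual method are omitted; data and
# thresholds are placeholders.
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.spatial import ConvexHull

def temporal_envelope(video, edge_thresh=0.2, sigma=1.0):
    """video: float array shaped (frames, height, width) with values in [0, 1]."""
    smoothed = gaussian_filter(video, sigma=sigma)
    gt, gy, gx = np.gradient(smoothed)                 # 3D gradient over (t, y, x)
    edge_strength = np.sqrt(gt**2 + gy**2 + gx**2)     # moving boundaries show up as 3D edges
    pts = np.argwhere(edge_strength > edge_thresh)     # (t, y, x) coordinates of edge voxels
    if len(pts) < 4:
        return None
    return ConvexHull(pts.astype(float))               # compact envelope of the local motion

if __name__ == "__main__":
    # Synthetic block: a bright square drifting to the right over 20 frames.
    video = np.zeros((20, 64, 64))
    for t in range(20):
        video[t, 28:36, 5 + 2 * t: 13 + 2 * t] = 1.0
    hull = temporal_envelope(video)
    print("envelope vertices:", 0 if hull is None else len(hull.vertices))
```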
  2. Electrical muscle stimulation (EMS) is an emergent technique that miniaturizes force feedback, especially popular for untethered haptic devices, such as mobile gaming, VR, or AR. However, the actuation displayed by interactive systems based on EMS is coarse and imprecise. EMS systems mostly focus on inducing movements in large muscle groups such as legs, arms, and wrists; whereas individual finger poses, which would be required, for example, to actuate a user's fingers to fingerspell even the simplest letters in sign language, are not possible. The lack of dexterity in EMS stems from two fundamental limitations: (1) lack of independence: when a particular finger is actuated by EMS, the current runs through nearby muscles, causing unwanted actuation of adjacent fingers; and, (2) unwanted oscillations: while it is relatively easy for EMS to start moving a finger, it is very hard for EMS to stop and hold that finger at a precise angle; because, to stop a finger, virtually all EMS systems contract the opposing muscle, typically achieved via controllers (e.g., PID)—unfortunately, even with the best controller tuning, this often results in unwanted oscillations. To tackle these limitations, we propose dextrEMS, an EMS-based haptic device featuring mechanical brakes attached to each finger joint. The key idea behind dextrEMS is that while the EMS actuates the fingers, it is our mechanical brake that stops the finger in a precise position. Moreover, it is also the brakes that allow dextrEMS to select which fingers are moved by EMS, eliminating unwanted movements by preventing adjacent fingers from moving. We implemented dextrEMS as an untethered haptic device, weighing only 68g, that actuates eight finger joints independently (metacarpophalangeal and proximal interphalangeal joints for four fingers), which we demonstrate in a wide range of haptic applications, such as assisted fingerspelling, a piano tutorial, guitar tutorial, and a VR game. Finally, in our technical evaluation, we found that dextrEMS outperformed EMS alone by doubling its independence and reducing unwanted oscillations. 
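A toy simulation can illustrate the oscillation problem described above: a one-degree-of-freedom "finger" is driven toward a target angle either by a PID controller alone (which rings when imperfectly tuned) or with a simple on/off brake that clamps the joint near the target, loosely in the spirit of dextrEMS. The inertia, damping, gains, and thresholds are all invented for illustration and do not model real muscle or the dextrEMS hardware.

```python
# Toy 1-D joint model contrasting two ways of stopping an EMS-driven finger at a
# target angle: (a) a PID controller alone, which rings when imperfectly tuned,
# and (b) the same controller plus an on/off mechanical brake that clamps the
# joint near the target.  Dynamics constants and gains are invented.
import numpy as np

DT, STEPS = 0.001, 2000            # 1 ms steps, 2 s of simulated motion
TARGET = np.radians(40.0)          # desired joint angle
INERTIA, DAMPING = 0.002, 0.01     # made-up joint parameters

def simulate(use_brake):
    theta, omega, integral, prev_err = 0.0, 0.0, 0.0, TARGET
    kp, ki, kd = 0.8, 2.0, 0.02    # deliberately underdamped PID tuning
    trace = []
    for _ in range(STEPS):
        err = TARGET - theta
        if use_brake and abs(err) < np.radians(1.0):
            omega, torque = 0.0, 0.0               # brake engages and holds the joint
        else:
            integral += err * DT
            torque = kp * err + ki * integral + kd * (err - prev_err) / DT
        prev_err = err
        omega += (torque - DAMPING * omega) / INERTIA * DT
        theta += omega * DT
        trace.append(theta)
    return np.degrees(np.array(trace))

if __name__ == "__main__":
    for label, brake in (("PID only", False), ("PID + brake", True)):
        tr = simulate(brake)
        print(f"{label:12s} final angle {tr[-1]:6.2f} deg, "
              f"peak overshoot {tr.max() - np.degrees(TARGET):5.2f} deg")
```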
  3. Avidan, S. (Ed.)
    We address the problem of segmenting moving rigid objects based on two-view image correspondences under a perspective camera model. While this is a well understood problem, existing methods scale poorly with the number of correspondences. In this paper we propose a fast segmentation algorithm that scales linearly with the number of correspondences and show that on benchmark datasets it offers the best trade-off between error and computational time: it is at least one order of magnitude faster than the best method (with comparable or better accuracy), with the ratio growing up to three orders of magnitude for larger numbers of correspondences. We approach the problem from an algebraic perspective by exploiting the fact that all points belonging to a given object lie on the same quadratic surface. The proposed method is based on a characterization of each surface in terms of the Christoffel polynomial associated with the probability that a given point belongs to the surface. This allows for efficiently segmenting points “one surface at a time” in O(number of points) time.
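The "one surface at a time" idea can be sketched with a generic Christoffel-function scoring step: lift each point with degree-2 monomials, form the empirical moment matrix, and score points by the resulting inverse Christoffel polynomial, so that points on the dominant quadratic surface score low and can be peeled off first. The 2D toy data, regularization, and keep-fraction below are placeholders, not the paper's full two-view pipeline.

```python
# Sketch of the "one surface at a time" idea via the Christoffel function: lift
# each point with degree-2 monomials, form the empirical moment matrix, and score
# points by the inverse Christoffel polynomial -- points on the dominant quadratic
# surface score low and are peeled off first.  The 2D toy data, regularization,
# and keep-fraction are placeholders, not the paper's two-view pipeline.
import numpy as np
from itertools import combinations_with_replacement

def veronese2(X):
    """Degree-<=2 monomial lifting of points X with shape (n, d) -> (n, m)."""
    n, d = X.shape
    cols = [np.ones(n)] + [X[:, i] for i in range(d)]
    cols += [X[:, i] * X[:, j] for i, j in combinations_with_replacement(range(d), 2)]
    return np.stack(cols, axis=1)

def christoffel_scores(X, reg=1e-6):
    V = veronese2(X)
    M = V.T @ V / len(X)                                 # empirical moment matrix
    M_inv = np.linalg.inv(M + reg * np.eye(M.shape[0]))
    return np.einsum('ij,jk,ik->i', V, M_inv, V)         # inverse Christoffel polynomial at each point

def peel_one_surface(X, keep_fraction=0.5):
    """Boolean mask of the points best explained by one quadratic surface."""
    scores = christoffel_scores(X)
    return scores <= np.quantile(scores, keep_fraction)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = rng.uniform(-np.pi, np.pi, 300)
    circle = np.c_[np.cos(t), np.sin(t)]                 # 300 points on x^2 + y^2 = 1
    clutter = rng.uniform(-2, 2, (60, 2))                # 60 scattered points
    X = np.vstack([circle, clutter])
    mask = peel_one_surface(X, keep_fraction=len(circle) / len(X))
    print("kept", int(mask.sum()), "points;", int(mask[:300].sum()), "of them lie on the circle")
```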
  4.
    Swimming in schools has long been hypothesized to allow fish to save energy. Fish must exploit the energy from the wakes of their neighbors for maximum energy savings, a feat that requires them to both synchronize their tail movements and stay in certain positions relative to their neighbors. To maintain position in a school, we know that fish use multiple sensory systems, mainly their visual system and their flow-sensing lateral line system. However, how fish synchronize their swimming movements in a school is still not well understood. Here we test the hypothesis that this synchronization may depend on functional differences in the two branches of the lateral line sensory system that detects water movements close to the fish’s body. The anterior branch, located on the head, encounters largely undisturbed free-stream flow, while the posterior branch, located on the trunk and tail, encounters flow that has been affected strongly by the tail movement. Thus, we hypothesize that the anterior branch may be more important for regulating position within the school, while the posterior branch may be more important for synchronizing tail movements. Our study examines functional differences in the anterior and posterior lateral line in the structure and tail synchronization of fish schools. We used a widely available aquarium fish that schools, the giant danio, Devario aequipinnatus. Fish swam in a large circular tank where stereoscopic video recordings were used to reconstruct the 3D position of each individual within the school and to track tail kinematics to quantify synchronization. For one fish in each school, we used cobalt chloride to ablate either the anterior region only, the posterior region only, or the entire lateral line system. We observed that ablating any region of the lateral line system causes fish to swim in a “box” or parallel swimming formation, which was different from the diamond formation observed in normal fish. Ablating only the anterior region did not substantially reduce tail beat synchronization, but ablating only the posterior region caused fish to stop synchronizing their tail beats, largely because the tail beat frequency increased dramatically. Thus, the anterior and posterior lateral line systems appear to have different behavioral functions in fish. Most importantly, we showed that the posterior lateral line system played a major role in determining tail beat synchrony in schooling fish. Without synchronization, swimming efficiency decreases, which can have an impact on the fitness of the individual fish and the group.
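One generic way to quantify the tail-beat synchronization discussed above is a phase-locking value computed from the instantaneous phases of two tail-angle traces via the Hilbert transform; the sketch below applies it to synthetic signals. This is a standard signal-processing recipe, not the paper's specific analysis, and the sampling rate and beat frequencies are assumptions.

```python
# One generic way to score tail-beat synchronization between two fish from tracked
# tail-angle traces: instantaneous phases via the Hilbert transform and a
# phase-locking value (1 = perfectly locked, ~0 = unrelated).  Synthetic signals
# stand in for real kinematics; this is not the paper's specific pipeline.
import numpy as np
from scipy.signal import hilbert

def phase_locking(angle_a, angle_b):
    """angle_a, angle_b: equal-length 1-D arrays of tail angle over time."""
    phase_a = np.angle(hilbert(angle_a - np.mean(angle_a)))
    phase_b = np.angle(hilbert(angle_b - np.mean(angle_b)))
    return np.abs(np.mean(np.exp(1j * (phase_a - phase_b))))

if __name__ == "__main__":
    fs, seconds, beat_hz = 120.0, 10.0, 3.0          # assumed camera rate, duration, tail-beat rate
    t = np.arange(0.0, seconds, 1.0 / fs)
    lead = np.sin(2 * np.pi * beat_hz * t)
    synced = np.sin(2 * np.pi * beat_hz * t - 0.6)               # same frequency, fixed lag
    drifting = np.sin(2 * np.pi * (beat_hz + 0.9) * t + 1.0)     # higher, drifting frequency
    print("synchronized pair  :", round(phase_locking(lead, synced), 3))
    print("desynchronized pair:", round(phase_locking(lead, drifting), 3))
```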
  5. Human efficiency in finding a target in an image has attracted the attention of machine learning researchers, but what about when no target is there? Knowing how people search in the absence of a target, and when they stop, is important for human-computer interaction systems attempting to predict human gaze behavior in the wild. Here we report a rigorous evaluation of target-absent search behavior using the COCO-Search18 dataset to train state-of-the-art models. We focus on two specific aims. First, we characterize the presence of a target guidance signal in target-absent search behavior by comparing it to target-present guidance and free viewing. We do this by comparing how well a model trained on one type of fixation behavior (target-present, target-absent, free viewing) can predict behavior in either the same or a different task. To compare target-absent search to free-viewing behavior we created COCO-FreeView, a dataset of free-viewing fixations for the same images used in COCO-Search18. These comparisons revealed the existence of a target guidance signal in target-absent search, albeit one much less dominant compared to when a target actually appeared in an image, and that the target-absent guidance signal was similar to free viewing in that saliency and center bias were both weighted more than guidance from target features. Our second aim focused on the stopping criteria, a question intrinsic to target-absent search. Here we propose to train a foveated target detector whose target detection representation is sensitive to the distance from the fovea. By combining the predicted target detection representation with other information, such as fixation history and subject ID, our model outperforms the baselines in predicting when a person stops moving their attention during target-absent search.
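The stopping-rule idea can be caricatured with a toy model: detection confidence is discounted by eccentricity from the current fixation, the belief that the target is absent grows as foveated fixations keep failing to find it, and search stops once that belief crosses a threshold. The decay scale, threshold, random detection map, and scanpath below are all invented; the paper's actual model is a trained foveated neural detector combined with fixation history and subject ID.

```python
# Toy illustration of the stopping rule: detection confidence at each location is
# discounted by its distance from the current fixation (a crude foveation), the
# belief that the target is absent grows as fixations keep failing to find it,
# and search stops once that belief is strong enough.  The decay scale, threshold,
# detection map, and scanpath are all invented for this sketch.
import numpy as np

ECC_SCALE = 150.0      # pixels over which detectability falls off (assumed)
STOP_THRESH = 0.92     # confidence in "target absent" required to stop (assumed)

def foveated_score(det_map, fixation):
    """Down-weight a dense detection-score map by eccentricity from `fixation` = (x, y)."""
    h, w = det_map.shape
    ys, xs = np.mgrid[0:h, 0:w]
    ecc = np.hypot(xs - fixation[0], ys - fixation[1])
    return det_map * np.exp(-ecc / ECC_SCALE)

def stops_after(det_map, scanpath):
    """Index of the fixation after which the toy searcher declares the target absent."""
    miss_prob = 1.0                                   # P(still no detection | target present)
    for i, fix in enumerate(scanpath):
        best = foveated_score(det_map, fix).max()     # strongest foveated evidence at this fixation
        miss_prob *= (1.0 - best)
        if miss_prob < 1.0 - STOP_THRESH:             # a real target would very likely have been seen
            return i
    return len(scanpath) - 1

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    det_map = rng.uniform(0.0, 0.25, (600, 800))      # weak, target-less detection scores
    scanpath = [(rng.uniform(0, 800), rng.uniform(0, 600)) for _ in range(20)]
    print("toy searcher stops after fixation", stops_after(det_map, scanpath))
```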