Title: A multi-mass and multi-hit two-camera 3D ion momentum imaging system
We demonstrate an improved two-camera system for multi-mass and multi-hit three-dimensional (3D) momentum imaging of ions. The imaging system employs two conventional complementary metal–oxide–semiconductor cameras. We have shown previously that the system can time slice ion Newton spheres with a time resolution of 8.8 ns, limited by camera timing jitter [J. Chem. Phys., 158, 191104 (2023)]. In this work, a jitter correction method was developed to suppress the camera jitter and improve the time resolution to better than 2 ns. With this resolution, full 3D momentum distributions of ions can be obtained. We further show that this method can detect two ions with different masses when utilizing both the rising and falling edges of the cameras.
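The abstract does not describe the jitter correction in detail. As a purely numerical illustration of why correcting per-shot timing jitter matters, the sketch below assumes an independent per-shot measurement of the camera's actual trigger offset (a hypothetical fiducial) that can be subtracted from each timestamp; the noise figures are assumptions chosen only to match the resolutions quoted above, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated per-shot camera trigger jitter (ns); sigma chosen so the raw
# timing spread has a FWHM near the ~8.8 ns figure quoted in the abstract.
n_shots = 10_000
jitter = rng.normal(0.0, 8.8 / 2.355, n_shots)

true_tof = 5000.0                 # ns, a single sharp ion arrival
measured = true_tof + jitter

# Hypothetical correction: an independent per-shot measurement of the
# camera's actual exposure start (a timing fiducial) lets us subtract
# most of the jitter, leaving only a small residual error.
fiducial = jitter + rng.normal(0.0, 0.5, n_shots)  # 0.5 ns residual (assumed)
corrected = measured - fiducial

def fwhm(x):
    # FWHM of a Gaussian-distributed sample, 2.355 * standard deviation.
    return 2.355 * x.std()

print(f"raw FWHM:       {fwhm(measured):.1f} ns")
print(f"corrected FWHM: {fwhm(corrected):.1f} ns")
```

The point of the toy model is only that a correlated per-shot offset, once measured, cancels almost entirely on subtraction, while uncorrelated residual noise sets the new resolution floor.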
Award ID(s):
2107860
PAR ID:
10537251
Author(s) / Creator(s):
Publisher / Repository:
AIP Publishing
Date Published:
Journal Name:
Review of Scientific Instruments
Volume:
95
Issue:
7
ISSN:
0034-6748
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. We demonstrate a simple approach to achieve three-dimensional ion momentum imaging. The method employs two complementary metal–oxide–semiconductor cameras in addition to a standard microchannel plate/phosphor screen imaging detector. The two cameras are timed to measure the decay of the luminescence excited by ion hits and thereby extract the time of flight. The achieved time resolution is better than 10 ns, limited mainly by camera jitter. A resolution better than 5 ns can be achieved when the jitter is suppressed.
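The abstract above states that the two cameras are timed to sample the phosphor's luminescence decay, but it does not give the reconstruction formula. One simple model, assumed here for illustration only, is a single-exponential decay with one camera integrating the full decay while the second opens at a known gate time; the intensity ratio then encodes the hit time. The decay constant and gate time below are hypothetical values.

```python
import math

TAU = 100.0     # ns, assumed phosphor decay constant (hypothetical)
T_GATE = 500.0  # ns, assumed opening time of the second camera's exposure

def hit_time_from_ratio(i_a, i_b, tau=TAU, t_gate=T_GATE):
    """Recover an ion hit time from two integrated intensities.

    Model assumption: camera A integrates the whole exponential decay,
    camera B opens at t_gate, so I_B / I_A = exp(-(t_gate - t_hit) / tau).
    Inverting gives t_hit = t_gate + tau * ln(I_B / I_A).
    """
    return t_gate + tau * math.log(i_b / i_a)

# Forward-model a hit at t = 230 ns, then invert it:
t_true = 230.0
i_a = 1.0
i_b = math.exp(-(T_GATE - t_true) / TAU)
print(hit_time_from_ratio(i_a, i_b))  # recovers ~230.0 ns
```

Under this model the timing resolution is set by how precisely the intensity ratio can be measured relative to the decay constant, which is consistent with the abstract's emphasis on camera jitter as the limiting factor.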
  2. This study presents an advanced multi-view drone swarm imaging system for the three-dimensional characterization of smoke plume dispersion dynamics. The system comprises a manager drone and four worker drones, each equipped with high-resolution cameras and precise GPS modules. The manager drone uses image feedback to autonomously detect and position itself above the plume, then commands the worker drones to orbit the area in a synchronized circular flight pattern, capturing multi-angle images. The camera poses of these images are first estimated, then the images are grouped in batches and processed using Neural Radiance Fields (NeRF) to generate high-resolution 3D reconstructions of plume dynamics over time. Field tests demonstrated the system's ability to capture critical plume characteristics, including volume dynamics, wind-driven directional shifts, and lofting behavior, at a temporal resolution of about 1 s. The 3D reconstructions generated by this system provide unique field data for enhancing predictive models of smoke plume dispersion and fire spread. Broadly, the drone swarm system offers a versatile platform for high-resolution measurements of pollutant emissions and transport in wildfires, volcanic eruptions, prescribed burns, and industrial processes, ultimately supporting more effective fire control decisions and mitigating wildfire risks.
  3. Travel-time estimation of traffic flow is an important problem with critical implications for traffic congestion analysis. We developed techniques for using intersection videos to identify vehicle trajectories across multiple cameras and analyze corridor travel time. Our approach consists of (1) multi-object single-camera tracking, (2) vehicle re-identification among different cameras, (3) multi-object multi-camera tracking, and (4) travel-time estimation. We evaluated the proposed framework on real intersections in Florida with pan and fisheye cameras. The experimental results demonstrate the viability and effectiveness of our method. 
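The final step of the pipeline above, travel-time estimation, reduces to a timestamp difference once cross-camera re-identification has assigned each vehicle a single global ID. The data structure and function below are illustrative sketches, not the paper's implementation; the tracking and re-identification stages are assumed to have already run.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    camera: str       # intersection camera ID
    vehicle_id: str   # globally re-identified vehicle ID
    timestamp: float  # seconds on a common clock

def corridor_travel_times(detections, entry_cam, exit_cam):
    """Travel time per vehicle: first sighting at exit_cam minus first
    sighting at entry_cam, assuming re-ID has merged per-camera tracks."""
    first_seen = {}
    for d in detections:
        key = (d.vehicle_id, d.camera)
        if key not in first_seen or d.timestamp < first_seen[key]:
            first_seen[key] = d.timestamp
    times = {}
    for (vid, cam), t in first_seen.items():
        if cam == entry_cam and (vid, exit_cam) in first_seen:
            times[vid] = first_seen[(vid, exit_cam)] - t
    return times

dets = [
    Detection("cam1", "carA", 10.0),
    Detection("cam3", "carA", 95.0),
    Detection("cam1", "carB", 12.0),
    Detection("cam3", "carB", 101.5),
]
print(corridor_travel_times(dets, "cam1", "cam3"))
# {'carA': 85.0, 'carB': 89.5}
```

In practice the hard part is the re-identification itself (steps 2 and 3 of the abstract); this sketch only shows how the corridor statistic is assembled afterward.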
  4. Full surround 3D imaging for shape acquisition is essential for generating digital replicas of real-world objects. Surrounding an object we seek to scan with a kaleidoscope, that is, a configuration of multiple planar mirrors, produces an image of the object that encodes information from a combinatorially large number of virtual viewpoints. This information is practically useful for the full surround 3D reconstruction of the object, but cannot be used directly, as we do not know which virtual viewpoint each image pixel corresponds to (its pixel label). We introduce a structured light system that combines a projector and a camera with a kaleidoscope. We then prove that we can accurately determine the labels of projector and camera pixels, for arbitrary kaleidoscope configurations, using the projector-camera epipolar geometry. We use this result to show that our system can serve as a multi-view structured light system with hundreds of virtual projectors and cameras. This makes our system capable of scanning complex shapes precisely and with full coverage. We demonstrate the advantages of the kaleidoscopic structured light system by scanning objects that exhibit a large range of shapes and reflectances.
  5. This Letter presents a novel, to the best of our knowledge, method to calibrate multi-focus microscopic structured-light three-dimensional (3D) imaging systems with an electrically adjustable camera focal length. We first leverage the conventional method to calibrate the system with a reference focal length f0. Then we calibrate the system with other discrete focal lengths fi by determining virtual features on a reconstructed white plane using f0. Finally, we fit the polynomial function model using the discrete calibration results for fi. Experimental results demonstrate that our proposed method can calibrate the system consistently and accurately.
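The last step described above, fitting a polynomial model to calibration results obtained at discrete focal lengths fi, can be sketched generically. Which calibration parameter is modeled is not specified in the abstract, so the quantity below is hypothetical and the data are a noiseless demonstration.

```python
import numpy as np

# Hypothetical calibration parameter (e.g. one camera-model coefficient)
# measured at a few discrete electrically-tunable lens settings f_i.
f_settings = np.array([0.0, 1.0, 2.0, 3.0, 4.0])            # lens control values
calib_param = 2.0 + 0.5 * f_settings + 0.1 * f_settings**2  # demo data

# Fit a low-order polynomial so the parameter can be interpolated at any
# intermediate focal-length setting, mirroring the paper's final step.
coeffs = np.polyfit(f_settings, calib_param, deg=2)

# Evaluate at a setting that was not calibrated directly:
f_query = 2.5
print(np.polyval(coeffs, f_query))  # ~3.875
```

The polynomial degree and the set of discrete settings are design choices; with noisy real calibrations one would fit more settings than coefficients and check residuals before trusting the interpolation.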