Search results for all records where Creators/Authors contains: "Jayasuriya, Suren"

  1. Simulating the effects of atmospheric turbulence for imaging systems operating over long distances is a significant challenge for optical and computer graphics models. Physically-based ray tracing over kilometers of distance is difficult due to the need to define a spatio-temporal volume of varying refractive index. Even if such a volume can be defined, Monte Carlo rendering approximations for light refraction through the environment would not yield the real-time solutions needed for video game engines or online dataset augmentation for machine learning. While existing simulators based on procedurally generated noise or textures have been proposed in these settings, they often neglect the significant impact of scene depth, leading to unrealistic degradations for scenes with substantial foreground-background separation. This paper introduces a novel, physically-based atmospheric turbulence simulator that explicitly models depth-dependent effects while rendering frames at interactive/near real-time rates (>10 FPS) for image resolutions up to 1024×1024 (real-time 35 FPS at 256×256 resolution with depth, or 33 FPS at 512×512 without depth). Our hybrid approach combines spatially varying wavefront aberrations using Zernike polynomials with pixel-wise depth modulation of both blur (via Point Spread Function interpolation) and geometric distortion or tilt. Our approach includes a novel fusion technique that integrates the complementary strengths of leading monocular depth estimators to generate metrically accurate depth maps with enhanced edge fidelity. DAATSim is implemented efficiently on GPUs using PyTorch, incorporating optimizations such as mixed-precision computation and caching. We present quantitative and qualitative validation demonstrating the simulator's physical plausibility for generating turbulent video. DAATSim is publicly available and open-source: https://github.com/Riponcs/DAATSim.
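
     The core mechanism described here (per-pixel depth modulating PSF blur) can be sketched in a few lines of PyTorch. The sketch below is illustrative only: `gaussian_psf` is a stand-in for the Zernike-derived PSFs the paper describes, and the plane count, sigmas, and kernel size are assumptions, not DAATSim's actual parameters.

```python
import torch
import torch.nn.functional as F

def gaussian_psf(ksize: int, sigma: float) -> torch.Tensor:
    # Stand-in PSF; DAATSim instead derives PSFs from Zernike-polynomial
    # wavefront aberrations.
    ax = torch.arange(ksize, dtype=torch.float32) - (ksize - 1) / 2
    yy, xx = torch.meshgrid(ax, ax, indexing="ij")
    psf = torch.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return psf / psf.sum()

def depth_modulated_blur(img, depth, sigmas=(0.5, 1.5, 3.0), ksize=11):
    # img: (1, C, H, W); depth: (1, 1, H, W) in [0, 1], where 1 = far
    # (more turbulence integrated along the path). Blur the frame once per
    # depth plane, then linearly interpolate between planes at every pixel.
    c = img.shape[1]
    blurred = []
    for s in sigmas:
        k = gaussian_psf(ksize, s).to(img)[None, None].repeat(c, 1, 1, 1)
        blurred.append(F.conv2d(img, k, padding=ksize // 2, groups=c))
    t = depth.clamp(0, 1) * (len(sigmas) - 1)     # fractional plane index
    out = torch.zeros_like(img)
    for i, b in enumerate(blurred):               # tent weights = per-pixel lerp
        out = out + (1 - (t - i).abs()).clamp(min=0) * b
    return out

frame = depth_modulated_blur(torch.rand(1, 3, 256, 256),
                             torch.rand(1, 1, 256, 256))
```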
  2. The study of non-line-of-sight (NLOS) imaging is growing due to its many potential applications, including rescue operations and pedestrian detection by self-driving cars. However, implementing NLOS imaging on a moving camera remains an open area of research. Existing NLOS imaging methods rely on time-resolved detectors and laser configurations that require precise optical alignment, making them difficult to deploy in dynamic environments. This work proposes a data-driven approach to NLOS imaging, PathFinder, that can be used with a standard RGB camera mounted on a small, power-constrained mobile robot, such as an aerial drone. Our experimental pipeline is designed to accurately estimate the 2D trajectory of a person who moves in a Manhattan-world environment while remaining hidden from the camera's field of view. We introduce a novel approach to process a sequence of successive frames in a line-of-sight (LOS) video using an attention-based neural network that performs inference in real time. The method also includes a preprocessing selection metric that analyzes images from a moving camera that contain multiple vertical planar surfaces, such as walls and building facades, and extracts the planes that return maximum NLOS information. We validate the approach on in-the-wild scenes using a drone for video capture, demonstrating low-cost NLOS imaging in dynamic capture environments.
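
     As a rough illustration of attention-based processing of a LOS frame sequence, here is a toy PyTorch sketch; the actual PathFinder architecture is not specified in the abstract, so every layer, size, and name below is an assumption. It encodes each frame with a small CNN, mixes information across the temporal window with self-attention, and regresses the hidden person's 2D position per frame.

```python
import torch
import torch.nn as nn

class TrajectoryAttention(nn.Module):
    # Hypothetical stand-in for PathFinder's attention network.
    def __init__(self, d_model=128, n_heads=4, n_layers=2):
        super().__init__()
        self.encoder = nn.Sequential(            # per-frame feature extractor
            nn.Conv2d(3, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(32 * 16, d_model),
        )
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.temporal = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, 2)        # (x, y) on the hidden side

    def forward(self, frames):                   # frames: (B, T, 3, H, W)
        b, t = frames.shape[:2]
        feats = self.encoder(frames.flatten(0, 1)).view(b, t, -1)
        return self.head(self.temporal(feats))   # (B, T, 2) trajectory

net = TrajectoryAttention()
traj = net(torch.rand(2, 8, 3, 64, 64))          # -> torch.Size([2, 8, 2])
```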
  3. Inverse Synthetic Aperture Radar (ISAR) imaging of small everyday objects is a formidable challenge due to their limited Radar Cross-Section (RCS) and the inherent resolution constraints of radar systems. Existing ISAR reconstruction methods, including backprojection (BP), often require complex setups and controlled environments, rendering them impractical for many real-world, noisy scenarios. In this paper, we propose a novel Analysis-through-Synthesis (ATS) framework enabled by Neural Radiance Fields (NeRF) for high-resolution coherent ISAR imaging of small objects using sparse and noisy Ultra-Wideband (UWB) radar data with an inexpensive and portable setup. Our end-to-end framework integrates ultra-wideband radar wave propagation, reflection characteristics, and scene priors, enabling efficient 2D scene reconstruction without the need for costly anechoic chambers or complex measurement test beds. With qualitative and quantitative comparisons, we demonstrate that the proposed method outperforms traditional techniques and generates ISAR images of scenes with multiple targets and complex structures in Non-Line-of-Sight (NLOS) and noisy scenarios, particularly with a limited number of views and sparse UWB radar scans. This work represents a significant step towards practical, cost-effective ISAR imaging of small everyday objects, with broad implications for robotics and mobile sensing applications.
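
     The analysis-through-synthesis idea can be sketched as: optimize an unknown scene representation so that a differentiable radar forward model reproduces the measured echoes. The sketch below substitutes a pixel-grid reflectivity and a crude range-binning model for the paper's NeRF and UWB propagation model; it shows only the loop structure, and all geometry and constants are assumptions.

```python
import torch

# Unknown scene: a 64x64 grid of reflectivities (the paper uses a NeRF).
H = W = 64
gy, gx = torch.meshgrid(torch.linspace(-1, 1, H),
                        torch.linspace(-1, 1, W), indexing="ij")
points = torch.stack([gx, gy], dim=-1).reshape(-1, 2)
reflectivity = torch.rand(H * W, requires_grad=True)

sensors = torch.tensor([[-2.0, 0.0], [0.0, -2.0], [2.0, 0.0]])  # sparse views
n_bins, bin_size = 128, 0.05

def synthesize(refl):
    # Render one range profile per sensor by summing reflectivity into
    # round-trip time-of-flight bins (a crude stand-in for UWB propagation).
    profiles = []
    for s in sensors:
        rng = (points - s).norm(dim=-1)
        bins = (2 * rng / bin_size).long().clamp(max=n_bins - 1)
        profiles.append(torch.zeros(n_bins).index_add(0, bins,
                                                      torch.relu(refl)))
    return torch.stack(profiles)

measured = torch.rand(len(sensors), n_bins)   # placeholder radar echoes
opt = torch.optim.Adam([reflectivity], lr=1e-2)
for _ in range(200):                          # analysis-through-synthesis loop
    opt.zero_grad()
    loss = ((synthesize(reflectivity) - measured) ** 2).mean() \
           + 1e-3 * reflectivity.abs().mean()  # sparsity scene prior
    loss.backward()
    opt.step()
```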
  4. Differentiable 3D-Gaussian splatting (GS) is emerging as a prominent technique in computer vision and graphics for reconstructing 3D scenes. GS represents a scene as a set of 3D Gaussians with varying opacities and employs a computationally efficient splatting operation along with analytical derivatives to compute the 3D Gaussian parameters given scene images captured from various viewpoints. Unfortunately, capturing surround view (360° viewpoint) images is impossible or impractical in many real-world imaging scenarios, including underwater imaging, rooms inside a building, and autonomous navigation. In these restricted baseline imaging scenarios, the GS algorithm suffers from a well-known ‘missing cone’ problem, which results in poor reconstruction along the depth axis. In this paper, we demonstrate that using transient data (from sonars) allows us to address the missing cone problem by sampling high-frequency data along the depth axis. We extend the Gaussian splatting algorithms for two commonly used sonars and propose fusion algorithms that simultaneously utilize RGB camera data and sonar data. Through simulations, emulations, and hardware experiments across various imaging scenarios, we show that the proposed fusion algorithms lead to significantly better novel view synthesis (5 dB improvement in PSNR) and 3D geometry reconstruction (60% lower Chamfer distance). 
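
     To illustrate why transient sonar data fills the missing cone, here is a hedged sketch of a differentiable single-beam sonar forward model over a set of 3D Gaussians, whose residual can be added to the usual GS photometric loss. The soft range binning, `sigma`, and all shapes are assumptions for illustration, not the paper's actual sonar models or fusion algorithms.

```python
import torch

def sonar_transient(means, opacities, sensor, n_bins=64, bin_size=0.1,
                    sigma=0.05):
    # Toy single-beam sonar forward model (assumed, not the paper's):
    # splat each Gaussian's opacity onto time-of-flight bins by its range
    # from the sensor. Soft binning keeps the model differentiable in the
    # Gaussian centers, so sonar residuals can move them along the depth
    # axis, exactly the direction the missing cone leaves unconstrained.
    rng = (means - sensor).norm(dim=-1)                     # (N,)
    centers = (torch.arange(n_bins) + 0.5) * bin_size       # (n_bins,)
    w = torch.exp(-(rng[:, None] - centers[None, :]) ** 2 / (2 * sigma ** 2))
    return (torch.sigmoid(opacities)[:, None] * w).sum(0)   # (n_bins,)

means = torch.randn(500, 3, requires_grad=True)   # Gaussian centers
opac = torch.zeros(500, requires_grad=True)       # raw opacities
sonar_gt = torch.rand(64)                         # placeholder measurement

l_sonar = ((sonar_transient(means, opac, torch.tensor([0.0, 0.0, -3.0]))
            - sonar_gt) ** 2).mean()
# Fusion: total = l_rgb + lam * l_sonar, where l_rgb is the standard GS
# photometric loss from the splatting rasterizer (omitted here).
l_sonar.backward()
```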