
Title: Instantaneous Velocity Vector Estimation Using a Single MIMO Radar via Multi-Bounce Scattering
Multiple-input, multiple-output (MIMO) radars can estimate radial velocities of moving objects, but not their tangential velocities. In this paper, we propose to exploit multi-bounce scattering in the environment to form an effective multi-“look” synthetic aperture and enable estimation of a moving object's entire velocity vector: both tangential and radial velocities. The proposed approach enables instantaneous velocity vector estimation with a single MIMO radar, without additional sensors or assumptions about the object size. The only requirement of our approach is the existence of at least one resolvable multi-bounce path to the object from a static landmark in the environment. The approach is validated both in theory and simulation.
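
To make the geometry concrete, here is a minimal 2D numerical sketch of the idea (an illustration under assumed positions, not the paper's estimator): the direct path constrains the velocity component along the radar's look direction, a resolvable radar–object–landmark–radar bounce path constrains a second linear combination of the components, and the full velocity vector then follows from a 2×2 linear solve.

```python
# Minimal 2D sketch of velocity-vector recovery from one direct and one
# multi-bounce range-rate (illustrative reconstruction; positions assumed
# known, measurements noise-free).
import numpy as np

radar = np.array([0.0, 0.0])       # radar position
obj = np.array([10.0, 5.0])        # moving object position (assumed estimated)
landmark = np.array([15.0, -3.0])  # static landmark position (assumed known)
v_true = np.array([1.0, 2.0])      # ground-truth velocity, for the demo

u_d = (obj - radar) / np.linalg.norm(obj - radar)        # radar look direction
u_l = (obj - landmark) / np.linalg.norm(obj - landmark)  # landmark->object direction

# Range-rates of the two paths (only the object moves):
#   direct:  d/dt |obj - radar|                       = u_d . v
#   bounce:  d/dt (|obj - radar| + |obj - landmark|)  = (u_d + u_l) . v
rate_direct = u_d @ v_true
rate_bounce = (u_d + u_l) @ v_true

# Two independent linear constraints on v; well conditioned whenever the
# landmark does not lie on the radar-object line.
A = np.vstack([u_d, u_d + u_l])
v_est = np.linalg.solve(A, np.array([rate_direct, rate_bounce]))
print(v_est)  # [1. 2.] -- both radial and tangential components recovered
```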
Award ID(s): 1956297
PAR ID: 10568510
Publisher / Repository: IEEE
ISBN: 979-8-3503-0864-8
Subject(s) / Keyword(s): MIMO radar, Estimation, Scattering, Imaging, Radar imaging, Apertures, Vectors
Location: Boulder, CO, USA
Sponsoring Org: National Science Foundation
More Like this
  1. Abstract: The kinematics and dynamics of stellar and substellar populations within young, still-forming clusters provide valuable information for constraining theories of formation mechanisms. Using Keck II NIRSPEC+AO data, we have measured radial velocities for 56 low-mass sources within 4′ of the core of the Orion Nebula Cluster (ONC). We also remeasure radial velocities for 172 sources observed with SDSS/APOGEE. These data are combined with proper motions measured using HST ACS/WFPC2/WFC3IR and Keck II NIRC2, creating a sample of 135 sources with all three velocity components. The velocities measured are consistent with a normal distribution in all three components. We measure intrinsic velocity dispersions of $(\sigma_{v_\alpha}, \sigma_{v_\delta}, \sigma_{v_r}) = (1.64 \pm 0.12,\ 2.03 \pm 0.13,\ 2.56^{+0.16}_{-0.17})$ km s$^{-1}$ (see the dispersion-fitting sketch after this list). Our computed intrinsic velocity dispersion profiles are consistent with the dynamical equilibrium models of Da Rio et al. (2014) in the tangential direction but not in the line-of-sight direction, possibly indicating that the core of the ONC is not yet virialized and may require a nonspherical potential to explain the observed velocity dispersion profiles. We also observe a slight elongation along the north–south direction following the filament, which has been well studied in previous literature, and an elongation in the line-of-sight to tangential velocity direction. These 3D kinematics will help in the development of realistic models of the formation and early evolution of massive clusters.
  2. ABSTRACT

    I report the discovery of a stellar stream (Sutlej) using Gaia DR3 (third data release) proper motions and XP metallicities, located $\sim$15° north of the Small Magellanic Cloud (SMC). The stream is composed of two parallel linear components ('branches') approximately 8° × 0.6° in size and separated by 2.5°. The stars have a mean proper motion of $(\mu_{\rm RA}, \mu_{\rm Dec.})$ = (+0.08 mas yr$^{-1}$, −1.41 mas yr$^{-1}$), which is quite similar to the proper motion of stars on the western side of the SMC. The colour–magnitude diagram of the stream stars has a clear red giant branch, horizontal branch, and main-sequence turn-off that are well matched by a PARSEC isochrone with age 10 Gyr and [Fe/H] = −1.8 at 32 kpc, giving a total stellar mass of $\sim$33 000 M$_{\odot}$. The stream is spread over an area of 9.6 deg$^2$ and has a surface brightness of 32.5 mag arcsec$^{-2}$. The metallicity of the stream stars from Gaia XP spectra extends over $-2.5 \le$ [M/H] $\le -1.0$ with a median of [M/H] = −1.8. The tangential velocity of the stream stars is 214 km s$^{-1}$, compared to 448 km s$^{-1}$ for the Large Magellanic Cloud and 428 km s$^{-1}$ for the SMC (see the proper-motion arithmetic check after this list). While the radial velocity of the stream is not yet known, a comparison of the space velocities using a range of assumed radial velocities shows that the stream is unlikely to be associated with the Magellanic Clouds. The tangential velocity vector is misaligned with the stream by nearly 90°, which might indicate an important gravitational influence from the nearby Magellanic Clouds.
  3. Collaborative object localization aims to collaboratively estimate the locations of objects observed from multiple views or perspectives, a critical ability for multi-agent systems such as connected vehicles. To enable collaborative localization, several model-based state estimation and learning-based localization methods have been developed. Despite their encouraging performance, model-based state estimation methods often lack the ability to model the complex relationships among multiple objects, while learning-based methods typically cannot fuse observations from an arbitrary number of views and do not model uncertainty well. In this paper, we introduce a novel spatiotemporal graph filter approach that integrates graph learning and model-based estimation to perform multi-view sensor fusion for collaborative object localization. Our approach models complex object relationships using a new spatiotemporal graph representation and fuses multi-view observations in a Bayesian fashion to improve location estimation under uncertainty (see the Bayesian-fusion sketch after this list). We evaluate our approach in the applications of connected autonomous driving and multiple pedestrian localization. Experimental results show that our approach outperforms previous techniques and achieves state-of-the-art performance on collaborative localization.
  4. We present MultiBodySync, a novel, end-to-end trainable multi-body motion segmentation and rigid registration framework for multiple input 3D point clouds. The two non-trivial challenges posed by this multi-scan, multi-body setting that we investigate are: (i) guaranteeing correspondence and segmentation consistency across multiple input point clouds capturing different spatial arrangements of bodies or body parts; and (ii) obtaining robust motion-based rigid body segmentation applicable to novel object categories. We propose an approach that addresses these issues by incorporating spectral synchronization into an iterative deep declarative network, so as to simultaneously recover consistent correspondences and motion segmentation (see the synchronization sketch after this list). At the same time, by explicitly disentangling the correspondence and motion segmentation estimation modules, we achieve strong generalizability across different object categories. Our extensive evaluations demonstrate that our method is effective on various datasets, ranging from rigid parts in articulated objects to individually moving objects in a 3D scene, be it single-view or full point clouds.
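
For item 1, a hedged sketch of one standard way to turn measured velocities with per-star uncertainties into an intrinsic dispersion: maximum likelihood for a Gaussian whose observed width is the intrinsic dispersion broadened by measurement error. The data, error range, and zero mean below are assumptions for the demo; the paper's exact estimator may differ.

```python
# Hedged sketch: ML estimate of an intrinsic velocity dispersion when each
# measured velocity v_i carries its own Gaussian error e_i. Observed variance
# per star is sigma^2 + e_i^2; we maximize the likelihood over sigma.
# Synthetic data only -- not the paper's measurements or estimator.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
sigma_true = 2.0                        # intrinsic dispersion (km/s), demo value
e = rng.uniform(0.3, 1.0, size=200)     # assumed per-star errors (km/s)
v = rng.normal(0.0, np.sqrt(sigma_true**2 + e**2))  # mean-subtracted velocities

def neg_log_like(sigma):
    var = sigma**2 + e**2               # intrinsic + measurement variance
    return 0.5 * np.sum(np.log(2 * np.pi * var) + v**2 / var)

fit = minimize_scalar(neg_log_like, bounds=(0.01, 10.0), method="bounded")
print(f"intrinsic dispersion ~ {fit.x:.2f} km/s")  # close to 2.0
```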
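For item 2, the quoted tangential velocity can be checked directly from the quoted proper motion and distance with the standard conversion $v_t$ [km s$^{-1}$] = 4.74 × $\mu$ [mas yr$^{-1}$] × $d$ [kpc]:

```python
# Arithmetic check of the stream's tangential velocity from the values quoted
# in the abstract; 4.74 is the usual (au/yr) -> (km/s) conversion factor.
import math

mu_ra, mu_dec = 0.08, -1.41         # mean proper motion (mas/yr)
d_kpc = 32.0                        # isochrone distance (kpc)
mu_tot = math.hypot(mu_ra, mu_dec)  # total proper motion ~ 1.41 mas/yr
v_tan = 4.74 * mu_tot * d_kpc
print(f"{v_tan:.0f} km/s")          # ~214 km/s, matching the abstract
```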
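For item 3, a minimal sketch of generic Bayesian multi-view fusion (information-form combination of Gaussian position estimates from any number of views). The paper's spatiotemporal graph filter learns object relationships on top of an update of roughly this kind; the numbers below are invented for the demo.

```python
# Hedged sketch: fuse N Gaussian 2D position estimates by summing their
# information (inverse-covariance) contributions. More confident views
# (smaller covariance) pull the fused estimate harder.
import numpy as np

def fuse_views(means, covs):
    """Information-form fusion of Gaussian estimates of one object's position."""
    info = sum(np.linalg.inv(c) for c in covs)
    info_mean = sum(np.linalg.inv(c) @ m for m, c in zip(means, covs))
    cov = np.linalg.inv(info)
    return cov @ info_mean, cov

# Three hypothetical views of the same object, each with its own uncertainty.
means = [np.array([2.1, 0.9]), np.array([1.9, 1.2]), np.array([2.0, 1.0])]
covs = [0.5 * np.eye(2), 0.2 * np.eye(2), 1.0 * np.eye(2)]
mean, cov = fuse_views(means, covs)
print(mean)  # weighted toward the lowest-covariance (most confident) view
```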
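For item 4, a toy illustration of the spectral synchronization ingredient: stacking pairwise correspondence matrices into one block matrix and reading globally consistent maps off its top eigenvectors. MultiBodySync embeds a differentiable, noise-robust variant of this inside its declarative network; the clean permutations below are a best-case demo.

```python
# Toy spectral synchronization: with U = [P_0; ...; P_{n-1}] stacking the
# per-view maps, the block matrix W with blocks P_i P_j^T equals U U^T, so
# its top-k eigenvectors recover the maps up to one global k x k rotation.
import numpy as np

rng = np.random.default_rng(1)
k, n = 5, 4                                      # points per view, views
P = [np.eye(k)[rng.permutation(k)] for _ in range(n)]  # ground-truth maps

W = np.block([[P[i] @ P[j].T for j in range(n)] for i in range(n)])

_, vecs = np.linalg.eigh(W)                      # eigenvalues ascending
U = vecs[:, -k:] * np.sqrt(n)                    # top-k eigenvectors, rescaled
maps = [U[i * k:(i + 1) * k] for i in range(n)]  # per-view maps, common gauge

# The unknown gauge cancels in pairwise products, so globally consistent
# correspondences come back out of the spectrum.
P12 = maps[1] @ np.linalg.pinv(maps[2])
print(np.allclose(P12, P[1] @ P[2].T))           # True
```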