Purpose: To improve image reconstruction for prospective motion correction (PMC) of simultaneous multislice (SMS) EPI of the brain, an update of the receiver phase and resampling of coil sensitivities are proposed and evaluated. Methods: A camera-based system was used to track head motion (3 translations and 3 rotations) and dynamically update the scan position and orientation. We derived the change in receiver phase associated with a shifted field of view (FOV) and applied it in real time to each k-space line of the EPI readout trains. Second, for the SMS reconstruction, we adapted resampled coil sensitivity profiles reflecting the movement of slices. Single-shot gradient-echo SMS-EPI scans were performed in phantoms and human subjects for validation. Results: Brain SMS-EPI scans in the presence of motion with PMC and no phase correction for the scan-plane shift showed noticeable artifacts. These artifacts were visually and quantitatively attenuated when the corrections were enabled. Correcting misaligned coil sensitivity maps improved the temporal SNR (tSNR) of time series by 24% (p = 0.0007) for scans with large movements (up to ∼35 mm and 30°). Correcting the receiver phase improved the tSNR of a scan with minimal head movement by 50%, from 50 to 75, for a UK Biobank protocol. Conclusion: Reconstruction-induced motion artifacts in single-shot SMS-EPI scans acquired with PMC can be removed by dynamically adjusting the receiver phase of each line across EPI readout trains and updating coil sensitivity profiles during reconstruction. The method may be a valuable tool for SMS-EPI scans in the presence of subject motion.
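The per-line receiver-phase update described in the Methods can be illustrated with a minimal sketch (NumPy, one phase-encode direction only; the function name and interface are hypothetical, and the real correction runs on the scanner in real time rather than offline):

```python
import numpy as np

def correct_fov_shift_phase(kspace_lines, ky_cycles_per_fov, shift_mm, fov_mm):
    """Undo the linear k-space phase caused by a FOV shift along the
    phase-encode direction.

    By the Fourier shift theorem, shifting the FOV by `shift_mm`
    multiplies the line at phase-encode frequency ky (cycles/FOV) by
    exp(-i*2*pi*ky*shift_mm/fov_mm); the correction applies the
    conjugate phase to each k-space line.
    """
    ky = np.asarray(ky_cycles_per_fov, dtype=float)
    phase = np.exp(2j * np.pi * ky * shift_mm / fov_mm)
    # one phase value per EPI line, broadcast across the readout axis
    return kspace_lines * phase[:, None]
```

Applying the conjugate phase per line, rather than once per volume, is what handles shifts that change between (or within) readout trains.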
Orientation estimation for instrumented helmet using neural networks
This work presents an integrated solution for head orientation estimation, which is a critical component for applications of virtual and augmented reality systems. The proposed solution builds upon the measurements from the inertial sensors and magnetometer added to an instrumented helmet, and an orientation estimation algorithm is developed to mitigate the effect of bias introduced by noise in the gyroscope signal. Convolutional Neural Network (CNN) techniques are introduced to develop a dynamic orientation estimation algorithm with a structure motivated by complementary filters and trained on data collected to represent a wide range of head motion profiles. The proposed orientation estimation method is evaluated experimentally and compared to both learning and non-learning-based orientation estimation algorithms found in the literature for comparable applications. Test results support the advantage of the proposed CNN-based solution, particularly for motion profiles with high acceleration disturbance that are characteristic of head motion.
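The complementary-filter structure that motivates the network can be sketched for a single axis as follows (a generic textbook filter, not the paper's trained model; names and the gain value are illustrative):

```python
def complementary_filter(gyro_rate, accel_angle, dt, alpha=0.98):
    """Fuse integrated gyro rate (accurate short-term, but drifts with
    bias) with an accelerometer-derived tilt angle (noisy short-term,
    drift-free long-term) for one rotation axis."""
    angle = accel_angle[0]  # initialize from the drift-free source
    estimates = []
    for rate, acc in zip(gyro_rate, accel_angle):
        # high-pass the gyro path, low-pass the accelerometer path
        angle = alpha * (angle + rate * dt) + (1.0 - alpha) * acc
        estimates.append(angle)
    return estimates
```

A CNN-based estimator of the kind described above can be viewed as replacing the fixed gain `alpha` with a learned, motion-dependent weighting.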
- Award ID(s): 2218063
- PAR ID: 10515031
- Publisher / Repository: SAGE Journals
- Date Published:
- Journal Name: Measurement and Control
- Volume: 56
- Issue: 7-8
- ISSN: 0020-2940
- Page Range / eLocation ID: 1156 to 1167
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Estimation of a speaker’s direction and head orientation with binaural recordings can be a critical piece of information in many real-world applications of emerging ‘earable’ devices, including smart headphones and AR/VR headsets. However, it requires predicting the mutual head orientations of both the speaker and the listener, which is challenging in practice. This paper presents a system for jointly predicting speaker-listener head orientations by leveraging inherent human voice directivity and the listener’s head-related transfer function (HRTF) as perceived by the ear-mounted microphones on the listener. We propose a convolutional neural network model that, given a binaural speech recording, can predict the orientation of both speaker and listener with respect to the line joining the two. The system builds on the core observation that the recordings from the left and right ears are differentially affected by the voice directivity as well as the HRTF. We also incorporate the fact that voice is more directional at higher frequencies than at lower frequencies. Our proposed system achieves a 90th-percentile error of 2.5 degrees for the listener’s head orientation and 12.5 degrees for that of the speaker.
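The left/right asymmetry the network exploits can be made concrete with a simple band-wise interaural level difference (ILD) feature; this is a hypothetical stand-in for the paper's actual CNN input, shown only to illustrate how directivity and the HRTF leave a frequency-dependent level imprint:

```python
import numpy as np

def band_ild_features(left, right, n_bands=8, eps=1e-12):
    """Per-band interaural level difference (dB): the left/right power
    asymmetry induced by voice directivity and the listener's HRTF.
    Higher bands carry more orientation information because voice is
    more directional at high frequencies."""
    pl = np.abs(np.fft.rfft(left)) ** 2   # left-ear power spectrum
    pr = np.abs(np.fft.rfft(right)) ** 2  # right-ear power spectrum
    edges = np.linspace(0, len(pl), n_bands + 1, dtype=int)
    return np.array([
        10.0 * np.log10((pl[a:b].sum() + eps) / (pr[a:b].sum() + eps))
        for a, b in zip(edges[:-1], edges[1:])
    ])
```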
-
Abstract A novel computer vision‐based meteor head echo detection algorithm is developed to study meteor fluxes and their physical properties, including initial range, range coverage, and radial velocity. The proposed Algorithm for Head Echo Automatic Detection (AHEAD) comprises a feature extraction function and a Convolutional Neural Network (CNN). The former is tailored to identify meteor head echoes, and the CNN is then employed to remove false alarms. In testing on meteor data collected with the Jicamarca 50 MHz incoherent scatter radar, the new algorithm detects over 180 meteors per minute at dawn, which is 2 to 10 times more sensitive than prior manual or algorithmic approaches, with a false alarm rate of less than 1 percent. The present work lays the foundation for developing a fully automatic AI‐meteor package that detects, analyzes, and distinguishes among many types of meteor echoes. Furthermore, although initially evaluated on meteor data collected with the Jicamarca VHF incoherent radar, the new algorithm is generic enough that it can be applied to other facilities with minor modifications. The CNN removes up to 98 percent of false alarms according to the testing set. We also present and discuss the physical characteristics of meteors detected with AHEAD, including flux rate, initial range, line-of-sight velocity, Signal‐to‐Noise Ratio, and noise characteristics. Our results indicate that stronger meteor echoes are detected at a slightly lower altitude and lower radial velocity than other meteors.
-
Underwater motion recognition using acoustic wireless networks has promising potential for diver activity monitoring and aquatic animal recognition, without the burden of the expensive underwater cameras used by image-based underwater classification techniques. However, accurately extracting features that are independent of complicated underwater environments, such as inhomogeneous deep seawater, is a serious challenge for underwater motion recognition. Velocities of the target body (VTB) during motion are excellent environment-independent features for WiFi-based recognition techniques in indoor environments; underwater, however, VTB features are hard to extract accurately. The inaccurate VTB estimation is caused by the fact that the signal propagates along a curve rather than the straight line it follows in air. In this paper, we propose an underwater motion recognition mechanism for inhomogeneous deep seawater using acoustic wireless networks. To accurately extract velocities of the target body, we first derive Doppler Frequency Shift (DFS) coefficients that can be utilized for VTB estimation when signals propagate deviously. Second, we propose a dynamic self-refining (DSR) optimization algorithm over acoustic wireless networks consisting of multiple transmitter-receiver links to estimate the VTB. These VTB features can be utilized to train convolutional neural networks (CNNs). Through simulation, the estimated VTB features are evaluated, and the resulting recognition results validate that our proposed underwater motion recognition mechanism is able to achieve high classification accuracy.
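As a quick illustrative sketch, under a straight-ray approximation (the very assumption the DSR algorithm above refines for curved underwater propagation) the radial component of the VTB follows directly from the Doppler frequency shift; the function name and interface are hypothetical:

```python
def radial_velocity_from_dfs(delta_f_hz, carrier_hz, sound_speed=1500.0):
    """Radial velocity (m/s) of a reflecting body from the Doppler
    shift of an acoustic carrier; ~1500 m/s is a typical seawater
    sound speed. Straight-line propagation is assumed here, which is
    exactly what breaks down in inhomogeneous deep seawater and
    motivates the paper's curved-ray DFS derivation."""
    return sound_speed * delta_f_hz / carrier_hz
```

For example, a 10 Hz shift on a 30 kHz carrier corresponds to 0.5 m/s.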
-
Abstract High‐power large‐aperture radar instruments are capable of detecting thousands of meteor head echoes within hours of observation, and manually identifying every head echo is prohibitively time‐consuming. Previous work has demonstrated that convolutional neural networks (CNNs) accurately detect head echoes, but training a CNN requires thousands of head echo examples manually identified at the same facility and with similar experiment parameters. Since pre‐labeled data is often unavailable, a method is developed to simulate head echo observations at any given frequency and pulse code. Real instances of radar clutter, noise, or ionospheric phenomena such as the equatorial electrojet are additively combined with synthetic head echo examples. This enables the CNN to differentiate between head echoes and other phenomena. CNNs are trained using tens of thousands of simulated head echoes at each of three radar facilities, where concurrent meteor observations were performed in October 2019. Each CNN is tested on a subset of actual data containing hundreds of head echoes, and demonstrates greater than 97% classification accuracy at each facility. The CNNs are capable of identifying a comprehensive set of head echoes, with over 70% sensitivity at all three facilities, including when the equatorial electrojet is present. The CNN demonstrates greater sensitivity to head echoes with higher signal strength, but still detects more than half of head echoes with maximum signal strength below 20 dB that would likely be missed during manual detection. These results demonstrate the ability of the synthetic data approach to train a machine learning algorithm to detect head echoes.
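The additive simulation strategy described above can be sketched as follows (names and interface are hypothetical; the actual pipeline is tailored to each facility's frequency and pulse code):

```python
import numpy as np

def mix_synthetic_head_echo(clutter, fs, doppler_hz, snr_db):
    """Additively combine a synthetic Doppler-shifted head-echo tone
    with a real clutter/noise recording at a chosen SNR, so a CNN can
    learn to separate head echoes from real interference such as the
    equatorial electrojet."""
    clutter = np.asarray(clutter, dtype=complex)
    t = np.arange(len(clutter)) / fs
    echo = np.exp(2j * np.pi * doppler_hz * t)  # unit-power complex tone
    clutter_power = np.mean(np.abs(clutter) ** 2)
    # scale the echo so its power sits snr_db above the measured clutter
    amp = np.sqrt(clutter_power * 10.0 ** (snr_db / 10.0))
    return amp * echo + clutter
```

Sweeping `snr_db` down toward the detection floor is what lets the trained network keep finding echoes below 20 dB that manual labeling tends to miss.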