Title: The Benefit of Distraction: Denoising Camera-Based Physiological Measurements Using Inverse Attention
Attention networks perform well on diverse computer vision tasks. The core idea is that the signal of interest is stronger in some pixels ("foreground"), and by selectively focusing computation on these pixels, networks can extract subtle information buried in noise and other sources of corruption. Our paper is based on one key observation: in many real-world applications, sources of corruption such as illumination and motion are often shared between the "foreground" and the "background" pixels. Can we utilize this to our advantage? We propose the utility of inverse attention networks, which focus on extracting information about these shared sources of corruption. We show that this helps to effectively suppress shared covariates and amplify signal information, resulting in improved performance. We illustrate this on the task of camera-based physiological measurement, where the signal of interest is weak and global illumination variations and motion act as significant shared sources of corruption. We perform experiments on three datasets and show that our inverse attention approach produces state-of-the-art results, increasing the signal-to-noise ratio by up to 5.8 dB, reducing heart rate and breathing rate estimation errors by as much as 30%, recovering subtle waveform dynamics, and generalizing from RGB to NIR videos without retraining.
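To make the core idea concrete, here is a toy numerical sketch (not the paper's network) of how an inverted attention mask can pool background pixels into an estimate of the corruption shared with the foreground; the 1-D "frame", the function name, and the amplitudes are all illustrative assumptions.

```python
import numpy as np

def inverse_attention_denoise(frame, attention):
    """Toy sketch of the inverse-attention idea: the complement of a soft
    attention mask pools the "background" pixels, yielding an estimate of
    the corruption shared with the foreground, which is then subtracted
    from the attended foreground signal."""
    w_fg = attention / attention.sum()                    # foreground pooling weights
    w_bg = (1.0 - attention) / (1.0 - attention).sum()    # inverse attention weights
    foreground = frame @ w_fg                             # signal + shared corruption
    shared = frame @ w_bg                                 # corruption-only estimate
    return foreground - shared

# Synthetic 1-D "frame": a weak signal in the foreground pixels plus a
# corruption term shared by every pixel (e.g., a global illumination shift).
mask = np.zeros(100)
mask[:20] = 1.0                  # first 20 pixels are foreground
frame = 0.5 * mask + 2.0         # signal amplitude 0.5, shared corruption 2.0
est = inverse_attention_denoise(frame, mask)
```

With the shared corruption identical across foreground and background, the background average cancels it exactly and the weak signal amplitude (0.5) is recovered.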
Award ID(s): 1801372
NSF-PAR ID: 10301742
Journal Name: Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021
Page Range / eLocation ID: 4955-4964
Sponsoring Org: National Science Foundation
More Like this
  1. SUMMARY

    Cross-correlations of ambient seismic noise are widely used for seismic velocity imaging, monitoring and ground motion analyses. A typical step in analysing noise cross-correlation functions (NCFs) is stacking short-term NCFs over longer time periods to increase the signal quality. Spurious NCFs could contaminate the stack, degrade its quality and limit its use. Many methods have been developed to improve the stacking of coherent waveforms, including earthquake waveforms, receiver functions and NCFs. This study systematically evaluates and compares the performance of eight stacking methods, including arithmetic mean or linear stacking, robust stacking, selective stacking, cluster stacking, phase-weighted stacking, time–frequency phase-weighted stacking, Nth-root stacking and averaging after applying an adaptive covariance filter. Our results demonstrate that, in most cases, all methods can retrieve clear ballistic or first arrivals. However, they yield significant differences in preserving the phase and amplitude information. This study provides a practical guide for choosing the optimal stacking method for specific research applications in ambient noise seismology. We evaluate the performance using multiple onshore and offshore seismic arrays in the Pacific Northwest region. We compare these stacking methods for NCFs calculated from raw ambient noise (referred to as Raw NCFs) and from ambient noise normalized using a one-bit clipping time normalization method (referred to as One-bit NCFs). We evaluate six metrics, including signal-to-noise ratios, phase dispersion images, convergence rate, temporal changes in the ballistic and coda waves, relative amplitude decays with distance and computational time. We show that robust stacking is the best choice for all applications (velocity tomography, monitoring and attenuation studies) using Raw NCFs. 
For applications using One-bit NCFs, all methods but phase-weighted and Nth-root stacking are good choices for seismic velocity tomography. Linear, robust and selective stacking are equally appropriate choices when using One-bit NCFs for monitoring applications. For applications relying on accurate relative amplitudes, the linear, robust, selective and cluster stacking methods all perform well with One-bit NCFs. The evaluations in this study can be generalized to a broad range of time-series analyses that utilize data coherence to perform ensemble stacking. Another contribution of this study is the accompanying open-source software package, StackMaster, which can be used for general-purpose time-series stacking.
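As an illustration of one of the compared methods, the following is a minimal NumPy sketch of phase-weighted stacking, assuming the standard formulation in which the linear stack is modulated by the coherence of instantaneous phases across windows; it is not the StackMaster implementation.

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal along the last axis via FFT (the same construction
    used by scipy.signal.hilbert)."""
    n = x.shape[-1]
    spec = np.fft.fft(x, axis=-1)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(spec * h, axis=-1)

def phase_weighted_stack(ncfs, nu=2.0):
    """Weight the linear stack of short-term NCFs (one per row) by the
    coherence of their instantaneous phases; nu sharpens the weighting."""
    phasors = np.exp(1j * np.angle(analytic_signal(ncfs)))
    coherence = np.abs(phasors.mean(axis=0)) ** nu   # near 1 where phases align
    return ncfs.mean(axis=0) * coherence

# Coherent windows are preserved; incoherent noise is down-weighted.
t = np.linspace(0.0, 1.0, 200)
coherent = np.tile(np.sin(2 * np.pi * 5.0 * t), (50, 1))
noise = np.random.default_rng(1).standard_normal((50, 200))
pws_signal = phase_weighted_stack(coherent)
pws_noise = phase_weighted_stack(noise)
```

Because the coherence factor lies in [0, 1], the phase-weighted stack never exceeds the linear stack in amplitude, which is precisely why it can suppress amplitude information relevant to attenuation studies.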

     
  2. Giove, Federico (Ed.)
    Resting-state blood-oxygen-level-dependent (BOLD) signal acquired through functional magnetic resonance imaging is a proxy of neural activity and a key mechanism for assessing neurological conditions. Therefore, practical tools to filter out artefacts that can compromise the assessment are required. On the one hand, a variety of tailored methods are in place to preprocess the data and deal with identified sources of noise (e.g., head motion, heartbeat, and breathing, to mention a few). On the other hand, there might be unknown sources of unstructured noise present in the data. To mitigate the effects of such unstructured noise, we propose a model-based filter that exploits the statistical properties of the underlying signal (i.e., long-term memory). Specifically, we consider autoregressive fractionally integrated process filters. Remarkably, we provide evidence that such processes can model the signals at different regions of interest to attain stationarity. Furthermore, we use a principled analysis in which a ground-truth signal with statistical properties similar to the BOLD signal is retrieved under noise injection using the proposed filters. Next, we considered preprocessed (i.e., with the identified sources of noise removed) resting-state BOLD data of 98 subjects from the Human Connectome Project. Our results demonstrate that the proposed filters decrease the power in the higher frequencies. However, unlike low-pass filters, the proposed filters do not remove all high-frequency information; instead, they preserve process-related higher-frequency information. Additionally, we considered four different metrics (power spectrum, functional connectivity using Pearson's correlation, coherence, and eigenbrains) to assess the impact of such filters. We provide evidence that whereas the first three keep most of the features of interest from a neuroscience perspective unchanged, the last exhibits some variations that could be due to the sporadic activity filtered out.
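To illustrate the integrative (long-memory) part of such a filter, here is a small sketch of fractional differencing, the operator (1 − B)^d at the heart of fractionally integrated models; the function names are illustrative and this is not the authors' code.

```python
import numpy as np

def frac_diff_weights(d, n):
    """First n binomial weights of the fractional-differencing operator
    (1 - B)^d, the integrative part of a fractionally integrated model."""
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - d) / k   # recursive binomial expansion
    return w

def fractional_difference(x, d):
    """Apply (1 - B)^d to the series x; for 0 < d < 1 this removes
    long-memory structure and pushes the series toward stationarity."""
    w = frac_diff_weights(d, len(x))
    return np.array([w[: t + 1][::-1] @ x[: t + 1] for t in range(len(x))])

# Sanity check: d = 1 reduces to the ordinary first difference.
x = np.arange(20.0)
dx = fractional_difference(x, 1.0)
```

For non-integer d the weights decay slowly (hyperbolically), which is why the filter attenuates low-frequency, long-memory structure without discarding all high-frequency content the way a hard low-pass filter would.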
  3. Introduction: Multi-series CT (MSCT) scans, including non-contrast CT (NCCT), CT Perfusion (CTP), and CT Angiography (CTA), are widely used in acute stroke imaging. While each scan has its own advantages for diagnosis, the varying image resolution across series hinders the radiologist's ability to discern subtle suspicious findings. Moreover, higher image quality requires higher radiation doses, increasing health risks such as cataract formation and cancer induction. It is therefore crucial to develop an approach that improves MSCT resolution while lowering radiation exposure. Hypothesis: MSCT images of the same patient are highly correlated in structural features, so transferring and integrating the shared and complementary information from different series is beneficial for achieving high image quality. Methods: We propose TL-GAN, a learning-based method that uses Transfer Learning (TL) and a Generative Adversarial Network (GAN) to reconstruct high-quality diagnostic images. Our TL-GAN method is evaluated on 4,382 images collected from nine patients' MSCT scans, including 415 NCCT slices, 3,696 CTP slices, and 271 CTA slices. We randomly split the nine patients into a training set (4 patients), a validation set (2 patients), and a testing set (3 patients). In preprocessing, we remove the background and skull and visualize in the brain window. The low-resolution images (1/4 of the original spatial size) are simulated by bicubic down-sampling. When training without TL, we train each series individually; with TL, we follow the scanning sequence (NCCT, CTP, and CTA) and fine-tune. Results: The performance of TL-GAN is evaluated by the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) index on 184 NCCT, 882 CTP, and 107 CTA test images. Figure 1 provides both visual (a-c) and quantitative (d-f) comparisons. Through TL-GAN, there is a significant improvement with TL over training from scratch for NCCT, CTP, and CTA images. The significance of these improvements is evaluated by one-tailed paired t-tests (p < 0.05). We enlarge the regions of interest for detailed visual comparison. Further, we evaluate CTP performance by calculating the perfusion maps, including cerebral blood flow (CBF) and cerebral blood volume (CBV). The visual comparison of the perfusion maps in Figure 2 demonstrates that TL-GAN achieves high diagnostic image quality, with results comparable to the ground-truth images for both CBF and CBV maps. Conclusion: Utilizing TL-GAN can effectively improve image resolution for MSCT and provide radiologists more image detail for suspicious findings, offering a practical solution for MSCT image quality enhancement.
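For reference, the PSNR metric used for this evaluation can be computed as in the short sketch below (a generic formulation, not the paper's evaluation code; the 8×8 example images are illustrative).

```python
import numpy as np

def psnr(reference, reconstruction, data_range=255.0):
    """Peak signal-to-noise ratio (in dB) between a ground-truth image and
    a reconstruction, as commonly used to score super-resolved slices."""
    diff = np.asarray(reference, float) - np.asarray(reconstruction, float)
    mse = np.mean(diff ** 2)
    if mse == 0.0:
        return float("inf")   # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)

# Example: a constant error of 1 gray level over an 8-bit image.
ref = np.zeros((8, 8))
rec = np.ones((8, 8))
score = psnr(ref, rec)   # MSE = 1, so 10 * log10(255**2) ≈ 48.13 dB
```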
  4. null (Ed.)
    Drilling and milling operations are material removal processes involved in everyday conventional production, especially in the high-speed metal cutting industry. Monitoring tool information (wear, dynamic behavior, deformation, etc.) is essential to guarantee the success of product fabrication. Many methods have been applied to monitor cutting tools using cutting force, spindle motor current, vibration, and acoustic emission. However, those methods are indirect and sensitive to environmental noise. Here, we study an in-process imaging technique that can capture cutting tool information during metal cutting. Just as machinists judge whether a tool is worn out by the naked eye, a vision system can directly present the condition of the machine tool. We propose a phase-shifted strobo-stereoscopic method (Figure 1) for three-dimensional (3D) imaging. Stroboscopic instruments are usually applied to measure fast-moving objects. The operating principle is as follows: when the frequency of the light-source illumination is synchronized with the motion of the object, the object appears stationary. The motion frequency of the target is derived from the count information of the encoder signals from the rotating spindle. If a small difference is added to the frequency, the object appears to move or rotate slowly. This effect serves as the source of the phase shifting; with this phase information, the target can be 3D reconstructed over a full 360-degree view. The stereoscopic technique uses two CCD cameras, positioned bilaterally symmetric with respect to the target, to capture images. The 3D scene is reconstructed from the locations of the same object points in the left and right images. In the proposed system, an air spindle was used to secure the motion accuracy and drilling/milling speed. As shown in Figure 2, two CCDs with 10X objective lenses were installed on a linear rail with rotary stages to capture raw pictures of the machine tool bit for 3D reconstruction. The overall measurement process is summarized in the flow chart (Figure 3). As the count number of the encoder signals is related to the rotary speed, the input speed (in RPM) was set as the reference signal to control the illumination frequency (f0) of the LED. When the frequency matched the reference signal, both CCDs started to gather pictures. With the mismatched-frequency (Δf) information, a sequence of images was gathered under the phase-shifted process for a whole-view 3D reconstruction. The study in this paper was based on monitoring the performance of a 3/8'' drilling tool. This paper presents the principle of the phase-shifted strobo-stereoscopic 3D imaging process. The hardware set-up is introduced, as well as the 3D imaging algorithm. The reconstructed-image analysis under different working speeds is discussed, including the reconstruction resolution. The uncertainty of the imaging process and of the built system is also analyzed. As the input signal is the working speed, no information from other sources is required. The proposed method can be applied as an on-machine or even in-process metrology. As a direct 3D-imaging machine vision method, it can directly provide machine tool surface and fatigue information. The presented method fills a gap in determining the performance status of machine tools, further safeguarding the fabrication process.
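The timing relations described above (LED base frequency f0 matched to the spindle, a small detuning Δf, and the resulting phase step between captured frames) can be sketched as follows; the helper name and the parameter choices are illustrative assumptions, not the authors' implementation.

```python
def strobe_schedule(rpm, n_phase_steps, frame_rate):
    """Illustrative timing for the phase-shifted strobe: the LED base
    frequency f0 matches the spindle rotation; detuning it by delta_f
    makes the tool appear to rotate slowly, so successive camera frames
    sample evenly spaced phases of one full apparent revolution."""
    f0 = rpm / 60.0                        # spindle frequency (Hz)
    delta_f = frame_rate / n_phase_steps   # apparent rotation rate (Hz)
    phase_step = 360.0 / n_phase_steps     # apparent tool rotation per frame (deg)
    return f0, f0 + delta_f, phase_step

# Example: a 6000 RPM spindle, 36 phase steps, cameras at 30 fps.
f0, f_led, step = strobe_schedule(6000, 36, 30.0)
```

At Δf = 0 the tool appears frozen; the small positive Δf chosen here spreads one apparent revolution across n_phase_steps frames, giving the whole-view phase sequence the reconstruction needs.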
  5. Mariño, Inés P. (Ed.)
    In many physiological systems, real-time endogenous and exogenous signals in living organisms provide critical information about physiological functions; however, these signals or variables of interest are often not directly accessible and must be estimated from noisy, measured signals. In this paper, we study an inverse problem of recovering the gas exchange signals of animals placed in a flow-through respirometry chamber from measured gas concentrations. For large-scale experiments (e.g., long scans with a high sampling rate) with many uncertainties (e.g., noise in the observations or an unknown impulse response function), this is a computationally challenging inverse problem. We first describe various computational tools that can be used for respirometry reconstruction and uncertainty quantification when the impulse response function is known. Then, we address the more challenging problem where the impulse response function is unknown or only partially known. We describe nonlinear optimization methods for reconstruction in which both the unknown model parameters and the unknown signal are reconstructed simultaneously. Numerical experiments show the benefits and potential impact of these methods in respirometry.
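As a minimal illustration of the known-impulse-response case, here is a Tikhonov-regularized deconvolution sketch (a generic regularization approach, not necessarily the authors' method); the exponential impulse response and the boxcar gas-exchange pulse are assumed purely for the example.

```python
import numpy as np

def tikhonov_deconvolve(measured, impulse_response, alpha=1e-6):
    """Recover a signal u from measurements y ≈ A u, where A convolves u
    with a known impulse response, by solving the regularized least-squares
    problem  min_u ||A u - y||^2 + alpha ||u||^2."""
    n = len(measured)
    h = np.zeros(n)
    m = min(n, len(impulse_response))
    h[:m] = impulse_response[:m]
    # Lower-triangular Toeplitz matrix implementing causal convolution.
    A = np.array([[h[i - j] if i >= j else 0.0 for j in range(n)]
                  for i in range(n)])
    # Regularized normal equations: (A^T A + alpha I) u = A^T y.
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ measured)

# Example: a boxcar gas-exchange pulse smeared by an (assumed) exponential
# chamber impulse response, recovered from noise-free measurements.
n = 60
t = np.arange(n)
h = np.exp(-t / 5.0)                          # assumed chamber response
u_true = ((t >= 10) & (t < 30)).astype(float)  # true gas-exchange pulse
y = np.convolve(u_true, h)[:n]                 # what the analyzer measures
u_hat = tikhonov_deconvolve(y, h)
```

With noisy observations, alpha trades data fit against stability; the tiny value used here is appropriate only for this noise-free demonstration.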