Title: Motion-compensated noninvasive periodontal health monitoring using handheld and motor-based photoacoustic-ultrasound imaging systems

Simultaneous visualization of the teeth and periodontium is of significant clinical interest for image-based monitoring of periodontal health. We recently reported the application of a dual-modality photoacoustic-ultrasound (PA-US) imaging system for resolving periodontal anatomy and periodontal pocket depths in humans. That work utilized a linear array transducer attached to a stepper motor to generate 3D images via maximum intensity projection, and it used a medical head immobilizer to reduce artifacts during volume rendering caused by subject motion (e.g., breathing, minor head movements). However, this solution does not completely eliminate motion artifacts, while also complicating the imaging procedure and causing patient discomfort. To address this issue, we report the implementation of an image registration technique to correctly align B-mode PA-US images and generate artifact-free 2D cross-sections. Application of the deshaking technique to PA phantoms revealed 80% similarity to the ground truth when shaking was intentionally applied during stepper motor scans. Images from handheld sweeps could also be deshaken using an LED PA-US scanner. In ex vivo porcine mandibles, pigmentation of the enamel was well-estimated, with error within 0.1 mm. The pocket depth measured in a healthy human subject was also in good agreement with our prior study. This report demonstrates that a modality-independent registration technique can be applied to clinically relevant PA-US scans of the periodontium to reduce the required operator skill and subject discomfort while showing potential for handheld clinical periodontal imaging.
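The core of the deshaking step is frame-to-frame registration of adjacent B-mode slices. As an illustrative sketch only (not the paper's algorithm), the following estimates a rigid integer translation between frames via FFT cross-correlation and re-aligns a stack; the function names and the translation-only motion model are assumptions.

```python
import numpy as np

def estimate_shift(ref, mov):
    """Estimate the integer (row, col) translation aligning `mov` to `ref`
    from the peak of their circular cross-correlation (computed via FFT)."""
    corr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(mov))).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peak indices into the signed range [-N/2, N/2)
    return tuple(int(p) if p < s // 2 else int(p) - s
                 for p, s in zip(peak, corr.shape))

def deshake(frames):
    """Align every frame of a (T, H, W) stack to the first frame."""
    aligned = [frames[0]]
    for f in frames[1:]:
        dy, dx = estimate_shift(frames[0], f)
        aligned.append(np.roll(f, (dy, dx), axis=(0, 1)))
    return np.stack(aligned)
```

A subpixel or deformable model would be needed for real tissue motion; this integer-shift version only conveys the registration idea.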

 
NSF-PAR ID: 10214866
Publisher / Repository: Optical Society of America
Journal Name: Biomedical Optics Express
Volume: 12
Issue: 3
ISSN: 2156-7085
Page Range / eLocation ID: Article No. 1543
Sponsoring Org: National Science Foundation
More Like this
  1. Purpose

    This work demonstrates a magnetization‐prepared diffusion‐weighted single‐shot fast spin echo (SS‐FSE) pulse sequence for body imaging, designed to improve robustness to geometric distortion. This work also proposes a scan averaging technique that is superior to magnitude averaging and is not subject to artifacts caused by object phase.

    Theory and Methods

    This single‐shot sequence is robust against violation of the Carr‐Purcell‐Meiboom‐Gill (CPMG) condition. This is achieved by dephasing the signal after diffusion weighting and tipping the MG component of the signal onto the longitudinal axis while the non‐MG component is spoiled. The MG signal component is then excited and captured using a traditional SS‐FSE sequence, although the echo needs to be recalled prior to each echo. Extended Parallel Imaging (ExtPI) averaging is used, in which coil sensitivities from the multiple acquisitions are concatenated into one large parallel imaging (PI) problem. The size of the PI problem is reduced by SVD‐based coil compression, which also provides background noise suppression. This sequence and reconstruction are evaluated in simulations, phantom scans, and in vivo abdominal clinical cases.
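    The SVD-based coil compression mentioned above can be sketched in a few lines: stacking the multicoil data into a coils-by-samples matrix, the dominant left singular vectors define virtual coils that retain most of the signal energy. This is a generic sketch, not the paper's implementation; the function name and array shapes are assumptions.

```python
import numpy as np

def compress_coils(kspace, n_virtual):
    """SVD-based coil compression: project `ncoils` physical channels onto
    the `n_virtual` dominant virtual coils.
    kspace: complex array of shape (ncoils, nsamples)."""
    U, _, _ = np.linalg.svd(kspace, full_matrices=False)
    A = U[:, :n_virtual].conj().T   # (n_virtual, ncoils) compression matrix
    return A @ kspace, A
```

When the underlying data are (approximately) low-rank across coils, applying `A.conj().T` to the compressed channels recovers the original data to within the discarded singular values.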

    Results

    Simulations show that the sequence generates a stable signal throughout the echo train which leads to good image quality. This sequence is inherently low‐SNR, but much of the SNR can be regained through scan averaging and the proposed ExtPI reconstruction. In vivo results show that the proposed method is able to provide diffusion encoded images while mitigating geometric distortion artifacts compared to EPI.

    Conclusion

    This work presents a diffusion‐prepared SS‐FSE sequence that is robust against the violation of the CPMG condition while providing diffusion contrast in clinical cases. Magn Reson Med 79:3032–3044, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

     
  2. Abstract

    Blood carries oxygen and nutrients to the trillions of cells in our body to sustain vital life processes. Lack of blood perfusion can cause irreversible cell damage. Therefore, blood perfusion measurement has widespread clinical applications. In this paper, we develop PulseCam — a new camera-based, motion-robust, and highly sensitive blood perfusion imaging modality with 1 mm spatial resolution and 1 frame-per-second temporal resolution. Existing camera-only blood perfusion imaging modalities suffer from two core challenges: (i) motion artifacts, and (ii) small-signal recovery in the presence of large surface reflection and measurement noise. PulseCam addresses these challenges by robustly combining the video recording from the camera with a pulse waveform measured using a conventional pulse oximeter to obtain reliable blood perfusion maps in the presence of motion artifacts and outliers in the video recordings. For video stabilization, we adopt a novel brightness-invariant optical flow algorithm that helps us reduce the error in the blood perfusion estimate to below 10% in different motion scenarios, compared to 20–30% error when using current approaches. PulseCam can detect subtle changes in blood perfusion below the skin with at least two times better sensitivity and three times better response time, and is significantly cheaper compared to infrared thermography. PulseCam can also detect venous or partial blood flow occlusion that is difficult to identify using existing modalities such as the perfusion index measured using a pulse oximeter. With the help of a pilot clinical study, we also demonstrate that PulseCam is robust and reliable in an operationally challenging surgery room setting. We anticipate that PulseCam will be used both at the bedside and as a point-of-care blood perfusion imaging device to visualize and analyze blood perfusion in an easy-to-use and cost-effective manner.
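    The role of the pulse-oximeter waveform as a reference for small-signal recovery can be illustrated with a toy per-pixel regression: after removing the static (surface-reflection) component, each pixel's time series is projected onto the reference pulse waveform, yielding a perfusion-strength map. This is a much-simplified stand-in for PulseCam's actual pipeline; the function name and array shapes are assumptions.

```python
import numpy as np

def perfusion_map(video, pulse_ref):
    """Per-pixel strength of the pulsatile signal, estimated by least-squares
    regression of each pixel's time series onto a reference pulse waveform.
    video: (T, H, W) frame stack; pulse_ref: (T,) reference waveform."""
    v = video - video.mean(axis=0)       # remove static surface reflection
    r = pulse_ref - pulse_ref.mean()
    # Regression coefficient per pixel: <v, r> / <r, r>
    return np.tensordot(r, v, axes=(0, 0)) / np.dot(r, r)
```

In practice the video would first be stabilized (the abstract's optical-flow step) and the regression made robust to outliers; this sketch shows only the reference-regression idea.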

     
  3. Abstract

    Optical coherence tomography (OCT) is a widely used non-invasive biomedical imaging modality that can rapidly provide volumetric images of samples. Here, we present a deep learning-based image reconstruction framework that can generate swept-source OCT (SS-OCT) images using undersampled spectral data, without any spatial aliasing artifacts. This neural network-based image reconstruction does not require any hardware changes to the optical setup and can be easily integrated with existing swept-source or spectral-domain OCT systems to reduce the amount of raw spectral data to be acquired. To show the efficacy of this framework, we trained and blindly tested a deep neural network using mouse embryo samples imaged by an SS-OCT system. Using 2-fold undersampled spectral data (i.e., 640 spectral points per A-line), the trained neural network can blindly reconstruct 512 A-lines in 0.59 ms using multiple graphics processing units (GPUs), removing spatial aliasing artifacts due to spectral undersampling while presenting a very good match to the images of the same samples reconstructed using the full spectral OCT data (i.e., 1280 spectral points per A-line). We also successfully demonstrate that this framework can be further extended to process 3× undersampled spectral data per A-line, with some performance degradation in the reconstructed image quality compared to 2× spectral undersampling. Furthermore, an A-line-optimized undersampling method is presented by jointly optimizing the spectral sampling locations and the corresponding image reconstruction network, which improved the overall imaging performance using fewer spectral data points per A-line compared to the 2× or 3× spectral undersampling results. This deep learning-enabled image reconstruction approach can be broadly used in various forms of spectral-domain OCT systems, helping to increase their imaging speed without sacrificing image resolution and signal-to-noise ratio.
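    Why spectral undersampling produces spatial aliasing can be seen from a one-reflector toy model: a reflector at depth bin d creates a cosine fringe across the spectral samples, and subsampling that fringe folds the depth peak onto a wrong bin. A minimal numpy illustration (sample counts chosen to mirror the 1280/640-point case in the abstract; this is not the paper's code):

```python
import numpy as np

def aline_depth_profile(spectrum):
    """Depth profile (A-line) as the magnitude of the FFT of the spectral fringe."""
    return np.abs(np.fft.fft(spectrum))

# A single reflector at depth bin d produces a cosine fringe across k-samples
n, d = 1280, 400
k = np.arange(n)
fringe = np.cos(2 * np.pi * d * k / n)

full = aline_depth_profile(fringe)        # peak at bin 400 (and its mirror)
sub = aline_depth_profile(fringe[::2])    # 640 samples: 2x undersampled,
                                          # the peak folds to an aliased bin
```

The undersampled profile places the reflector at a folded depth rather than the true one; removing such fold-over artifacts is the job of the trained network described above.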

     
  4. Purpose: To improve the image reconstruction for prospective motion correction (PMC) of simultaneous multislice (SMS) EPI of the brain, an update of receiver phase and resampling of coil sensitivities are proposed and evaluated. Methods: A camera-based system was used to track head motion (3 translations and 3 rotations) and dynamically update the scan position and orientation. We derived the change in receiver phase associated with a shifted field of view (FOV) and applied it in real time to each k-space line of the EPI readout trains. Second, for the SMS reconstruction, we adapted resampled coil sensitivity profiles reflecting the movement of slices. Single-shot gradient-echo SMS-EPI scans were performed in phantoms and human subjects for validation. Results: Brain SMS-EPI scans in the presence of motion with PMC and no phase correction for scan plane shift showed noticeable artifacts. These artifacts were visually and quantitatively attenuated when corrections were enabled. Correcting misaligned coil sensitivity maps improved the temporal SNR (tSNR) of time series by 24% (p = 0.0007) for scans with large movements (up to ~35 mm and 30°). Correcting the receiver phase improved the tSNR of a scan with minimal head movement by 50%, from 50 to 75, for a UK Biobank protocol. Conclusion: Reconstruction-induced motion artifacts in single-shot SMS-EPI scans acquired with PMC can be removed by dynamically adjusting the receiver phase of each line across EPI readout trains and updating coil sensitivity profiles during reconstruction. The method may be a valuable tool for SMS-EPI scans in the presence of subject motion.
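    The receiver-phase update for a shifted FOV follows from the Fourier shift theorem: translating the image corresponds to a linear phase ramp across k-space. A minimal 1-D sketch (illustrative only; the function name and the normalized-shift convention are assumptions):

```python
import numpy as np

def shift_fov(kspace_line, shift_frac):
    """Apply the linear k-space phase ramp that shifts the reconstructed
    image by `shift_frac` of the field of view (Fourier shift theorem).
    kspace_line: complex k-space samples along one readout."""
    n = kspace_line.size
    k = np.fft.fftfreq(n) * n                 # integer k-space indices
    return kspace_line * np.exp(-2j * np.pi * k * shift_frac)
```

Applied line-by-line during the readout train, such a phase update keeps the acquired data consistent with the dynamically shifted scan geometry.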
  5. Purpose

    We introduce and validate a scalable retrospective motion correction technique for brain imaging that incorporates a machine learning component into a model‐based motion minimization.

    Methods

    A convolutional neural network (CNN) trained to remove motion artifacts from 2D T2‐weighted rapid acquisition with refocused echoes (RARE) images is introduced into a model‐based data‐consistency optimization to jointly search for 2D motion parameters and the uncorrupted image. Our separable motion model allows for efficient intrashot (line‐by‐line) motion correction of highly corrupted shots, as opposed to previous methods which do not scale well with this refinement of the motion model. Final image generation incorporates the motion parameters within a model‐based image reconstruction. The method is tested in simulations and in vivo motion experiments of in‐plane motion corruption.
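    The model-based data-consistency search can be caricatured in 1-D: given motion-corrupted k-space data and a guide image (playing the role of the CNN output), grid-search the translation whose forward model best matches the measurement. This toy is far simpler than the paper's joint line-by-line optimization; all names and the translation-only model are assumptions.

```python
import numpy as np

def search_shift(measured_k, guide_img, shifts):
    """Pick the integer translation of the guide image whose forward model
    (here simply the FFT) best matches the measured k-space data."""
    costs = [np.linalg.norm(np.fft.fft(np.roll(guide_img, s)) - measured_k)
             for s in shifts]
    return shifts[int(np.argmin(costs))]
```

In the actual method the search runs over per-shot (and intrashot) motion parameters inside a full encoding model, with the CNN output steering convergence.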

    Results

    While the convolutional neural network alone provides some motion mitigation (at the expense of introduced blurring), allowing it to guide the iterative joint‐optimization both improves the search convergence and renders the joint‐optimization separable. This enables rapid mitigation within shots in addition to between shots. For 2D in‐plane motion correction experiments, the result is a significant reduction of both image space root mean square error in simulations, and a reduction of motion artifacts in the in vivo motion tests.

    Conclusion

    The separability and convergence improvements afforded by the combined convolutional neural network + model‐based method show the potential for meaningful postacquisition motion mitigation in clinical MRI.

     