This paper introduces a deep neural network based method, DeepOrganNet, to generate and visualize high-fidelity 3D / 4D organ geometric models in real time from single-view medical images with complicated backgrounds. Traditional 3D / 4D medical image reconstruction requires on the order of hundreds of projections, which incurs prohibitive computational time and delivers an undesirably high imaging / radiation dose to human subjects. Moreover, it typically requires tedious post-processing to segment or extract accurate 3D organ models. The computational time and imaging dose can be reduced by decreasing the number of projections, but the reconstructed image quality degrades accordingly. To our knowledge, no existing method directly and explicitly reconstructs multiple 3D organ meshes from a single 2D medical grayscale image on the fly. Given single-view 2D medical images, e.g., 3D / 4D-CT projections or X-ray images, our end-to-end DeepOrganNet framework efficiently and effectively reconstructs 3D / 4D lung models with a variety of geometric shapes by learning smooth deformation fields from multiple templates based on a trivariate tensor-product deformation technique, leveraging an informative latent descriptor extracted from the input 2D images. The proposed method is guaranteed to generate high-quality, high-fidelity manifold meshes for 3D / 4D lung models, which no current deep learning based approach to shape reconstruction from a single image can. The major contributions of this work are to accurately reconstruct 3D organ shapes from a single-view 2D projection, significantly reduce the processing time to allow on-the-fly visualization, and dramatically reduce the imaging dose for human subjects. Experimental results are evaluated and compared with a traditional reconstruction method and the state of the art in deep learning, using extensive 3D and 4D examples, including both synthetic phantom and real patient datasets. The proposed method needs only several milliseconds to generate organ meshes with 10K vertices, showing great potential for use in real-time image-guided radiation therapy (IGRT).
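As a rough illustration of the trivariate tensor-product deformation underlying the template warping, the sketch below applies a classical Bernstein-basis free-form deformation (FFD) to a template mesh's vertices. The lattice size, normalization, and all function names are illustrative assumptions, not the paper's actual implementation; in the paper, a network would predict the control-point offsets from the image descriptor.

```python
import numpy as np
from scipy.special import comb

def bernstein(n, i, t):
    # Bernstein basis polynomial B_{i,n}(t)
    return comb(n, i) * (t ** i) * ((1 - t) ** (n - i))

def ffd(vertices, lattice):
    """Trivariate tensor-product (Bernstein) free-form deformation.

    vertices: (V, 3) template mesh vertices, assumed normalized to [0, 1]^3.
    lattice:  (l+1, m+1, n+1, 3) displaced control points of the lattice.
    """
    l, m, n = np.array(lattice.shape[:3]) - 1
    s, t, u = vertices[:, 0], vertices[:, 1], vertices[:, 2]
    out = np.zeros_like(vertices)
    for i in range(l + 1):
        for j in range(m + 1):
            for k in range(n + 1):
                w = bernstein(l, i, s) * bernstein(m, j, t) * bernstein(n, k, u)
                out += w[:, None] * lattice[i, j, k]
    return out

# Toy usage: an undisplaced lattice reproduces the template exactly;
# displacing control points yields a smooth deformation of the mesh.
grid = np.stack(np.meshgrid(*[np.linspace(0, 1, 4)] * 3, indexing="ij"), axis=-1)
verts = np.random.rand(100, 3)
assert np.allclose(ffd(verts, grid), verts, atol=1e-6)
```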
3D Reconstruction of Tubular Structure Using Radially Deployed Projections
Acquiring volumetric data plays a crucial role in medical imaging, where 3D reconstruction is mostly performed from multislice image datasets. The objective of this research is to introduce a magnetic resonance technique for imaging tubular structures and reconstructing them in 3D from multiple radially deployed projections. The oblique projection sequence was evaluated on a phantom, and a multislice dataset was collected from the same phantom as a reference. To assess the accuracy of the 3D reconstruction process, the resulting meshes were compared using Hausdorff distance calculation and point cloud comparison methods.
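As a hedged sketch of the mesh-comparison step, the symmetric Hausdorff distance between two vertex sets can be computed with KD-trees; the sampling density and variable names below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def hausdorff(points_a, points_b):
    """Symmetric Hausdorff distance between two (V, 3) point clouds."""
    d_ab = cKDTree(points_b).query(points_a)[0].max()  # farthest A-point from B
    d_ba = cKDTree(points_a).query(points_b)[0].max()  # farthest B-point from A
    return max(d_ab, d_ba)

# Toy usage: compare two samplings of the same tubular (cylinder) surface.
theta = np.random.rand(2000) * 2 * np.pi
z = np.random.rand(2000)
tube = np.stack([np.cos(theta), np.sin(theta), z], axis=1)
print(hausdorff(tube, tube + 0.01 * np.random.randn(*tube.shape)))
```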
- Award ID(s): 1646566
- PAR ID: 10130862
- Date Published:
- Journal Name: 2019 IEEE Conference on Multimedia Information Processing and Retrieval (MIPR)
- Page Range / eLocation ID: 322 to 327
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Telecystoscopy can lower the barrier to access critical urologic diagnostics for patients around the world. A major challenge for robotic control of flexible cystoscopes and intuitive teleoperation is the pose estimation of the scope tip. We propose a novel real-time camera localization method using video recordings from a prior cystoscopy and a 3D bladder reconstruction to estimate cystoscope pose within the bladder during follow-up telecystoscopy. We map prior video frames into a low-dimensional space as a dictionary so that a new image can be likewise mapped to efficiently retrieve its nearest neighbor among the dictionary images. The cystoscope pose is then estimated from the correspondence among the new image, its nearest dictionary image, and the prior model from 3D reconstruction. We demonstrate the performance of our method using bladder phantoms of varying fidelity and a servo-controlled cystoscope to simulate the use case of bladder surveillance through telecystoscopy. The servo-controlled cystoscope with 3 degrees of freedom (angulation, roll, and insertion axes) was developed for collecting cystoscope videos from bladder phantoms. Cystoscope videos were acquired in a 2.5D bladder phantom (bladder-shaped cross-section plus height) with a panorama of a urothelium attached to the inner surface. Scans of the 2.5D phantom were performed in separate arc trajectories, each generated by actuation of the angulation axis with a fixed roll and insertion length. We further included variation in moving speed, imaging distance, and the presence of bladder tumors. Cystoscope videos were also acquired in a water-filled 3D silicone bladder phantom with hand-painted vasculature. Scans of the 3D phantom were performed in separate circular trajectories, each generated by actuation of the roll axis under a fixed angulation and insertion length. These videos were used to create 3D reconstructions, dictionary sets, and test data sets for evaluating the computational efficiency and accuracy of our proposed method in comparison with a method based on global Scale-Invariant Feature Transform (SIFT) features, named SIFT-only. Our method can retrieve the nearest dictionary image for 94–100% of test frames in under 55 ms per image, whereas the SIFT-only method can only find the image match for 56–100% of test frames in 6000–40,000 ms per image, depending on the size of the dictionary set and the richness of SIFT features in the images. Our method, with a speed of around 20 Hz for the retrieval stage, is a promising tool for real-time image-based scope localization in robotic cystoscopy when prior cystoscopy images are available.
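A minimal sketch of the dictionary-retrieval idea, assuming a PCA embedding and Euclidean nearest-neighbor search; the paper's actual low-dimensional mapping and distance metric may differ, and the frame sizes below are hypothetical.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestNeighbors

# Prior cystoscopy frames flattened to vectors (N frames, H*W pixels).
prior_frames = np.random.rand(500, 64 * 64)

# Map the dictionary frames into a low-dimensional space once, offline.
pca = PCA(n_components=32).fit(prior_frames)
dictionary = pca.transform(prior_frames)
index = NearestNeighbors(n_neighbors=1).fit(dictionary)

def retrieve(new_frame):
    """Return the index of the nearest dictionary frame for a new image."""
    code = pca.transform(new_frame.reshape(1, -1))
    return index.kneighbors(code)[1][0, 0]

print(retrieve(prior_frames[42]))  # recovers frame 42 on this toy data
```

The pose of the new frame would then be read off from the known pose of the retrieved dictionary frame and refined against the prior 3D model.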
Purpose: To develop an improved k‐space reconstruction method using scan‐specific deep learning that is trained on autocalibration signal (ACS) data. Theory: Robust artificial‐neural‐networks for k‐space interpolation (RAKI) reconstruction trains convolutional neural networks on ACS data. This enables nonlinear estimation of missing k‐space lines from acquired k‐space data with improved noise resilience, as opposed to conventional linear k‐space interpolation‐based methods, such as GRAPPA, which are based on linear convolutional kernels. Methods: The training algorithm is implemented using a mean square error loss function over the target points in the ACS region, using a gradient descent algorithm. The neural network contains 3 layers of convolutional operators, 2 of which include nonlinear activation functions. The noise performance and reconstruction quality of the RAKI method were compared with GRAPPA in phantom, as well as in neurological and cardiac in vivo data sets. Results: Phantom imaging shows that the proposed RAKI method outperforms GRAPPA at high (≥4) acceleration rates, both visually and quantitatively. Quantitative cardiac imaging shows improved noise resilience at high acceleration rates (rate 4: 23% and rate 5: 48%) over GRAPPA. The same trend of improved noise resilience is also observed in high‐resolution brain imaging at high acceleration rates. Conclusion: The RAKI method offers a training‐database‐free deep learning approach for MRI reconstruction, with the potential to improve many existing reconstruction approaches, and is compatible with conventional data acquisition protocols.
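A hedged sketch of the described three-layer network, with complex k-space represented as stacked real/imaginary channels; the layer widths, kernel sizes, coil count, and training loop below are assumptions for illustration, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

# Three convolutional layers, the first two followed by nonlinearities,
# mapping acquired multi-coil k-space channels to the missing lines.
class RAKINet(nn.Module):
    def __init__(self, coils=8):
        super().__init__()
        c = 2 * coils  # real + imaginary channels per coil
        self.net = nn.Sequential(
            nn.Conv2d(c, 32, kernel_size=(5, 2)), nn.ReLU(),
            nn.Conv2d(32, 8, kernel_size=(1, 1)), nn.ReLU(),
            nn.Conv2d(8, c, kernel_size=(3, 2)),
        )

    def forward(self, x):
        return self.net(x)

# Scan-specific training: fit the network on the ACS block alone with an
# MSE loss over the target points, then apply it to interpolate the
# missing outer k-space lines of the same scan.
model = RAKINet()
acs_in = torch.randn(1, 16, 48, 24)      # toy ACS input patch
acs_target = torch.randn(1, 16, 42, 22)  # toy targets (valid-conv output size)
opt = torch.optim.Adam(model.parameters(), lr=3e-3)
for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(acs_in), acs_target)
    loss.backward()
    opt.step()
```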
In this paper, we propose a model for parallel magnetic resonance imaging (pMRI) reconstruction, regularized by a carefully designed tight framelet system, that leads to reconstructed images with far fewer artifacts than those from existing models. Our model is motivated by the observations that each receiver coil in a pMRI system is more sensitive to the part of the object nearest to the coil, and that all coil images are correlated. To exploit these observations, we first stack all coil images together as a 3-dimensional (3D) data matrix, and then design a 3D directional Haar tight framelet (3DHTF) to represent it. After analyzing the sparse information of the coil images provided by the high-pass filters of the 3DHTF, we separate the high-pass filters into effective and ineffective ones, and then devise a 3D directional Haar semi-tight framelet (3DHSTF) from the 3DHTF by replacing its ineffective filters with a single filter. This 3DHSTF is tailor-made for coil images while offering significant computational savings compared to the 3DHTF. With the 3DHSTF, we propose an l1-3DHSTF model for pMRI reconstruction. Numerical experiments on MRI phantom and in-vivo data sets demonstrate the superiority of our l1-3DHSTF model in terms of the efficiency of reducing aliasing artifacts in the reconstructed images.
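For intuition, an l1-regularized framelet model of this type is commonly solved by iterative soft-thresholding; the sketch below uses a generic analysis operator and ISTA-style updates as stand-ins for the paper's 3DHSTF and its actual algorithm, both of which are assumptions here.

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of tau * ||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def ista_framelet(y, A, At, W, Wt, lam=0.01, step=1.0, iters=100):
    """Minimize 0.5 * ||A x - y||^2 + lam * ||W x||_1 over framelet
    coefficients c with x = Wt(c), assuming W(Wt(c)) = c (tight frame).

    A, At: forward sensing operator and its adjoint (e.g., coil-weighted
           Fourier sampling); W, Wt: framelet analysis/synthesis transforms.
    """
    c = W(At(y))
    for _ in range(iters):
        x = Wt(c)
        grad = W(At(A(x) - y))          # gradient of the data-fit term
        c = soft_threshold(c - step * grad, step * lam)
    return Wt(c)

# Toy usage with identity operators on a noisy 1D signal (pure denoising).
I = lambda v: v
signal = np.zeros(64); signal[20:30] = 1.0
noisy = signal + 0.1 * np.random.randn(64)
print(np.linalg.norm(ista_framelet(noisy, I, I, I, I, lam=0.05) - signal))
```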
This paper introduces a simple three-dimensional (3D) stereoscopic method using a single imaging device consisting of a charge-coupled device (CCD) and a zoom lens. Unlike conventional stereoscopy, which requires a pair of imaging devices, 3D surface imaging is achieved by 3D reconstruction from two images obtained at two different camera positions via scanning. The experiments were performed by obtaining two images of the measurement target in two ways: (1) by moving the object while the imaging device is stationary, and (2) by moving the imaging device while the object is stationary. Conventional stereoscopy suffers from disparity errors in 3D reconstruction because a pair of imaging devices is never perfectly identical and alignment errors are always present in the imaging system setup. The proposed method significantly reduces the disparity error in 3D image reconstruction and makes calibration of the imaging system simple and convenient. The proposed imaging system showed a disparity error of 0.26 camera pixels.
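For reference, depth recovery from disparity in such a single-camera, translated-view setup follows the standard triangulation relation Z = f·B/d; the sketch below is a generic illustration with assumed camera parameters, not the paper's calibration.

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_mm):
    """Standard stereo triangulation: Z = f * B / d.

    disparity_px: horizontal pixel shift between the two views.
    focal_px:     focal length expressed in pixels.
    baseline_mm:  translation between the two camera (or object) positions.
    """
    return focal_px * baseline_mm / disparity_px

# A 0.26-pixel disparity error propagates into depth error roughly as
# dZ = Z**2 / (f * B) * dd, shown here with assumed (hypothetical) values.
f, B, d = 2000.0, 50.0, 25.0           # pixels, mm, pixels
Z = depth_from_disparity(d, f, B)      # 4000 mm
print(Z, (Z ** 2) / (f * B) * 0.26)    # depth, and error for a 0.26 px shift
```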