

Award ID contains: 1657364



  1. Free, publicly-accessible full text available January 1, 2023
  2. This paper introduces a deep neural network based method, DeepOrganNet, to generate and visualize high-fidelity 3D / 4D organ geometric models in real time from single-view medical images with complicated backgrounds. Traditional 3D / 4D medical image reconstruction requires on the order of hundreds of projections, which incurs prohibitive computational time and delivers an undesirably high imaging / radiation dose to human subjects. It also requires laborious post-processing to segment or extract accurate 3D organ models. Reducing the number of projections lowers the computational time and imaging dose, but degrades the reconstructed image quality accordingly. To our knowledge, no existing method directly and explicitly reconstructs multiple 3D organ meshes from a single 2D medical grayscale image on the fly. Given single-view 2D medical images, e.g., 3D / 4D-CT projections or X-ray images, our end-to-end DeepOrganNet framework efficiently and effectively reconstructs 3D / 4D lung models with a variety of geometric shapes by learning smooth deformation fields from multiple templates, based on a trivariate tensor-product deformation technique that leverages an informative latent descriptor extracted from the input 2D images. The proposed method is guaranteed to generate high-quality, high-fidelity manifold meshes for 3D / 4D lung models, which no current deep learning based approach to shape reconstruction from a single image can do. The major contributions of this work are to accurately reconstruct 3D organ shapes from a single 2D projection, to significantly reduce the procedure time to allow on-the-fly visualization, and to dramatically reduce the imaging dose for human subjects.
Experimental results are evaluated and compared against a traditional reconstruction method and the state of the art in deep learning on extensive 3D and 4D examples, including both synthetic phantom and real patient datasets. The proposed method needs only a few milliseconds to generate organ meshes with 10K vertices, giving it great potential for use in real-time image-guided radiation therapy (IGRT).
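The trivariate tensor-product deformation mentioned in the abstract can be illustrated with a classic free-form deformation (FFD) over a Bezier control lattice; the sketch below is a minimal stand-in (the paper's learned deformation fields are more elaborate), where each template-mesh vertex with parametric coordinates (u, v, w) is displaced by a weighted sum of lattice control points. All function and parameter names here are illustrative, not taken from the paper.

```python
import numpy as np
from math import comb

def bernstein(n, i, t):
    """Bernstein basis polynomial B_{i,n}(t) on [0, 1]."""
    return comb(n, i) * (t ** i) * ((1.0 - t) ** (n - i))

def ffd_deform(points, control):
    """Trivariate tensor-product (Bezier-volume) deformation.

    points  : (P, 3) parametric coordinates (u, v, w) in [0, 1]^3
    control : (l+1, m+1, n+1, 3) lattice of control points
    Each point maps to sum_{i,j,k} B_i(u) B_j(v) B_k(w) * control[i, j, k].
    """
    l, m, n = (s - 1 for s in control.shape[:3])
    out = np.zeros_like(points, dtype=float)
    for p, (u, v, w) in enumerate(points):
        q = np.zeros(3)
        for i in range(l + 1):
            bu = bernstein(l, i, u)
            for j in range(m + 1):
                bv = bernstein(m, j, v)
                for k in range(n + 1):
                    q += bu * bv * bernstein(n, k, w) * control[i, j, k]
        out[p] = q
    return out
```

Because Bernstein polynomials have linear precision, an undeformed lattice (control point (i, j, k) placed at (i/l, j/m, k/n)) reproduces the input points exactly; moving control points then deforms the embedded template smoothly, which is the mechanism a network can drive from a latent descriptor.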
  3. Analyzing the geometric and semantic properties of 3D point clouds with deep networks remains challenging due to the irregularity and sparsity with which their geometric structures are sampled. This paper presents a new method to define and compute convolution directly on 3D point clouds via the proposed annular convolution. This convolution operator better captures the local neighborhood geometry of each point by specifying (regular and dilated) ring-shaped structures and directions in the computation, and it adapts to geometric variability and scalability at the signal processing level. We apply it in hierarchical neural networks for object classification, part segmentation, and semantic segmentation of large-scale scenes. Extensive experiments and comparisons demonstrate that our approach outperforms state-of-the-art methods on a variety of standard benchmark datasets (e.g., ModelNet10, ModelNet40, ShapeNetpart, S3DIS, and ScanNet).
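The ring-shaped support regions behind annular convolution can be sketched in a few lines: for a query point, neighbors are binned by whether their distance falls inside an annulus, and features are aggregated per ring. This is a simplified illustration (the actual operator also orders neighbors and learns kernel weights); the helper names and the mean-pooling aggregation are assumptions, not the paper's API.

```python
import numpy as np

def annular_neighbors(points, center_idx, r_inner, r_outer):
    """Indices of points whose distance from points[center_idx] lies in
    the ring r_inner < d <= r_outer (the center itself is excluded for
    r_inner >= 0, since its own distance is zero)."""
    d = np.linalg.norm(points - points[center_idx], axis=1)
    return np.nonzero((d > r_inner) & (d <= r_outer))[0]

def ring_features(points, feats, center_idx, radii):
    """Mean-pool per-point features over concentric rings; a crude
    stand-in for one annular-convolution support region. Dilated rings
    are obtained simply by widening the (r_inner, r_outer) gaps."""
    out = []
    for r_in, r_out in radii:
        idx = annular_neighbors(points, center_idx, r_in, r_out)
        out.append(feats[idx].mean(axis=0) if len(idx)
                   else np.zeros(feats.shape[1]))
    return np.stack(out)
```

Stacking several (r_inner, r_outer) pairs per point yields a multi-scale descriptor that respects local geometry without requiring a regular grid, which is the intuition the abstract describes.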
  4. This paper presents a novel surface registration technique using the spectrum of the shapes, which facilitates accurate localization and visualization of non-isometric deformations of the surfaces. To register two surfaces, we map both the eigenvalues and the eigenvectors of the Laplace-Beltrami operator of the shapes by optimizing an energy function. The function combines a smoothness term that aligns the eigenvalues with a distance term between the eigenvectors at feature points that aligns the eigenvectors. The feature points are generated from the static points of certain eigenvectors of the surfaces. By using both the eigenvalues and the eigenvectors at these feature points, computational efficiency improves considerably without loss of accuracy compared with approaches that use the eigenvectors at all vertices. In our technique, the variation of the shape is expressed by a scale function defined at each vertex; consequently, the total energy function aligning the two surfaces can be defined using linear interpolation of the scale function's derivatives. Optimizing the energy function solves for the scale function and achieves the alignment. After alignment, the eigenvectors can be used to compute the point-to-point correspondence between the surfaces, so the proposed method can accurately define the displacement of the vertices. We evaluate our method on synthetic and real data using hippocampus, heart, and hand models, and compare it with non-rigid Iterative Closest Point (ICP) and a similar spectrum-based method. These experiments demonstrate the advantages and accuracy of our method.
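The spectral ingredients of this registration pipeline can be sketched with a combinatorial graph Laplacian as a discrete stand-in for the Laplace-Beltrami operator: compute each shape's truncated spectrum, then score how well the eigenvalues line up. The energy below is a heavily simplified version of the paper's eigenvalue-alignment term (the full method also aligns eigenvectors at feature points and optimizes a per-vertex scale function); all names here are illustrative.

```python
import numpy as np

def graph_laplacian(n, edges):
    """Combinatorial graph Laplacian L = D - A for an undirected graph,
    a common discrete stand-in for the Laplace-Beltrami operator."""
    L = np.zeros((n, n))
    for a, b in edges:
        L[a, b] -= 1.0
        L[b, a] -= 1.0
        L[a, a] += 1.0
        L[b, b] += 1.0
    return L

def spectrum(L, k):
    """First k eigenvalues (ascending) and eigenvectors of symmetric L."""
    vals, vecs = np.linalg.eigh(L)
    return vals[:k], vecs[:, :k]

def eigenvalue_alignment_energy(vals_a, vals_b):
    """Simplified eigenvalue-alignment term: squared gap between two
    truncated spectra (zero exactly when the spectra match)."""
    return float(np.sum((vals_a - vals_b) ** 2))
```

Isometric shapes share a Laplace-Beltrami spectrum, so this term vanishes for them; for non-isometric pairs it measures the spectral gap that the paper's scale function is optimized to close.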