Title: Towards 3D Shape Estimation from 2D Particle Images: A State-of-the-Art Review and Demonstration
Particle shape plays a critical role in governing the properties and behavior of granular materials. Despite advances in capturing and analyzing 3D particle shapes, these analyses remain more demanding than 2D shape analysis due to the high computational costs and time-consuming nature of 3D imaging. Consequently, there is growing interest in exploring correlations between 3D and 2D shapes, since such correlations could enable a reasonable estimation of a 3D shape from a single 2D particle image or, at most, a couple of images. In response to this research interest, this study provides a thorough review of previous studies that have attempted to establish a correlation between 3D and 2D shape measures. A key finding from the review is the high correlation between 2D perimeter circularity (cp) and Wadell's true sphericity (S) defined in 3D, suggesting that a 3D shape can be estimated from the cp value in terms of S. To further substantiate this correlation, this study analyzes approximately 400 mineral particle geometries, available from an open-access data repository, in both 3D and 2D. The analysis reveals a stronger linear relationship between S and cp than between S and other 2D shape descriptors broadly used in the research community. Furthermore, the limited variance in cp values indicates that cp is insensitive to changes in viewpoint, so fewer 2D images are needed. These findings offer a promising avenue for cost-effective and reliable 3D shape estimation from 2D particle images.
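As a concrete illustration (not part of the paper), Wadell-style perimeter circularity c_p — the ratio of the perimeter of the equal-area circle to the outline's actual perimeter — can be computed from a polygonal 2D silhouette. The function name and the hexagon example are hypothetical inputs:

```python
import math

def perimeter_circularity(points):
    """Wadell-style 2D perimeter circularity: c_p = P_eq / P, where
    P_eq = 2*sqrt(pi*A) is the perimeter of the circle with the same
    area A as the outline, and P is the actual outline perimeter.
    c_p = 1 for a circle and < 1 for any other shape."""
    n = len(points)
    area = 0.0
    perim = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        area += x1 * y2 - x2 * y1   # shoelace formula term
        perim += math.hypot(x2 - x1, y2 - y1)
    area = abs(area) / 2.0
    return 2.0 * math.sqrt(math.pi * area) / perim

# A regular hexagon is close to circular, so c_p should be near 1.
hexagon = [(math.cos(a), math.sin(a))
           for a in (2 * math.pi * k / 6 for k in range(6))]
print(round(perimeter_circularity(hexagon), 3))  # -> 0.952
```

A linear relation between c_p and S, fitted to data such as the ~400 particles analyzed in the paper, could then map this value to an estimated 3D sphericity; the fit coefficients themselves must come from the data and are not reproduced here.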
Award ID(s):
1938431 1938285
PAR ID:
10538869
Author(s) / Creator(s):
Publisher / Repository:
Hosokawa Powder Technology Foundation
Date Published:
Journal Name:
KONA Powder and Particle Journal
ISSN:
0288-4534
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Ward, C (Ed.)
Computational microstructure design aims to fully exploit the precipitate strengthening potential of an alloy system. The development of accurate models to describe the temporal evolution of precipitate shapes and sizes is of great technological relevance. The experimental investigation of the precipitate microstructure is mostly based on two-dimensional micrographic images, while quantitative modeling of the temporal evolution of these microstructures needs to be discussed in three-dimensional simulation setups. To consistently bridge the gap between 2D images and 3D simulation data, we employ the method of central moments. Based on this, the aspect ratio of plate-like particles is consistently defined in two and three dimensions. The accuracy and interoperability of the method are demonstrated through representative 2D and 3D pixel-based sample data containing particles with a predefined aspect ratio. The applicability of the presented approach in integrated computational materials engineering (ICME) is demonstrated by the example of γ″ microstructure coarsening in Ni-based superalloys at 730 °C. For the first time, γ″ precipitate shape information from experimental 2D images and 3D phase-field simulation data is directly compared. This coarsening data indicates deviations from the classical ripening behavior and reveals periods of increased precipitate coagulation.
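The moment-based aspect-ratio definition described above can be sketched in 2D (a minimal illustration, not the authors' implementation; the 3D version uses the analogous 3×3 second-moment tensor):

```python
import numpy as np

def aspect_ratio_2d(mask):
    """Aspect ratio of a particle in a binary 2D image via central
    moments: the eigenvalues of the second central-moment (covariance)
    matrix give the squared half-axes of the best-fit ellipse."""
    ys, xs = np.nonzero(mask)
    x = xs - xs.mean()
    y = ys - ys.mean()
    mu20 = np.mean(x * x)          # second central moments
    mu02 = np.mean(y * y)
    mu11 = np.mean(x * y)
    cov = np.array([[mu20, mu11], [mu11, mu02]])
    lam = np.linalg.eigvalsh(cov)  # eigenvalues, ascending
    return float(np.sqrt(lam[1] / lam[0]))

# Axis-aligned 4x12 pixel rectangle -> aspect ratio close to 3
mask = np.zeros((20, 20), dtype=bool)
mask[8:12, 4:16] = True
print(aspect_ratio_2d(mask))
```

Because the moments are computed from pixel (or voxel) data directly, the same definition applies unchanged to experimental micrographs and to phase-field simulation output, which is what makes the 2D/3D comparison consistent.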
We present a new weakly supervised learning-based method for generating novel category-specific 3D shapes from unoccluded image collections. Our method is weakly supervised and only requires silhouette annotations from unoccluded, category-specific objects. It does not require access to the object's 3D shape, multiple observations per object from different views, intra-image pixel correspondences, or any view annotations. Key to our method is a novel multi-projection generative adversarial network (MP-GAN) that trains a 3D shape generator to be consistent with multiple 2D projections of the 3D shapes, without direct access to these 3D shapes. This is achieved through multiple discriminators that encode the distribution of 2D projections of the 3D shapes seen from different views. Additionally, to determine the view information for each silhouette image, we also train a view prediction network on visualizations of 3D shapes synthesized by the generator. We iteratively alternate between training the generator and training the view prediction network. We validate our multi-projection GAN on both synthetic and real image datasets. Furthermore, we show that multi-projection GANs can aid in learning other high-dimensional distributions from lower-dimensional training data, such as material-class-specific spatially varying reflectance properties from images.
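The projection operation consumed by the per-view discriminators can be sketched for a binary voxel grid (a simplified illustration; in GAN training a differentiable relaxation of this hard projection would be used, and the plate example is hypothetical):

```python
import numpy as np

def silhouette(voxels, axis):
    """Orthographic silhouette of a binary voxel grid: a pixel is
    filled if any voxel along the viewing axis is occupied. (For
    gradient-based training, a soft version such as 1 - prod(1 - v)
    along the axis replaces the hard `any`.)"""
    return voxels.any(axis=axis)

# A 1x3x3 "plate" seen along two axes yields different silhouettes;
# matching the silhouette distribution per view is what constrains
# the generator's 3D shapes.
vox = np.zeros((4, 4, 4), dtype=bool)
vox[1, 0:3, 0:3] = True
front = silhouette(vox, axis=0)  # 3x3 square -> 9 filled pixels
side = silhouette(vox, axis=1)   # thin strip -> 3 filled pixels
print(front.sum(), side.sum())   # -> 9 3
```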
  3.
Abstract Background Cryo-EM data generated by electron tomography (ET) contains images of individual protein particles in different orientations and at different tilt angles. Individual cryo-EM particles can be aligned to reconstruct a 3D density map of a protein structure. However, low contrast and high noise in particle images make it challenging to build 3D density maps at intermediate to high resolution (1–3 Å). To overcome this problem, we propose a fully automated cryo-EM 3D density map reconstruction approach based on deep learning particle picking. Results A 2D particle mask is automatically generated for every single particle. The method then uses a computer vision image alignment algorithm (image registration) to automatically align the particle masks, and it calculates the difference between the particle image orientation angles to align the original particle images. Finally, it reconstructs a localized 3D density map between every two single-particle images that share the largest number of corresponding features. The localized 3D density maps are then averaged to reconstruct a final 3D density map. The reconstructed 3D density maps illustrate the potential to determine the structures of molecules from a few samples of good particles. In addition, using the localized particle samples (with no background) to generate the localized 3D density maps can improve resolution evaluation in experimental cryo-EM maps. Tested on two widely used datasets, Auto3DCryoMap is able to reconstruct good 3D density maps using only a few thousand protein particle images, far fewer than the hundreds of thousands of particles required by existing methods. Conclusions We design a fully automated approach for cryo-EM 3D density map reconstruction (Auto3DCryoMap). Instead of increasing the signal-to-noise ratio by 2D class averaging, our approach uses 2D particle masks to produce locally aligned particle images.
Auto3DCryoMap is able to accurately align structural particle shapes. It can also construct a decent 3D density map from only a few thousand aligned particle images, whereas existing tools require hundreds of thousands of particle images. Finally, by using the pre-processed particle images, Auto3DCryoMap reconstructs a better 3D density map than it does from the original particle images.
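The abstract does not specify which image-registration algorithm is used; one classic choice for translational alignment of two particle masks, FFT-based phase correlation, can be sketched as follows (an assumption for illustration, not necessarily Auto3DCryoMap's algorithm):

```python
import numpy as np

def phase_correlation_shift(ref, mov):
    """Estimate the integer (dy, dx) translation that, applied to
    `mov` (e.g. via np.roll), best aligns it to `ref`, by locating
    the peak of the phase-correlation surface."""
    F1 = np.fft.fft2(ref)
    F2 = np.fft.fft2(mov)
    cross = F1 * np.conj(F2)
    cross /= np.abs(cross) + 1e-12       # keep phase only
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the image into negative values
    shifts = [p if p <= s // 2 else p - s
              for p, s in zip(peak, corr.shape)]
    return tuple(int(s) for s in shifts)

ref = np.zeros((32, 32))
ref[10:14, 10:14] = 1.0                   # a simple square "mask"
mov = np.roll(ref, (3, -2), axis=(0, 1))  # shifted copy
print(phase_correlation_shift(ref, mov))  # -> (-3, 2)
```

Rolling `mov` by the returned shift recovers `ref`, which is the sense in which the masks become "locally aligned" before the localized maps are reconstructed.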
Integral imaging has proven useful for three-dimensional (3D) object visualization in adverse environmental conditions such as partial occlusion and low light. This paper considers the problem of 3D object tracking. Two-dimensional (2D) object tracking within a scene is an active research area. Several recent algorithms use object detection methods to obtain 2D bounding boxes around objects of interest in each frame. Then, one bounding box can be selected out of many for each object of interest using motion prediction algorithms. Many of these algorithms rely on images obtained using traditional 2D imaging systems. A growing literature demonstrates the advantage of using 3D integral imaging instead of traditional 2D imaging for object detection and visualization in adverse environmental conditions, and integral imaging's depth sectioning ability has also proven beneficial for these tasks. Integral imaging captures an object's depth in addition to its 2D spatial position in each frame. A recent study uses integral imaging for the 3D reconstruction of the scene for object classification and utilizes the mutual information between the object's bounding box in this 3D reconstructed scene and the 2D central perspective to achieve passive depth estimation. We build on this method by using Bayesian optimization to track the object's depth in as few 3D reconstructions as possible. We study the performance of our approach on laboratory scenes with occluded objects moving in 3D and show that the proposed approach outperforms 2D object tracking. In our experimental setup, mutual information-based depth estimation with Bayesian optimization achieves depth tracking with as few as two 3D reconstructions per frame, which corresponds to the theoretical minimum number of 3D reconstructions required for depth estimation. To the best of our knowledge, this is the first report on 3D object tracking using the proposed approach.
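The mutual-information score that the depth search maximizes can be sketched for two grayscale images (a generic histogram-based estimate for illustration; the actual pipeline in the cited study may compute it differently):

```python
import numpy as np

def mutual_information(a, b, bins=16):
    """Mutual information of two equally sized grayscale images,
    estimated from their joint intensity histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                 # joint distribution
    px = pxy.sum(axis=1, keepdims=True)       # marginals
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                              # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
img = rng.random((64, 64))
noise = rng.random((64, 64))
# An image shares far more information with itself than with noise;
# the same contrast is what identifies the best-focused depth plane
# in the reconstructed scene.
print(mutual_information(img, img) > mutual_information(img, noise))
```

A Bayesian optimizer would treat depth as the input variable and this score as the (expensive) objective, choosing the next depth to reconstruct so that few reconstructions per frame are needed.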
Many industries, such as human-centric product manufacturing, are calling for mass customization with personalized products. One key enabler of mass customization is 3D printing, which makes flexible design and manufacturing possible. However, personalized designs bring challenges for shape matching and analysis owing to their high complexity and shape variations. Traditional shape matching methods are limited to spatial alignment: they find a transformation matrix between two shapes but cannot determine a vertex-to-vertex or feature-to-feature correspondence between them, and therefore cannot directly measure the deformation of the shape and features of interest. To measure the deformations widely seen in the mass customization paradigm and address these limitations of alignment-based shape matching, we formulate the geometry matching of deformed shapes as a correspondence problem. The problem is challenging due to the huge solution space and nonlinear complexity, which are difficult for conventional optimization methods to handle. Based on the observation that well-established massive databases provide correspondence results for the treated teeth models, a learning-based method is proposed for the shape correspondence problem. Specifically, a state-of-the-art geometric deep learning method is used to learn the correspondence of a set of collected deformed shapes. By learning the deformations of the models, the underlying variations of the shapes are extracted and used to find the vertex-to-vertex mapping among these shapes. We demonstrate the application of the proposed approach in the orthodontics industry, and the experimental results show that the proposed method predicts correspondences quickly and accurately and is robust to extreme cases. Furthermore, the proposed method is well suited for deformed shape analysis in mass customization enabled by 3D printing.
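For contrast with the learned approach, the naive baseline — a Euclidean nearest-neighbor vertex correspondence, which deformation-aware learned features improve upon — can be sketched as follows (a hypothetical illustration with made-up point sets):

```python
import numpy as np

def nearest_vertex_correspondence(src, dst):
    """Baseline vertex-to-vertex map: for each vertex of `src`
    (n x 3), return the index of its nearest vertex in `dst`
    (m x 3). Learned methods replace this purely Euclidean
    criterion with deformation-aware features, which is what
    makes them robust to large shape variations."""
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(axis=-1)
    return d2.argmin(axis=1)

src = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
dst = np.array([[0.9, 0.1, 0.0], [0.1, 0.9, 0.0], [0.1, 0.1, 0.0]])
print(nearest_vertex_correspondence(src, dst))  # -> [2 0 1]
```

Even in this tiny example the mapping is a permutation rather than the identity, illustrating why a transformation matrix alone cannot recover which vertex moved where under deformation.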