

Title: Merge-Swap Optimization Framework for Supervoxel Generation from Three-Dimensional Point Clouds
Supervoxels are becoming increasingly popular in many point cloud processing applications. However, few methods have been devised specifically for generating compact supervoxels from unstructured three-dimensional (3D) point clouds. In this study, we aimed to generate high-quality over-segmentations of point clouds. We propose a merge-swap optimization framework that solves any supervoxel generation problem formulated as energy minimization. In particular, we tailored an energy function that explicitly encourages regular and compact supervoxels with adaptive size control, taking the local geometric information of the point cloud into account. We also provide two acceleration techniques to reduce the computational overhead. The proposed merge-swap optimization outperforms previous work in terms of thoroughness of optimization, computational efficiency, and practical applicability to incorporating control of other supervoxel properties. Experiments show that our approach produces supervoxels with better segmentation quality than two state-of-the-art methods on three public datasets.
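To make the idea concrete, here is a minimal sketch of what a merge-swap pass over an energy function can look like. This is not the paper's implementation: the centroid-based energy, the per-supervoxel penalty `lam`, and the neighborhood inputs are illustrative assumptions, and a practical implementation would update energies incrementally rather than recomputing them from scratch, which is where acceleration techniques like those mentioned above come in.

```python
# Illustrative merge-swap pass for supervoxel generation (not the paper's code).
import numpy as np

def energy(points, labels, lam):
    """Compactness energy: squared distances to supervoxel centroids, plus a
    per-supervoxel penalty lam discouraging fragmentation into tiny regions."""
    e = lam * len(np.unique(labels))
    for s in np.unique(labels):
        p = points[labels == s]
        e += np.sum((p - p.mean(axis=0)) ** 2)
    return e

def merge_pass(points, labels, sv_adjacency, lam):
    """Merge an adjacent supervoxel pair whenever that lowers the energy."""
    for a, b in sv_adjacency:                    # pairs of supervoxel ids
        if not (np.any(labels == a) and np.any(labels == b)):
            continue                             # one side already merged away
        before = energy(points, labels, lam)
        trial = labels.copy()
        trial[trial == b] = a
        if energy(points, trial, lam) < before:
            labels = trial
    return labels

def swap_pass(points, labels, knn, lam):
    """Move points to a neighboring supervoxel when that lowers the energy."""
    for i in range(len(points)):
        for j in knn[i]:                         # precomputed k-nearest neighbors
            if labels[j] == labels[i]:
                continue
            before = energy(points, labels, lam)
            old = labels[i]
            labels[i] = labels[j]                # tentative swap
            if energy(points, labels, lam) >= before:
                labels[i] = old                  # revert: swap did not help
    return labels
```

Alternating such merge and swap passes until the energy stops decreasing gives the overall optimization loop; the energy terms can be swapped out to control other supervoxel properties.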
Award ID(s):
1804929
NSF-PAR ID:
10147830
Date Published:
Journal Name:
Remote Sensing
Volume:
12
Issue:
3
ISSN:
2072-4292
Page Range / eLocation ID:
473
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Purpose

    As a challenging but important optimization problem, the inverse planning for volumetric modulated arc therapy (VMAT) has attracted much research attention. The column generation (CG) method is so far one of the most effective solution schemes. However, it often relies on simplifications that leave significant gaps between its output and an actually feasible plan. This paper presents a novel column generation (NCG) approach that pushes the planning results substantially closer to practice.

    Methods

    The proposed NCG algorithm is equipped with multiple new quality-enhancing and computation-facilitating modules, as follows: (1) flexible constraints are enabled on both dose rates and treatment time, adapting to machine capabilities and the planner's preferences, respectively; (2) a cross-control-point intermediate aperture simulation is incorporated to better conform to the underlying physics; and (3) new pricing and pruning subroutines are adopted to achieve better optimization outputs. To evaluate the effectiveness of the NCG, five VMAT plans, that is, three prostate cases and two head-and-neck cases, were computed using the proposed NCG. The planning results were compared with those yielded by a historical benchmark planning scheme.
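For readers unfamiliar with the mechanics, column generation alternates between a restricted master problem and a pricing subproblem that proposes new columns with negative reduced cost. The runnable sketch below shows that loop on the textbook cutting-stock problem using SciPy, since the actual VMAT master and aperture-pricing models are domain-specific; the comment near the end marks where a pruning step of the kind described above would act.

```python
# Column generation on the textbook cutting-stock problem (illustrative of the
# master/pricing loop only; the VMAT models are domain-specific).
import numpy as np
from scipy.optimize import linprog

widths = np.array([3, 5, 7])        # piece widths to cut
demand = np.array([25, 20, 18])     # required piece counts
roll = 10                           # stock roll width

# start with one trivial pattern per piece type
patterns = [np.eye(3, dtype=int)[i] * (roll // w) for i, w in enumerate(widths)]

while True:
    A = np.array(patterns).T
    # restricted master LP: minimize rolls used s.t. A @ x >= demand, x >= 0
    res = linprog(np.ones(len(patterns)), A_ub=-A, b_ub=-demand, method="highs")
    duals = -res.ineqlin.marginals              # shadow price of each demand
    # pricing subproblem: knapsack maximizing dual value (brute force here)
    best, best_val = None, 1.0                  # improving iff duals @ a > 1
    for a in np.ndindex(tuple(int(n) for n in roll // widths + 1)):
        a = np.array(a)
        if a @ widths <= roll and a @ duals > best_val + 1e-9:
            best, best_val = a, a @ duals
    if best is None:                            # no negative-reduced-cost column
        break
    patterns.append(best)
    # a pruning step would drop columns whose master weights stay near zero

print(f"LP bound: {res.fun:.2f} rolls using {len(patterns)} generated patterns")
```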

    Results

    The NCG generated plans of significantly better quality than the benchmark planning algorithm. For the prostate cases, NCG plans satisfied all planning target volume (PTV) criteria, whereas CG plans missed the D10% criteria of the PTVs by 9 Gy or more in all cases. For the head-and-neck cases, NCG plans again satisfied all PTV criteria, while CG plans missed the D10% criteria of the PTVs by 3 Gy or more in all cases, as well as the maximum dose criteria of both the cord and the brain stem by over 13 Gy in one case. Moreover, the pruning scheme was found to be effective in enhancing optimization quality.

    Conclusions

    The proposed NCG inherits the computational advantages of the traditional CG, while capturing a more realistic characterization of the machine capability and underlying physics. The output solutions of the NCG are substantially closer to practical implementation.

     
  2. We present SHRED, a method for 3D SHape REgion Decomposition. SHRED takes a 3D point cloud as input and uses learned local operations to produce a segmentation that approximates fine-grained part instances. We endow SHRED with three decomposition operations: splitting regions, fixing the boundaries between regions, and merging regions together. Modules are trained independently and locally, allowing SHRED to generate high-quality segmentations for categories not seen during training. We train and evaluate SHRED with fine-grained segmentations from PartNet; using its merge-threshold hyperparameter, we show that SHRED produces segmentations that better respect ground-truth annotations compared with baseline methods, at any desired decomposition granularity. Finally, we demonstrate that SHRED is useful for downstream applications, outperforming all baselines on zero-shot fine-grained part instance segmentation and few-shot fine-grained semantic segmentation when combined with methods that learn to label shape regions.
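A rough sense of the merge stage: regions are greedily combined while a score says their union still looks like one part, and a threshold on that score sets the output granularity. The sketch below substitutes a simple planarity heuristic for SHRED's learned merge network, so everything here beyond the loop structure is an assumption.

```python
# Schematic region-merge loop in the spirit of SHRED's merge stage; merge_score
# is a hand-made stand-in for the learned predictor (illustrative only).
import numpy as np

def merge_score(points_a, points_b):
    """Favor merging regions whose union still fits one plane well."""
    def plane_residual(p):
        c = p - p.mean(axis=0)
        return np.linalg.svd(c, compute_uv=False)[-1]  # smallest singular value
    return 1.0 / (1.0 + plane_residual(np.vstack([points_a, points_b])))

def merge_regions(points, labels, adjacency, threshold=0.5):
    """Merge the best-scoring adjacent pair until all scores fall below the
    threshold; a lower threshold therefore yields a coarser decomposition."""
    while True:
        scored = [(merge_score(points[labels == a], points[labels == b]), a, b)
                  for a, b in adjacency
                  if np.any(labels == a) and np.any(labels == b)]
        if not scored:
            break
        s, a, b = max(scored)
        if s < threshold:
            break
        labels[labels == b] = a        # absorb region b into region a
    return labels
```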
  3. Experimentally measuring or computationally predicting the post-translational regulation of the enzymes in a metabolic pathway is a difficult problem. Consequently, regulation is mostly known only for well-studied reactions of central metabolism in various model organisms. In this study, we use two approaches to predict enzyme regulation policies and investigate the hypothesis that regulation is driven by the need to maintain the solvent capacity of the cell. The first predictive method uses a statistical thermodynamics and metabolic control theory framework, while the second is a hybrid optimization-reinforcement learning approach. Efficient regulation schemes were learned from experimental data that either agree with theoretical calculations or result in higher cell fitness using maximum useful work as a metric. As previously hypothesized, regulation is herein shown to hold both immediate and downstream product concentrations at physiological levels. Model predictions provide the following two novel general principles: (1) the regulation itself causes the reactions to be much further from equilibrium, rather than highly non-equilibrium reactions being the targets for regulation, as is commonly assumed; and (2) the minimal regulation needed to maintain metabolite levels at physiological concentrations maximizes the free energy dissipation rate rather than preserving a specific energy charge. The resulting energy dissipation rate is an emergent property of regulation, which may be represented by a high value of the adenylate energy charge. In addition, the predictions demonstrate that the amount of regulation needed can be minimized if it is applied at the beginning or at a branch point of a pathway, in agreement with common notions. The approach is demonstrated for three pathways in the central metabolism of E. coli (gluconeogenesis, glycolysis-tricarboxylic acid (TCA), and pentose phosphate-TCA), each of which requires a different regulation scheme. It is shown quantitatively that hexokinase, glucose 6-phosphate dehydrogenase, and glyceraldehyde phosphate dehydrogenase, all at branch points of pathways, play the largest roles in regulating central metabolism.
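The "further from equilibrium" and "dissipation rate" notions above reduce to standard thermodynamic bookkeeping: the reaction free energy is dG = dG0' + RT ln Q, where Q is the mass-action ratio of concentrations, and the dissipation rate is flux times (-dG). A small illustrative calculation (all numbers assumed, not taken from the study):

```python
# Distance-from-equilibrium and dissipation-rate bookkeeping; the example
# values are illustrative stand-ins, not the study's data.
import numpy as np

R, T = 8.314e-3, 298.15           # kJ/(mol*K), K

def reaction_dG(dG0, products, substrates):
    """dG = dG0' + RT ln Q, with Q the mass-action ratio of concentrations."""
    Q = np.prod(products) / np.prod(substrates)
    return dG0 + R * T * np.log(Q)

# e.g. a regulated step held far from equilibrium by a small product pool
dG = reaction_dG(dG0=-17.0, products=[1e-4], substrates=[5e-3])  # kJ/mol
flux = 0.8                        # assumed flux (units set the rate's units)
dissipation = flux * (-dG)        # free energy dissipated per unit time
print(f"dG = {dG:.1f} kJ/mol, dissipation rate = {dissipation:.1f}")
```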
  4. The presence of various uncertainty sources in metal-based additive manufacturing (AM) processes prevents the production of AM products with consistently high quality. Using electron beam melting (EBM) of Ti-6Al-4V as an example, this paper presents a data-driven framework for process parameter optimization using physics-informed computer simulation models. The goal is to identify a robust manufacturing condition that consistently yields equiaxed material microstructures under uncertainty. To overcome the computational challenge of robust design optimization under uncertainty, a two-level data-driven surrogate model is constructed from the simulation data of a validated high-fidelity multiphysics AM simulation model. The resulting robust design, a combination of low preheating temperature, low beam power, and intermediate scanning speed, enables the repeatable production of equiaxed-structure products, as demonstrated by physics-based simulations. Global sensitivity analysis at the optimal design point indicates that, among the six noise factors studied, specific heat capacity and grain growth activation energy have the largest impact on microstructure variation. Through this exemplar process optimization, the study also demonstrates the promising potential of the presented approach for facilitating other complicated AM process optimizations, such as robust designs for porosity control or direct mechanical property control.
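A minimal sketch of the robust-design idea, with a toy response standing in for the two-level surrogate trained on multiphysics simulation data: score each candidate design by the mean minus k standard deviations of the surrogate output over sampled noise factors, then search the design grid. Every function and range below is an assumption for illustration.

```python
# Toy robust design over (preheat, power, speed) with six sampled noise factors.
import numpy as np

rng = np.random.default_rng(0)

def surrogate(design, noise):
    """Stand-in for a trained surrogate: returns a microstructure metric
    (think equiaxed fraction) for a (design, noise) pair. Toy, not physics."""
    preheat, power, speed = design
    return (0.9 - 0.3 * power - 0.1 * preheat + 0.2 * speed - 0.2 * speed**2
            + 0.02 * (1 + speed) * noise.sum())

def robust_score(design, n_noise=256, k=2.0):
    """Mean minus k standard deviations over sampled noise factors."""
    noise = rng.standard_normal((n_noise, 6))   # six noise factors, as above
    y = np.array([surrogate(design, z) for z in noise])
    return y.mean() - k * y.std()

# crude grid search over normalized (preheat, power, speed) designs in [0, 1]
grid = np.linspace(0, 1, 11)
best = max(((p, b, s) for p in grid for b in grid for s in grid),
           key=robust_score)
print("robust design (normalized preheat, power, speed):", best)
```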
  5. Topological data analysis (TDA) is a branch of computational mathematics, bridging algebraic topology and data science, that provides compact, noise-robust representations of complex structures. Deep neural networks (DNNs) learn millions of parameters associated with a series of transformations defined by the model architecture, resulting in high-dimensional, difficult-to-interpret internal representations of input data. As DNNs become more ubiquitous across multiple sectors of our society, there is increasing recognition that mathematical methods are needed to aid analysts, researchers, and practitioners in understanding and interpreting how these models' internal representations relate to the final classification. In this paper we apply cutting-edge techniques from TDA with the goal of gaining insight into the interpretability of convolutional neural networks used for image classification. We use two common TDA approaches to explore several methods for modeling hidden-layer activations as high-dimensional point clouds, and provide experimental evidence that these point clouds capture valuable structural information about the model's process. First, we demonstrate that a distance metric based on persistent homology can be used to quantify meaningful differences between layers, and we discuss these distances in the broader context of existing representational similarity metrics for neural network interpretability. Second, we show that a mapper graph can provide semantic insight into how these models organize hierarchical class knowledge at each layer. These observations demonstrate that TDA is a useful tool to help deep learning practitioners unlock the hidden structures of their models.
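A sketch of the first analysis described above: treat two layers' activations as point clouds and compare their persistence diagrams. This assumes the ripser and persim packages and uses random arrays as stand-ins for real activations; the paper's actual pipeline and distance choices may differ.

```python
# Compare the topology of two layers' activation point clouds (illustrative).
import numpy as np
from ripser import ripser        # persistent homology of point clouds
from persim import bottleneck    # distance between persistence diagrams

rng = np.random.default_rng(0)
layer_a = rng.standard_normal((200, 64))    # stand-in for hidden activations
layer_b = rng.standard_normal((200, 128))

# degree-1 persistence diagrams (loops) of each activation point cloud
dgm_a = ripser(layer_a, maxdim=1)["dgms"][1]
dgm_b = ripser(layer_b, maxdim=1)["dgms"][1]

# small bottleneck distance suggests similar topological structure
print("bottleneck distance between layers:", bottleneck(dgm_a, dgm_b))
```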