We present SHRED, a method for 3D SHape REgion Decomposition. SHRED takes a 3D point cloud as input and uses learned local operations to produce a segmentation that approximates fine-grained part instances. We endow SHRED with three decomposition operations: splitting regions, fixing the boundaries between regions, and merging regions together. Modules are trained independently and locally, allowing SHRED to generate high-quality segmentations for categories not seen during training. We train and evaluate SHRED with fine-grained segmentations from PartNet; using its merge-threshold hyperparameter, we show that SHRED produces segmentations that better respect ground-truth annotations compared with baseline methods, at any desired decomposition granularity. Finally, we demonstrate that SHRED is useful for downstream applications, outperforming all baselines on zero-shot fine-grained part instance segmentation and few-shot fine-grained semantic segmentation when combined with methods that learn to label shape regions.
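The merge-threshold idea described above can be illustrated with a minimal sketch: greedily merge the pair of regions with the highest merge score until no pair scores above the threshold, so that raising or lowering the threshold controls decomposition granularity. Note this is an illustrative stand-in, not SHRED's actual learned merge module; `merge_score` here is a hypothetical placeholder for the learned network, and the centroid-distance heuristic in the usage example is purely for demonstration.

```python
import numpy as np

def merge_regions(points, labels, merge_score, threshold):
    """Greedy region merging over a point-cloud segmentation.

    points:       (N, d) array of point coordinates
    labels:       (N,) integer region id per point
    merge_score:  callable(pts_a, pts_b) -> float; higher means the two
                  regions are more mergeable (placeholder for a learned module)
    threshold:    merge-threshold hyperparameter controlling granularity

    Repeatedly merges the best-scoring region pair until no pair
    scores above `threshold`, then returns the updated labels.
    """
    labels = labels.copy()
    while True:
        ids = np.unique(labels)
        best, pair = threshold, None
        # Find the highest-scoring region pair above the threshold.
        for i, a in enumerate(ids):
            for b in ids[i + 1:]:
                s = merge_score(points[labels == a], points[labels == b])
                if s > best:
                    best, pair = s, (a, b)
        if pair is None:
            return labels  # no pair exceeds the threshold; stop merging
        a, b = pair
        labels[labels == b] = a  # merge region b into region a
```

For example, with a toy inverse-centroid-distance score, two nearby regions merge while a distant one stays separate; a higher threshold would leave all three regions intact.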
-
Abstract: Procedural models (i.e., symbolic programs that output visual data) are a historically popular method for representing graphics content: vegetation, buildings, textures, etc. They offer many advantages: interpretable design parameters, stochastic variations, high-quality outputs, compact representation, and more. But they also have some limitations, such as the difficulty of authoring a procedural model from scratch. More recently, AI-based methods, and especially neural networks, have become popular for creating graphics content. These techniques allow users to directly specify desired properties of the artifact they want to create (via examples, constraints, or objectives), while a search, optimization, or learning algorithm takes care of the details. However, this ease of use comes at a cost, as it is often hard to interpret or manipulate these representations. In this state-of-the-art report, we summarize research on neurosymbolic models in computer graphics: methods that combine the strengths of both AI and symbolic programs to represent, generate, and manipulate visual data. We survey recent work applying these techniques to represent 2D shapes, 3D shapes, and materials & textures. Along the way, we situate each prior work in a unified design space for neurosymbolic models, which helps reveal underexplored areas and opportunities for future research.
-
Abstract: The Vera C. Rubin Observatory is expected to start the Legacy Survey of Space and Time (LSST) in early to mid-2025. This multiband wide-field synoptic survey will transform our view of the solar system, with the discovery and monitoring of over five million small bodies. The final survey strategy chosen for LSST has direct implications for the discoverability and characterization of solar system minor planets and passing interstellar objects. Creating an inventory of the solar system is one of the four main LSST science drivers. The LSST observing cadence is a complex optimization problem that must balance the priorities and needs of all the key LSST science areas. To design the best LSST survey strategy, a series of operations simulations using the Rubin Observatory scheduler have been generated to explore the various options for tuning observing parameters and prioritizations. We explore the impact of the various simulated LSST observing strategies on studying the solar system's small body reservoirs. We examine the best observing scenarios and review the important considerations for maximizing LSST solar system science. In general, most of the LSST cadence simulations produce variations of ±5% or less in our chosen key metrics, but a subset of the simulations significantly hinder science returns, with much larger losses in the discovery and light-curve metrics.
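The ±5% comparison above amounts to computing each simulated cadence's metric value as a percent change relative to the baseline simulation. A minimal sketch of that bookkeeping, assuming scalar summary metrics (the actual discovery and light-curve metrics are computed with Rubin's metrics framework; `metric_variation` is a hypothetical helper, not part of that codebase):

```python
import numpy as np

def metric_variation(baseline, values):
    """Percent change of a summary metric (e.g., number of objects
    discovered) in each simulated cadence relative to the baseline run.

    baseline: metric value for the baseline simulation (scalar)
    values:   metric values for the other cadence simulations

    Returns an array of signed percent changes; entries outside
    roughly [-5, +5] would flag cadences that help or hinder
    solar system science returns.
    """
    values = np.asarray(values, dtype=float)
    return 100.0 * (values - baseline) / baseline
```

For instance, baseline discoveries of 1000 objects versus simulated runs yielding 950, 1000, and 1050 give variations of -5%, 0%, and +5%.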