

Title: A Review of Methods for the Geometric Post-Processing of Topology Optimized Models
Abstract: Topology optimization (TO) has rapidly evolved from an academic exercise into an exciting discipline with numerous industrial applications. Various TO algorithms have been established, and several commercial TO software packages are now available. However, a major challenge in TO is the post-processing of optimized models for downstream applications. Typically, the optimal topologies generated by TO are faceted (triangulated) models extracted from an underlying finite element mesh. These triangulated models are dense, of poor quality, and lack feature/parametric control. This poses serious challenges to downstream applications such as prototyping/testing, design validation, and design exploration. One strategy to address this issue is to impose downstream requirements directly as constraints in the TO algorithm. However, this not only restricts the design space, it may even lead to TO failure. Separating post-processing from TO is more robust and flexible. The objective of this paper is to provide a critical review of various post-processing methods and to categorize them based on both targeted applications and underlying strategies. The paper concludes with unresolved challenges and future work.
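As a concrete illustration of the kind of geometric post-processing the review surveys, the sketch below applies Laplacian (umbrella) smoothing to a toy triangulated patch. This is a generic mesh cleanup step, not a method from the paper; the mesh and parameters are invented for illustration.

```python
import numpy as np

def laplacian_smooth(vertices, faces, lam=0.5, iterations=10):
    """A common cleanup step for dense TO meshes: repeatedly move each
    vertex a fraction lam toward the centroid of its mesh neighbors."""
    n = len(vertices)
    neighbors = [set() for _ in range(n)]
    for a, b, c in faces:
        neighbors[a].update((b, c))
        neighbors[b].update((a, c))
        neighbors[c].update((a, b))
    v = vertices.astype(float).copy()
    for _ in range(iterations):
        centroids = np.array([v[list(nb)].mean(axis=0) for nb in neighbors])
        v += lam * (centroids - v)
    return v

# Toy example: a fan of triangles whose center vertex is perturbed out of plane.
verts = np.array([[0.0, 0.0, 0.3],    # center, lifted off the z = 0 plane
                  [1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0],
                  [-1.0, 0.0, 0.0],
                  [0.0, -1.0, 0.0]])
tris = [(0, 1, 2), (0, 2, 3), (0, 3, 4), (0, 4, 1)]
smoothed = laplacian_smooth(verts, tris, lam=0.5, iterations=20)
# The z-values converge toward a common plane (the z-spread shrinks).
```

Note that plain Laplacian smoothing shrinks the model; practical pipelines pair it with volume-preserving variants (e.g., Taubin smoothing) and with decimation to reduce triangle count.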
Award ID(s):
1824980
PAR ID:
10191733
Author(s) / Creator(s):
; ;
Date Published:
Journal Name:
Journal of Computing and Information Science in Engineering
Volume:
20
Issue:
6
ISSN:
1530-9827
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Contextualized word embeddings, such as ELMo, provide meaningful representations for words and their contexts. They have been shown to have a great impact on downstream applications. However, we observe that the contextualized embeddings of a word might change drastically when its contexts are paraphrased. As these embeddings are over-sensitive to the context, the downstream model may make different predictions when the input sentence is paraphrased. To address this issue, we propose a post-processing approach to retrofit the embedding with paraphrases. Our method learns an orthogonal transformation on the input space of the contextualized word embedding model, which seeks to minimize the variance of word representations on paraphrased contexts. Experiments show that the proposed method significantly improves ELMo on various sentence classification and inference tasks. 
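The orthogonality constraint at the heart of this retrofitting idea can be maintained with a simple project-back step. The sketch below is a generic illustration, not the authors' actual training procedure: it uses a stand-in tanh encoder in place of ELMo and a Procrustes projection onto the orthogonal group after each gradient step.

```python
import numpy as np

def project_to_orthogonal(M):
    # Nearest orthogonal matrix in the Frobenius norm (Procrustes, via SVD).
    U, _, Vt = np.linalg.svd(M)
    return U @ Vt

def retrofit_step(W, u, v, lr=0.1):
    """One illustrative update: pull the two representations of a paraphrase
    pair together under a stand-in tanh encoder, then re-project W so the
    input transformation stays orthogonal."""
    fu, fv = np.tanh(W @ u), np.tanh(W @ v)
    diff = fu - fv
    # Gradient of 0.5 * ||tanh(W u) - tanh(W v)||^2 with respect to W.
    grad = np.outer(diff * (1 - fu ** 2), u) - np.outer(diff * (1 - fv ** 2), v)
    return project_to_orthogonal(W - lr * grad)

rng = np.random.default_rng(0)
d = 8
W = np.eye(d)
u, v = rng.normal(size=d), rng.normal(size=d)  # a mock paraphrase pair
for _ in range(50):
    W = retrofit_step(W, u, v)
# W remains orthogonal after every update.
```

The projection step is what keeps the learned map distance-preserving on the input space, so the retrofit cannot distort the embedding geometry arbitrarily while reducing variance across paraphrased contexts.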
  2. Post-processing immunity is a fundamental property of differential privacy: it enables arbitrary data-independent transformations to differentially private outputs without affecting their privacy guarantees. Post-processing is routinely applied in data-release applications, including census data, which are then used to make allocations with substantial societal impacts. This paper shows that post-processing causes disparate impacts on individuals or groups and analyzes two critical settings: the release of differentially private datasets and the use of such private datasets for downstream decisions, such as the allocation of funds informed by US Census data. In the first setting, the paper proposes tight bounds on the unfairness of traditional post-processing mechanisms, giving a unique tool to decision-makers to quantify the disparate impacts introduced by their release. In the second setting, this paper proposes a novel post-processing mechanism that is (approximately) optimal under different fairness metrics, either reducing fairness issues substantially or reducing the cost of privacy. The theoretical analysis is complemented with numerical simulations on Census data. 
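The core phenomenon can be demonstrated with a toy simulation, not the paper's mechanism or Census data: clipping Laplace-noised counts to be nonnegative is a data-independent post-processing step, so it preserves the privacy guarantee, yet it systematically inflates small groups' counts while leaving large groups nearly unbiased.

```python
import numpy as np

rng = np.random.default_rng(7)

def laplace_release(counts, epsilon=1.0):
    # Laplace mechanism for sensitivity-1 counting queries.
    return counts + rng.laplace(scale=1.0 / epsilon, size=counts.shape)

def post_process(noisy):
    # Data-independent post-processing: clip to the feasible (nonnegative) region.
    # By post-processing immunity this cannot weaken privacy, but it is not bias-free.
    return np.maximum(noisy, 0.0)

trials = 20000
small_group = post_process(laplace_release(np.zeros(trials)))        # true count 0
large_group = post_process(laplace_release(np.full(trials, 100.0)))  # true count 100

bias_small = small_group.mean() - 0.0     # ~ +0.5 for epsilon = 1 (E[max(Lap(1), 0)] = 1/2)
bias_large = large_group.mean() - 100.0   # ~ 0: clipping almost never triggers
```

When such released counts drive downstream allocations, this bias translates into a disparate impact across groups of different sizes, which is the setting the paper's bounds and optimal mechanisms address.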
  3.
    Most modern commodity imaging systems, used directly for photography or relied on indirectly by downstream applications, employ optical systems of multiple lenses that must balance deviations from perfect optics, manufacturing constraints, tolerances, cost, and footprint. Although optical designs often interact in complex ways with downstream image processing or analysis tasks, today's compound optics are designed in isolation from these interactions. Existing optical design tools aim to minimize optical aberrations, such as deviations from Gauss' linear model of optics, rather than application-specific losses, precluding joint optimization with hardware image signal processing (ISP) and highly parameterized neural network processing. In this article, we propose an optimization method for compound optics that lifts these limitations. We optimize entire lens systems jointly with hardware and software image processing pipelines, downstream neural network processing, and application-specific end-to-end losses. To this end, we propose a learned, differentiable forward model for compound optics and an alternating proximal optimization method that handles function compositions with highly varying parameter dimensions for optics, hardware ISP, and neural nets. Our method integrates seamlessly atop existing optical design tools, such as Zemax. We can thus assess our method across many camera system designs and end-to-end applications. We validate our approach in an automotive camera optics setting, together with hardware ISP post-processing and detection, outperforming classical optics designs for automotive object detection and traffic light state detection. For human viewing tasks, we optimize optics and processing pipelines for dynamic outdoor scenarios and dynamic low-light imaging. We outperform existing compartmentalized design or fine-tuning methods qualitatively and quantitatively across all domain-specific applications tested.
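The alternating proximal pattern described here can be illustrated on a toy composed objective with two parameter blocks of very different sizes. This is a generic sketch of the optimization scheme only, with stand-in quadratics in place of a differentiable optics model and ISP.

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 12, 3
A = rng.normal(size=(m, n))   # stand-in "optics" forward model
b = rng.normal(size=m)        # stand-in target signal

def alternating_proximal(A, b, steps=500, rho=1.0):
    """Minimize ||A x - y||^2 + ||y - b||^2 by exact proximal minimization
    on each block in turn: a small block x (think: a handful of lens
    parameters) and a larger block y (think: processing parameters)."""
    m, n = A.shape
    x, y = np.zeros(n), np.zeros(m)
    for _ in range(steps):
        # x-step: argmin_x ||A x - y||^2 + rho ||x - x_prev||^2
        x = np.linalg.solve(A.T @ A + rho * np.eye(n), A.T @ y + rho * x)
        # y-step: argmin_y ||A x - y||^2 + ||y - b||^2 + rho ||y - y_prev||^2
        y = (A @ x + b + rho * y) / (2.0 + rho)
    return x, y

x, y = alternating_proximal(A, b)
objective = np.sum((A @ x - y) ** 2) + np.sum((y - b) ** 2)
```

The proximal terms damp each block update, which is what keeps alternating minimization stable when the blocks have very different dimensions and curvatures; for this strongly convex toy problem the iterates converge to the joint minimum.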
  4. Deep learning models are increasingly used for end-user applications, supporting both novel features, such as facial recognition, and traditional features, e.g., web search. To accommodate high inference throughput, it is common to host a single pre-trained Convolutional Neural Network (CNN) on dedicated cloud-based servers with hardware accelerators such as Graphics Processing Units (GPUs). However, GPUs can be orders of magnitude more expensive than traditional Central Processing Unit (CPU) servers. These resources can also be under-utilized under dynamic workloads, which may result in inflated serving costs. One potential way to alleviate this problem is to allow hosted models to share the underlying resources, which we refer to as multi-tenant inference serving. One of the key challenges is maximizing resource efficiency for multi-tenant serving given hardware with diverse characteristics, models with distinct response-time Service Level Agreements (SLAs), and dynamic inference workloads. In this paper, we present PERSEUS, a measurement framework that provides the basis for understanding the performance and cost trade-offs of multi-tenant model serving. We implemented PERSEUS in Python atop the popular Nvidia TensorRT Inference Server. Leveraging PERSEUS, we evaluated the inference throughput and cost of serving various models and demonstrated that multi-tenant model serving can reduce serving costs by up to 12%.
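The cost trade-off behind multi-tenant serving can be sketched with simple arithmetic. All throughputs and prices below are illustrative assumptions, not PERSEUS measurements: co-location lowers each model's throughput, but one shared instance can still beat two dedicated ones on cost per request.

```python
def cost_per_million(throughput_rps, hourly_cost_usd):
    """USD to serve one million requests at a sustained throughput."""
    hours = (1e6 / throughput_rps) / 3600.0
    return hourly_cost_usd * hours

# Hypothetical numbers for illustration only:
gpu_hourly = 3.06    # assumed cloud GPU instance price (USD/hour)
solo_rps = 500.0     # one model alone on a GPU
shared_rps = 350.0   # per-model throughput when two models share one GPU

# Two models, one million requests each.
dedicated = 2 * cost_per_million(solo_rps, gpu_hourly)     # two GPUs
shared = 2 * cost_per_million(2 * shared_rps, gpu_hourly)  # one GPU, 2M requests at 700 rps
savings = 1.0 - shared / dedicated                         # fraction saved by sharing
```

Whether sharing actually wins depends on the per-model SLA: co-location increases response times, so a measurement framework is needed to find packings that cut cost without violating latency targets, which is the role PERSEUS plays.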
  5. Aerosol Jet Printing is a micron-scale printing technology capable of handling a wide variety of materials, owing to a large printable viscosity range and a high substrate standoff distance of 3-5 mm. To finalize the properties of printed materials, some form of post-processing is usually required. A widely applicable post-processing technique is traditional oven curing; however, oven curing greatly restricts the viable substrates and requires long curing times. Intense Pulsed Light (IPL) offers the chance to greatly expand the range of viable substrates and to decrease curing time. However, few models currently exist that relate finished material properties to the settings of current IPL technology. In this paper, an experiment based on a General Full Factorial Design of Experiments (DOE) is developed to characterize the conductivity of Ag ink cured with IPL. Novacentrix Ag ink (JSA426) is printed as 3x3 mm Van der Pauw sensor pads and cured using IPL. Sample pads were produced in triplicate over a range of energy levels, counts, and durations, and the resulting conductivity was measured. The conductivity data were then analyzed using ANOVA to determine the significant interactions. From this, a regression model is developed to predict the conductivity for any energy-count-duration combination. The methods employed are applicable to other post-processing techniques, and further optimization of the model is proposed for future work.
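The full-factorial-plus-regression workflow can be sketched as follows. The factor levels are coded units (-1, 0, +1) standing in for low/mid/high IPL settings, and the responses are synthetic; none of these values come from the paper. The Van der Pauw relation shown is the standard symmetric-sample form.

```python
import numpy as np
from itertools import product

# Coded levels for the three factors (energy, count, duration); the paper's
# actual settings are not used here.
levels = [-1.0, 0.0, 1.0]
design = np.array(list(product(levels, repeat=3)))   # 3^3 = 27 runs

def model_matrix(X):
    # Main effects plus two-way interactions, as in an ANOVA-style regression.
    e, c, d = X[:, 0], X[:, 1], X[:, 2]
    return np.column_stack([np.ones(len(X)), e, c, d, e * c, e * d, c * d])

# Synthetic conductivity responses (arbitrary units) from an assumed model:
rng = np.random.default_rng(1)
beta_true = np.array([5.0, 1.2, 0.8, 0.4, 0.1, -0.05, 0.02])
y = model_matrix(design) @ beta_true + rng.normal(scale=0.01, size=len(design))

# Fit the regression; in coded units the factorial columns are orthogonal,
# so each coefficient is estimated independently.
beta_hat, *_ = np.linalg.lstsq(model_matrix(design), y, rcond=None)

def sheet_resistance_symmetric(R):
    # Van der Pauw relation for a symmetric pad: R_s = pi * R / ln 2.
    return np.pi * R / np.log(2.0)
```

With triplicate runs as in the experiment, the residual degrees of freedom also support the ANOVA significance tests used to prune the model before prediction.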