Title: Printability Prediction in Projection Two-Photon Lithography Via Machine Learning Based Surrogate Modeling of Photopolymerization
Abstract: Two-photon lithography (TPL) is a direct laser writing process that enables the fabrication of cm-scale, complex, three-dimensional polymeric structures with submicrometer resolution. In contrast to the slow, serial writing scheme of conventional TPL, projection TPL (P-TPL) enables rapid printing of entire layers at once. However, process prediction remains a significant challenge in P-TPL due to the lack of computationally efficient models. In this work, we present machine learning-based surrogate models that predict the outcomes of P-TPL to >98% of the accuracy of a physics-based reaction-diffusion finite element simulation. A classification neural network was trained on data generated from the physics-based simulations. This enabled computationally efficient and accurate prediction of whether a set of printing conditions will result in precise, controllable polymerization and the desired printing, or instead in no printing or runaway polymerization. We interrogate this surrogate model to investigate the parameter regimes that are promising for successful printing. We predict the combinations of photoresist reaction rate constants that are necessary for printing under a given set of processing conditions, thereby generating a set of printability maps. The surrogate models reduced the computational time required to generate these maps from more than 10 months to less than a second. Thus, these models can enable rapid and informed selection of photoresists and printing parameters during process control and optimization.
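
The abstract above describes a classification surrogate trained on physics-based simulation data. As a rough illustration of that workflow (not the authors' code), the following minimal sketch uses scikit-learn's MLPClassifier as a stand-in for the classification neural network; the ten input features and the three printability labels (-1, 0, 1) follow the dataset description in entry 1 under "More Like this" below, and the arrays here are random placeholders for the simulation-generated data.

```python
# Minimal sketch of a classification surrogate for P-TPL printability.
# Assumptions (not from the paper): scikit-learn's MLPClassifier stands in for
# the authors' network; X holds the ten process/photoresist inputs per sample
# and y holds the class label (-1 no printing, 0 printing, 1 overprinting).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.random((1000, 10))          # placeholder for simulation-generated inputs
y = rng.integers(-1, 2, size=1000)  # placeholder for printability labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Standardize the inputs, then fit a small fully connected classifier.
surrogate = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0),
)
surrogate.fit(X_train, y_train)
print("held-out accuracy:", surrogate.score(X_test, y_test))
```

Once trained, a surrogate of this kind can be evaluated over dense grids of rate constants and processing parameters in well under a second, which is what makes the printability maps described in the abstract computationally feasible.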
Award ID(s):
2045147
NSF-PAR ID:
10440442
Author(s) / Creator(s):
Date Published:
Journal Name:
Journal of Micro- and Nano-Manufacturing
Volume:
10
Issue:
3
ISSN:
2166-0468
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Data description: This dataset presents the raw and augmented data that were used to train the machine learning (ML) models for classification of printing outcomes in projection two-photon lithography (P-TPL). P-TPL is an additive manufacturing technique for the fabrication of cm-scale complex 3D structures with features smaller than 200 nm. The P-TPL process is further described in this article: “Saha, S. K., Wang, D., Nguyen, V. H., Chang, Y., Oakdale, J. S., and Chen, S.-C., 2019, "Scalable submicrometer additive manufacturing," Science, 366(6461), pp. 105-109.” This specific dataset refers to the case wherein a set of five line features was projected and the printing outcome was classified into three classes: ‘no printing’, ‘printing’, and ‘overprinting’. Each datapoint comprises a set of ten inputs (i.e., attributes) and one output (i.e., target) corresponding to these inputs. The inputs are: optical power (P), polymerization rate constant at the beginning of polymer conversion (kp-0), radical quenching rate constant (kq), termination rate constant at the beginning of polymer conversion (kt-0), number of optical pulses (N), kp exponential function shape parameter (A), kt exponential function shape parameter (B), quantum yield of the photoinitiator (QY), initial photoinitiator concentration (PIo), and the threshold degree of conversion (DOCth). The output variable is ‘Class’, which can take three values: -1 for ‘no printing’, 0 for ‘printing’, and 1 for ‘overprinting’. The raw (i.e., non-augmented) data were generated from finite element simulations of P-TPL. The augmented data were obtained from the raw data by (1) changing DOCth and re-processing a solved finite element model or (2) applying physics-based prior process knowledge. For example, it is known that if a given set of parameters failed to print, then decreasing the parameters that are positively correlated with printing (e.g., kp-0, power), while keeping the other parameters constant, would also lead to no printing. Here, positive correlation means that individually increasing the input parameter leads to an increase in the amount of printing. Similarly, increasing the parameters that are negatively correlated with printing (e.g., kq, kt-0), while keeping the other parameters constant, would also lead to no printing. The converse is true for the datapoints that resulted in overprinting. The 'Raw.csv' file contains the datapoints generated from finite element simulations, the 'Augmented.csv' file contains the datapoints generated via augmentation, and the 'Combined.csv' file contains the datapoints from both files. The ML models were trained on the combined dataset that includes both raw and augmented data. (A minimal loading and augmentation sketch for this dataset appears after this list.)
  2. Designing alloys for additive manufacturing (AM) presents significant opportunities. Still, the chemical composition and processing conditions required for printability (i.e., their suitability for fabrication via AM) are challenging to explore using solely experimental means. In this work, we develop a high-throughput (HTP) computational framework to guide the search for highly printable alloys and appropriate processing parameters. The framework uses material properties from state-of-the-art databases, processing parameters, and simulated melt pool profiles to predict process-induced defects, such as lack-of-fusion, keyholing, and balling. We accelerate the printability assessment using a deep learning surrogate for a thermal model, enabling a 1,000-fold acceleration in assessing the printability of a given alloy with no loss in accuracy compared with conventional physics-based thermal models. We verify and validate the framework by constructing printability maps for the CoCrFeMnNi Cantor alloy system and comparing our predictions to an exhaustive 'in-house' database. The framework enables the systematic investigation of the printability of a wide range of alloys in the broader Co-Cr-Fe-Mn-Ni HEA system. We identified the most promising alloys: those suitable for high-temperature applications, with the narrowest solidification ranges, and least susceptible to balling, hot-cracking, and the formation of macroscopic printing defects. A new metric for the global printability of an alloy is constructed and used to rank candidate alloys. The proposed framework is expected to be integrated into ICME approaches to accelerate the discovery and optimization of novel high-performance, printable alloys. (An illustrative melt-pool defect-classification sketch appears after this list.)
  3. Abstract: The temperature history of an additively manufactured part plays a critical role in determining process–structure–property relationships in fusion-based additive manufacturing (AM) processes. Therefore, fast thermal simulation methods are needed for a variety of AM tasks, from temperature history prediction for part design and process planning to in situ temperature monitoring and control during manufacturing. However, conventional numerical simulation methods fall short of satisfying the strict time-efficiency requirements of these applications due to the large space and time scales of the required multiscale simulation. While data-driven surrogate models are of interest for their rapid computation capabilities, their performance relies on the size and quality of the training data, which are often prohibitively expensive to create. Physics-informed neural networks (PINNs) mitigate the need for large datasets by imposing physical principles during the training process. This work investigates the use of a PINN to predict the time-varying temperature distribution in a part during manufacturing with laser powder bed fusion (L-PBF). Notably, the use of the PINN in this study enables the model to be trained solely on randomly synthesized data. These training data are inexpensive to obtain, and the stochasticity in the dataset improves the generalizability of the trained model. Results show that the PINN model achieves higher accuracy than a comparable artificial neural network trained on labeled data. Further, the PINN model trained in this work maintains high accuracy in predicting temperature for laser path scanning strategies unseen in the training data. (A minimal PINN sketch appears after this list.)
  4. Abstract

    Two-photon lithography (TPL) is a photopolymerization-based additive manufacturing technique capable of fabricating complex 3D structures with submicron features. Projection TPL (P-TPL) is a specific implementation that leverages projection-based parallelization to increase the rate of printing by three orders of magnitude. However, a practical limitation of P-TPL is the high shrinkage of the printed microstructures, which is caused by the relatively low degree of polymerization in the as-printed parts. Unlike traditional stereolithography (SLA) methods and conventional TPL, most of the polymerization in P-TPL occurs through dark reactions while the light source is off, resulting in a lower degree of polymerization. In this study, we empirically investigated the parameters of the P-TPL process that affect shrinkage. We observed that shrinkage decreases with longer laser exposure and smaller layer spacing. To broaden the design space, we explored a photochemical post-processing technique that involves further curing the printed structures with UV light while they are submerged in a photoinitiator solution. With this post-processing, we reduced the areal shrinkage from more than 45% to 1% without limiting the geometric design space. This shows that P-TPL can achieve high dimensional accuracy while retaining its throughput advantage over conventional serial TPL. Furthermore, P-TPL offers higher resolution than conventional SLA prints at a similar shrinkage.

     
  5. Abstract

    Inkjet printing (IJP) is an additive manufacturing process capable of producing intricate functional structures. IJP process performance and the quality of the printed parts are considerably affected by the volume of the deposited droplets. Obtaining a consistent droplet volume during the process is difficult because the droplets are prone to variation with material properties, process parameters, and environmental conditions. Experimental analysis (i.e., observations of the IJP setup) and computational analysis (i.e., computational fluid dynamics (CFD)) are used to study droplet variability; however, the former is expensive and the latter computationally inefficient. The objective of this paper is to propose a framework that can perform fast and accurate droplet volume predictions for unseen IJP driving voltage regimes. A two-step approach is adopted: (1) an emulator is constructed from the physics-based droplet volume simulations to overcome the computational complexity, and (2) the emulator is calibrated by incorporating the experimental IJP observations. In particular, a scaled Gaussian stochastic process (s-GaSP) is deployed for the emulation and calibration. The resulting surrogate model rapidly and accurately predicts the IJP droplet volume. The proposed methodology is demonstrated by calibrating the emulator of the simulated data (i.e., CFD droplet simulations) with experimental data from two distinct materials, namely glycerol and isopropyl alcohol. (A generic emulation sketch appears after this list.)

     
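
For the dataset described in entry 1, the following is a minimal loading and augmentation sketch. The column names are assumed to match the attribute abbreviations given in that description (P, kp-0, kq, kt-0, N, A, B, QY, PIo, DOCth, Class) and should be adjusted to the actual CSV headers; the scaling factor in the augmentation helper is purely illustrative.

```python
# Minimal sketch for loading the dataset described in entry 1.
# Assumption (not verified against the files): column names follow the
# attribute abbreviations listed in the description; adjust to the real headers.
import pandas as pd

FEATURES = ["P", "kp-0", "kq", "kt-0", "N", "A", "B", "QY", "PIo", "DOCth"]
TARGET = "Class"  # -1 = no printing, 0 = printing, 1 = overprinting

raw = pd.read_csv("Raw.csv")
augmented = pd.read_csv("Augmented.csv")
combined = pd.read_csv("Combined.csv")  # union of the two files above

X = combined[FEATURES].to_numpy()
y = combined[TARGET].to_numpy()
print(combined[TARGET].value_counts())

def augment_no_printing(row, factor=0.9):
    """Physics-based augmentation rule from the description: if a parameter set
    failed to print, reducing parameters positively correlated with printing
    (e.g., kp-0 and P) while holding the rest fixed also fails to print.
    The scaling factor is illustrative only."""
    new_row = row.copy()
    new_row["kp-0"] *= factor
    new_row["P"] *= factor
    return new_row

# e.g., for a row labeled -1: new_point = augment_no_printing(combined.iloc[0])
```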
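Entry 2 classifies simulated melt pool profiles into defect regimes (lack-of-fusion, keyholing, balling). The sketch below illustrates the general form of such geometric criteria only; the specific criteria and threshold values used in that work are not reproduced here, and the numbers below are placeholders.

```python
# Illustrative sketch of melt-pool-based printability criteria of the kind used
# in entry 2. The thresholds below are placeholder assumptions, not the paper's.
from dataclasses import dataclass

@dataclass
class MeltPool:
    depth: float   # melt pool depth (m)
    width: float   # melt pool width (m)
    length: float  # melt pool length (m)

def classify_defects(pool: MeltPool, layer_thickness: float,
                     keyhole_ratio: float = 1.5, balling_ratio: float = 3.0) -> dict:
    """Flag lack-of-fusion, keyholing, and balling from melt pool geometry."""
    return {
        "lack_of_fusion": pool.depth < layer_thickness,        # pool too shallow to fuse the layer
        "keyholing": pool.depth / pool.width > keyhole_ratio,  # deep, narrow pool
        "balling": pool.length / pool.width > balling_ratio,   # elongated, unstable pool
    }

# Example: a 30 um deep pool on a 40 um layer is flagged as lack-of-fusion.
print(classify_defects(MeltPool(depth=30e-6, width=100e-6, length=200e-6), 40e-6))
```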
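Entry 3 imposes physical principles during training rather than fitting labeled data. The sketch below shows that core idea for a simple 1D transient heat equation using PyTorch automatic differentiation; it is a toy stand-in, not the L-PBF model of that work, which involves a moving laser source and boundary- and initial-condition loss terms.

```python
# Minimal PINN sketch for entry 3's core idea: penalize the residual of a heat
# equation at random collocation points. Assumptions (not from the paper): a 1D
# transient equation dT/dt = alpha * d2T/dx2 with a generic MLP.
import torch
import torch.nn as nn

alpha = 1e-5  # illustrative thermal diffusivity (m^2/s)

net = nn.Sequential(
    nn.Linear(2, 64), nn.Tanh(),
    nn.Linear(64, 64), nn.Tanh(),
    nn.Linear(64, 1),
)
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(1000):
    # Random space-time collocation points (x in [0, 1e-3] m, t in [0, 1e-3] s).
    x = (1e-3 * torch.rand(256, 1)).requires_grad_(True)
    t = (1e-3 * torch.rand(256, 1)).requires_grad_(True)
    T = net(torch.cat([x, t], dim=1))

    # Automatic differentiation gives the PDE residual dT/dt - alpha * d2T/dx2.
    dT_dt = torch.autograd.grad(T, t, torch.ones_like(T), create_graph=True)[0]
    dT_dx = torch.autograd.grad(T, x, torch.ones_like(T), create_graph=True)[0]
    d2T_dx2 = torch.autograd.grad(dT_dx, x, torch.ones_like(dT_dx), create_graph=True)[0]
    physics_loss = ((dT_dt - alpha * d2T_dx2) ** 2).mean()
    # A complete PINN loss would also include boundary- and initial-condition terms.

    optimizer.zero_grad()
    physics_loss.backward()
    optimizer.step()
```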
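Entry 5 builds an emulator of CFD droplet-volume simulations and then calibrates it against experiments. The sketch below shows only the generic emulation step, with scikit-learn's Gaussian process regressor standing in for the scaled Gaussian stochastic process (s-GaSP) used in that work and with placeholder data in place of the CFD and IJP measurements.

```python
# Generic sketch of entry 5's emulation step: fit a Gaussian process to
# simulated (voltage -> droplet volume) data, then predict at unseen voltages.
# Assumptions: scikit-learn's GP is only a stand-in for the paper's s-GaSP, and
# the arrays below are placeholders, not real CFD or IJP data.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Placeholder CFD results: driving voltage (V) vs. simulated droplet volume (pL).
voltage_sim = np.linspace(15.0, 40.0, 12).reshape(-1, 1)
volume_sim = 0.8 * voltage_sim.ravel() - 5.0 + 0.3 * np.sin(voltage_sim.ravel())

kernel = RBF(length_scale=5.0) + WhiteKernel(noise_level=1e-3)
emulator = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
emulator.fit(voltage_sim, volume_sim)

# Fast predictions (with uncertainty) at unseen driving voltages; a calibration
# step would then adjust these predictions to match experimental droplet data.
voltage_new = np.array([[22.5], [31.0]])
mean, std = emulator.predict(voltage_new, return_std=True)
print(mean, std)
```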