

Title: Spatiotemporal Fusion Network for the Droplet Behavior Recognition in Inkjet Printing
Abstract

Inkjet 3D printing has broad applications in areas such as health and energy due to its capability to precisely deposit micro-droplets of multi-functional materials. However, droplets in inkjet printing exhibit different jetting behaviors, including drop initiation, thinning, necking, pinching, and flying, and they are vulnerable to disturbances such as vibration and material inhomogeneity. Such issues make it challenging to achieve a consistent printing process and a defect-free final product with the desired properties. Therefore, timely recognition of droplet behavior is critical for inkjet printing quality assessment. In-situ video monitoring of the printing process paves the way for such recognition. In this paper, a novel feature identification framework is presented to recognize the spatiotemporal features of in-situ monitoring videos for inkjet printing. Specifically, a spatiotemporal fusion network is used for droplet printing behavior classification. The categories are based on inkjet printability, which is related to both static features (ligament, satellite, and meniscus) and dynamic features (ligament thinning, droplet pinch-off, and meniscus oscillation). For the recorded droplet jetting video data, two network streams, the frames sampled from the video in the spatial domain (associated with static features) and the optical flow in the temporal domain (associated with dynamic features), are fused in different ways to recognize the evolving droplet behavior. Experimental results show that the proposed fusion network can recognize droplet jetting behavior in the complex printing process and identify its printability with learned knowledge, which can ultimately enable real-time inkjet printing quality control and further provide guidance for designing optimal parameter settings for the inkjet printing process.
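To make the two-stream idea concrete, below is a minimal PyTorch sketch of a spatial/temporal fusion classifier: one small CNN encodes a sampled grayscale frame, another encodes the stacked optical-flow field, and their embeddings are concatenated (late fusion) before a linear classifier over five illustrative behavior classes. The layer sizes, the concatenation-based fusion, and the class count are assumptions for illustration, not the paper's exact architecture.

```python
# Minimal sketch of a two-stream (spatial + temporal) fusion classifier.
# Layer sizes, late fusion by concatenation, and class names are illustrative
# assumptions, not the authors' exact network.
import torch
import torch.nn as nn

def conv_encoder(in_channels: int) -> nn.Sequential:
    """Small CNN backbone used by both streams."""
    return nn.Sequential(
        nn.Conv2d(in_channels, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),            # -> (batch, 128)
    )

class TwoStreamFusionNet(nn.Module):
    def __init__(self, num_classes: int = 5):
        super().__init__()
        self.spatial = conv_encoder(in_channels=1)        # grayscale droplet frame
        self.temporal = conv_encoder(in_channels=2)       # stacked optical flow (dx, dy)
        self.classifier = nn.Linear(128 + 128, num_classes)

    def forward(self, frame: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.spatial(frame), self.temporal(flow)], dim=1)  # late fusion
        return self.classifier(fused)

# Example: one 128x128 frame and its optical-flow field.
model = TwoStreamFusionNet(num_classes=5)
logits = model(torch.randn(1, 1, 128, 128), torch.randn(1, 2, 128, 128))
print(logits.shape)  # torch.Size([1, 5])
```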

 
Award ID(s):
1846863
NSF-PAR ID:
10212289
Journal Name:
ASME 2020 15th International Manufacturing Science and Engineering Conference
Sponsoring Org:
National Science Foundation
More Like this
  1.
    Binder Jetting (BJ) is a low-cost Additive Manufacturing (AM) process that uses inkjet technology to selectively bind particles in a powder bed. BJ relies on the ability to control not only the placement of binder on the surface, but also its imbibition into the powder bed. This is a complex process in which picoliter-sized droplets impact powder beds at velocities of 1–10 m/s. However, the effects of printing parameters such as droplet velocity, size, spacing, and inter-arrival time on saturation level (the fraction of pore space filled with binder) and line formation (the merging of droplets to form a line) are unknown. Prior attempts to predict saturation levels with simple measurements of droplet primitives and capillary pressure assume that droplet/powder interactions are dominated by static equilibrium and neglect the impact of printing parameters. This study analyzes the influence of these parameters on the effective saturation level and the conditions for line formation when printing single lines into powder beds of varied materials (316 stainless steel, 420 stainless steel, and alumina) and varied particle sizes (d50 = 10–47 µm). Results show that increasing droplet velocity or droplet spacing decreases effective saturation, while droplet spacing, velocity, and inter-arrival time affect line formation. At constant printing velocity, the conditions for successful line printing are shown to be a function of droplet spacing and the square root of the droplet inter-arrival time, analogous to the Washburn model for infiltration into a porous medium. The results have implications for maximizing build rates and improving the quality of small features in BJ.
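For reference, the classical Washburn relation that the abstract's scaling is analogous to gives the capillary infiltration depth as proportional to the square root of time. Reading the line-formation condition as a spacing-versus-square-root-of-inter-arrival-time bound, as written on the right below, is an illustrative interpretation of the stated analogy rather than the paper's fitted model.

```latex
% Washburn infiltration depth: gamma = surface tension, r = effective pore
% radius, theta = contact angle, mu = binder viscosity, t = time.
% The implication on the right is an illustrative reading of the analogy:
% droplet spacing s must stay below a scale set by the square root of the
% droplet inter-arrival time t_a (C is an empirical constant).
\[
  L(t) \;=\; \sqrt{\frac{\gamma\, r \cos\theta}{2\mu}\, t}
  \qquad\Longrightarrow\qquad
  s \;\lesssim\; C \sqrt{t_a}
\]
```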
  2. Abstract

    Inkjet printing (IJP) is one of the promising additive manufacturing techniques that have yielded many innovations in electronic and biomedical products. In IJP, products are fabricated by depositing droplets on substrates, and product quality is highly affected by the droplet pinch-off behaviors. Therefore, identifying the pinch-off behaviors of droplets is critical. However, annotating the pinch-off behaviors is burdensome since a large number of images of pinch-off behaviors can be collected. Active learning (AL) is a machine learning technique that extracts human knowledge by iteratively acquiring human annotations and updating the classification model for pinch-off behavior identification. Consequently, good classification performance can be achieved with limited labels. However, during the query process, the most informative instances (i.e., images) vary, and most query strategies in AL cannot handle these dynamics because they are handcrafted. Thus, this paper proposes a multiclass reinforced active learning (MCRAL) framework in which a query strategy is trained by reinforcement learning (RL). We designed a unique intrinsic reward signal to improve the classification model performance. Moreover, extracting features from images for pinch-off behavior identification is not trivial; thus, we used a graph convolutional network for droplet image feature extraction. The results show that MCRAL outperforms conventional AL and can reduce human effort in pinch-off behavior identification. We further demonstrate that, by linking the process parameters to the predicted droplet pinch-off behaviors, the droplet pinch-off behavior can be adjusted based on MCRAL.
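A minimal sketch of the reinforced-active-learning idea follows: a small policy network scores pool instances from simple uncertainty features, the chosen instance is "annotated", and the reward is the resulting gain in validation accuracy, used in a REINFORCE update. The synthetic data, state features, classifier, and reward shaping are illustrative assumptions; the paper's MCRAL additionally uses a graph convolutional network for image feature extraction, which is omitted here.

```python
# Sketch of RL-driven active learning: a learned query policy, rewarded by the
# accuracy gain after each annotation. All specifics here are illustrative.
import numpy as np
import torch
import torch.nn as nn
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 8))
y = (X[:, 0] + X[:, 1] > 0).astype(int)        # synthetic stand-in for image features/labels
val = list(range(500, 600))                     # held-out validation set
labeled = list(np.where(y[:500] == 0)[0][:5]) + list(np.where(y[:500] == 1)[0][:5])
pool = [i for i in range(500) if i not in set(labeled)]

policy = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))  # query-strategy network
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

def fit_and_score():
    clf = LogisticRegression().fit(X[labeled], y[labeled])
    return clf, accuracy_score(y[val], clf.predict(X[val]))

clf, acc = fit_and_score()
for _ in range(20):
    proba = clf.predict_proba(X[pool])
    # State per pool instance: predictive entropy and max class probability.
    entropy = -(proba * np.log(proba + 1e-9)).sum(axis=1)
    state = torch.tensor(np.stack([entropy, proba.max(axis=1)], axis=1), dtype=torch.float32)
    dist = torch.distributions.Categorical(logits=policy(state).squeeze(-1))
    a = dist.sample()                           # which pool instance to query
    labeled.append(pool.pop(a.item()))          # "annotate" it
    clf, new_acc = fit_and_score()
    reward = new_acc - acc                      # reward: validation accuracy gain
    loss = -dist.log_prob(a) * reward           # REINFORCE update of the query policy
    opt.zero_grad(); loss.backward(); opt.step()
    acc = new_acc
print(f"validation accuracy after 20 queries: {acc:.3f}")
```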

     
  3. Abstract

    In droplet-on-demand liquid metal jetting (DoD-LMJ) additive manufacturing, complex physical interactions govern the droplet characteristics, such as size, velocity, and shape. These droplet characteristics, in turn, determine the functional quality of the printed parts. Hence, to ensure repeatable and reliable part quality, it is necessary to monitor and control the droplet characteristics. Existing approaches for in-situ monitoring of droplet behavior in DoD-LMJ rely on high-speed imaging sensors. The resulting high volume of acquired droplet images is computationally demanding to analyze and hinders real-time control of the process. To overcome this challenge, the objective of this work is to use time series data acquired from an in-process millimeter-wave sensor to predict the size, velocity, and shape characteristics of droplets in the DoD-LMJ process. As opposed to high-speed imaging, this sensor produces data-efficient time series signatures that allow rapid, real-time process monitoring. We devise machine learning models that use the millimeter-wave sensor data to predict the droplet characteristics. Specifically, we developed multilayer perceptron-based nonlinear autoregressive models to predict the size and velocity of droplets. Likewise, a supervised machine learning model was trained to classify the droplet shape using the frequency spectrum information contained in the millimeter-wave sensor signatures. High-speed imaging data served as ground truth for model training and validation. These models captured the droplet characteristics with a statistical fidelity exceeding 90% and vastly outperformed conventional statistical modeling approaches. Thus, this work achieves a practically viable sensing approach for real-time quality monitoring of the DoD-LMJ process, in lieu of the existing data-intensive image-based techniques.
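The multilayer perceptron-based nonlinear autoregressive (NARX-style) modeling can be illustrated with the sketch below: a window of past millimeter-wave samples plus a few lagged predictions is mapped by an MLP to the next droplet size and velocity estimate. The window length, lag count, and layer widths are assumptions for illustration, not the paper's trained models.

```python
# Minimal NARX-style MLP regressor sketch: past sensor samples + past model
# outputs -> next [size, velocity] estimate. All sizes are illustrative.
import torch
import torch.nn as nn

WINDOW = 32   # past exogenous (millimeter-wave) samples fed to the model
LAGS = 4      # past model outputs fed back autoregressively

class NarxMLP(nn.Module):
    def __init__(self, n_outputs: int = 2):   # outputs: [size, velocity]
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(WINDOW + LAGS * n_outputs, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, n_outputs),
        )

    def forward(self, sensor_window: torch.Tensor, past_outputs: torch.Tensor) -> torch.Tensor:
        # sensor_window: (batch, WINDOW); past_outputs: (batch, LAGS, 2)
        x = torch.cat([sensor_window, past_outputs.flatten(1)], dim=1)
        return self.net(x)

# Example step: a synthetic mm-wave window and zero-initialized lagged outputs.
model = NarxMLP()
pred = model(torch.randn(1, WINDOW), torch.zeros(1, LAGS, 2))
print(pred)   # tensor([[size_hat, velocity_hat]]) in arbitrary (untrained) units
```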

     
  4. Abstract

    Freeze nano 3D printing is a novel process that seamlessly integrates freeze casting and inkjet printing. It can fabricate flexible energy products with both macroscale and microscale features. These multi-scale features enable good mechanical and electrical properties with lightweight structures. However, quality issues are among the biggest barriers that freeze nano printing, and other 3D printing processes, need to overcome. In particular, the droplet solidification behavior is crucial for product quality. Physics-based heat transfer models are computationally inefficient for online solidification time prediction during the printing process. In this paper, we integrate machine learning (i.e., tensor decomposition) methods and physical models to emulate the tensor responses of droplet solidification time from the physics-based models. The tensor responses are factorized with joint tensor decomposition and represented with low-dimensional vectors. We then model these low-dimensional vectors with Gaussian process models. We demonstrate the proposed framework for emulating the physical models of freeze nano 3D printing, which can support future real-time process optimization.
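A minimal sketch of the decompose-then-emulate workflow is shown below: a stack of synthetic response fields (one per parameter setting) is factorized with a rank-3 CP (PARAFAC) decomposition, a Gaussian process maps process parameters to the resulting low-dimensional run scores, and predicted scores are recombined with the shared factors to emulate the full field. The synthetic fields, rank, and default kernel are illustrative assumptions, not the paper's joint decomposition or GP specification.

```python
# Sketch: CP decomposition of simulated response tensors + GP on the
# low-dimensional scores. The "solidification time" fields are synthetic.
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(1)
params = rng.uniform(size=(30, 2))                  # e.g. droplet size, substrate temperature
grid = np.linspace(0, 1, 20)
# Synthetic stand-in for physics-model output: one 20x20 field per parameter setting.
Y = np.stack([np.outer(np.exp(-grid / (0.2 + p[0])), 1 + p[1] * grid) for p in params])

# Joint decomposition over (run, x, y) modes: rank-3 CP factorization.
weights, factors = parafac(tl.tensor(Y), rank=3)
run_scores, A, B = factors                          # run_scores: (30, 3) low-dimensional vectors

# Gaussian processes map process parameters to the low-dimensional scores.
gp = GaussianProcessRegressor().fit(params, tl.to_numpy(run_scores))

def emulate(p_new: np.ndarray) -> np.ndarray:
    """Predict the full response field for unseen parameters p_new."""
    s = gp.predict(p_new.reshape(1, -1))[0]
    return np.einsum("r,ir,jr->ij", s * tl.to_numpy(weights), tl.to_numpy(A), tl.to_numpy(B))

field = emulate(np.array([0.5, 0.5]))
print(field.shape)                                  # (20, 20) emulated response
```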

     
  5.
    Deaf spaces are unique indoor environments designed to optimize visual communication and Deaf cultural expression. However, much of the technological research geared towards the deaf involves the use of video or wearables for American Sign Language (ASL) translation, with little consideration for the Deaf perspective on privacy and usability of the technology. In contrast to video, RF sensors offer an avenue for ambient ASL recognition while also preserving privacy for Deaf signers. Methods: This paper investigates the RF transmit waveform parameters required for effective measurement of ASL signs and their effect on the word-level classification accuracy attained with transfer learning and convolutional autoencoders (CAE). A multi-frequency fusion network is proposed to exploit data from all sensors in an RF sensor network and improve the recognition accuracy of fluent ASL signing. Results: For fluent signers, CAEs yield a 20-sign classification accuracy of 76% at 77 GHz and 73% at 24 GHz, while at X-band (10 GHz) accuracy drops to 67%. For hearing imitation signers, signs are more separable, resulting in a 96% accuracy with CAEs. Further, fluent ASL recognition accuracy is significantly increased with the use of the multi-frequency fusion network, which boosts the 20-sign fluent ASL recognition accuracy to 95%, surpassing conventional feature-level fusion by 12%. Implications: Signing involves finer spatiotemporal dynamics than typical hand gestures and thus requires interrogation with a transmit waveform that has a rapid succession of pulses and high bandwidth. Millimeter-wave RF frequencies also yield greater accuracy due to the increased Doppler spread of the radar backscatter. Comparative analysis of articulation dynamics also shows that imitation signing is not representative of fluent signing and is not effective for pre-training networks for fluent ASL classification. Deep neural networks employing multi-frequency fusion capture both shared and sensor-specific features and thus offer significant performance gains in comparison to using a single sensor or feature-level fusion.
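As an illustration of multi-frequency fusion, the sketch below uses one small convolutional encoder per RF sensor (77 GHz, 24 GHz, and 10 GHz spectrograms) and concatenates the three embeddings before a shared 20-sign classification head. The layer sizes and the concatenation-based fusion are assumptions for illustration, not the proposed network.

```python
# Sketch of a multi-frequency fusion classifier: per-sensor encoders whose
# embeddings are concatenated before a shared head. Sizes are illustrative.
import torch
import torch.nn as nn

def encoder() -> nn.Sequential:
    """Small CNN encoder for one sensor's micro-Doppler spectrogram."""
    return nn.Sequential(
        nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),          # -> (batch, 32)
    )

class MultiFrequencyFusionNet(nn.Module):
    def __init__(self, num_signs: int = 20):
        super().__init__()
        self.enc_77, self.enc_24, self.enc_10 = encoder(), encoder(), encoder()
        self.head = nn.Linear(3 * 32, num_signs)

    def forward(self, x77, x24, x10):
        z = torch.cat([self.enc_77(x77), self.enc_24(x24), self.enc_10(x10)], dim=1)
        return self.head(z)

# Example: one spectrogram per sensor.
model = MultiFrequencyFusionNet()
x77, x24, x10 = (torch.randn(1, 1, 64, 64) for _ in range(3))
print(model(x77, x24, x10).shape)                       # torch.Size([1, 20])
```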