- NSF-PAR ID:
- 10161177
- Date Published:
- Journal Name:
- CIRP
- Volume:
- 84
- Issue:
- 2212-8271
- ISSN:
- 0373-7284
- Page Range / eLocation ID:
- 169-172
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
There has been an increasing need for technologies to manufacture chemical and biological sensors for applications ranging from environmental monitoring to human health monitoring. Currently, manufacturing of most chemical and biological sensors relies on standard microfabrication techniques, such as physical vapor deposition and photolithography, and on materials such as metals and semiconductors. Though functional, these sensors are hampered by high-cost materials, rigid substrates, and limited surface area. Paper-based sensors offer an intriguing alternative: they are low cost and mechanically flexible, can inherently filter and separate analytes, and provide a high-surface-area, permeable framework advantageous for liquid and vapor sensing. A major drawback, however, is that standard microfabrication techniques cannot be used in paper sensor fabrication. To fabricate sensors on paper, low-temperature additive techniques must be used, which require new manufacturing processes and advanced functional materials. In this work, we focus on aerosol jet printing as a high-resolution additive process for depositing ink materials for paper-based sensors. This technique can use a wide variety of materials with different viscosities, including materials with the high porosity and particles inherent to paper. One area of our efforts involves creating interdigitated microelectrodes on paper in a one-step process using commercially available silver nanoparticle and carbon black conductive inks. Another involves the use of specialized filter papers as substrates, such as multi-layered fibrous membrane paper consisting of a poly(acrylonitrile) nanofibrous layer and a nonwoven poly(ethylene terephthalate) layer. The poly(acrylonitrile) nanofibrous layer is dense and smooth enough to allow high-resolution aerosol jet printing.
With additively fabricated electrodes on the paper, molecularly functionalized metal nanoparticles are deposited by molecularly mediated assembly, drop casting, and printing (sensing and electrode materials), allowing full functionalization of the paper and producing sensor devices with high surface area. Depending on the electrode configuration, these sensors are used for detection of chemical and biological species in the vapor phase, such as water vapor and volatile organic compounds, making them applicable to human performance monitoring. These paper-based sensors display enhanced sensitivity compared to control devices fabricated on non-porous polyimide substrates. These results demonstrate the feasibility of paper-based printed devices toward manufacturing a fully wearable, highly sensitive, and wireless human performance monitor coupled to flexible electronics with the capability to communicate wirelessly to a smartphone or other electronics for data logging and analysis.
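Chemiresistive responses like those described above are commonly reported as the relative resistance change on analyte exposure, with sensitivity taken as the slope of response versus concentration. The sketch below compares a paper device against a non-porous control in that framing; all resistance and concentration values are purely hypothetical, not data from the study:

```python
# Hypothetical sketch: comparing vapor-sensing sensitivity as the relative
# resistance change (R - R0) / R0 of a chemiresistive electrode pair.
# All numbers are illustrative, not measurements from this work.

def relative_response(r_baseline, r_exposed):
    """Relative resistance change (R - R0) / R0 on analyte exposure."""
    return (r_exposed - r_baseline) / r_baseline

def sensitivity(responses, concentrations):
    """Least-squares slope of response vs. concentration (the sensitivity)."""
    n = len(responses)
    mean_c = sum(concentrations) / n
    mean_r = sum(responses) / n
    num = sum((c - mean_c) * (r - mean_r) for c, r in zip(concentrations, responses))
    den = sum((c - mean_c) ** 2 for c in concentrations)
    return num / den

# Illustrative comparison: paper device vs. polyimide control, same vapor levels
ppm = [50, 100, 200]
paper = [relative_response(1000.0, r) for r in (1040.0, 1082.0, 1165.0)]
control = [relative_response(1000.0, r) for r in (1012.0, 1025.0, 1049.0)]
print(sensitivity(paper, ppm) > sensitivity(control, ppm))  # steeper slope on paper
```

The slope comparison is the usual way an "enhancement in sensitivity" claim like the one above would be quantified.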
-
This study presents an overview and a few case studies to explicate the transformative power of diverse imaging techniques for smart manufacturing, focusing largely on various in-situ and ex-situ imaging methods for monitoring fusion-based metal additive manufacturing (AM) processes such as directed energy deposition (DED), selective laser melting (SLM), and electron beam melting (EBM). In-situ imaging techniques, encompassing high-speed cameras, thermal cameras, and digital cameras, are becoming increasingly affordable and complementary, and are emerging as vital for real-time monitoring, enabling continuous assessment of build quality. For example, high-speed cameras capture dynamic laser-material interaction, swiftly detecting defects, while thermal cameras identify the thermal distribution of the melt pool and potential anomalies. The data gathered from in-situ imaging are then utilized to extract pertinent features that facilitate effective control of process parameters, thereby optimizing the AM processes and minimizing defects. On the other hand, ex-situ imaging techniques play a critical role in comprehensive component analysis. Scanning electron microscopy (SEM), optical microscopy, and 3D profilometry enable detailed characterization of microstructural features, surface roughness, porosity, and dimensional accuracy. Employing a battery of Artificial Intelligence (AI) algorithms, information from diverse imaging and other multi-modal data sources can be fused to achieve a more comprehensive understanding of a manufacturing process. This integration enables informed decision-making for process optimization and quality assurance, as AI algorithms analyze the combined data to extract relevant insights and patterns. Ultimately, the power of imaging in additive manufacturing lies in its ability to deliver real-time monitoring, precise control, and comprehensive analysis, empowering manufacturers to achieve supreme levels of precision, reliability, and productivity in the production of components.
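As a toy illustration of the thermal-camera monitoring idea above (not any specific published pipeline), anomalously hot melt-pool pixels can be flagged against the frame's own temperature statistics; the frame, temperatures, and threshold below are synthetic assumptions:

```python
# Illustrative sketch: flagging melt-pool thermal anomalies in an in-situ
# thermal-camera frame by thresholding against frame statistics.
# The frame is synthetic; real pipelines are far more sophisticated.
import numpy as np

def flag_hot_spots(frame, n_sigma=3.0):
    """Boolean mask of pixels hotter than mean + n_sigma * std of the frame."""
    mu, sigma = frame.mean(), frame.std()
    return frame > mu + n_sigma * sigma

rng = np.random.default_rng(0)
frame = rng.normal(1200.0, 15.0, size=(64, 64))  # nominal melt-pool temps (K)
frame[10, 10] = 1400.0                           # injected thermal anomaly
mask = flag_hot_spots(frame)
print(bool(mask[10, 10]), int(mask.sum()))
```

Feature maps like this mask are the kind of "pertinent features" that can then feed back into process-parameter control.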
Abstract Automated optical inspection (AOI) is increasingly advocated for in situ quality monitoring of additive manufacturing (AM) processes. The availability of layerwise imaging data improves information visibility during fabrication and is thus conducive to performing online certification. However, few, if any, have investigated high-speed contact image sensors (CIS), originally developed for document scanners and multifunction printers, for AM quality monitoring. In addition, layerwise images show complex patterns and often contain hidden information that cannot be revealed at a single scale. A new and alternative approach is to analyze these intrinsic patterns through multiscale lenses. Therefore, the objective of this article is to design and develop an AOI system with contact image sensors for multiresolution quality inspection of layerwise builds in additive manufacturing. First, we retrofit the AOI system with contact image sensors, at an industrially relevant 95 mm/s scanning speed, to a laser powder bed fusion (LPBF) machine. Then, we design experiments to fabricate nine parts under a variety of factor levels (e.g., gas flow blockage, re-coater damage, laser power changes). In each layer, the AOI system collects imaging data of both the recoated powder bed before laser fusion and the surface finish after laser fusion. Second, layerwise images are pre-processed for alignment, registration, and identification of regions of interest (ROIs) of these nine parts. Then, we leverage the wavelet transformation to analyze ROI images at multiple scales and further extract salient features that are sensitive to process variations rather than extraneous noise. Third, we perform paired comparison analysis to investigate how different factor levels influence the distribution of wavelet features.
Finally, these features are shown to be effective in predicting the extent of defects in the computed tomography (CT) data of layerwise AM builds. The proposed framework of multiresolution quality inspection is evaluated and validated using real-world AM imaging data. Experimental results demonstrated the effectiveness of the proposed AOI system with contact image sensors for online quality inspection of layerwise builds in AM processes.
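The multiscale wavelet features described above can be sketched with a hand-rolled 2D Haar step. This dependency-light stand-in (the study uses full wavelet transforms) only illustrates per-scale detail-energy features and their sensitivity to a simulated re-coater streak:

```python
# Minimal sketch of multiscale (wavelet-style) feature extraction from a
# layerwise ROI image, using a hand-rolled 2D Haar step. Illustrative only;
# the actual study applies full wavelet transforms to real ROI images.
import numpy as np

def haar_level(img):
    """One 2D Haar step: returns (approx, horiz, vert, diag) subbands."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    approx = (a + b + c + d) / 4.0
    horiz  = (a + b - c - d) / 4.0
    vert   = (a - b + c - d) / 4.0
    diag   = (a - b - c + d) / 4.0
    return approx, horiz, vert, diag

def multiscale_energy(img, levels=2):
    """Detail-subband energy at each scale: a simple process-sensitive feature."""
    feats = []
    cur = img.astype(float)
    for _ in range(levels):
        cur, h, v, g = haar_level(cur)
        feats.append(float((h**2 + v**2 + g**2).mean()))
    return feats

roi = np.ones((32, 32))      # a perfectly smooth powder bed: ~zero detail energy
roi_streak = roi.copy()
roi_streak[:, 16] = 5.0      # a simulated re-coater streak adds detail energy
print(multiscale_energy(roi), multiscale_energy(roi_streak))
```

A smooth bed yields zero detail energy at every scale, while the streak shows up at all scales, which is the sense in which such features are "sensitive to process variations, instead of extraneous noises."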
-
Design Rules and In-Situ Quality Monitoring of Thin-Wall Features Made Using Laser Powder Bed Fusion
The goal of this work is to quantify the link between the design features (geometry), in-situ process sensor signatures, and build quality of parts made using the laser powder bed fusion (LPBF) additive manufacturing (AM) process. This knowledge is critical for establishing design rules for AM parts and for detecting impending build failures from in-process sensor data. As a step towards this goal, the objectives of this work are two-fold: (1) quantify the effect of geometry and orientation on the build quality of thin-wall features. To explain further, the geometry-related factor is the ratio of the length of a thin wall (l) to its thickness (t), defined as the aspect ratio (length-to-thickness ratio, l/t), and the angular orientation (θ) of the part, defined as the angle of the part in the X-Y plane relative to the re-coater blade of the LPBF machine. (2) Assess the thin-wall build quality by analyzing images of the part obtained at each layer from an in-situ optical camera using a convolutional neural network. To realize these objectives, we designed a test part with a set of thin-wall features (fins) of varying aspect ratio in titanium alloy (Ti-6Al-4V); the aspect ratio l/t of the thin walls ranges from 36 to 183 (11 mm long (constant) and 0.06 mm to 0.3 mm in thickness). These thin-wall test parts were built under three angular orientations of 0°, 60°, and 90°. Further, the parts were examined offline using X-ray computed tomography (XCT). Through the offline XCT data, the build quality of the thin-wall features in terms of their geometric integrity is quantified as a function of the aspect ratio and orientation angle, which suggests a set of design guidelines for building thin-wall structures with LPBF.
To monitor the quality of the thin wall, in-process images of the top surface of the powder bed were acquired at each layer during the build. The optical images are correlated with post-build quantitative measurements of the thin wall through a deep convolutional neural network (CNN). The statistical correlation (Pearson coefficient, ρ) between the offline XCT-measured thin-wall quality and the CNN-predicted measurement ranges from 80% to 98%. Consequently, the impending poor quality of a thin wall is captured from in-situ process data.
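Two quantities this abstract leans on, the aspect ratio l/t and the Pearson coefficient ρ between XCT-measured and CNN-predicted quality, can be computed directly. The wall dimensions below come from the abstract; the quality scores are invented for illustration:

```python
# Sketch of the two quantities above: the aspect ratio l/t of a thin wall,
# and the Pearson correlation between XCT-measured and CNN-predicted quality.
# Wall dimensions are from the abstract; the score lists are hypothetical.

def aspect_ratio(length_mm, thickness_mm):
    return length_mm / thickness_mm

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Walls are 11 mm long and 0.06-0.3 mm thick, spanning l/t from ~36 to ~183
print(int(aspect_ratio(11, 0.3)), int(aspect_ratio(11, 0.06)))  # 36 183

xct = [0.95, 0.80, 0.62, 0.40]   # hypothetical XCT-measured quality scores
cnn = [0.93, 0.78, 0.65, 0.45]   # hypothetical CNN-predicted scores
print(pearson(xct, cnn))
```

A ρ near 1 for such a pair is what the reported 80%-98% correlation range expresses.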
-
Most modern commodity imaging systems we use directly for photography, or indirectly rely on for downstream applications, employ optical systems of multiple lenses that must balance deviations from perfect optics, manufacturing constraints, tolerances, cost, and footprint. Although optical designs often have complex interactions with downstream image processing or analysis tasks, today's compound optics are designed in isolation from these interactions. Existing optical design tools aim to minimize optical aberrations, such as deviations from Gauss' linear model of optics, instead of application-specific losses, precluding joint optimization with hardware image signal processing (ISP) and highly parameterized neural network processing. In this article, we propose an optimization method for compound optics that lifts these limitations. We optimize entire lens systems jointly with hardware and software image processing pipelines, downstream neural network processing, and application-specific end-to-end losses. To this end, we propose a learned, differentiable forward model for compound optics and an alternating proximal optimization method that handles function compositions with highly varying parameter dimensions for optics, hardware ISP, and neural nets. Our method integrates seamlessly atop existing optical design tools, such as Zemax. We can thus assess our method across many camera system designs and end-to-end applications. We validate our approach in an automotive camera optics setting, together with hardware ISP post-processing and detection, outperforming classical optics designs for automotive object detection and traffic light state detection. For human viewing tasks, we optimize optics and processing pipelines for dynamic outdoor scenarios and dynamic low-light imaging. We outperform existing compartmentalized design or fine-tuning methods qualitatively and quantitatively, across all domain-specific applications tested.
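The alternating block-update idea behind such a proximal method, updating parameter blocks of very different dimensionality one at a time on a shared end-to-end loss, can be sketched on a toy composed model. This is purely illustrative (plain gradient steps, no proximal terms, made-up "optics" and "processing" parameters), not the authors' optimizer:

```python
# Toy sketch of alternating block updates on an end-to-end loss: a scalar
# "optics" parameter and a "processing" weight vector are updated one block
# at a time, the other held frozen. Entirely illustrative; the paper's method
# additionally uses proximal terms and a learned differentiable optics model.
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=(100,))        # scene samples
target = 2.0 * x                   # desired end-to-end output

optics = 0.5                       # one "optics" parameter (e.g. a lens scale)
proc = np.array([1.0])             # "processing" weights (here just one)

def forward(x, optics, proc):
    return proc[0] * (optics * x)  # optics, then processing: a composition

def loss(optics, proc):
    return float(np.mean((forward(x, optics, proc) - target) ** 2))

lr = 0.05
for step in range(200):
    # Block 1: update optics with processing frozen
    grad_o = np.mean(2 * (forward(x, optics, proc) - target) * proc[0] * x)
    optics -= lr * grad_o
    # Block 2: update processing with optics frozen
    grad_p = np.mean(2 * (forward(x, optics, proc) - target) * optics * x)
    proc[0] -= lr * grad_p

print(loss(optics, proc) < 1e-4, round(optics * proc[0], 2))
```

The two blocks converge jointly to a product of 2.0, the end-to-end optimum, even though neither block alone can reach it, which is the essential point of optimizing optics and processing together rather than in isolation.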