

Title: Physics-Constrained Dictionary Learning for Selective Laser Melting Process Monitoring
Compressed sensing (CS), a relatively new data acquisition technique, has been applied to monitor manufacturing processes. From a small number of measurements, sparse coefficient vectors can be recovered by solving an inverse problem, and the original signals can then be reconstructed. Dictionary learning methods have been developed and applied in combination with CS to improve the sparsity of the recovered coefficient vectors. In this work, a physics-constrained dictionary learning approach is proposed to solve both the reconstruction and classification problems by optimizing the measurement, basis, and classification matrices simultaneously while accounting for application-specific restrictions. The approach is applied to image acquisition in selective laser melting (SLM). The proposed approach optimizes the matrices in two stages. In the first stage, with the basis matrix fixed, the measurement matrix is optimized by determining the pixel locations to be sampled in each image; the optimized measurement matrix contains exactly one non-zero entry in each row. The pixel-location optimization is solved with a constrained FrameSense algorithm. In the second stage, with the measurement matrix fixed, the basis and classification matrices are optimized with the K-SVD algorithm. With the optimized basis matrix, the coefficient vector can be recovered with CS. The original signal is then reconstructed as the linear combination of the basis matrix columns weighted by the recovered coefficient vector, and it is classified to identify different machine states by applying the classification matrix to the coefficient vector.
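As a rough sketch of the recovery and classification steps described above (not the authors' implementation), the snippet below assumes a learned basis matrix D, a pixel-sampling measurement matrix with one non-zero entry per row, and a classification matrix W, all filled with random placeholders here; orthogonal matching pursuit from scikit-learn stands in for the CS recovery step, and the constrained FrameSense and K-SVD optimizations are not reproduced.

```python
# Minimal sketch of the recovery/classification pipeline described in the abstract.
# D, W, and the sampled pixel locations are random placeholders; the paper obtains
# them with a constrained FrameSense algorithm and K-SVD, which are not shown here.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

n_pixels, n_atoms, n_measurements, n_classes = 1024, 256, 128, 3
rng = np.random.default_rng(0)

D = rng.standard_normal((n_pixels, n_atoms))      # basis (dictionary) matrix
D /= np.linalg.norm(D, axis=0)                    # unit-norm atoms
W = rng.standard_normal((n_classes, n_atoms))     # classification matrix

# Measurement matrix with a single non-zero entry per row: each row samples one pixel.
sample_idx = rng.choice(n_pixels, size=n_measurements, replace=False)
Phi = np.zeros((n_measurements, n_pixels))
Phi[np.arange(n_measurements), sample_idx] = 1.0

# Synthetic "image" that is sparse in the dictionary.
alpha_true = np.zeros(n_atoms)
alpha_true[rng.choice(n_atoms, size=10, replace=False)] = rng.standard_normal(10)
x_true = D @ alpha_true                           # original signal
y = Phi @ x_true                                  # compressed measurements

# CS recovery: estimate the sparse coefficient vector from y and Phi @ D.
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=10, fit_intercept=False)
omp.fit(Phi @ D, y)
alpha_hat = omp.coef_

x_hat = D @ alpha_hat                             # reconstruction: basis times coefficients
state = int(np.argmax(W @ alpha_hat))             # classification: classifier times coefficients
print("relative reconstruction error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
print("predicted machine state:", state)
```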
Award ID(s):
1663227
NSF-PAR ID:
10282303
Author(s) / Creator(s):
Date Published:
Journal Name:
Proceedings of 2021 IISE Annual Conference & Expo
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. We compare reconstructed quantum state images of a birefringent sample using direct quantum state tomography and an inverse numerical optimization technique. Qubits are used to characterize birefringence in a flat transparent plastic sample by means of polarization-sensitive measurements using density matrices of two-level entangled photons. Pairs of entangled photons are generated in a type-II nonlinear crystal. About half of the generated photons interact with the birefringent sample, and coincidence counts are recorded. Coincidence rates of entangled photons are measured for a set of sixteen polarization states. Tomographic and inverse numerical techniques are used to reconstruct the density matrix, the degree of entanglement, and the concurrence for each pixel of the investigated sample. An inverse numerical optimization technique is used to obtain the density matrix with maximum probability from the measured coincidence counts. The presented results highlight experimental noise reduction, more accurate density matrix estimation, and overall image enhancement. The outcome of the entanglement distillation through projective measurements is a superposition of Bell states with different amplitudes. These changes are used to characterize the birefringence of a 3M tape. Well-defined concurrence and entanglement images of the birefringence are presented. Our results show that inverse numerical techniques improve overall image quality and detail resolution. The technique described in this work has many potential applications.
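The per-pixel entanglement measure referenced above (concurrence) can be computed from a reconstructed density matrix with Wootters' formula; a minimal sketch follows, using a Werner state as a hypothetical stand-in for a density matrix estimated from coincidence counts.

```python
# Concurrence of a two-qubit density matrix via Wootters' formula. The Werner
# state below is an illustrative placeholder, not a matrix reconstructed from
# the coincidence measurements described in the abstract.
import numpy as np

def concurrence(rho: np.ndarray) -> float:
    """Concurrence of a 4x4 two-qubit density matrix."""
    sy = np.array([[0, -1j], [1j, 0]])
    yy = np.kron(sy, sy)
    rho_tilde = yy @ rho.conj() @ yy
    evals = np.linalg.eigvals(rho @ rho_tilde)                # eigenvalues of rho * rho~
    lam = np.sort(np.sqrt(np.maximum(evals.real, 0)))[::-1]   # square roots, descending
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

# Werner state: p |Phi+><Phi+| + (1 - p) I/4, entangled for p > 1/3.
phi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)
p = 0.8
rho = p * np.outer(phi_plus, phi_plus) + (1 - p) / 4 * np.eye(4)
print("concurrence:", concurrence(rho))                       # (3p - 1)/2 = 0.7 here
```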
  2. Abstract: Coded aperture X-ray computed tomography (CT) has the potential to revolutionize X-ray tomography systems in medical imaging and air and rail transit security, both areas of global importance. It allows either a reduced set of measurements in X-ray CT without degradation in image reconstruction, or the measurement of multiplexed X-rays to simplify the sensing geometry. Measurement reduction is of particular interest in medical imaging to reduce radiation, and airport security often imposes practical constraints leading to limited-angle geometries. Coded aperture compressive X-ray CT places a coded aperture pattern in front of the X-ray source in order to obtain patterned projections onto a detector. Compressive sensing (CS) reconstruction algorithms are then used to recover the image. To date, the coded illumination patterns used in conventional CT systems have been random. This paper addresses the code optimization problem for general tomographic imaging based on the point spread function (PSF) of the system, which is used as a measure of sensing-matrix quality connected to the restricted isometry property (RIP) and the coherence of the sensing matrix. The methods presented are general, simple to use, and can be easily extended to other imaging systems. Simulations are presented in which the peak signal-to-noise ratios (PSNR) of the images reconstructed with optimized coded apertures exhibit significant gains over those attained with random coded apertures. Additionally, results using real X-ray tomography projections are presented.
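The coherence criterion mentioned above can be illustrated with a short sketch that computes the mutual coherence of an effective sensing matrix; the random binary code and DCT sparsifying basis below are illustrative assumptions, not the paper's optimized apertures or its PSF-based design.

```python
# Mutual coherence of an effective sensing matrix (code pattern times sparsifying
# basis). Lower coherence generally favors CS recovery; the paper optimizes the
# coded aperture, whereas a random binary code is used here for illustration.
import numpy as np
from scipy.fft import dct

def mutual_coherence(A: np.ndarray) -> float:
    """Largest absolute normalized inner product between distinct columns."""
    A = A / np.linalg.norm(A, axis=0, keepdims=True)
    G = np.abs(A.T @ A)
    np.fill_diagonal(G, 0.0)
    return float(G.max())

n = 128                                        # signal length (one detector row)
Psi = dct(np.eye(n), norm="ortho", axis=0)     # orthonormal DCT sparsifying basis

rng = np.random.default_rng(1)
code = rng.integers(0, 2, size=n)              # random binary coded-aperture pattern
Phi = np.eye(n)[code.astype(bool)]             # keep only the "open" detector rows

print("coherence of random code:", mutual_coherence(Phi @ Psi))
```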
  3. This paper proposes a representational model for image pairs such as consecutive video frames that are related by local pixel displacements, in the hope that the model may shed light on motion perception in primary visual cortex (V1). The model couples the following two components: (1) the vector representations of local contents of images and (2) the matrix representations of local pixel displacements caused by the relative motions between the agent and the objects in the 3D scene. When the image frame undergoes changes due to local pixel displacements, the vectors are multiplied by the matrices that represent the local displacements. Thus the vector representation is equivariant as it varies according to the local displacements. Our experiments show that our model can learn Gabor-like filter pairs of quadrature phases. The profiles of the learned filters match those of simple cells in macaque V1. Moreover, we demonstrate that the model can learn to infer local motions in either a supervised or unsupervised manner. With such a simple model, we achieve competitive results on optical flow estimation.
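A toy illustration of the vector-matrix coupling described above is sketched below: local image content is encoded as a vector, and a pixel displacement acts on it by matrix multiplication. Representing displacements with block-diagonal 2x2 rotations is a simplifying assumption made here for illustration, not the paper's learned model, but it makes the equivariance property explicit.

```python
# Toy equivariant representation: a displacement d acts on the content vector v
# through a matrix M(d), and composing two displacements matches a single
# displacement of their sum. The rotation-block form of M(d) is assumed here.
import numpy as np

def displacement_matrix(d: float, freqs: np.ndarray) -> np.ndarray:
    """Block-diagonal matrix of 2x2 rotations; block k rotates by freqs[k] * d."""
    k = len(freqs)
    blocks = [np.array([[np.cos(w * d), -np.sin(w * d)],
                        [np.sin(w * d),  np.cos(w * d)]]) for w in freqs]
    return np.block([[blocks[i] if i == j else np.zeros((2, 2))
                      for j in range(k)] for i in range(k)])

freqs = np.array([0.5, 1.0, 2.0])                              # one frequency per 2-D sub-vector
v = np.random.default_rng(2).standard_normal(2 * len(freqs))   # content vector of a patch

d1, d2 = 0.3, 0.7
lhs = displacement_matrix(d2, freqs) @ (displacement_matrix(d1, freqs) @ v)
rhs = displacement_matrix(d1 + d2, freqs) @ v
print("equivariance check:", np.allclose(lhs, rhs))            # True
```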
  4. This paper is concerned with the estimation of time-varying networks for high-dimensional nonstationary time series. Two types of dynamic behaviors are considered: structural breaks (i.e., abrupt change points) and smooth changes. To simultaneously handle these two types of time-varying features, a two-step approach is proposed: multiple change point locations are first identified by comparing differences between localized averages of sample covariance matrices, and then graph supports are recovered with a kernelized time-varying constrained L1-minimization for inverse matrix estimation (CLIME) estimator on each segment. We derive the rates of convergence for estimating the change points and precision matrices under mild moment and dependence conditions. In particular, we show that this two-step approach is consistent in estimating the change points and the piecewise smooth precision matrix function under a certain high-dimensional scaling limit. The method is applied to the analysis of the network structure of the S&P 500 index between 2003 and 2008.
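The first of the two steps described above can be illustrated with a minimal sketch that scans for a covariance break by comparing localized sample covariance matrices on either side of each candidate time; the synthetic data, window width, and detection rule are illustrative assumptions, and the kernelized CLIME step on each segment is omitted.

```python
# Step 1 sketch: locate a covariance change point by comparing local sample
# covariance matrices to the left and right of each candidate time. The data,
# window width b, and Frobenius-norm statistic are illustrative choices only.
import numpy as np

rng = np.random.default_rng(3)
T, p, b = 400, 10, 50                          # series length, dimension, window width

# Piecewise-stationary series with one covariance break at t = 200.
A = 0.3 * rng.standard_normal((p, p))
X = np.concatenate([rng.standard_normal((200, p)),
                    rng.standard_normal((200, p)) @ (np.eye(p) + A)])

def local_cov(X: np.ndarray, start: int, width: int) -> np.ndarray:
    seg = X[start:start + width]
    seg = seg - seg.mean(axis=0)
    return seg.T @ seg / width

# CUSUM-like statistic: Frobenius norm of the left-vs-right covariance difference.
stat = np.array([np.linalg.norm(local_cov(X, t - b, b) - local_cov(X, t, b))
                 for t in range(b, T - b)])
t_hat = b + int(np.argmax(stat))
print("estimated change point:", t_hat)        # expected to be near t = 200
```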