Title: MOSS—Multi-Modal Best Subset Modeling in Smart Manufacturing
Smart manufacturing, which integrates a multi-sensing system with physical manufacturing processes, has been widely adopted in industry to support online, real-time decision making and improve manufacturing quality. A multi-sensing system for a specific manufacturing process can efficiently collect in situ process variables from different sensor modalities to reflect process variations in real time. In practice, however, cost considerations limit how many sensors can be deployed in each manufacturing process. Moreover, it is important that the model offer an interpretable relationship between the sensing modalities and the quality variables. It is therefore necessary to model the quality-process relationship by selecting, from the multi-modal sensing system, the sensor modalities most relevant to the specific quality measurement. In this research, we adopt the concept of best subset variable selection and propose a new model called Multi-mOdal beSt Subset modeling (MOSS). MOSS effectively selects the important sensor modalities and improves accuracy in quality-process modeling via functional norms that characterize the overall effects of individual modalities. The significance of sensor modalities can be used to determine the sensor placement strategy in smart manufacturing, and the selected modalities help interpret the quality-process model by identifying the root causes most correlated with quality variations. The merits of the proposed model are illustrated by both simulations and a real case study of an additive manufacturing (i.e., fused deposition modeling) process.
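
To make the modality-selection idea concrete, below is a minimal sketch of best-subset selection over groups of variables (sensor modalities), scored by BIC. The function names, modality layout, and OLS-plus-BIC scoring are illustrative assumptions; MOSS itself characterizes modality effects via functional norms rather than this brute-force enumeration.

```python
# Illustrative sketch: best-subset selection over sensor *modalities*
# (groups of columns), scored by Gaussian BIC. Not the MOSS estimator.
from itertools import combinations

import numpy as np

def best_modality_subset(X_by_modality, y, max_size=None):
    """Enumerate subsets of modalities, fit OLS on each,
    and return the subset minimizing BIC."""
    m, n = len(X_by_modality), len(y)
    max_size = max_size or m
    best = (np.inf, None)
    for k in range(1, max_size + 1):
        for subset in combinations(range(m), k):
            X = np.column_stack([X_by_modality[i] for i in subset])
            X = np.column_stack([np.ones(n), X])        # intercept
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            rss = np.sum((y - X @ beta) ** 2)
            p = X.shape[1]
            bic = n * np.log(rss / n) + p * np.log(n)   # Gaussian BIC
            if bic < best[0]:
                best = (bic, subset)
    return best  # (score, indices of selected modalities)

# Hypothetical usage: three modalities (e.g., vibration, temperature, current),
# with only the first one actually driving the quality response.
rng = np.random.default_rng(0)
mods = [rng.normal(size=(50, 4)) for _ in range(3)]
y = mods[0] @ rng.normal(size=4) + 0.1 * rng.normal(size=50)
print(best_modality_subset(mods, y))
```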
Authors:
Award ID(s):
1916174
Publication Date:
NSF-PAR ID:
10281315
Journal Name:
Sensors
Volume:
21
Issue:
1
Page Range or eLocation-ID:
243
ISSN:
1424-8220
Sponsoring Org:
National Science Foundation
More Like this
  1. Training and on-site assistance are critical to help workers master required skills, improve worker productivity, and guarantee product quality. Traditional training methods lack the worker-centered considerations that are particularly needed when workers face ever-changing demands. In this study, we propose a worker-centered training and assistance system for intelligent manufacturing, featuring self-awareness and active guidance. Multi-modal sensing techniques are applied to perceive each individual worker, and a deep learning approach is developed to understand the worker's behavior and intention. Moreover, an object detection algorithm is implemented to identify the parts and tools the worker is interacting with. The worker's current state is then inferred and used to quantify and assess worker performance, from which the worker's potential guidance demands are analyzed. Furthermore, on-site guidance with multi-modal augmented reality is provided actively and continuously during the operational process. Two case studies demonstrate the feasibility and great potential of the proposed approach and system for frontline workers in the manufacturing industry.
  2. Introduction: Computed tomography perfusion (CTP) imaging requires injection of an intravenous contrast agent and increased exposure to ionizing radiation. This process can be lengthy, costly, and potentially dangerous to patients, especially in emergency settings. We propose MAGIC, a multitask, generative adversarial network-based deep learning model to synthesize an entire CTP series from only a non-contrasted CT (NCCT) input. Materials and Methods: NCCT and CTP series were retrospectively retrieved from 493 patients at UF Health with IRB approval. The data were deidentified and all images were resized to 256x256 pixels. The collected perfusion data were analyzed using the RapidAI CT Perfusion analysis software (iSchemaView, Inc., CA) to generate each CTP map. For each subject, 10 CTP slices were selected. Each slice was paired with one NCCT slice at the same location and two NCCT slices at a predefined vertical offset, resulting in 4.3K CTP images and 12.9K NCCT images used for training. Incorporating a spatial offset into the NCCT input allows MAGIC to synthesize cerebral perfusive structures more accurately, increasing the quality of the generated images. The studies included a variety of indications, including healthy tissue, mild infarction, and severe infarction. The proposed MAGIC model incorporates a novel multitask architecture, allowing for the simultaneous synthesis of four CTP modalities: mean transit time (MTT), cerebral blood flow (CBF), cerebral blood volume (CBV), and time to peak (TTP). We propose a novel physicians-in-the-loop module in the model's architecture, acting as a tunable layer that allows physicians to manually adjust the amount of anatomic detail present in the synthesized CTP series. Additionally, we propose two novel loss terms: a multi-modal connectivity loss and an extrema loss. The multi-modal connectivity loss leverages the multitask nature of the model to assert that the mathematical relationship between MTT, CBF, and CBV is satisfied. The extrema loss aids in learning regions of elevated and decreased activity in each modality, allowing MAGIC to accurately learn the characteristics of diagnostic regions of interest. Corresponding NCCT and CTP slices were paired along the vertical axis. The model was trained for 100 epochs on an NVIDIA TITAN X GPU. Results and Discussion: The MAGIC model's performance was evaluated on a sample of 40 patients from the UF Health dataset. Across all CTP modalities, MAGIC produced images with high structural agreement between the synthesized and clinical perfusion images (mean SSIM = 0.801, mean UQI = 0.926). MAGIC was able to synthesize CTP images that accurately characterize cerebral circulatory structures and identify regions of infarcted tissue, as shown in Figure 1 of that work. A blind binary evaluation assessed the presence of cerebral infarction in both the synthesized and clinical perfusion images; the synthesized images correctly predicted the presence of cerebral infarction with 87.5% accuracy. Conclusions: We proposed the MAGIC model, whose novel deep learning structures and loss terms enable high-quality synthesis of CTP maps and characterization of circulatory structures solely from NCCT images, potentially eliminating the requirement for intravenous contrast injection and elevated radiation exposure during perfusion imaging. This makes MAGIC a beneficial tool in clinical scenarios, increasing the overall safety, accessibility, and efficiency of cerebral perfusion imaging and facilitating better patient outcomes. Acknowledgements: This work was partially supported by the National Science Foundation, IIS-1908299 III: Small: Modeling Multi-Level Connectivity of Brain Dynamics + REU Supplement, to the University of Florida.
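
The multi-modal connectivity loss described above can be pictured with a short sketch. Assuming the relationship in question is the central volume principle (CBV = CBF × MTT, up to a unit constant), a PyTorch version might look like the following; the tensor names, shapes, and unit constant are assumptions, not MAGIC's actual implementation.

```python
# Hedged sketch of a connectivity-style loss: penalize synthesized
# perfusion maps that violate CBV = unit_scale * CBF * MTT, pixelwise.
import torch

def connectivity_loss(mtt, cbf, cbv, unit_scale=1.0):
    """L1 penalty on deviation from the central volume relationship,
    averaged over all pixels in the batch."""
    return torch.mean(torch.abs(cbv - unit_scale * cbf * mtt))

# Hypothetical usage on a batch of 256x256 synthesized maps:
b = 4
mtt = torch.rand(b, 1, 256, 256)
cbf = torch.rand(b, 1, 256, 256)
cbv = torch.rand(b, 1, 256, 256)
print(connectivity_loss(mtt, cbf, cbv))
```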
  3. In multistage manufacturing systems (MMS), modeling multiple quality indices based on the process sensing variables is important. However, classic modeling techniques predict each quality variable one at a time, failing to consider the correlation within or between stages. We propose a deep multistage multi-task learning framework to jointly predict all output sensing variables in a unified end-to-end learning framework according to the sequential system architecture of the MMS. Our numerical studies and a real case study show that the new model outperforms many benchmark methods while offering great interpretability through the developed variable selection techniques.
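
For intuition, here is a minimal sketch of one way a multistage multi-task network can follow a sequential system architecture: each stage consumes its own process variables plus the hidden state carried over from the previous stage and jointly predicts that stage's outputs. Layer sizes, dimensions, and the hidden-state hand-off are illustrative assumptions, not the authors' architecture.

```python
# Minimal sketch (not the paper's model) of a multistage multi-task net:
# stage t sees its own inputs plus stage t-1's hidden state.
import torch
import torch.nn as nn

class MultistageNet(nn.Module):
    def __init__(self, in_dims, out_dims, hidden=32):
        super().__init__()
        self.cells = nn.ModuleList()
        self.heads = nn.ModuleList()
        prev = 0
        for d_in, d_out in zip(in_dims, out_dims):
            self.cells.append(nn.Sequential(
                nn.Linear(d_in + prev, hidden), nn.ReLU()))
            self.heads.append(nn.Linear(hidden, d_out))
            prev = hidden  # hidden state carried to the next stage

    def forward(self, xs):
        h, outs = None, []
        for cell, head, x in zip(self.cells, self.heads, xs):
            z = x if h is None else torch.cat([x, h], dim=1)
            h = cell(z)
            outs.append(head(h))  # per-stage quality predictions
        return outs

# Hypothetical three-stage system with 5/4/6 sensing variables per stage:
net = MultistageNet(in_dims=[5, 4, 6], out_dims=[2, 2, 3])
xs = [torch.randn(8, d) for d in (5, 4, 6)]
print([o.shape for o in net(xs)])
```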
  4. Additive manufacturing (AM) is a crucial component of the smart manufacturing industry. In this paper, we propose an automated quality grading system for the fused deposition modeling (FDM) process, one of the major AM processes, using a real-time deep convolutional neural network (CNN) model. The CNN model is trained offline on images of internal and surface defects in the layer-by-layer deposition of materials and tested online by studying its performance in detecting and grading failures in the AM process at different extruder speeds and temperatures. The model demonstrates an accuracy of 94% and a specificity of 96%, as well as above 75% on the F-score, sensitivity, and precision, for classifying the quality of the AM process into five grades in real time. The model's high performance could not be achieved using only the printing temperature and speed values usually employed; data captured at much higher values were also required. The proposed online model adds an automated, consistent, and non-contact quality control signal to the AM process. The quality monitoring signal can also be used by the AM machine to stop the AM process, eliminating the need for sophisticated inspection of printed parts for internal defects. The proposed quality control model ensures reliable parts with fewer quality hiccups while improving performance in time and material consumption.
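
As a rough illustration of this kind of grading model, the sketch below shows a small CNN producing logits over five quality grades for layer images. The architecture, input size, and class layout are assumptions; the paper's actual model and hyperparameters are not reproduced here.

```python
# Illustrative sketch of a five-grade quality classifier for layer images.
# Not the paper's architecture; sizes are placeholder assumptions.
import torch
import torch.nn as nn

grading_cnn = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 5),  # logits for five quality grades
)

# Hypothetical usage on a batch of RGB layer images:
imgs = torch.randn(4, 3, 128, 128)
logits = grading_cnn(imgs)
print(logits.argmax(dim=1))  # predicted grade per image
```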
  5. Multi-modal bio-sensing has recently been used as an effective research tool in affective computing, autism, clinical disorders, and virtual reality, among other areas. However, none of the existing bio-sensing systems support multi-modality in a wearable manner outside well-controlled laboratory environments with research-grade measurements. This work attempts to bridge this gap by developing a wearable multi-modal bio-sensing system capable of collecting, synchronizing, recording, and transmitting data from multiple bio-sensors (PPG, EEG, eye-gaze headset, body motion capture, GSR, etc.) while also providing task modulation features, including visual-stimulus tagging. This study describes the development and integration of the various components of the system. We evaluate the developed sensors by comparing their measurements to those obtained by standard research-grade bio-sensors. We first evaluate the different sensor modalities of our headset, namely the earlobe-based PPG module with motion-noise canceling, against ECG during heart-beat calculation. We also compare the steady-state visually evoked potentials (SSVEP) measured by our shielded dry EEG sensors with the potentials obtained by commercially available dry EEG sensors, and we investigate the effect of head movements on the accuracy and precision of our wearable eye-gaze system. Furthermore, we carry out two practical tasks to demonstrate the applications of using multiple sensor modalities to explore previously unanswerable questions in bio-sensing. Specifically, using bio-sensing we show which strategy works best for playing the Where is Waldo? visual-search game, changes in EEG corresponding to true versus false target fixations in this game, and prediction of loss/draw/win states through bio-sensing modalities while learning their limitations in a Rock-Paper-Scissors game.