

Title: Artificial Intelligence in Advanced Manufacturing: Current Status and Future Outlook
Abstract: Today’s manufacturing systems are becoming increasingly complex, dynamic, and connected. Factory operations face challenges from highly nonlinear and stochastic activity due to the countless uncertainties and interdependencies that exist. Recent developments in artificial intelligence (AI), especially machine learning (ML), have shown great potential to transform the manufacturing domain through advanced analytics tools for processing the vast amounts of manufacturing data generated, known as Big Data. The focus of this paper is threefold: (1) review the state-of-the-art applications of AI to representative manufacturing problems, (2) provide a systematic view for analyzing data and process dependencies at multiple levels that AI must comprehend, and (3) identify challenges and opportunities to not only further leverage AI for manufacturing, but also influence the future development of AI to better meet the needs of manufacturing. To satisfy these objectives, the paper adopts the hierarchical organization widely practiced in manufacturing plants in examining the interdependencies from the overall system level to the more detailed granular level of incoming material process streams. In doing so, the paper considers a wide range of topics, from throughput and quality, supervisory control in human–robotic collaboration, and process monitoring, diagnosis, and prognosis, to advances in materials engineering to achieve desired material properties through process modeling and control.
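As a purely illustrative sketch of the kind of ML-based analytics the review surveys for process monitoring and diagnosis (the model choice, the two sensor features, and the synthetic data below are assumptions, not anything prescribed by the paper), an anomaly detector can flag abnormal process cycles from routine sensor readings:

```python
# Hedged sketch: flag anomalous process cycles from multivariate sensor features.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Synthetic per-cycle features: [spindle temperature (degC), vibration RMS (g)].
normal_cycles = rng.normal(loc=[75.0, 1.2], scale=[2.0, 0.05], size=(500, 2))
faulty_cycles = rng.normal(loc=[90.0, 1.8], scale=[3.0, 0.10], size=(10, 2))

# Fit on presumed-normal history, then score a mix of normal and faulty cycles.
detector = IsolationForest(contamination=0.02, random_state=0).fit(normal_cycles)
flags = detector.predict(np.vstack([normal_cycles[:5], faulty_cycles]))  # +1 normal, -1 anomaly
print(flags)
```

In practice such detectors would be trained on historical plant data and embedded within the hierarchical monitoring layers the paper describes.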
Award ID(s):
1830295
NSF-PAR ID:
10189080
Author(s) / Creator(s):
; ; ; ;
Date Published:
Journal Name:
Journal of Manufacturing Science and Engineering
Volume:
142
Issue:
11
ISSN:
1087-1357
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. In September 2019, the fourth and final workshop on the Future of Mechatronics and Robotics Education (FoMRE) was held at Lawrence Technological University in Southfield, MI. This workshop was organized by faculty at several universities with financial support from industry partners and the National Science Foundation. The purpose of the workshops was to create a cohesive effort among mechatronics and robotics courses, minors, and degree programs. Mechatronics and Robotics Engineering (MRE) is an integration of mechanics, controls, electronics, and software, which provides a unique opportunity for engineering students to function on multidisciplinary teams. Due to its multidisciplinary nature, it attracts diverse and innovative students and graduates better-prepared professional engineers. In this fast-growing field, there is a great need to standardize educational material and make MRE education more widely available and easier to adopt. This can only be accomplished if the community comes together to speak with one clear voice about not only the benefits, but also the best ways to teach it. These efforts would also aid in establishing more of these degree programs and integrating minors or majors into existing computer science, mechanical engineering, or electrical engineering departments. The final workshop was attended by approximately 50 practitioners from industry and academia. Participants identified many practical skills required for students to succeed in an MRE curriculum and as practicing engineers after graduation. These skills were then organized into the following categories: professional, independent learning, controller design, numerical simulation and analysis, electronics, software development, and system design. For example, professional skills include technical reports, presentations, and documentation. Independent learning includes reading data sheets, performing internet searches, doing a literature review, and having a maker mindset. Numerical simulation skills include understanding data, presenting data graphically, and solving and simulating in software such as MATLAB, Simulink, and Excel. Controller design involves selecting a controller, tuning a controller, designing to meet specifications, and understanding when the results are good enough. Electronics skills include selecting sensors, interfacing sensors, interfacing actuators, creating printed circuit boards, wiring on a breadboard, soldering, installing drivers, using integrated circuits, and using microcontrollers. Software development of embedded systems includes agile program design, state machines, analyzing and evaluating code results, commenting code, troubleshooting, debugging, and AI and machine learning. Finally, system design includes prototyping, creating CAD models, design for manufacturing, breaking a system down into subsystems, integrating and interfacing subcomponents, having a multidisciplinary perspective, robustness, evaluating tradeoffs, testing, validation and verification, and failure mode and effects analysis. A survey was prepared and sent out to the participants from all four workshops as well as other robotics faculty, researchers, and industry personnel in order to elicit a broader community response. Because one of the biggest challenges in mechatronics and robotics education is the absence of standardized curricula, textbooks, platforms, syllabi, assignments, and learning outcomes, this was a vital part of the process to achieve some level of consensus.
This paper presents an introduction to MRE education, related work on existing programs, the methods and results of the practical skills survey, and conclusions drawn from these results. It aims to create the foundation for standardizing the development of student skills in mechatronics and robotics curricula across institutions, disciplines, majors, and minors. The survey was completed by 94 participants, and the results show a clear consensus that the primary skills students should have upon completion of MRE courses or a program are a broader multidisciplinary systems-level perspective, an ability to problem-solve, and an ability to design a system to meet specifications.
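As one hedged illustration of the controller-design and numerical-simulation skills listed above (the first-order plant model, PID gains, and settling criterion are arbitrary choices for the example, and Python stands in for the MATLAB/Simulink tools named in the survey):

```python
# Minimal sketch: simulate a PID loop around an assumed first-order plant and
# check the result against a simple specification (2% settling band).
import numpy as np

def simulate_pid(kp=2.0, ki=1.0, kd=0.1, tau=1.0, dt=0.01, t_end=10.0, setpoint=1.0):
    """Forward-Euler simulation of a PID controller driving tau*dy/dt = -y + u."""
    n = int(t_end / dt)
    y, integral, prev_error = 0.0, 0.0, setpoint
    history = np.zeros(n)
    for k in range(n):
        error = setpoint - y
        integral += error * dt
        derivative = (error - prev_error) / dt
        u = kp * error + ki * integral + kd * derivative   # control signal
        y += dt * (-y + u) / tau                           # plant update
        prev_error = error
        history[k] = y
    return history

response = simulate_pid()
settled = np.all(np.abs(response[-100:] - 1.0) < 0.02)    # "good enough?" check
print(f"final value = {response[-1]:.3f}, settled within 2%: {settled}")
```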
  2. This study presents an overview and a few case studies to explicate the transformative power of diverse imaging techniques for smart manufacturing, focusing largely on various in-situ and ex-situ imaging methods for monitoring fusion-based metal additive manufacturing (AM) processes such as directed energy deposition (DED), selective laser melting (SLM), and electron beam melting (EBM). In-situ imaging techniques, encompassing high-speed cameras, thermal cameras, and digital cameras, are becoming increasingly affordable and complementary, and are emerging as vital for real-time monitoring, enabling continuous assessment of build quality. For example, high-speed cameras capture dynamic laser-material interaction, swiftly detecting defects, while thermal cameras identify the thermal distribution of the melt pool and potential anomalies. The data gathered from in-situ imaging are then utilized to extract pertinent features that facilitate effective control of process parameters, thereby optimizing the AM processes and minimizing defects. On the other hand, ex-situ imaging techniques play a critical role in comprehensive component analysis. Scanning electron microscopy (SEM), optical microscopy, and 3D profilometry enable detailed characterization of microstructural features, surface roughness, porosity, and dimensional accuracy. By employing a battery of artificial intelligence (AI) algorithms, information from diverse imaging and other multi-modal data sources can be fused to achieve a more comprehensive understanding of a manufacturing process. This integration enables informed decision-making for process optimization and quality assurance, as AI algorithms analyze the combined data to extract relevant insights and patterns. Ultimately, the power of imaging in additive manufacturing lies in its ability to deliver real-time monitoring, precise control, and comprehensive analysis, empowering manufacturers to achieve superior levels of precision, reliability, and productivity in the production of components.
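A minimal sketch of the in-situ feature-extraction step described above, assuming a synthetic thermal frame and an arbitrary temperature threshold (neither is taken from the study): it estimates melt-pool area and centroid, the kind of features that could feed process-parameter control.

```python
# Hedged sketch: derive simple melt-pool features from one thermal-camera frame.
import numpy as np

def melt_pool_features(frame: np.ndarray, temp_threshold: float = 1500.0):
    """Return (area_px, centroid_row, centroid_col) of pixels above the threshold."""
    mask = frame > temp_threshold          # pixels hot enough to be melt pool
    area = int(mask.sum())
    if area == 0:
        return 0, None, None
    rows, cols = np.nonzero(mask)
    return area, float(rows.mean()), float(cols.mean())

# Synthetic 64x64 "thermal frame": ambient background plus a hot Gaussian spot.
yy, xx = np.mgrid[0:64, 0:64]
frame = 300.0 + 1800.0 * np.exp(-((yy - 32) ** 2 + (xx - 40) ** 2) / 40.0)
print(melt_pool_features(frame))
```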

     
  3. Modular construction aims to overcome challenges faced by the traditional construction process, such as the shortage of skilled workers, fast-track project requirements, and the costs associated with on-site productivity losses and recurrent rework. Since manufacturing is done off-site in controlled factory settings, modular construction is associated with increased productivity and better quality control. However, because every construction project is unique and results in distinct workpieces and building elements to be assembled, modular construction factories necessitate better mechanisms to assist workers during the assembly process in order to minimize errors in selecting the pieces to be assembled and idle times while figuring out the next step in an assembly sequence. Machine intelligence provides opportunities for such assistance; however, a challenge is to rapidly generate large datasets with rich contextual data to train such intelligent agents. This work presents a mechanism to generate such datasets in virtual environments and evaluates the performance of AI models trained on virtually generated data in recognizing the next installation step in modular assembly sequences. The performance of the trained MV-CNN models (accuracy of 0.97) shows that virtual environments can potentially be used to generate the required datasets for AI without the costly, time-consuming, and labor-intensive investments needed upfront for capturing real-world data.
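A hedged sketch of a multi-view CNN of the general kind referenced above (the ResNet18 backbone, view count, pooling choice, and number of installation-step classes are assumptions, not the authors' exact architecture), classifying the next installation step from several rendered views of an assembly:

```python
# Hedged sketch: multi-view CNN that pools per-view features and predicts the next step.
import torch
import torch.nn as nn
from torchvision import models

class MultiViewCNN(nn.Module):
    def __init__(self, num_steps: int = 10):
        super().__init__()
        backbone = models.resnet18(weights=None)      # per-view feature extractor
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()                   # keep pooled 512-d features
        self.backbone = backbone
        self.classifier = nn.Linear(feat_dim, num_steps)

    def forward(self, views: torch.Tensor) -> torch.Tensor:
        # views: (batch, n_views, 3, H, W) rendered from the virtual environment
        b, v, c, h, w = views.shape
        feats = self.backbone(views.view(b * v, c, h, w)).view(b, v, -1)
        pooled = feats.max(dim=1).values              # element-wise max view pooling
        return self.classifier(pooled)

# Example: 2 samples, 4 synthetic views each, 10 candidate next-step classes.
logits = MultiViewCNN(num_steps=10)(torch.randn(2, 4, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 10])
```

Max pooling across views is one common way to make the prediction insensitive to which synthetic camera produced each view.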
  4. Obeid, I. (Ed.)
    The Neural Engineering Data Consortium (NEDC) is developing the Temple University Digital Pathology Corpus (TUDP), an open-source database of high-resolution images from scanned pathology samples [1], as part of its National Science Foundation-funded Major Research Instrumentation grant titled “MRI: High Performance Digital Pathology Using Big Data and Machine Learning” [2]. The long-term goal of this project is to release one million images. We have currently scanned over 100,000 images and are in the process of annotating breast tissue data for our first official corpus release, v1.0.0. This release contains 3,505 annotated images of breast tissue, including 74 patients with cancerous diagnoses (out of a total of 296 patients). In this poster, we will present an analysis of this corpus and discuss the challenges we have faced in efficiently producing high-quality annotations of breast tissue. It is well known that state-of-the-art algorithms in machine learning require vast amounts of data. Fields such as speech recognition [3], image recognition [4], and text processing [5] are able to deliver impressive performance with complex deep learning models because they have developed large corpora to support training of extremely high-dimensional models (e.g., billions of parameters). Other fields that do not have access to such data resources must rely on techniques in which existing models can be adapted to new datasets [6]. A preliminary version of this breast corpus release was tested in a pilot study using a baseline machine learning system, ResNet18 [7], that leverages several open-source Python tools. The pilot corpus was divided into three sets: train, development, and evaluation. Portions of these slides were manually annotated [1] using the nine labels in Table 1 [8] to identify five to ten examples of pathological features on each slide. Not every pathological feature is annotated, meaning excluded areas can include foci belonging to these labels that are not used for training. A summary of the number of patches within each label is given in Table 2. To maintain a balanced training set, 1,000 patches of each label were used to train the machine learning model. Throughout all sets, only annotated patches were involved in model development. The performance of this model in identifying all the patches in the evaluation set can be seen in the confusion matrix of classification accuracy in Table 3. The highest-performing labels were background (97% correct identification) and artifact (76% correct identification). A correlation exists between labels with more than 6,000 development patches and accurate performance on the evaluation set. Additionally, these results indicated a need to further refine the annotation of invasive ductal carcinoma (“indc”), inflammation (“infl”), nonneoplastic features (“nneo”), normal (“norm”), and suspicious (“susp”). This pilot experiment motivated changes to the corpus that will be discussed in detail in this poster presentation. To increase the accuracy of the machine learning model, we modified how we addressed underperforming labels. One common source of error arose with how non-background labels were converted into patches. Large areas of background within other labels were isolated within a patch, resulting in connective tissue misrepresenting a non-background label. In response, the annotation overlay margins were revised to exclude benign connective tissue in non-background labels.
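A minimal sketch of the kind of baseline described for the pilot study, i.e., ResNet18 fine-tuned on balanced patches for the nine annotation labels (hyper-parameters, input size, and the dummy batch below are placeholders; pretrained weights and the real patch loader are omitted for brevity):

```python
# Hedged sketch: ResNet18 patch classifier for the nine TUDP breast-tissue labels.
import torch
import torch.nn as nn
from torchvision import models

# Nine patch labels named in the poster: background, artifact, null, norm, infl, nneo, susp, indc, dcis.
NUM_LABELS = 9

model = models.resnet18(weights=None)                   # pretrained weights could be used instead
model.fc = nn.Linear(model.fc.in_features, NUM_LABELS)  # new head for the nine patch labels
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def train_step(patches: torch.Tensor, labels: torch.Tensor) -> float:
    """One optimization step on a (balanced) mini-batch of annotated patches."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(patches), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Dummy balanced batch: one 224x224 RGB patch per label.
print(f"loss = {train_step(torch.randn(NUM_LABELS, 3, 224, 224), torch.arange(NUM_LABELS)):.3f}")
```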
Corresponding patient reports and supporting immunohistochemical stains further guided annotation reviews. The microscopic diagnoses given by the primary pathologist in these reports detail the pathological findings within each tissue site, but not within each specific slide. The microscopic diagnoses informed revisions specifically targeting annotated regions classified as cancerous, ensuring that the labels “indc” and “dcis” were used only in situations where the pathologist’s microscopic diagnosis supported them. Further differentiation of cancerous and precancerous labels, as well as the location of their focus on a slide, could be accomplished with supplemental immunohistochemically stained (IHC) slides. When distinguishing whether a focus is a nonneoplastic feature versus a cancerous growth, pathologists apply antigen-targeting stains to the tissue in question to confirm the diagnosis. For example, a nonneoplastic feature of usual ductal hyperplasia will display diffuse staining for cytokeratin 5 (CK5) and no diffuse staining for estrogen receptor (ER), while a cancerous growth of ductal carcinoma in situ will have negative or focally positive staining for CK5 and diffuse staining for ER [9]. Many tissue samples contain cancerous and non-cancerous features with morphological overlaps that cause variability between annotators. The information that IHC slides provide could play an integral role in machine-model pathology diagnostics. Following the revisions made on all the annotations, a second experiment was run using ResNet18. Compared to the pilot study, an increase in model prediction accuracy was seen for the labels indc, infl, nneo, norm, and null. This increase is correlated with an increase in annotated area and annotation accuracy. Model performance in identifying the suspicious label decreased by 25% due to a 57% decrease in the total annotated area for this label. A summary of the model performance is given in Table 4, which shows the new prediction accuracy and the absolute change in error rate compared to Table 3. The breast tissue subset we are developing includes 3,505 annotated breast pathology slides from 296 patients. The average size of a scanned SVS file is 363 MB. The annotations are stored in an XML format. A CSV version of the annotation file is also available, which provides a flat, or simple, annotation that is easy for machine learning researchers to access and interface to their systems. Each patient is identified by an anonymized medical reference number. Within each patient’s directory, one or more sessions are identified, also anonymized to the first of the month in which the sample was taken. These sessions are broken into groupings of tissue taken on that date (in this case, breast tissue). A deidentified patient report stored as a flat text file is also available. Within these slides there are a total of 16,971 annotated regions with an average of 4.84 annotations per slide. Among those annotations, 8,035 are non-cancerous (normal, background, null, and artifact), 6,222 are carcinogenic signs (inflammation, nonneoplastic, and suspicious), and 2,714 are cancerous labels (ductal carcinoma in situ and invasive ductal carcinoma). The individual patients are split up into three sets: train, development, and evaluation. Of the 74 cancerous patients, 20 were allotted to each of the development and evaluation sets, while the remaining 34 were allotted to the training set.
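As a toy illustration of the CK5/ER staining rule quoted from [9] (a deliberate simplification; real diagnosis weighs far more evidence and is made by a pathologist):

```python
# Toy rule-based encoding of the quoted staining pattern; not a diagnostic tool.
def classify_focus(ck5_diffuse: bool, er_diffuse: bool) -> str:
    """Return a coarse label from two immunohistochemical staining observations."""
    if ck5_diffuse and not er_diffuse:
        return "usual ductal hyperplasia (nonneoplastic)"
    if not ck5_diffuse and er_diffuse:
        return "ductal carcinoma in situ (cancerous)"
    return "indeterminate: defer to pathologist review"

print(classify_focus(ck5_diffuse=True, er_diffuse=False))   # UDH pattern
print(classify_focus(ck5_diffuse=False, er_diffuse=True))   # DCIS pattern
```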
The remaining 222 patients were split up to preserve the overall distribution of labels within the corpus. This was done in the hope of creating control sets for comparable studies. Overall, the development and evaluation sets each have 80 patients, while the training set has 136 patients. In a related component of this project, slides from the Fox Chase Cancer Center (FCCC) Biosample Repository (https://www.foxchase.org/research/facilities/genetic-research-facilities/biosample-repository-facility) are being digitized in addition to slides provided by Temple University Hospital. This data includes 18 different types of tissue, including approximately 38.5% urinary tissue and 16.5% gynecological tissue. These slides and the metadata provided with them are already anonymized and include diagnoses in a spreadsheet with sample and patient ID. We plan to release over 13,000 unannotated slides from the FCCC Corpus simultaneously with v1.0.0 of TUDP. Details of this release will also be discussed in this poster. Few digitally annotated databases of pathology samples like TUDP exist due to the extensive data collection and processing required. The breast corpus subset should be released by November 2021. By December 2021 we should also release the unannotated FCCC data. We are currently annotating urinary tract data as well. We expect to release about 5,600 processed TUH slides in this subset. We have an additional 53,000 unprocessed TUH slides digitized. Corpora of this size will stimulate the development of a new generation of deep learning technology. In clinical settings where resources are limited, an assistive diagnosis model could support pathologists’ workload and even help prioritize suspected cancerous cases.

ACKNOWLEDGMENTS: This material is supported by the National Science Foundation under grant nos. CNS-1726188 and 1925494. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

REFERENCES
[1] N. Shawki et al., “The Temple University Digital Pathology Corpus,” in Signal Processing in Medicine and Biology: Emerging Trends in Research and Applications, 1st ed., I. Obeid, I. Selesnick, and J. Picone, Eds. New York City, New York, USA: Springer, 2020, pp. 67–104. https://www.springer.com/gp/book/9783030368432.
[2] J. Picone, T. Farkas, I. Obeid, and Y. Persidsky, “MRI: High Performance Digital Pathology Using Big Data and Machine Learning.” Major Research Instrumentation (MRI), Division of Computer and Network Systems, Award No. 1726188, January 1, 2018 – December 31, 2021. https://www.isip.piconepress.com/projects/nsf_dpath/.
[3] A. Gulati et al., “Conformer: Convolution-augmented Transformer for Speech Recognition,” in Proceedings of the Annual Conference of the International Speech Communication Association (INTERSPEECH), 2020, pp. 5036–5040. https://doi.org/10.21437/interspeech.2020-3015.
[4] C.-J. Wu et al., “Machine Learning at Facebook: Understanding Inference at the Edge,” in Proceedings of the IEEE International Symposium on High Performance Computer Architecture (HPCA), 2019, pp. 331–344. https://ieeexplore.ieee.org/document/8675201.
[5] I. Caswell and B. Liang, “Recent Advances in Google Translate,” Google AI Blog: The latest from Google Research, 2020. [Online]. Available: https://ai.googleblog.com/2020/06/recent-advances-in-google-translate.html. [Accessed: 01-Aug-2021].
[6] V. Khalkhali, N. Shawki, V. Shah, M. Golmohammadi, I. Obeid, and J. Picone, “Low Latency Real-Time Seizure Detection Using Transfer Deep Learning,” in Proceedings of the IEEE Signal Processing in Medicine and Biology Symposium (SPMB), 2021, pp. 1–7. https://www.isip.piconepress.com/publications/conference_proceedings/2021/ieee_spmb/eeg_transfer_learning/.
[7] J. Picone, T. Farkas, I. Obeid, and Y. Persidsky, “MRI: High Performance Digital Pathology Using Big Data and Machine Learning,” Philadelphia, Pennsylvania, USA, 2020. https://www.isip.piconepress.com/publications/reports/2020/nsf/mri_dpath/.
[8] I. Hunt, S. Husain, J. Simons, I. Obeid, and J. Picone, “Recent Advances in the Temple University Digital Pathology Corpus,” in Proceedings of the IEEE Signal Processing in Medicine and Biology Symposium (SPMB), 2019, pp. 1–4. https://ieeexplore.ieee.org/document/9037859.
[9] A. P. Martinez, C. Cohen, K. Z. Hanley, and X. (Bill) Li, “Estrogen Receptor and Cytokeratin 5 Are Reliable Markers to Separate Usual Ductal Hyperplasia From Atypical Ductal Hyperplasia and Low-Grade Ductal Carcinoma In Situ,” Arch. Pathol. Lab. Med., vol. 140, no. 7, pp. 686–689, Apr. 2016. https://doi.org/10.5858/arpa.2015-0238-OA.
  5. Additive manufacturing (AM) is a process in which a three-dimensional object is built by adding successive layers of material. AM enables novel material compositions and shapes, often without the need for specialized tooling. This technology has the potential to revolutionize how mechanical parts are created, tested, and certified. However, successful real-time AM design requires the integration of complex systems and often necessitates expertise across domains. Simulation-based design approaches, such as those applied in engineering product design and material design, have the potential to improve AM predictive modeling capabilities, particularly when combined with existing knowledge of the underlying mechanics. These predictive models have the potential to reduce the cost of and time for concept-to-final-product development and can be used to supplement experimental tests. The National Academies convened a workshop on October 24–26, 2018 to discuss the frontiers of mechanistic data-driven modeling for AM of metals. Topics of discussion included measuring and modeling process monitoring and control; developing models to represent microstructure evolution, alloy design, and part suitability; modeling phases of process and machine design; and accelerating product and process qualification and certification. These topics then led to the assessment of short-, intermediate-, and long-term challenges in AM. This publication summarizes the presentations and discussions from the workshop.