Title: Operation and Productivity Monitoring from Sound Signal of Legacy Pipe Bending Machine via Convolutional Neural Network (CNN)
Abstract This study introduces a non-invasive approach for real-time monitoring of the operation and productivity of a legacy pipe bending machine, based on a lightweight convolutional neural network (CNN) model that takes the machine's internal sound as input. Various sensors were deployed to determine the optimal sensor type and placement, and labels for training and testing the CNN model were generated by carefully collecting sound data in conjunction with webcam videos. The CNN model, optimized through hyperparameter tuning via grid search and using Log-Mel spectrogram feature extraction, demonstrated notable prediction accuracy in testing. When applied in a real-world manufacturing scenario, however, the model produced a significant number of errors in predicting productivity. To address this challenge and improve predictive accuracy, a buffer algorithm operating on the CNN model's inferences was proposed. The algorithm queues inferences from continuous sound monitoring to secure robust predictions, refines the interpretation of the CNN outputs, and improves prediction outcomes in actual deployments, where accurate productivity information is crucial. The lightweight CNN model and buffer algorithm were successfully deployed on an edge computer, enabling real-time remote monitoring.
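The buffer algorithm described in the abstract can be illustrated with a minimal sketch, assuming a fixed-length queue of per-window CNN labels smoothed by majority vote (the class names, queue length, and voting rule are illustrative assumptions, not details from the paper):

```python
from collections import Counter, deque


class InferenceBuffer:
    """Hypothetical sketch of the buffer idea: queue the most recent
    per-window CNN inferences and return the majority label, so that
    isolated misclassifications of single sound windows are suppressed."""

    def __init__(self, maxlen=5):
        self.window = deque(maxlen=maxlen)  # fixed-length queue

    def push(self, label):
        """Add the newest CNN inference and return the smoothed label."""
        self.window.append(label)
        # Majority vote over the buffered inferences.
        return Counter(self.window).most_common(1)[0][0]


buf = InferenceBuffer(maxlen=5)
stream = ["idle", "idle", "bend", "idle", "idle", "bend", "bend", "bend"]
smoothed = [buf.push(s) for s in stream]
# The single early "bend" window is filtered out; the later sustained
# run of "bend" windows flips the smoothed output to "bend".
```

A real deployment would feed this from the model's streaming inference loop; the trade-off is a short detection delay (here, up to three windows) in exchange for robustness.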
Award ID(s):
2134667
PAR ID:
10500682
Author(s) / Creator(s):
; ; ;
Publisher / Repository:
Springer Science + Business Media
Date Published:
Journal Name:
International Journal of Precision Engineering and Manufacturing
Volume:
25
Issue:
7
ISSN:
2234-7593
Format(s):
Medium: X
Size(s):
p. 1437-1456
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract In situ digital inline holography is a technique which can be used to acquire high-resolution imagery of plankton and examine their spatial and temporal distributions within the water column in a nonintrusive manner. However, effective expert identification of an organism from digital holographic imagery requires a computationally expensive numerical reconstruction algorithm, and this lengthy process inhibits real-time monitoring of plankton distributions. Deep learning methods, such as convolutional neural networks, applied to the interference patterns of different organisms in minimally processed holograms can eliminate the need for reconstruction and enable real-time computation. In this article, we integrate deep learning methods with digital inline holography to create a rapid and accurate plankton classification network for 10 classes of organisms commonly seen in our data sets, and we describe the procedure from preprocessing to classification. Our network achieves 93.8% accuracy on a manually classified testing data set. After applying a probability filter to eliminate false classifications, the average precision and recall are 96.8% and 95.0%, respectively. Furthermore, the network was applied to 7500 in situ holograms collected at East Sound in Washington during a vertical profile to characterize the depth distribution of the local diatoms; the results agree with simultaneously recorded, independent chlorophyll concentration depth profiles. This lightweight network demonstrates real-time, high-accuracy plankton classification and has the potential to be deployed on imaging instruments for long-term in situ plankton monitoring.
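The probability filter mentioned above can be sketched as a simple post-hoc confidence threshold on the network's class probabilities (the threshold value and class labels are assumptions for illustration, not the paper's):

```python
def probability_filter(probs, labels, threshold=0.9):
    """Keep a classification only when the top class probability is
    confident enough; otherwise discard it as a likely false
    classification. `threshold` is an assumed value."""
    best = max(range(len(probs)), key=probs.__getitem__)
    if probs[best] >= threshold:
        return labels[best]
    return None  # ambiguous prediction, dropped from the counts


labels = ["diatom_chain", "copepod", "noise"]
kept = probability_filter([0.96, 0.03, 0.01], labels)     # confident
dropped = probability_filter([0.55, 0.40, 0.05], labels)  # ambiguous
```

Discarding low-confidence predictions trades a small loss of recall on borderline holograms for the higher precision reported in the abstract.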
  2. Abstract Monitoring the health condition and predicting the performance of lithium-ion batteries are crucial to the reliability and safety of electrical systems such as electric vehicles. However, estimating the discharge capacity and end-of-discharge (EOD) of a battery in real time remains a challenge, and few works have examined the relationship between a battery's capacity degradation and its EOD. We introduce a new data-driven method that combines convolutional neural network (CNN) and bidirectional long short-term memory (BiLSTM) models to predict the discharge capacity and the EOD using online condition monitoring data. The CNN model extracts long-term correlations among voltage, current, and temperature measurements and then estimates the discharge capacity. The BiLSTM model extracts short-term dependencies in the condition monitoring data and predicts the EOD for each discharge cycle, using the capacity predicted by the CNN as an additional input. By considering the discharge capacity, the BiLSTM model can use the long-term health condition of a battery to improve the prediction accuracy of its short-term performance. We demonstrate that the proposed method achieves online discharge capacity estimation and EOD prediction efficiently and accurately.
  3. In military operations, real-time monitoring of soldiers' health is essential for ensuring mission success and safeguarding personnel, yet such systems face challenges related to accuracy, security, and resource efficiency. This research addresses the critical need for secure, real-time monitoring of soldier vitals in the field, where operational security and performance are paramount. The paper focuses on implementing a machine-learning-based system capable of predicting the health states of soldiers from vitals such as heart rate (HR), respiratory rate (RESP), pulse, and oxygen saturation (SpO2). A comprehensive pipeline was developed, including data preprocessing, noise injection, and model evaluation, to identify the best-performing machine learning algorithm. The system was tested through simulations to ensure real-time inference on real-life data, with reliable and accurate predictions demonstrated in dynamic environments. The gradient boosting model was selected for its high accuracy, robustness to noise, and ability to handle complex feature interactions efficiently. Additionally, a lightweight cryptographic security system with a 16-byte key was integrated to protect sensitive health and location data during transmission. The results validate the feasibility of deploying such a system in resource-constrained field conditions while maintaining data confidentiality and operational security.
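The noise-injection step of the evaluation pipeline can be sketched as follows. This is a minimal illustration under stated assumptions: Gaussian noise with an assumed sigma, and a trivial rule-based stand-in classifier, NOT the paper's gradient boosting model, used only to make the robustness check concrete:

```python
import random


def add_noise(vitals, sigma=2.0, rng=random.Random(0)):
    """Perturb each vital sign with Gaussian noise to test robustness.
    The noise model and sigma are assumptions for illustration."""
    return [v + rng.gauss(0.0, sigma) for v in vitals]


def classify(hr, resp, spo2):
    """Stand-in threshold classifier (hypothetical thresholds), in place
    of the trained gradient boosting model."""
    return "alert" if hr > 120 or resp > 25 or spo2 < 90 else "normal"


clean = (80.0, 16.0, 97.0)           # HR, RESP, SpO2 for a healthy state
noisy = add_noise(list(clean))       # perturbed copy for the robustness test
```

In the actual pipeline, accuracy would be compared on clean versus noise-injected features to select the model that degrades least.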
  4. Monitoring daily activities is essential for home service robots that care for older adults living alone. In this article, we propose a sound-based human activity monitoring (SoHAM) framework that recognizes sound events in a home environment. First, the method of context-aware sound event recognition (CoSER) is developed, which uses contextual information to disambiguate sound events. The locational context of sound events is estimated by fusing data from distributed passive infrared (PIR) sensors deployed in the home, and a two-level dynamic Bayesian network (DBN) models the intratemporal and intertemporal constraints between the context and the sound events. Second, dynamic sliding time window-based human action recognition (DTW-HaR) is developed to estimate active sound event segments with their labels and durations, and then infer actions and their durations. Finally, a conditional random field (CRF) model is proposed to predict human activities based on the recognized action, location, and time. We conducted experiments in our robot-integrated smart home (RiSH) testbed to evaluate the proposed framework. The results show the effectiveness and accuracy of CoSER, action recognition, and human activity monitoring.
  5. Objective: To identify lifting actions and count the number of lifts performed in videos, based on robust class prediction and a streamlined process, for reliable real-time monitoring of lifting tasks. Background: Traditional methods for recognizing lifting actions often rely on deep learning classifiers applied to human motion data collected from wearable sensors. Despite their high performance, these methods can be difficult to implement on systems with limited hardware resources. Method: The proposed method follows a five-stage process: (1) BlazePose, a real-time pose estimation model, detects key joints of the human body. (2) These joints are preprocessed by smoothing, centering, and scaling techniques. (3) Kinematic features are extracted from the preprocessed joints. (4) Video frames are classified as lifting or nonlifting using rank-altered kinematic feature pairs. (5) A lift counting algorithm counts the number of lifts based on the class predictions. Results: Nine rank-altered kinematic feature pairs are identified as key pairs. These pairs were used to construct an ensemble classifier, which achieved 0.89 or above in classification metrics, including accuracy, precision, recall, and F1 score. This classifier showed an accuracy of 0.90 in lift counting and a latency of 0.06 ms, which is at least 12.5 times faster than baseline classifiers. Conclusion: This study demonstrates that computer vision-based kinematic features can be adopted to effectively and efficiently recognize lifting actions. Application: The proposed method could be deployed on various platforms, including mobile devices and embedded systems, to monitor lifting tasks in real time for the proactive prevention of work-related low-back injuries.
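Stage (5), counting lifts from per-frame class predictions, can be sketched as counting debounced runs of "lifting" frames (the minimum run length is an assumed debounce parameter, not a detail from the paper):

```python
def count_lifts(frame_classes, min_frames=3):
    """Count one lift per run of consecutive 'lifting' frames (truthy
    values) lasting at least `min_frames`, so single-frame classifier
    flickers are not counted as lifts. `min_frames` is an assumption."""
    lifts = 0
    run = 0
    for is_lifting in frame_classes:
        if is_lifting:
            run += 1
            if run == min_frames:  # run just became long enough: count it once
                lifts += 1
        else:
            run = 0  # run broken by a nonlifting frame
    return lifts


# 1 = frame classified as lifting, 0 = nonlifting; the isolated 1 is ignored.
preds = [0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0]
n = count_lifts(preds)
```

This single pass over the predictions keeps the counting step O(n) with constant memory, consistent with the low-latency goal described above.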