-
Abstract: Robotic technology can benefit disassembly operations by reducing human operators' workload and assisting them with handling hazardous materials. Safety considerations and prediction of human movement are priorities in close collaboration between humans and robots. Point-by-point forecasting of human hand motion, which forecasts a single point at each time step, does not provide enough information about human movement because of errors between the actual movement and the predicted value. This study instead provides a range of possible hand movements to increase safety. It applies three machine learning techniques, long short-term memory (LSTM), gated recurrent unit (GRU), and Bayesian neural network (BNN), combined with bagging and Monte Carlo dropout (MCD), namely LSTM-bagging, GRU-bagging, and BNN-MCD, to predict the possible movement range. The study uses an inertial measurement unit (IMU) dataset collected from the disassembly of desktop computers by several participants to demonstrate the proposed method.
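A minimal sketch of the Monte Carlo dropout idea described above, not the authors' implementation: dropout is kept active at inference time and the network is sampled repeatedly, so the spread of the samples gives a range of possible hand positions rather than a single point forecast. The model class, layer sizes, and the nine-channel IMU input are illustrative assumptions.

```python
# Minimal sketch (not the authors' implementation): Monte Carlo dropout turns a
# point forecast of the next hand position into a predictive range by keeping
# dropout active at inference and sampling many stochastic forward passes.
import torch
import torch.nn as nn

class HandMotionGRU(nn.Module):  # hypothetical model; sizes are illustrative
    def __init__(self, n_features=9, hidden=64, p_drop=0.2):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.drop = nn.Dropout(p_drop)
        self.head = nn.Linear(hidden, 3)  # predict next x, y, z hand position

    def forward(self, x):
        out, _ = self.gru(x)
        return self.head(self.drop(out[:, -1]))  # use the last time step

def mc_dropout_range(model, window, n_samples=100):
    """Sample the model with dropout active; return mean and a min-max range."""
    model.train()  # keep dropout layers stochastic at inference time
    with torch.no_grad():
        samples = torch.stack([model(window) for _ in range(n_samples)])
    return samples.mean(0), samples.min(0).values, samples.max(0).values

# Usage with a dummy 50-step window of 9 IMU channels (batch of 1):
model = HandMotionGRU()
mean, lo, hi = mc_dropout_range(model, torch.randn(1, 50, 9))
```

The same sampling loop applies to a bagged LSTM or GRU ensemble: each ensemble member contributes one sample instead of one stochastic forward pass.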
-
Abstract: Human–robot collaboration (HRC) has become an integral element of many manufacturing and service industries. A fundamental requirement for safe HRC is understanding and predicting human trajectories and intentions, especially when humans and robots operate nearby. Although existing research emphasizes predicting human motions or intentions, a key challenge is predicting both simultaneously. This paper addresses this gap by developing a multi-task learning framework consisting of a bidirectional long short-term memory (Bi-LSTM)-based encoder–decoder architecture that takes motion data from both human and robot trajectories as inputs and performs two tasks simultaneously: human trajectory prediction and human intention prediction. The first task predicts human trajectories by reconstructing the motion sequences, while the second task tests two approaches for intention prediction: a supervised method, a support vector machine that predicts human intention from the latent representation, and an unsupervised method, a hidden Markov model that decodes the latent features for human intention prediction. Four encoder designs are evaluated for feature extraction: interaction-attention, interaction-pooling, interaction-seq2seq, and seq2seq. The framework is validated through a case study of a desktop disassembly task with robots operating at different speeds. The results include evaluations of the different encoder designs, an analysis of the impact of incorporating robot motion into the encoder, and detailed visualizations. The findings show that the proposed framework can accurately predict human trajectories and intentions.
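A compact sketch of the multi-task layout, under assumed dimensions and a plain seq2seq encoder (the interaction-attention and interaction-pooling variants are not shown): a Bi-LSTM encodes the concatenated human and robot motion, a decoder head predicts the human trajectory, and the shared latent vector is handed to a downstream intent classifier, with scikit-learn's SVC standing in for the supervised branch.

```python
# Minimal sketch (dimensions and names are assumptions): a Bi-LSTM encoder over
# concatenated human and robot motion, a decoder that predicts the human
# trajectory, and a latent vector reused by a downstream intent classifier.
import torch
import torch.nn as nn
from sklearn.svm import SVC

class MultiTaskSeq2Seq(nn.Module):
    def __init__(self, human_dim=21, robot_dim=6, hidden=128, horizon=30):
        super().__init__()
        self.horizon = horizon
        self.encoder = nn.LSTM(human_dim + robot_dim, hidden,
                               batch_first=True, bidirectional=True)
        self.decoder = nn.LSTM(2 * hidden, 2 * hidden, batch_first=True)
        self.traj_head = nn.Linear(2 * hidden, human_dim)

    def forward(self, human_seq, robot_seq):
        x = torch.cat([human_seq, robot_seq], dim=-1)
        _, (h, _) = self.encoder(x)
        latent = torch.cat([h[-2], h[-1]], dim=-1)      # forward + backward states
        dec_in = latent.unsqueeze(1).repeat(1, self.horizon, 1)
        dec_out, _ = self.decoder(dec_in)
        return self.traj_head(dec_out), latent          # task 1 output, shared latent

# Task 2 (intention): train a stand-in SVM on the latent representations.
model = MultiTaskSeq2Seq()
traj_pred, latent = model(torch.randn(8, 50, 21), torch.randn(8, 50, 6))
svm = SVC().fit(latent.detach().numpy(), [0, 1, 0, 1, 2, 2, 0, 1])  # dummy labels
```

The unsupervised branch would replace the SVM with a hidden Markov model fit on the same latent features.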
-
Abstract: Electric vehicles (EVs) are considered an environmentally friendly option compared with conventional vehicles. As the most critical module in EVs, batteries are complex electrochemical components with nonlinear behavior, and on-board battery performance is further affected by complicated operating environments. Real-time prediction of EV battery in-service status is challenging but vital for enabling fault diagnosis and preventing dangerous occurrences. Data-driven models, with their advantages in time-series analysis, can capture the degradation pattern from data on performance indicators and predict battery states. The transformer model captures long-range dependencies efficiently through its multi-head attention mechanism. This paper presents a standard transformer and an encoder-only transformer neural network for predicting EV battery state of health (SOH). From the publicly accessible lithium-ion battery dataset of the NASA Prognostics Center of Excellence, 28 features related to charge and discharge measurements are extracted and screened using Pearson correlation coefficients. The results show that the filtered features improve the model's accuracy and computational efficiency, and the proposed standard transformer performs well in SOH prediction.
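A hedged sketch of the two steps named above, using placeholder data rather than the NASA dataset: Pearson screening keeps features whose correlation with SOH exceeds a threshold (the 0.5 cutoff, feature names, and network sizes are assumptions), and an encoder-only transformer then regresses SOH from a window of past cycles.

```python
# Minimal sketch (feature names, threshold, and sizes are assumptions): screen
# charge/discharge features by Pearson correlation with SOH, then regress SOH
# with an encoder-only transformer over a sliding window of past cycles.
import numpy as np
import pandas as pd
import torch
import torch.nn as nn

# Pearson screening: keep features whose |correlation| with SOH exceeds a threshold.
df = pd.DataFrame(np.random.rand(200, 29),
                  columns=[f"feat_{i}" for i in range(28)] + ["soh"])  # placeholder data
corr = df.corr(method="pearson")["soh"].drop("soh")
selected = corr[corr.abs() > 0.5].index.tolist()
n_features = max(len(selected), 1)   # guard for the random placeholder data

class SOHTransformer(nn.Module):
    def __init__(self, n_features, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        self.proj = nn.Linear(n_features, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, 1)

    def forward(self, x):                      # x: (batch, cycles, n_features)
        z = self.encoder(self.proj(x))
        return self.head(z[:, -1])             # SOH estimate from the last position

model = SOHTransformer(n_features=n_features)
soh_pred = model(torch.randn(4, 10, n_features))
```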
-
Abstract: This paper presents a deep-learning-enhanced adaptive unscented Kalman filter (UKF) for predicting human arm motion in the context of manufacturing. Unlike previous network-based methods that rely solely on captured human motion data, represented here as bone vectors, we incorporate a human arm dynamic model into the motion prediction algorithm and use the UKF to iteratively forecast human arm motions. Specifically, a Lagrangian-mechanics-based physical model correlates arm motions with the associated muscle forces. A recurrent neural network (RNN) is then integrated into the framework to predict future muscle forces, which are mapped back to future arm motions through the dynamic model. Because no measurement data for future human motions are available to update the UKF state, we integrate a second RNN to directly predict future human motions and treat its prediction as surrogate measurement data fed into the UKF. A noteworthy aspect of this study is the quantification of the uncertainties of both the data-driven and physical models in one unified framework. These quantified uncertainties are used to dynamically adapt the measurement and process noises of the UKF over time. This adaptation, driven by the uncertainties of the RNN models, addresses inaccuracies stemming from the data-driven model and mitigates discrepancies between the assumed and true physical models, ultimately enhancing the accuracy and robustness of the predictions. A unique point of our method is that it integrates a dynamic model of the human arm with two RNN models and uses Monte Carlo dropout sampling to quantify the uncertainties inherent in the RNN prediction models, transforming them into the covariances of the UKF's measurement and process noises, respectively. Compared with traditional RNN-based prediction, our method demonstrates improved accuracy and robustness in extensive experimental validations on various types of human motion.
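A small sketch of the uncertainty-to-covariance step only, not the full arm framework: Monte Carlo dropout samples from a stand-in RNN give a surrogate measurement and an empirical covariance, which feed a UKF update as the measurement and its noise. The state dimension, the identity dynamics, and the tiny network are placeholders; the paper's Lagrangian arm model and second RNN are not reproduced here. filterpy provides the UKF.

```python
# Minimal sketch of the uncertainty-to-covariance idea (not the full arm dynamic
# model): Monte Carlo dropout samples from a network give a surrogate measurement
# and an empirical covariance, fed into a UKF update as z and R.
import numpy as np
import torch
import torch.nn as nn
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

DIM = 3  # illustrative state, e.g. three arm joint angles

net = nn.Sequential(nn.Linear(DIM, 32), nn.Dropout(0.2), nn.ReLU(), nn.Linear(32, DIM))

def mc_dropout_measurement(model, x, n_samples=50):
    """Sample with dropout active; return mean prediction and empirical covariance."""
    model.train()                              # keep dropout stochastic
    with torch.no_grad():
        s = torch.stack([model(x) for _ in range(n_samples)]).squeeze(1).numpy()
    return s.mean(axis=0), np.cov(s, rowvar=False) + 1e-6 * np.eye(DIM)

# Placeholder transition/measurement functions; the paper uses a Lagrangian arm model.
fx = lambda x, dt: x            # identity dynamics stand-in
hx = lambda x: x                # direct observation stand-in

ukf = UnscentedKalmanFilter(dim_x=DIM, dim_z=DIM, dt=0.01, hx=hx, fx=fx,
                            points=MerweScaledSigmaPoints(n=DIM, alpha=1e-3,
                                                          beta=2.0, kappa=0.0))

z, R = mc_dropout_measurement(net, torch.zeros(1, DIM))
ukf.predict()
ukf.update(z, R=R)              # network uncertainty sets the measurement noise
```

In the paper's framework the same sampling procedure would also drive the process noise Q, using the muscle-force RNN's uncertainty.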
-
Abstract: Human intention prediction plays a critical role in human–robot collaboration, as it helps robots improve efficiency and safety by accurately anticipating human intentions and proactively assisting with tasks. While current applications often focus on predicting intent after a human action is completed, recognizing human intent in advance has received less attention. This study aims to equip robots with the capability to forecast human intent before an action is completed, i.e., early intent prediction. To achieve this, we first extract features from human motion trajectories by analyzing changes in human joint distances. These features are then used in a hidden Markov model (HMM) to determine the state transition times from uncertain intent to certain intent. Second, we propose two models, a Transformer and a Bi-LSTM, for classifying motion intentions. We then design a human–robot collaboration experiment in which the operator reaches multiple targets while the robot moves continuously along a predetermined path. The data collected in the experiment were divided into two groups: full-length data and partial data before the state transitions detected by the HMM. Finally, the effectiveness of the proposed framework is assessed on two datasets, particularly in scenarios where motion trajectories are similar but the underlying intentions vary. The results indicate that using partial data prior to motion completion yields better accuracy than using full-length data: the Transformer model exhibits a 2% improvement in accuracy, while the Bi-LSTM model shows a 6% increase.
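A rough sketch of the HMM segmentation step, with assumptions throughout: the feature definition (wrist-to-target distance and its rate of change), the two-state setup, and the synthetic reaching trajectory are all illustrative, and hmmlearn's GaussianHMM stands in for the paper's model. The first switch out of the initial state is taken as the uncertain-to-certain transition time that defines the partial-data window.

```python
# Minimal sketch (feature definition and state count are assumptions): frame-wise
# wrist-to-target distance features feed a two-state Gaussian HMM, and the first
# switch away from the initial state marks the intent transition time.
import numpy as np
from hmmlearn.hmm import GaussianHMM

def distance_change_features(joints, target):
    """joints: (T, 3) wrist positions; target: (3,) object position."""
    dist = np.linalg.norm(joints - target, axis=1)
    return np.column_stack([dist, np.gradient(dist)])  # distance and its rate of change

# Placeholder trajectory: a hand approaching a target over 120 frames.
T = 120
joints = np.linspace([0.5, 0.5, 0.5], [0.1, 0.0, 0.2], T) + 0.01 * np.random.randn(T, 3)
feats = distance_change_features(joints, np.array([0.1, 0.0, 0.2]))

hmm = GaussianHMM(n_components=2, covariance_type="full", n_iter=100, random_state=0)
hmm.fit(feats)
states = hmm.predict(feats)

# Transition time: first frame whose state differs from the initial (uncertain) state.
transition_idx = int(np.argmax(states != states[0]))
partial_window = feats[:transition_idx]   # data used for early intent classification
```

The `partial_window` segment is what the Transformer or Bi-LSTM classifier would receive in the partial-data condition.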
-
Abstract: Product disassembly plays a crucial role in the recycling, remanufacturing, and reuse of end-of-use (EoU) products. However, the current manual disassembly process is inefficient due to the complexity and variation of EoU products. While fully automating disassembly is not economically viable given the intricate nature of the task, there is potential in using human–robot collaboration (HRC) to enhance disassembly operations. HRC combines the flexibility and problem-solving abilities of humans with robots' precise repetition and handling of unsafe tasks. Nevertheless, numerous challenges persist concerning the technology, human workers, and remanufacturing work, and comprehensive multidisciplinary research is required to address the critical gaps. These challenges have motivated the authors to provide a detailed discussion of the opportunities and obstacles associated with introducing HRC to disassembly. To this end, the authors review recent progress in HRC disassembly and present the insights gained from this analysis from three distinct perspectives: technology, workers, and work.
-
Abstract: Despite the importance of product repairability, current methods for assessing and grading repairability are limited, which hampers the efforts of designers, remanufacturers, original equipment manufacturers (OEMs), and repair shops. To improve the efficiency of assessing product repairability, this study introduces two artificial intelligence (AI) based approaches. The first is a supervised learning framework that applies object detection to product teardown images to measure repairability; transfer learning is employed with architectures such as ConvNeXt, GoogLeNet, ResNet50, and VGG16 to evaluate repairability scores. The second is an unsupervised learning framework that combines feature extraction and cluster learning to identify product design features and group devices with similar designs; it uses an Oriented FAST and Rotated BRIEF (ORB) feature extractor along with k-means clustering to extract features from teardown images and categorize products with similar designs. Smartphones are used as a case study to demonstrate both assessment approaches. The results highlight the potential of artificial intelligence in developing an automated system for assessing and rating product repairability.
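A hedged sketch of the unsupervised branch: ORB descriptors are computed per teardown image with OpenCV, pooled into a simple fixed-length summary (a crude stand-in for a fuller bag-of-visual-words representation), and the resulting vectors are grouped with k-means. The image paths and cluster count are placeholders.

```python
# Minimal sketch of the unsupervised branch (image paths and cluster counts are
# placeholders): ORB descriptors per teardown image are pooled into a simple
# fixed-length summary, then devices are grouped with k-means.
import cv2
import numpy as np
from sklearn.cluster import KMeans

def orb_summary(image_path, n_keypoints=500):
    """Return a fixed-length summary (mean ORB descriptor) for one image."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if img is None:                       # missing or unreadable file
        return np.zeros(32)
    orb = cv2.ORB_create(nfeatures=n_keypoints)
    _, desc = orb.detectAndCompute(img, None)
    if desc is None:                      # no keypoints found
        return np.zeros(32)
    return desc.mean(axis=0)              # 32-byte ORB descriptors averaged per image

# Hypothetical teardown image set; replace with real file paths.
paths = ["phone_a_teardown.jpg", "phone_b_teardown.jpg", "phone_c_teardown.jpg"]
features = np.vstack([orb_summary(p) for p in paths])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(features)
print(dict(zip(paths, kmeans.labels_)))   # devices grouped by similar design features
```

The supervised branch would instead load a pretrained backbone (e.g., torchvision's ResNet50) and fine-tune it to predict repairability scores from the same images.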
-
Product disassembly is integral to remanufacturing and recovery operations for end-of-use devices. Traditionally, disassembly has been conducted manually, with significant safety risks to human workers. In recent years, robotic disassembly has gained popularity as a way to alleviate human workload and safety concerns. Despite these advancements, robots have limited capabilities for handling all disassembly tasks independently, so it is essential to assess whether a robot can perform a specific disassembly task. This study proposes a disassembly scoring framework that evaluates the robotic feasibility of disassembling components based on five design-related factors: weight, shape, size, accessibility, and positioning. For each factor, a disassembly score is defined to analyze its specific impact on robotic grasping and placement capabilities. The relationship between the five factors and robotic capabilities, such as grasping and placing, is further discussed using the UR5e manipulator as an example. To show the potential for automating the generation of the disassembly metric, the Multi-Axis Vision Transformer (MaxViT) model is used to determine component sizes through image processing of an XPS 8700 desktop. Moreover, the application of the proposed scoring framework is discussed in terms of determining the appropriate work setting for disassembly operations under three main categories: human–robot collaboration (HRC), Semi-HRC, and Worker-Only. A disassembly time metric for calculating disassembly time under HRC is also proposed. The study outcomes determine the proper work settings based on robotic capability.
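An illustrative sketch only, not the paper's metric: it shows how the five design-related factor scores could be combined into a single robotic-feasibility value and mapped to one of the three work settings. The per-factor scores, equal weighting, and cutoff values are all assumptions.

```python
# Illustrative sketch only (scores, thresholds, and weights are assumptions, not the
# paper's metric): combine the five design-related factor scores into a robotic
# feasibility score and map it to a work setting.
from dataclasses import dataclass

@dataclass
class ComponentScores:
    weight: float          # each factor scored in [0, 1], 1 = easiest for the robot
    shape: float
    size: float
    accessibility: float
    positioning: float

def robotic_feasibility(s: ComponentScores) -> float:
    """Equal-weight average of the five factor scores (illustrative weighting)."""
    return (s.weight + s.shape + s.size + s.accessibility + s.positioning) / 5.0

def work_setting(score: float) -> str:
    """Map the feasibility score to a work setting; cutoffs are hypothetical."""
    if score >= 0.7:
        return "HRC"          # robot handles grasping and placing with human oversight
    if score >= 0.4:
        return "Semi-HRC"     # robot assists, human performs the difficult steps
    return "Worker-Only"

ram_module = ComponentScores(weight=0.9, shape=0.8, size=0.7,
                             accessibility=0.6, positioning=0.8)
print(work_setting(robotic_feasibility(ram_module)))   # -> "HRC" under these assumptions
```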
