Multi-Task Learning for Intention and Trajectory Prediction in Human-Robot Collaborative Disassembly Tasks
Abstract Human-robot collaboration (HRC) has become an integral element of many industries, including manufacturing. A fundamental requirement for safe HRC is to understand and predict human intentions and trajectories, especially when humans and robots operate in close proximity. However, predicting both human intention and trajectory simultaneously remains a research gap. In this paper, we develop a multi-task learning (MTL) framework for HRC that processes motion data from both human and robot trajectories. The first task predicts human trajectories by reconstructing the motion sequences. The second task employs supervised learning, specifically a support vector machine (SVM), to predict human intention from the latent representation. In addition, an unsupervised method, the hidden Markov model (HMM), offers a different approach to decoding the latent features for intention prediction. The proposed framework uses MTL to understand human behavior in complex manufacturing environments. The novelty of the work lies in the use of a latent representation to capture temporal dynamics in human motion sequences and in a comparative analysis of several encoder architectures. We validate the framework through a case study of an HRC desktop disassembly task. The findings confirm the system's ability to accurately predict both human intentions and trajectories.
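As a concrete illustration of the pipeline described in the abstract, here is a minimal, hypothetical sketch in PyTorch and scikit-learn: a bidirectional LSTM encoder-decoder (the encoder family named in the extended version listed below) reconstructs the motion sequence, and an SVM classifies intention from the pooled latent representation. All layer sizes, the 9-feature input layout, the toy labels, and the mean-pooling step are illustrative assumptions, not the paper's actual configuration.

```python
# Hypothetical sketch of the multi-task setup described in the abstract:
# a bi-LSTM encoder-decoder reconstructs human motion (task 1), and the
# latent representation feeds an SVM intention classifier (task 2).
import torch
import torch.nn as nn
from sklearn.svm import SVC

class TrajectoryEncoderDecoder(nn.Module):
    def __init__(self, input_dim=9, hidden_dim=64, latent_dim=32):
        super().__init__()
        self.encoder = nn.LSTM(input_dim, hidden_dim,
                               bidirectional=True, batch_first=True)
        self.to_latent = nn.Linear(2 * hidden_dim, latent_dim)
        self.decoder = nn.LSTM(latent_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, input_dim)

    def forward(self, x):
        # x: (batch, time, input_dim) concatenated human + robot motion
        enc_out, _ = self.encoder(x)
        z = self.to_latent(enc_out)        # per-step latent features
        dec_out, _ = self.decoder(z)
        recon = self.out(dec_out)          # task 1: trajectory reconstruction
        return recon, z

model = TrajectoryEncoderDecoder()
x = torch.randn(8, 50, 9)                  # toy batch: 8 sequences, 50 steps
recon, z = model(x)
recon_loss = nn.functional.mse_loss(recon, x)

# Task 2: SVM intention classifier on pooled latent features (illustrative).
features = z.mean(dim=1).detach().numpy()  # (batch, latent_dim)
labels = [0, 1, 0, 1, 0, 1, 0, 1]          # toy intention labels
svm = SVC(kernel="rbf").fit(features, labels)
print(svm.predict(features[:2]))
```

In a multi-task setup of this kind, the reconstruction loss shapes the latent space that the intention classifier consumes, which is the coupling the abstract attributes to MTL.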
- Award ID(s): 2026533
- PAR ID: 10566249
- Publisher / Repository: American Society of Mechanical Engineers
- Date Published:
- ISBN: 978-0-7918-8834-6
- Format(s): Medium: X
- Location: Washington, DC, USA
- Sponsoring Org: National Science Foundation
More Like this
- Abstract Human–robot collaboration (HRC) has become an integral element of many manufacturing and service industries. A fundamental requirement for safe HRC is understanding and predicting human trajectories and intentions, especially when humans and robots operate nearby. Although existing research emphasizes predicting human motions or intentions, a key challenge is predicting both simultaneously. This paper addresses this gap with a multi-task learning framework built on a bidirectional long short-term memory (bi-LSTM) encoder–decoder architecture that takes motion data from both human and robot trajectories as input and performs two tasks simultaneously: human trajectory prediction and human intention prediction. The first task predicts human trajectories by reconstructing the motion sequences, while the second task tests two approaches to intention prediction: a supervised support vector machine that predicts human intention from the latent representation, and an unsupervised hidden Markov model that decodes the latent features. Four encoder designs are evaluated for feature extraction: interaction-attention, interaction-pooling, interaction-seq2seq, and seq2seq (one attention-style variant is sketched after this list). The framework is validated through a case study of a desktop disassembly task with robots operating at different speeds. The results include evaluations of the different encoder designs, an analysis of the impact of incorporating robot motion into the encoder, and detailed visualizations. The findings show that the proposed framework can accurately predict human trajectories and intentions.
- Pai Zheng (Ed.) Abstract A significant challenge in human–robot collaboration (HRC) is coordinating robot and human motions. Discoordination can lead to production delays and human discomfort. Prior works seek coordination by planning robot paths that treat humans or their anticipated occupancy as static obstacles, making the plans nearsighted and prone to entrapment by human motion. This work presents the spatio-temporal avoidance of predictions-prediction and planning framework (STAP-PPF) to improve robot–human coordination in HRC. STAP-PPF predicts multi-step human motion sequences based on the locations of objects the human manipulates. It then proactively determines time-optimal robot paths that account for the predicted human motion and for the robot speed restrictions anticipated under the ISO/TS 15066 speed and separation monitoring (SSM) mode (a simplified SSM speed-scaling sketch appears after this list). While executing robot paths, STAP-PPF continuously updates its human motion predictions and, in real time, warps the robot's path to reflect the updated predictions and SSM effects, mitigating delays and human discomfort. Results show that STAP-PPF generates robot trajectories of shorter duration, adapts better to real-time human motion deviations, and maintains greater robot/human separation throughout tasks requiring close human–robot interaction. Tests with an assembly sequence demonstrate STAP-PPF's ability to predict multi-step human tasks and plan robot motions for the sequence. STAP-PPF also estimates robot trajectory durations most accurately, within 30% of actual, which can be used to adapt robot sequencing to minimize disruption.
- Human-Robot Collaboration (HRC) aims to create environments where robots can understand workspace dynamics and actively assist humans in operations, with human intention recognition being fundamental to efficient and safe task fulfillment. Language-based control and communication are a natural and convenient way to convey human intentions. However, traditional language models require instructions to be articulated in a rigid, predefined syntax, which can be unnatural, inefficient, and prone to errors. This paper investigates the reasoning abilities that have emerged from recent advances in Large Language Models (LLMs) to overcome these limitations, allowing free-form human instructions to enhance human-robot communication. For this purpose, a generic GPT-3.5 model is fine-tuned to interpret and translate varied human instructions into essential attributes, such as task relevancy and the tools and/or parts required for the task (a hypothetical parsing sketch appears after this list). These attributes are then fused with the perceived ongoing robot action to generate a sequence of relevant actions. The developed technique is evaluated in a case study where robots initially misinterpret human actions and pick up the wrong tools and parts for assembly. It is shown that the fine-tuned LLM can effectively identify corrective actions across a diverse range of instructional human inputs, thereby enhancing the robustness of human-robot collaborative assembly for smart manufacturing.
- Activity recognition is a crucial aspect of smart manufacturing and human-robot collaboration, as robots play a vital role in improving efficiency and safety by accurately recognizing human intentions and proactively assisting with tasks. Current human intention recognition applications consider only the accuracy of recognition and ignore the importance of predicting intent in advance. Given human reaching movements, we want to equip the robot with the ability to predict human intent not only precisely but also at an early stage. In this paper, we first propose a framework that applies Transformer-based and LSTM-based models to learn motion intentions. Second, based on observed distances of human joints along the motion trajectory, we explore how a hidden Markov model can find intent state transitions, i.e., from intent uncertainty to intent certainty (a minimal state-transition sketch follows after this list). Third, two data types are generated, one with the full sequences and the other truncated at the state transitions; both are evaluated on the models to assess the robustness of intention prediction. We conducted experiments in a manufacturing workspace where the experimenter reaches multiple scattered targets; the scenario was designed so that intents differ while the motions differ only slightly. The proposed models were evaluated on the experimental data, with performance comparisons made between models and between intents. Finally, early prediction was validated to perform better than using full-length data.
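For the encoder comparison in the first related item above, the following is one plausible reading of an attention-style design such as "interaction-attention": the human and robot streams are encoded separately and the human stream attends over the robot stream. The dimensions, the residual fusion, and the head count are assumptions for illustration, not the paper's actual design.

```python
# One plausible reading of an "interaction-attention" encoder variant:
# separate LSTMs per agent, then cross-attention from human to robot.
import torch
import torch.nn as nn

class InteractionAttentionEncoder(nn.Module):
    def __init__(self, human_dim=9, robot_dim=6, hidden_dim=64):
        super().__init__()
        self.human_lstm = nn.LSTM(human_dim, hidden_dim, batch_first=True)
        self.robot_lstm = nn.LSTM(robot_dim, hidden_dim, batch_first=True)
        self.attn = nn.MultiheadAttention(hidden_dim, num_heads=4,
                                          batch_first=True)

    def forward(self, human, robot):
        h, _ = self.human_lstm(human)                  # (batch, T, hidden)
        r, _ = self.robot_lstm(robot)                  # (batch, T, hidden)
        fused, _ = self.attn(query=h, key=r, value=r)  # human attends to robot
        return fused + h                               # residual interaction features

enc = InteractionAttentionEncoder()
out = enc(torch.randn(4, 50, 9), torch.randn(4, 50, 6))
print(out.shape)                                       # torch.Size([4, 50, 64])
```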
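For the SSM-constrained planning in the second related item, here is a simplified, constant-velocity reading of the ISO/TS 15066 speed-and-separation-monitoring criterion: the robot slows or stops whenever the current human-robot distance falls below a protective separation distance. The full standard integrates velocities over the stopping interval and adds sensor and position uncertainty terms; every constant below is a placeholder, not a value from the paper.

```python
# Simplified SSM criterion: stop inside the protective separation distance,
# scale speed back when clearance is small. All constants are placeholders.
def protective_separation(v_human, v_robot, t_react, t_stop, s_stop, margin):
    """Minimum allowed human-robot separation (meters), constant-velocity case."""
    human_travel = v_human * (t_react + t_stop)  # human closes distance
    robot_travel = v_robot * t_react             # robot moves before braking
    return human_travel + robot_travel + s_stop + margin

def ssm_speed_scale(distance, v_human, v_robot, t_react=0.1, t_stop=0.3,
                    s_stop=0.2, margin=0.1):
    """Scale factor in [0, 1] applied to the robot's commanded speed."""
    s_p = protective_separation(v_human, v_robot, t_react, t_stop,
                                s_stop, margin)
    if distance <= s_p:
        return 0.0                               # stop: human too close
    # back off linearly within one extra protective distance of clearance
    return min(1.0, (distance - s_p) / s_p)

print(ssm_speed_scale(distance=1.2, v_human=1.6, v_robot=0.5))
```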
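For the instruction-parsing step in the third related item, a hypothetical sketch using the OpenAI Python client: a fine-tuned chat model maps a free-form worker instruction to structured attributes (task relevancy, tools, parts). The model ID, the prompt, and the attribute schema are invented placeholders, not the paper's actual fine-tune.

```python
# Hypothetical instruction-to-attributes parsing with a fine-tuned chat model.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def parse_instruction(instruction: str) -> dict:
    response = client.chat.completions.create(
        model="ft:gpt-3.5-turbo:example-org:hrc-intent:placeholder",  # hypothetical ID
        messages=[
            {"role": "system",
             "content": "Extract task_relevant (bool), tools (list), and "
                        "parts (list) from the worker's instruction as JSON."},
            {"role": "user", "content": instruction},
        ],
    )
    return json.loads(response.choices[0].message.content)

attrs = parse_instruction("No, that's the wrong bit - grab the Phillips "
                          "screwdriver and the top cover screws instead.")
print(attrs.get("tools"), attrs.get("parts"))
```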
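For the state-transition analysis in the last related item, a minimal sketch with hmmlearn: a two-state Gaussian HMM is fit to a synthetic joint-to-target distance signal, and the first decoded state switch is read as the uncertainty-to-certainty transition. The synthetic signal and the two-state assumption are illustrative, not the paper's data.

```python
# Fit a 2-state Gaussian HMM to a toy distance-to-target signal and locate
# the decoded intent state transition.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)
# Toy reaching motion: a noisy far-from-target phase, then a committed approach.
uncertain = 1.0 - 0.2 * rng.random(40)        # hovering far from target
certain = np.linspace(0.8, 0.05, 60)          # committed approach
distances = np.concatenate([uncertain, certain]).reshape(-1, 1)

hmm = GaussianHMM(n_components=2, covariance_type="diag", n_iter=100)
hmm.fit(distances)
states = hmm.predict(distances)

# First index where the decoded state switches marks the transition point.
transition = int(np.argmax(states != states[0]))
print("state transition at step", transition)
```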