
Search results: all records where Creators/Authors contains "Liang, Xiao"

Note: Clicking a Digital Object Identifier (DOI) link takes you to an external site maintained by the publisher. Some full-text articles may not yet be available free of charge during the publisher's embargo period.

  1. Abstract

    Human intention prediction plays a critical role in human–robot collaboration, as it helps robots improve efficiency and safety by accurately anticipating human intentions and proactively assisting with tasks. While current applications often focus on predicting intent once a human action is completed, recognizing human intent in advance has received less attention. This study aims to equip robots with the capability to forecast human intent before an action is completed, i.e., early intent prediction. To achieve this objective, we first extract features from human motion trajectories by analyzing changes in human joint distances. These features are then used in a Hidden Markov Model (HMM) to determine the state-transition times from uncertain intent to certain intent. Second, we propose two models, a Transformer and a Bi-LSTM, for classifying motion intentions. We then design a human–robot collaboration experiment in which the operator reaches multiple targets while the robot moves continuously along a predetermined path. The data collected through the experiment were divided into two groups: full-length data and partial data ending at the state transitions detected by the HMM. Finally, the effectiveness of the proposed framework for predicting intentions is assessed on the two datasets, particularly in a scenario where motion trajectories are similar but underlying intentions vary. The results indicate that using partial data collected prior to motion completion yields better accuracy than using full-length data: the Transformer model shows a 2% improvement in accuracy, while the Bi-LSTM model shows a 6% increase.

    Free, publicly-accessible full text available May 1, 2025
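The HMM step described in this abstract, detecting when intent shifts from uncertain to certain along a sequence of joint-distance features, can be sketched with a hand-rolled two-state Viterbi decoder. All parameters below (state means, variance, transition probabilities) are illustrative assumptions, not values from the paper:

```python
import numpy as np

def viterbi_transition(obs, means=(0.0, 1.0), var=0.1, stay=0.9):
    """Decode a 2-state Gaussian HMM (0 = uncertain intent, 1 = certain
    intent) with Viterbi and return the first time step at which the
    most-likely state path switches from 0 to 1, or None if it never does."""
    obs = np.asarray(obs, dtype=float)
    T = len(obs)
    # Log emission scores under per-state Gaussians (constant terms dropped)
    log_emit = -0.5 * (obs[:, None] - np.asarray(means)) ** 2 / var
    log_trans = np.log(np.array([[stay, 1 - stay],
                                 [1 - stay, stay]]))
    log_delta = np.zeros((T, 2))
    back = np.zeros((T, 2), dtype=int)
    log_delta[0] = np.log([0.99, 0.01]) + log_emit[0]  # start "uncertain"
    for t in range(1, T):
        scores = log_delta[t - 1][:, None] + log_trans
        back[t] = scores.argmax(axis=0)
        log_delta[t] = scores.max(axis=0) + log_emit[t]
    # Backtrack the most likely state sequence
    path = np.empty(T, dtype=int)
    path[-1] = log_delta[-1].argmax()
    for t in range(T - 2, -1, -1):
        path[t] = back[t + 1, path[t + 1]]
    switches = np.nonzero(np.diff(path) == 1)[0]
    return int(switches[0]) + 1 if len(switches) else None
```

The returned index is where the partial data would be cut for early prediction; a real implementation would fit the HMM parameters from data rather than fix them.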
  2. Abstract

    Product disassembly plays a crucial role in the recycling, remanufacturing, and reuse of end-of-use (EoU) products. However, the current manual disassembly process is inefficient due to the complexity and variation of EoU products. While fully automating disassembly is not economically viable given the intricate nature of the task, there is potential in using human–robot collaboration (HRC) to enhance disassembly operations. HRC combines the flexibility and problem-solving abilities of humans with robots' precise repetition and handling of unsafe tasks. Nevertheless, numerous challenges persist across technology, human workers, and remanufacturing work, requiring comprehensive multidisciplinary research to address critical gaps. These challenges have motivated the authors to provide a detailed discussion of the opportunities and obstacles associated with introducing HRC to disassembly. To this end, the authors review recent progress in HRC disassembly and present the insights gained from this analysis from three distinct perspectives: technology, workers, and work.

    Free, publicly-accessible full text available February 1, 2025
  3. Free, publicly-accessible full text available January 1, 2025
  4. Free, publicly-accessible full text available August 1, 2024
  5. Activity recognition is a crucial aspect of smart manufacturing and human–robot collaboration, as robots play a vital role in improving efficiency and safety by accurately recognizing human intentions and proactively assisting with tasks. Current human intention recognition applications consider only the accuracy of recognition and ignore the importance of predicting intent in advance. Given human reaching movements, we want to equip the robot with the ability to predict human intent not only precisely but also at an early stage. In this paper, we first propose a framework that applies Transformer-based and LSTM-based models to learn motion intentions. Second, based on the distances of human joints observed along the motion trajectory, we explore how a hidden Markov model can find intent state transitions, i.e., from intent uncertainty to intent certainty. Finally, two data types are generated, one containing the full data and the other only the data before state transitions; both are evaluated on the models to assess the robustness of intention prediction. We conducted experiments in a manufacturing workspace where the experimenter reaches multiple scattered targets; the scenario was designed so that intents differ while motions are only slightly different. The proposed models were then evaluated with the experimental data, and performance comparisons were made between models and between different intents. Finally, early predictions were validated to be better than those using full-length data.
    Free, publicly-accessible full text available August 1, 2024
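The two data types described in this abstract, full-length versus pre-transition partial trajectories, amount to truncating each sequence at its detected transition step and padding the results for batched evaluation. A minimal sketch, assuming 1-D feature sequences and zero-padding (both are assumptions, not details stated in the abstract):

```python
import numpy as np

def truncate_and_pad(trajectories, transition_steps, pad_value=0.0):
    """Cut each trajectory at its HMM-detected transition step, then pad
    the partial sequences to a common length so the partial data set can
    be batched for the Transformer/LSTM classifiers."""
    partial = [np.asarray(t, dtype=float)[:k]
               for t, k in zip(trajectories, transition_steps)]
    max_len = max(len(p) for p in partial)
    batch = np.full((len(partial), max_len), pad_value)
    for i, p in enumerate(partial):
        batch[i, :len(p)] = p  # left-align each truncated sequence
    return batch
```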
  6. Free, publicly-accessible full text available May 31, 2024
  7. Abstract Disassembly is an essential process for the recovery of end-of-life (EOL) electronics at remanufacturing sites. Nevertheless, the process remains labor-intensive due to EOL electronics' high degree of uncertainty and complexity. Robotic technology can assist in improving disassembly efficiency; however, the characteristics of EOL electronics pose difficulties for robot operation, such as removing small components. For such tasks, detecting small objects is critical for robotic disassembly systems. Screws are widely used as fasteners in ordinary electronic products, yet they are small and vary in shape within a scene. To enable robotic systems to disassemble screws, both the location information and the required tools need to be predicted. This paper proposes a computer vision framework for detecting screws and recommending the related tools for disassembly. First, a YOLOv4 algorithm detects screw targets in EOL electronic devices, and a screw-image extraction mechanism is executed based on the position coordinates predicted by YOLOv4. Second, after obtaining the screw images, the EfficientNetV2 algorithm is applied for screw shape classification. In addition to proposing a framework for automatic small-object detection, we explore how to modify the object detection algorithm to improve its performance and discuss the sensitivity of the tool recommendations to the detection predictions. A case study of three different types of screws in EOL electronics is used to evaluate the performance of the proposed framework.
    Free, publicly-accessible full text available March 1, 2024
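The detect, crop, classify, and recommend pipeline in this abstract can be sketched as glue code. Here `detect` and `classify` are stand-in callables for YOLOv4 and EfficientNetV2, and the class-to-tool table is hypothetical; the paper's actual screw classes and tools are not listed in this abstract:

```python
import numpy as np

# Hypothetical mapping from screw head type to disassembly tool.
TOOL_FOR_CLASS = {"phillips": "PH2 screwdriver",
                  "slotted": "flat-head screwdriver",
                  "hex": "hex key"}

def recommend_tools(image, detect, classify):
    """Run the two-stage pipeline: `detect(image)` returns a list of
    (x0, y0, x1, y1) boxes (the YOLOv4 role), each crop is classified by
    `classify(crop)` (the EfficientNetV2 role), and a tool is looked up
    for each predicted screw class. Both interfaces are assumptions."""
    results = []
    for (x0, y0, x1, y1) in detect(image):
        crop = image[y0:y1, x0:x1]          # screw-image extraction
        label = classify(crop)              # screw shape classification
        results.append((label, TOOL_FOR_CLASS.get(label, "unknown tool")))
    return results
```

Because the tool recommendation is keyed off the classifier's output, any detection or classification error propagates directly into the recommendation, which is the sensitivity the abstract discusses.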
  8. Abstract

    Drones increasingly collaborate with human workers in some workspaces, such as warehouses. The failure of a drone flight during aerial tasks may put nearby workers' safety at risk, and one of the most common flight failures is caused by damaged propellers. To quickly detect physical damage to propellers, recognise risky flights, and provide early warnings to surrounding human workers, a new and comprehensive fault diagnosis framework is presented that uses only the audio produced by propeller rotation, without accessing any flight data. The diagnosis framework combines three components: convolutional neural networks, transfer learning, and Bayesian optimisation. Specifically, the audio signal from an actual flight is collected and transformed into time–frequency spectrograms. First, a convolutional neural network-based diagnosis model that uses these spectrograms is developed to identify whether a broken propeller is involved in a specific drone flight. In addition, the authors employ Monte Carlo dropout sampling to obtain the inconsistency of diagnostic results and compute the entropy (uncertainty) of the mean probability score vector as another factor in diagnosing the flight. Second, to reduce data dependence on different drone types, the diagnosis model is further augmented by transfer learning: the knowledge of a well-trained diagnosis model is refined with a small set of data from a different drone, giving the modified model the ability to detect a broken propeller on the second drone. Third, to reduce hyperparameter-tuning effort and reinforce the robustness of the network, Bayesian optimisation uses the observed performances of the diagnosis model to construct a Gaussian process model whose acquisition function chooses the optimal network hyperparameters. The proposed diagnosis framework is validated via real experimental flight tests and achieves reasonably high diagnosis accuracy.
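The uncertainty factor described in this abstract, the entropy of the mean probability vector over Monte Carlo dropout passes, can be computed as follows (a sketch, not the authors' code):

```python
import numpy as np

def predictive_entropy(mc_probs):
    """Given an array of shape (T, C) holding class probabilities from T
    Monte Carlo dropout forward passes, average them across passes and
    return the entropy of the mean probability vector, used as an
    uncertainty score for the flight diagnosis."""
    mean_p = np.asarray(mc_probs, dtype=float).mean(axis=0)
    mean_p = np.clip(mean_p, 1e-12, 1.0)  # guard against log(0)
    return float(-(mean_p * np.log(mean_p)).sum())
```

Consistent, confident passes give entropy near zero, while disagreeing passes push the mean toward uniform and the entropy toward log C, flagging the flight as uncertain.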
