

Title: A self-aware and active-guiding training & assistant system for worker-centered intelligent manufacturing
Training and on-site assistance are critical to help workers master required skills, improve worker productivity, and guarantee product quality. Traditional training methods lack the worker-centered considerations that are particularly needed when workers face ever-changing demands. In this study, we propose a worker-centered training & assistant system for intelligent manufacturing that features self-awareness and active guidance. Multi-modal sensing techniques are applied to perceive each individual worker, and a deep learning approach is developed to understand the worker’s behavior and intention. Moreover, an object detection algorithm is implemented to identify the parts/tools the worker is interacting with. The worker’s current state is then inferred and used to quantify and assess worker performance, from which the worker’s potential guidance demands are analyzed. Furthermore, on-site guidance with multi-modal augmented reality is provided actively and continuously during the operational process. Two case studies demonstrate the feasibility and potential of the proposed approach and system for frontline workers in the manufacturing industry.
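A minimal Python sketch of the perceive, understand, assess, and guide cycle described in the abstract. The model interfaces (recognize, detect, assess, show_guidance) and the guidance threshold are hypothetical placeholders, not the authors’ implementation.

```python
# Hypothetical sketch of the perceive -> understand -> assess -> guide cycle.
# All callables and the threshold value are stand-ins, not the paper's system.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class WorkerState:
    activity: str        # recognized behavior/intention
    objects: List[str]   # parts/tools the worker is interacting with
    performance: float   # assessed performance score in [0, 1]

def run_cycle(imu_frame, video_frame,
              recognize: Callable, detect: Callable, assess: Callable,
              show_guidance: Callable, threshold: float = 0.8) -> WorkerState:
    """One sensing-assessment-guidance cycle of the assistant system."""
    activity = recognize(imu_frame, video_frame)      # deep learning: behavior/intention
    objects = detect(video_frame)                     # object detection: parts/tools
    state = WorkerState(activity, objects, assess(activity, objects))
    if state.performance < threshold:                 # guidance demand inferred
        show_guidance(state)                          # multi-modal AR instruction
    return state

# Usage with trivial stand-in functions (real models would replace these).
state = run_cycle(
    imu_frame=None, video_frame=None,
    recognize=lambda imu, vid: "tighten bolt",
    detect=lambda vid: ["wrench", "bolt"],
    assess=lambda act, objs: 0.6,
    show_guidance=lambda s: print("AR cue for", s.activity, "->", s.objects),
)
```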
Award ID(s):
1646162
NSF-PAR ID:
10129791
Author(s) / Creator(s):
; ; ; ;
Date Published:
Journal Name:
Manufacturing Letters
Volume:
21
Page Range / eLocation ID:
45-49
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. This study aims at sensing and understanding the worker’s activity in a human-centered intelligent manufacturing system. We propose a novel multi-modal approach for worker activity recognition that leverages information from different sensors in different modalities. Specifically, a smart armband and a visual camera are applied to capture Inertial Measurement Unit (IMU) signals and videos, respectively. For the IMU signals, we design two novel feature transform mechanisms, in the frequency and spatial domains, that assemble the captured IMU signals as images, which allows convolutional neural networks to learn the most discriminative features. In addition to these two modalities, we propose two further modalities for the video data, i.e., at the video-frame and video-clip levels. Each of the four modalities returns a probability distribution over the activity predictions, and these probability distributions are fused to output the final worker activity classification. A worker activity dataset is established, which at present contains six common activities in assembly tasks, i.e., grab a tool/part, hammer a nail, use a power screwdriver, rest arms, turn a screwdriver, and use a wrench. The developed multi-modal approach is evaluated on this dataset and achieves recognition accuracies as high as 97% and 100% in the leave-one-out and half-half experiments, respectively.
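The abstract above states that each of the four modalities returns a probability distribution that is then fused into the final activity prediction. The exact fusion rule is not given here, so the sketch below assumes a simple (optionally weighted) averaging of the per-modality distributions; the six activity names follow the classes listed in the abstract.

```python
import numpy as np

ACTIVITIES = ["grab tool/part", "hammer nail", "use power screwdriver",
              "rest arms", "turn screwdriver", "use wrench"]

def fuse_predictions(modality_probs, weights=None):
    """Fuse per-modality class-probability vectors (rows) into one prediction.

    modality_probs: array of shape (n_modalities, n_classes); each row sums to 1.
    weights: optional per-modality weights; defaults to uniform averaging.
    """
    probs = np.asarray(modality_probs, dtype=float)
    if weights is None:
        weights = np.full(len(probs), 1.0 / len(probs))
    fused = weights @ probs              # weighted average of the distributions
    fused /= fused.sum()                 # renormalize for safety
    return ACTIVITIES[int(np.argmax(fused))], fused

# Example: four modalities (IMU frequency image, IMU spatial image,
# video frame, video clip), each giving a distribution over six activities.
p = [[0.7, 0.1, 0.05, 0.05, 0.05, 0.05],
     [0.6, 0.2, 0.05, 0.05, 0.05, 0.05],
     [0.5, 0.3, 0.05, 0.05, 0.025, 0.075],
     [0.8, 0.05, 0.05, 0.05, 0.025, 0.025]]
label, fused = fuse_predictions(p)
print(label, fused.round(3))
```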
  2. Crowdsourcing has rapidly become a computing paradigm in machine learning and artificial intelligence. In crowdsourcing, multiple labels are collected from crowd workers on an instance, usually through the Internet. These labels are then aggregated into a single label intended to match the ground truth of the instance. Due to its open nature, human workers in crowdsourcing usually come with various levels of knowledge and socio-economic backgrounds. Effectively handling such human factors has been a focus in the study and applications of crowdsourcing. For example, Bi et al. studied the impacts of worker dedication, expertise, judgment, and task difficulty (Bi et al. 2014). Qiu et al. offered methods for selecting workers based on behavior prediction (Qiu et al. 2016). Barbosa and Chen suggested rehumanizing crowdsourcing to deal with human biases (Barbosa 2019). Checco et al. studied adversarial attacks on crowdsourcing for quality control (Checco et al. 2020). Many more related works are available in the literature. In contrast to commonly used binary-valued labels, interval-valued labels (IVLs) have been introduced very recently (Hu et al. 2021). Applying statistical and probabilistic properties of interval-valued datasets, Spurling et al. quantitatively defined a worker’s reliability in four measures: correctness, confidence, stability, and predictability (Spurling et al. 2021). Calculating these measures, except correctness, does not require the ground truth of each instance but only the worker’s IVLs. Applying these quantified reliability measures has significantly improved the overall quality of crowdsourcing (Spurling et al. 2022).
    However, in real-world applications, the reliability of a worker may vary over time rather than remaining constant, so it is necessary to monitor a worker’s reliability dynamically. Because a worker j labels instances sequentially, we treat j’s IVLs as an interval-valued time series in our approach. Assuming j’s reliability depends only on the IVLs within a time window, we calculate j’s reliability measures from the IVLs in the current window. By moving the time window forward with our proposed practical strategies, we can monitor j’s reliability dynamically. Furthermore, the four reliability measures derived from IVLs are time-varying as well. With regression analysis, we can separate each reliability measure into an explainable trend and possible errors.
    To validate our approaches, we use four real-world benchmark datasets in our computational experiments. The main findings are as follows. The reliability-weighted interval majority voting (WIMV) and weighted preferred matching probability (WPMP) schemes consistently outperform the base schemes, i.e., majority voting (MV), interval majority voting (IMV), and preferred matching probability (PMP), with much higher accuracy, precision, recall, and F1-score. Through monitoring worker reliability, our computational experiments successfully identified possible attackers, and by removing the identified attackers we ensured overall quality. We also examined the impact of window-size selection. It is necessary to monitor a worker’s reliability dynamically, and our computational results evidence the potential success of our approaches. This work is partially supported by the US National Science Foundation through grant award NSF/OIA-1946391.

     
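A small sketch of the sliding-window idea described above: treat a worker’s interval-valued labels as a stream and recompute reliability measures over the current window as it moves forward. The "confidence" and "stability" proxies below (based on interval widths) are illustrative assumptions, not the exact measures defined by Spurling et al.

```python
from collections import deque

def interval_width(ivl):
    lo, hi = ivl
    return hi - lo

class ReliabilityMonitor:
    """Sliding-window monitor over a worker's interval-valued labels (IVLs)."""

    def __init__(self, window_size=50):
        self.window = deque(maxlen=window_size)   # keeps only the current window

    def add_label(self, ivl):
        self.window.append(ivl)                   # moving the window forward

    def confidence(self):
        """Proxy: narrower intervals suggest a more confident worker."""
        if not self.window:
            return 0.0
        mean_width = sum(interval_width(iv) for iv in self.window) / len(self.window)
        return 1.0 - mean_width

    def stability(self):
        """Proxy: low variance of interval widths suggests a stable worker."""
        if len(self.window) < 2:
            return 1.0
        widths = [interval_width(iv) for iv in self.window]
        mean = sum(widths) / len(widths)
        var = sum((w - mean) ** 2 for w in widths) / len(widths)
        return 1.0 - min(var, 1.0)

# Stream labels in and watch the measures drift as the window moves.
monitor = ReliabilityMonitor(window_size=5)
for ivl in [(0.6, 0.8), (0.7, 0.9), (0.1, 0.95), (0.0, 1.0), (0.05, 0.9)]:
    monitor.add_label(ivl)
    print(round(monitor.confidence(), 3), round(monitor.stability(), 3))
```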
  3. Smart City is a key component of the Internet of Things (IoT), so it has attracted much attention. The emergence of Mobile Crowd Sensing (MCS) systems enables many smart-city applications. In an MCS system, sensing tasks are allocated to a number of mobile users, so the sensing-related context of each mobile user plays a significant role in service quality. However, some important sensing context is ignored in the literature. This motivates us to propose a Context-aware Multi-Armed Bandit (C-MAB) incentive mechanism to facilitate quality-based worker selection in an MCS system. We evaluate a worker’s service quality by its context (i.e., extrinsic ability and intrinsic ability) and cost. Based on our proposed C-MAB incentive mechanism and quality evaluation design, we develop a Modified Thompson Sampling Worker Selection (MTS-WS) algorithm to select workers in a reinforcement learning manner. MTS-WS is able to choose effective workers because it maintains accurate worker quality information by updating evaluation parameters according to the status of task accomplishment. We theoretically prove that our C-MAB incentive mechanism is selection efficient, computationally efficient, individually rational, and truthful. Finally, we evaluate our MTS-WS algorithm on simulated and real-world datasets in comparison with several classic algorithms. The evaluation results demonstrate that MTS-WS achieves the highest cumulative requester utility and social welfare.
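For context, a textbook Beta-Bernoulli Thompson Sampling selector is sketched below. It illustrates the general sample-select-update loop used for worker selection, but it omits the paper’s context features, incentive mechanism, and payment logic, so it should not be read as the MTS-WS algorithm itself.

```python
import random

class ThompsonWorkerSelector:
    """Generic Beta-Bernoulli Thompson Sampling for picking sensing workers.

    A simplified sketch: real mechanisms would also account for worker context
    (extrinsic/intrinsic ability), costs, and incentive payments.
    """

    def __init__(self, n_workers):
        self.alpha = [1.0] * n_workers   # prior successes + 1
        self.beta = [1.0] * n_workers    # prior failures + 1

    def select(self, k):
        """Sample a quality estimate per worker and pick the top-k."""
        samples = [random.betavariate(a, b) for a, b in zip(self.alpha, self.beta)]
        return sorted(range(len(samples)), key=lambda i: samples[i], reverse=True)[:k]

    def update(self, worker, task_accomplished):
        """Update the worker's posterior from the task outcome."""
        if task_accomplished:
            self.alpha[worker] += 1
        else:
            self.beta[worker] += 1

# Simulated rounds: select three workers, observe outcomes, update beliefs.
selector = ThompsonWorkerSelector(n_workers=10)
for _ in range(100):
    for w in selector.select(k=3):
        selector.update(w, task_accomplished=random.random() < 0.6)
```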
  4. Abstract

    With the development of industrial automation and artificial intelligence, robotic systems are becoming an essential part of factory production, and human-robot collaboration (HRC) has become a new trend in the industrial field. In our previous work, ten dynamic gestures were designed for communication between a human worker and a robot in manufacturing scenarios, and a dynamic gesture recognition model based on convolutional neural networks (CNN) was developed. Building on that model, this study aims to design and develop a new real-time HRC system based on a multi-threading method and the CNN. The system enables real-time interaction between a human worker and a robotic arm through dynamic gestures. First, a multi-threading architecture is constructed for high-speed operation and fast response while scheduling more than one task at the same time. Next, a real-time dynamic gesture recognition algorithm is developed, in which a human worker’s behavior and motion are continuously monitored and captured, and motion history images (MHIs) are generated in real time. The generation of the MHIs and their identification by the classification model are accomplished synchronously. If a designated dynamic gesture is detected, it is immediately transmitted to the robotic arm to trigger a real-time response. A graphical user interface (GUI) is developed for the integrated HRC system to visualize the real-time motion history and the gesture classification results. A series of actual collaboration experiments are carried out between a human worker and a six-degree-of-freedom (6-DOF) Comau industrial robot, and the experimental results show the feasibility and robustness of the proposed system.

     
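The producer/consumer threading pattern described above can be sketched roughly as follows: one thread builds motion history images (MHIs) from incoming frames while another classifies them and forwards recognized gestures to the robot. The camera feed, CNN classifier, robot interface, and the decay/threshold constants are all stand-ins for illustration only.

```python
# Minimal producer/consumer sketch of real-time MHI generation and classification.
import queue
import threading
import numpy as np

MHI_DECAY = 32          # how quickly old motion fades (assumed value)
DIFF_THRESHOLD = 30     # frame-difference threshold (assumed value)

def update_mhi(mhi, prev_frame, frame):
    """Simple MHI: set moving pixels to 255, decay everything else toward 0."""
    motion = np.abs(frame.astype(int) - prev_frame.astype(int)) > DIFF_THRESHOLD
    mhi = np.clip(mhi.astype(int) - MHI_DECAY, 0, 255).astype(np.uint8)
    mhi[motion] = 255
    return mhi

def capture_loop(frames, mhi_queue):
    """Producer thread: build MHIs from consecutive grayscale frames."""
    prev = frames[0]
    mhi = np.zeros_like(prev)
    for frame in frames[1:]:
        mhi = update_mhi(mhi, prev, frame)
        mhi_queue.put(mhi.copy())
        prev = frame
    mhi_queue.put(None)                      # signal end of stream

def classify_loop(mhi_queue, classify, send_to_robot):
    """Consumer thread: classify each MHI; forward recognized gestures to the robot."""
    while (mhi := mhi_queue.get()) is not None:
        gesture = classify(mhi)
        if gesture is not None:
            send_to_robot(gesture)

# Stubs standing in for the camera, the trained CNN, and the robot interface.
fake_frames = [np.random.randint(0, 255, (64, 64), dtype=np.uint8) for _ in range(10)]
q = queue.Queue(maxsize=4)
producer = threading.Thread(target=capture_loop, args=(fake_frames, q))
consumer = threading.Thread(
    target=classify_loop,
    args=(q,
          lambda mhi: "gesture_stop" if mhi.mean() > 128 else None,
          lambda g: print("robot command:", g)))
producer.start(); consumer.start()
producer.join(); consumer.join()
```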
  5. Abstract

    With the increasing employment of robots in areas such as smart manufacturing and intelligent transportation, both undergraduate and graduate students from computing-related majors (e.g., computer science and information technology) have demonstrated strong interest in learning robotics technology to broaden their career opportunities. However, instilling robotics knowledge in computing students remains a challenge, since most of them have limited pre-training in engineering subjects such as electronics and mechatronics. Therefore, robotics education for computing students demands an immersive, real-world learning environment that combines theory with intensive hands-on projects. Different from traditional textbook-directed robotics learning, in this study a situated-learning-based robotics education pedagogy is proposed for computing students to equip them with robotics expertise and foster their problem-solving skills in real-world human–robot interaction contexts. To create a realistic human–robot collaboration situation, a multi-modal collaborative robot is employed in the classroom-based learning community for the whole semester. Mini-project-based homework and team projects are designed for students to practice critical thinking and gain hands-on experience. A bidirectional evaluation approach is used by the instructor and students to assess the quality of the proposed pedagogy. Practice results and student evaluations suggest that the proposed situated-learning-based pedagogy and robotics curriculum enabled computing students to learn robotics effectively, and the approach was well recognized and accepted by students even though most of them were beginners. Future work of this study is also discussed.

     