Abstract In this paper, an optimization-based dynamic modeling method is used for human-robot lifting motion prediction. The three-dimensional (3D) human arm model has 13 degrees of freedom (DOFs) and the 3D robotic arm (Sawyer robotic arm) has 10 DOFs. The human arm and robotic arm are built in Denavit-Hartenberg (DH) representation. In addition, the 3D box is modeled as a floating-base rigid body with 6 global DOFs. The interactions between the human arm and the box, and between the robot and the box, are modeled as a set of grasping forces, which are treated as unknowns (design variables) in the optimization formulation. Inverse dynamic optimization is used to simulate the lifting motion, where the sum of the squared joint torques of the human arm is minimized subject to physical and task constraints. The design variables are the control points of the cubic B-splines of the joint angle profiles of the human arm, robotic arm, and box, and the box grasping forces at each time point. A numerical example of human-robot lifting with a 10 kg box is simulated. The joint angle, joint torque, and grasping force profiles of the human and robotic arms are reported. These optimal outputs can be used as references to control the human-robot collaborative lifting task.
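As a minimal sketch of the joint-angle parameterization described above, the following evaluates one joint's angle profile as a clamped cubic B-spline defined by its control points (the optimization's design variables). The duration, number of control points, and control-point values are illustrative assumptions, not values from the paper:

```python
import numpy as np
from scipy.interpolate import BSpline

T = 1.0      # lifting duration [s] (assumed)
n_ctrl = 6   # number of control points per joint (assumed)
degree = 3   # cubic B-spline

# Clamped knot vector so the spline interpolates the boundary control points
n_knots = n_ctrl + degree + 1
knots = np.concatenate([np.zeros(degree),
                        np.linspace(0.0, T, n_knots - 2 * degree),
                        np.full(degree, T)])

ctrl = np.array([0.0, 0.2, 0.6, 0.9, 1.1, 1.2])  # example control points [rad]
q = BSpline(knots, ctrl, degree)

t = np.linspace(0.0, T, 50)       # discretized time points
angles = q(t)                     # joint angle profile q(t)
velocities = q.derivative(1)(t)   # analytic derivative, usable in constraints
```

Because the spline is smooth in its control points, an optimizer can adjust `ctrl` directly while the torque objective and constraints are evaluated on the discretized profile.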
                            MACHINE LEARNING-BASED ROBOTIC OBJECT DETECTION AND GRASPING FOR COLLABORATIVE ASSEMBLY
                        
                    
    
An integral part of information-centric smart manufacturing is the adaptation of industrial robots to complement human workers in a collaborative manner. While advancements in sensing have enabled real-time monitoring of the workspace, understanding the semantic information in the workspace, such as parts and tools, remains a challenge for seamless robot integration. The resulting lack of adaptivity in a dynamic workspace has limited robots to tasks with pre-defined actions. In this paper, a machine learning-based robotic object detection and grasping method is developed to improve the adaptivity of robots. Specifically, object detection based on the concepts of single-shot detection (SSD) and convolutional neural networks (CNNs) is investigated to recognize and localize objects in the workspace. Subsequently, the extracted information from object detection, such as the type, position, and orientation of the object, is fed into a multi-layer perceptron (MLP) to generate the desired joint angles of the robotic arm for proper object grasping and handover to the human worker. Network training is guided by the forward kinematics of the robotic arm in a self-supervised manner to mitigate issues such as singularity in computation. The effectiveness of the developed method is validated on an eDo robotic arm in a human-robot collaborative assembly case study.
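The forward-kinematics-guided self-supervision can be sketched as a loss that compares the pose reached by the predicted joint angles against the detected object position, so no inverse-kinematics labels are required. The DH parameters below are illustrative placeholders, not the real eDo arm's values:

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform for one Denavit-Hartenberg joint."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

# Illustrative DH table (NOT the actual eDo parameters): (d, a, alpha) per joint
DH = [(0.30, 0.00, np.pi / 2),
      (0.00, 0.25, 0.0),
      (0.00, 0.20, np.pi / 2)]

def forward_kinematics(joint_angles):
    """End-effector position from joint angles via chained DH transforms."""
    T = np.eye(4)
    for q, (d, a, alpha) in zip(joint_angles, DH):
        T = T @ dh_transform(q, d, a, alpha)
    return T[:3, 3]

def self_supervised_loss(predicted_angles, target_position):
    """FK-based training signal for the MLP: squared position error."""
    return float(np.sum((forward_kinematics(predicted_angles)
                         - target_position) ** 2))

# The loss vanishes when the predicted angles already reach the target
theta = np.array([0.1, 0.2, 0.3])
loss_at_target = self_supervised_loss(theta, forward_kinematics(theta))
```

In training, `target_position` would come from the object detector's output and the loss would be backpropagated through a differentiable FK implementation; the numpy version here only illustrates the signal being minimized.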
- Award ID(s):
- 1830295
- PAR ID:
- 10353048
- Editor(s):
- Hideki Aoyama; Keiichi Shirase
- Date Published:
- Journal Name:
- Proc. 2022 International Symposium on Flexible Automation (ISFA)
- Page Range / eLocation ID:
- 180 - 187
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
- 
            This paper presents the design of a wearable robotic forearm for close-range human-robot collaboration. The robot's function is to serve as a lightweight supernumerary third arm for shared-workspace activities. We present a functional prototype resulting from an iterative design process that included several user studies. An analysis of the robot's kinematics shows a 246% increase in reachable workspace compared to the natural human reach. The robot's degrees of freedom and range of motion support a variety of usage scenarios with the robot as a collaborative tool, including self-handovers, fetching objects while the human's hands are occupied, assisting human-human collaboration, and stabilizing an object. We analyze the biomechanical loads for these scenarios and find that the design is able to operate within human ergonomic wear limits. We then report on a pilot human-robot interaction study indicating that robot autonomy is more task-time efficient and preferred by users compared to direct voice control. These results suggest that the design presented here is a promising configuration for a lightweight wearable robotic augmentation device and can serve as a basis for further research into human-wearable collaboration.
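A reachable-workspace comparison of the kind mentioned above can be approximated, in a much-simplified planar setting, by Monte Carlo sampling of joint configurations. The link lengths and joint limits below are made-up stand-ins (a two-link "human" arm versus the same arm with an extra link acting as the wearable device), not the paper's 3D kinematic analysis:

```python
import numpy as np

rng = np.random.default_rng(0)

def workspace_area(lengths, limits, n=50_000, cell=0.02):
    """Monte Carlo estimate of a planar serial arm's reachable area.

    Samples joint configurations within limits, computes end-effector
    positions, and counts occupied grid cells of size `cell` [m].
    """
    q = np.column_stack([rng.uniform(lo, hi, n) for lo, hi in limits])
    cum = np.cumsum(q, axis=1)  # absolute orientation of each link
    x = (np.asarray(lengths) * np.cos(cum)).sum(axis=1)
    y = (np.asarray(lengths) * np.sin(cum)).sum(axis=1)
    cells = set(zip(np.floor(x / cell).astype(int),
                    np.floor(y / cell).astype(int)))
    return len(cells) * cell ** 2

# Hypothetical baseline reach vs. reach with an extra (wearable) link
human = workspace_area([0.30, 0.25], [(-np.pi / 2, np.pi / 2)] * 2)
augmented = workspace_area([0.30, 0.25, 0.20], [(-np.pi / 2, np.pi / 2)] * 3)
print(f"area increase: {100 * (augmented / human - 1):.0f}%")
```

The printed percentage depends entirely on the assumed geometry and limits; it is not expected to reproduce the paper's 246% figure, which comes from a full 3D analysis of the actual design.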
- 
            Abstract In this study, a 13 degrees of freedom (DOFs) three-dimensional (3D) human arm model and a 10 DOFs 3D robotic arm model are used to validate the grasping force for human-robot lifting motion prediction. The human arm and robotic arm are modeled in Denavit-Hartenberg (DH) representation. In addition, the 3D box is modeled as a floating-base rigid body with 6 global DOFs. The human-box and robot-box interactions are characterized as a collection of grasping forces. The sum of the squared joint torques of the human arm and robot arm is minimized subject to physics and task constraints. The design variables include (1) the control points of the cubic B-splines of the joint angle profiles of the human arm, robotic arm, and box; and (2) the discretized grasping forces during lifting. Both numerical simulation and experiments of human-robot lifting were performed with a 2 kg box. The simulation reports the human arm's joint angle, joint torque, and grasping force profiles. Comparisons of the joint angle and grasping force profiles between experiment and simulation are presented. The simulated joint angle profiles show trends similar to the experimental data. It is concluded that the human and robot share the load during the lifting process, and that the predicted human grasping force matches the measured experimental grasping force reasonably well.
- 
            Autonomous robots that understand human instructions can significantly enhance efficiency in human-robot assembly operations where robotic support is needed to handle unknown objects and/or provide on-demand assistance. This paper introduces a vision-AI-based method for human-robot collaborative (HRC) assembly, enabled by a large language model (LLM). Upon 3D object reconstruction and pose establishment through neural object field modelling, a visual servoing-based mobile robotic system performs object manipulation and provides navigation guidance to a mobile robot. The LLM provides text-based logic reasoning and high-level control command generation for natural human-robot interactions. The effectiveness of the presented method is demonstrated experimentally.
- 
            We present a new design method tailored to designing a physically interactive robotic arm for overground physical interaction. Such robotic arms present various unique requirements that differ from those of existing robotic arms used for general manipulation, such as being able to generate the required forces at every point inside the workspace and/or having low intrinsic mechanical impedance. Our design method identifies these requirements, categorizes them into kinematic and dynamic characteristics of the robot, and ensures that these unique considerations are satisfied in the early design phase. The robot's capability for such tasks is analyzed using mathematical simulations of the designed robot, and a discussion of its dynamic characteristics is presented. With our proposed method, the robot arm is ensured to perform various overground interactive tasks with a human.