Abstract Balancing parallel robots throughout their workspace, while avoiding the use of balancing masses and respecting design practicality constraints, is difficult. Medical robots in particular demand compact and lightweight designs. This paper considers the difficult task of achieving optimal approximate static balancing of a parallel robot throughout a desired task-based dexterous workspace using balancing springs only. While perfect balancing can be achieved along a single path, only approximate balancing is attainable throughout a workspace without the addition of balancing masses. Design considerations for optimal robot base placement and the effects of the placement of torsional balancing springs are presented. Using a modal representation of the balancing torque requirements, we apply recent results on the design of wire-wrapped cam mechanisms to achieve balancing throughout a task-based workspace. A simulation study shows that robot base placement can have a detrimental effect on the attainability of a practical design solution for static balancing. We also show that optimal balancing using torsional springs is best achieved when all springs are placed at the actuated joints, and that the wire-wrapped cam design can significantly improve the performance of static balancing. The methodology presented in this paper provides practical design solutions that yield simple, lightweight, and compact designs suitable for medical applications where such traits are paramount.
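As a rough illustration (a single-joint sketch with assumed numbers, not the paper's parallel-robot formulation), the Python snippet below fits one torsional spring to the gravity torque of a revolute link over a task range and reports the residual torque left for the actuator, which is what makes spring-only balancing approximate rather than perfect:

```python
import numpy as np

# Minimal sketch: approximate gravity balancing of a single revolute joint with one
# torsional spring. The gravity torque m*g*r*cos(theta) is nonlinear in theta, while a
# linear torsional spring contributes k*(theta0 - theta), so only approximate balancing
# is possible over a whole joint range. All numbers below are assumed for illustration.
m, g, r = 2.0, 9.81, 0.3                                     # link mass (kg), gravity, COM offset (m)
theta = np.linspace(np.deg2rad(-30), np.deg2rad(90), 200)    # task-based joint range
tau_gravity = m * g * r * np.cos(theta)

# Least-squares fit of the spring torque a + b*theta to the gravity torque,
# where b = -k (stiffness) and a = k*theta0 (preload angle times stiffness).
A = np.column_stack([np.ones_like(theta), theta])
(a, b), *_ = np.linalg.lstsq(A, tau_gravity, rcond=None)
k, theta0 = -b, -a / b

residual = tau_gravity - (a + b * theta)                     # torque the actuator must still supply
print(f"k = {k:.2f} N*m/rad, theta0 = {np.rad2deg(theta0):.1f} deg, "
      f"max residual = {np.abs(residual).max():.2f} N*m")
```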
                            Precision Evaluation of Large Payload SCARA Robot for PCB Assembly
                        
                    
    
Abstract The placement of SMD components is usually performed with Cartesian-type robots, a task known as pick-and-place (P&P). Small Selective Compliance Articulated Robot Arm (SCARA) robots are also growing in popularity for this use because of their fast and accurate performance. This paper describes the use of the Lean Robotic Micromanufacturing (LRM) framework applied to a large, 10 kg payload industrial SCARA robot for PCB assembly. The LRM framework guided the precision evaluation of the PCB assembly process and provided a prediction of the placement precision and yield. We experimentally evaluated the repeatability of the system, as well as the resulting collective errors during assembly. Results confirm that the P&P task can achieve the required assembly tolerance of 200 microns without employing closed-loop visual servoing, thereby considerably decreasing system complexity and assembly time.
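For context, positional repeatability of this kind is commonly reduced to a single figure in the style of ISO 9283; the sketch below (synthetic measurements, not the paper's data) shows one way to compute such a figure and compare it against the 200-micron tolerance:

```python
import numpy as np

# Minimal sketch, ISO 9283-style positional repeatability RP: the robot returns to the
# same commanded pose n times, the attained positions are measured, and
# RP = mean distance to the cluster centroid + 3 * standard deviation of that distance.
rng = np.random.default_rng(0)
measured = rng.normal(loc=[250.0, 120.0, 35.0], scale=0.01, size=(30, 3))  # positions in mm (synthetic)

centroid = measured.mean(axis=0)
dist = np.linalg.norm(measured - centroid, axis=1)
rp_mm = dist.mean() + 3.0 * dist.std(ddof=1)

print(f"positional repeatability RP = {rp_mm * 1000:.1f} microns")
# A P&P process is viable when RP plus the other stacked errors (fixturing, vision
# calibration, nozzle runout, etc.) stays inside the 200-micron assembly tolerance.
```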
- Award ID(s): 1849213
- PAR ID: 10410846
- Date Published:
- Journal Name: ASME 2022 17th International Manufacturing Science and Engineering Conference
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- 
Industrial robots, as mature and highly efficient equipment, have been applied to various fields, such as vehicle manufacturing, product packaging, painting, welding, and medical surgery. Most industrial robots operate only within their own workspace; in other words, they are floor-mounted at fixed locations. Some industrial robots are wall-mounted on a linear rail, depending on the application. Sometimes, industrial robots are ceiling-mounted on an X-Y gantry to perform upside-down manipulation tasks. The main objective of this paper is to describe the NeXus, a custom robotic system that has been designed for precision microsystem integration tasks with such a gantry. The system tasks include assembly, bonding, and 3D printing of sensor arrays, solar cells, and microrobotic prototypes. The NeXus consists of a custom-designed frame providing structural rigidity, a large overhead X-Y gantry carrying a 6-degree-of-freedom industrial robot, and several other precision positioners and processes. We focus here on the design and precision evaluation of the overhead ceiling-mounted industrial robot of NeXus and its supporting frame. We first simulated the behavior of the frame using Finite Element Analysis (FEA), then experimentally evaluated the pose repeatability of the robot end-effector using three different types of sensors. Results verify that the performance objectives of the design are achieved.
- 
Abstract Human–robot collaboration (HRC) has become an integral element of many manufacturing and service industries. A fundamental requirement for safe HRC is understanding and predicting human trajectories and intentions, especially when humans and robots operate nearby. Although existing research emphasizes predicting human motions or intentions, a key challenge is predicting both human trajectories and intentions simultaneously. This paper addresses this gap by developing a multi-task learning framework consisting of a bidirectional long short-term memory (Bi-LSTM) based encoder–decoder architecture that takes motion data from both human and robot trajectories as input and performs two main tasks simultaneously: human trajectory prediction and human intention prediction. The first task predicts human trajectories by reconstructing the motion sequences, while the second task tests two main approaches to intention prediction: a supervised learning method, specifically a support vector machine, that predicts human intention from the latent representation, and an unsupervised learning method, a hidden Markov model, that decodes the latent features for human intention prediction. Four encoder designs are evaluated for feature extraction: interaction-attention, interaction-pooling, interaction-seq2seq, and seq2seq. The framework is validated through a case study of a desktop disassembly task with robots operating at different speeds. The results include evaluations of the different encoder designs, an analysis of the impact of incorporating robot motion into the encoder, and detailed visualizations. The findings show that the proposed framework can accurately predict human trajectories and intentions.
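A minimal PyTorch sketch of such a multi-task encoder-decoder (layer sizes, prediction horizon, and head structure are assumptions, not the paper's exact design) could look like this:

```python
import torch
import torch.nn as nn

class TrajectoryIntentionNet(nn.Module):
    # Hypothetical sketch: a Bi-LSTM encoder over concatenated human+robot motion, with
    # (1) an LSTM decoder that predicts the future human trajectory and
    # (2) a classifier head over the latent state for intention prediction.
    def __init__(self, in_dim=12, hidden=64, horizon=20, n_intents=4):
        super().__init__()
        self.encoder = nn.LSTM(in_dim, hidden, batch_first=True, bidirectional=True)
        self.decoder = nn.LSTM(2 * hidden, hidden, batch_first=True)
        self.traj_head = nn.Linear(hidden, 3)            # x, y, z of the human hand per step
        self.intent_head = nn.Linear(2 * hidden, n_intents)
        self.horizon = horizon

    def forward(self, motion):                           # motion: (B, T, in_dim)
        enc_out, _ = self.encoder(motion)
        latent = enc_out[:, -1, :]                       # last-step latent, (B, 2*hidden)
        dec_in = latent.unsqueeze(1).repeat(1, self.horizon, 1)
        dec_out, _ = self.decoder(dec_in)
        traj = self.traj_head(dec_out)                   # (B, horizon, 3) predicted trajectory
        intent_logits = self.intent_head(latent)         # (B, n_intents) intention scores
        return traj, intent_logits

# Example: batch of 8 sequences, 50 time steps, 12 motion features (e.g. 6 human + 6 robot)
model = TrajectoryIntentionNet()
traj, logits = model(torch.randn(8, 50, 12))
```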
- 
Human-Robot Collaboration (HRC) aims to create environments where robots can understand workspace dynamics and actively assist humans in operations, with human intention recognition being fundamental to efficient and safe task fulfillment. Language-based control and communication is a natural and convenient way to convey human intentions. However, traditional language models require instructions to be articulated following a rigid, predefined syntax, which can be unnatural, inefficient, and prone to errors. This paper investigates the reasoning abilities that have emerged from the recent advancement of Large Language Models (LLMs) to overcome these limitations, allowing free-form human instructions to enhance human-robot communication. For this purpose, a generic GPT-3.5 model has been fine-tuned to interpret and translate varied human instructions into essential attributes, such as task relevancy and the tools and/or parts required for the task. These attributes are then fused with the perceived ongoing robot action to generate a sequence of relevant actions. The developed technique is evaluated in a case study where robots initially misinterpreted human actions and picked up wrong tools and parts for assembly. It is shown that the fine-tuned LLM can effectively identify corrective actions across a diverse range of instructional human inputs, thereby enhancing the robustness of human-robot collaborative assembly for smart manufacturing.
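As a hedged illustration of the data preparation such fine-tuning typically involves, the snippet below builds one chat-format training example that maps a free-form worker instruction to structured attributes; the field names and instruction text are hypothetical, not the paper's schema:

```python
import json

# Hypothetical sketch of one chat-format fine-tuning example: a free-form instruction is
# mapped to structured attributes (the attribute names below are assumptions for illustration).
example = {
    "messages": [
        {"role": "system",
         "content": "Extract task relevancy and the tools/parts needed from the worker's instruction."},
        {"role": "user",
         "content": "That's the wrong bit - grab the M4 hex driver and the small bracket instead."},
        {"role": "assistant",
         "content": json.dumps({
             "task_relevant": True,
             "corrective": True,
             "tools": ["M4 hex driver"],
             "parts": ["small bracket"],
         })},
    ]
}

# Append the example to a JSONL training file, one example per line.
with open("instruction_tuning.jsonl", "a") as f:
    f.write(json.dumps(example) + "\n")
```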
- 
In this paper, we develop the analytical framework for a novel Wireless signal-based Sensing capability for Robotics (WSR) by leveraging a robot's mobility in 3D space. It allows robots to primarily measure relative direction, or Angle-of-Arrival (AOA), to other robots, while operating in non-line-of-sight unmapped environments and without requiring external infrastructure. We do so by capturing all of the paths that a wireless signal traverses as it travels from a transmitting to a receiving robot in the team, which we term an AOA profile. The key intuition behind our approach is to enable a robot to emulate antenna arrays as it moves freely in 2D and 3D space. The small differences in the phase of the wireless signals are processed with knowledge of the robots' local displacement to obtain the profile, via a method akin to Synthetic Aperture Radar (SAR). The main contribution of this work is the development of (i) a framework to accommodate arbitrary 2D and 3D motion, as well as continuous mobility of both signal-transmitting and receiving robots, while computing AOA profiles between them, and (ii) a Cramer–Rao Bound analysis, based on antenna array theory, that provides a lower bound on the variance in AOA estimation as a function of the geometry of robot motion. This is a critical distinction from previous work on SAR-based methods, which restrict robot mobility to prescribed motion patterns, do not generalize to the full 3D space, and require transmitting robots to be stationary during data acquisition periods. We show that allowing robots to use their full mobility in 3D space while performing SAR results in more accurate AOA profiles and thus better AOA estimation. We formally characterize this observation as the informativeness of the robots' motion, a computable quantity for which we derive a closed form. All analytical developments are substantiated by extensive simulation and hardware experiments on air/ground robot platforms using 5 GHz WiFi. Our experimental results bolster our analytical findings, demonstrating that 3D motion provides enhanced and consistent accuracy, with a total AOA error of less than 10° for 95% of trials. We also analytically characterize the impact of displacement estimation errors on the measured AOA and validate this theory empirically using robot displacements obtained with an off-the-shelf Intel Tracking Camera T265. Finally, we demonstrate the performance of our system on a multi-robot task where a heterogeneous air/ground pair of robots continuously measures AOA profiles over a WiFi link to achieve dynamic rendezvous in an unmapped, 300 m² environment with occlusions.
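A toy sketch of the underlying idea (synthetic data and a simplified 2D far-field phase model, not the authors' full pipeline) correlates phase measurements taken along the receiver's displacement against each candidate direction to form an AOA profile:

```python
import numpy as np

# Minimal SAR-style AOA profiling sketch: a receiving robot records the signal phase at
# positions along its own (known) displacement and coherently correlates the measurements
# against the phase predicted for each candidate arrival direction.
wavelength = 3e8 / 5e9                       # 5 GHz WiFi carrier
k = 2 * np.pi / wavelength

rng = np.random.default_rng(0)
positions = rng.random((200, 2)) * 0.5       # measured 2D displacements of the receiver (m)
true_dir = np.deg2rad(40.0)                  # ground-truth AOA used to synthesize the data
u_true = np.array([np.cos(true_dir), np.sin(true_dir)])
phases = k * positions @ u_true + 0.1 * rng.standard_normal(200)   # noisy phase measurements

candidates = np.deg2rad(np.arange(0.0, 360.0, 1.0))
profile = np.empty_like(candidates)
for i, a in enumerate(candidates):
    u = np.array([np.cos(a), np.sin(a)])
    expected = k * positions @ u
    # coherent average: magnitude peaks when the candidate direction matches the true one
    profile[i] = np.abs(np.mean(np.exp(1j * (phases - expected))))

print("estimated AOA (deg):", np.rad2deg(candidates[np.argmax(profile)]))
```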