Abstract In this study, a 13-degree-of-freedom (DOF) three-dimensional (3D) human arm model and a 10-DOF 3D robotic arm model are used to validate the grasping force for human-robot lifting motion prediction. The human arm and robotic arm are modeled in the Denavit-Hartenberg (DH) representation. In addition, the 3D box is modeled as a floating-base rigid body with 6 global DOFs. The human-box and robot-box interactions are characterized as a collection of grasping forces. The sum of the joint torque squares of the human arm and robotic arm is minimized subject to physics and task constraints. The design variables include (1) control points of cubic B-splines of the joint angle profiles of the human arm, robotic arm, and box; and (2) the discretized grasping forces during lifting. Both numerical and experimental human-robot lifting trials were performed with a 2 kg box. The simulation reports the human arm’s joint angle, joint torque, and grasping force profiles. Comparisons of the joint angle and grasping force profiles between experiment and simulation are presented; the simulated joint angle profiles show trends similar to the experimental data. It is concluded that the human and robot share the load during the lifting process, and that the predicted human grasping force matches the measured experimental grasping force reasonably well.
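The core of the formulation above, joint angle profiles parameterized by cubic B-spline control points and a torque-square objective, can be sketched for a single joint. This is a minimal illustration, not the paper's 13-DOF model: the single-link dynamics, parameter values, and function names below are all assumptions made for the sketch.

```python
import numpy as np

# Uniform cubic B-spline blending matrix (per segment, local parameter u in [0, 1]).
M = (1.0 / 6.0) * np.array([
    [-1.0,  3.0, -3.0, 1.0],
    [ 3.0, -6.0,  3.0, 0.0],
    [-3.0,  0.0,  3.0, 0.0],
    [ 1.0,  4.0,  1.0, 0.0],
])

def bspline_profile(ctrl, samples_per_seg=20):
    """Evaluate a uniform cubic B-spline joint-angle profile from control points."""
    ctrl = np.asarray(ctrl, dtype=float)
    q = []
    for s in range(len(ctrl) - 3):  # one segment per 4 consecutive control points
        P = ctrl[s:s + 4]
        for u in np.linspace(0.0, 1.0, samples_per_seg, endpoint=False):
            U = np.array([u**3, u**2, u, 1.0])
            q.append(U @ M @ P)
    return np.array(q)

def torque_square_objective(ctrl, dt=0.01, I=0.05, m=2.0, g=9.81, l=0.3):
    """Sum of squared torques for a toy single-link arm: tau = I*qdd + m*g*l*cos(q)."""
    q = bspline_profile(ctrl)
    qdd = np.gradient(np.gradient(q, dt), dt)  # finite-difference acceleration
    tau = I * qdd + m * g * l * np.cos(q)
    return float(np.sum(tau**2) * dt)
```

In the full formulation, an optimizer searches over the control points (and grasping forces) to minimize this objective subject to the physics and task constraints; here the two functions only show how the design variables map to the cost.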
Reliable Vision-Based Grasping Target Recognition for Upper Limb Prostheses
Computer vision has shown promising potential in wearable robotics applications (e.g., human grasping target prediction and context understanding). However, in practice, the performance of computer vision algorithms is challenged by insufficient or biased training, observation noise, cluttered background, etc. By leveraging Bayesian deep learning (BDL), we have developed a novel, reliable vision-based framework to assist upper limb prosthesis grasping during arm reaching. This framework can measure different types of uncertainties from the model and data for grasping target recognition in realistic and challenging scenarios. A probability calibration network was developed to fuse the uncertainty measures into one calibrated probability for online decision making. We formulated the problem as the prediction of grasping target while arm reaching. Specifically, we developed a 3-D simulation platform to simulate and analyze the performance of vision algorithms under several common challenging scenarios in practice. In addition, we integrated our approach into a shared control framework of a prosthetic arm and demonstrated its potential at assisting human participants with fluent target reaching and grasping tasks.
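A common BDL technique for measuring the kinds of uncertainty described above is Monte Carlo dropout: repeated stochastic forward passes whose predictive entropy decomposes into an aleatoric part and an epistemic part (the mutual information). The sketch below is a generic illustration of that decomposition, not the authors' network; the function names are assumptions.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def mc_dropout_uncertainty(logit_samples):
    """Decompose predictive uncertainty from T stochastic forward passes.

    logit_samples: (T, num_classes) logits from repeated passes with dropout on.
    Returns (mean_probs, predictive_entropy, mutual_information).
    """
    probs = softmax(np.asarray(logit_samples, dtype=float))          # (T, C)
    mean_p = probs.mean(axis=0)
    pred_H = -np.sum(mean_p * np.log(mean_p + 1e-12))                # total uncertainty
    exp_H = -np.mean(np.sum(probs * np.log(probs + 1e-12), axis=1))  # aleatoric part
    return mean_p, pred_H, pred_H - exp_H                            # MI = epistemic part
```

When the passes agree, the mutual information is near zero; when they disagree (e.g., under observation noise or a cluttered background), it grows, which is the kind of signal a calibration network can fuse into one probability for online decision making.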
- PAR ID: 10173357
- Date Published:
- Journal Name: IEEE Transactions on Cybernetics
- ISSN: 2168-2267
- Page Range / eLocation ID: 1 to 13
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
While current vision algorithms excel at many challenging tasks, it is unclear how well they understand the physical dynamics of real-world environments. Here we introduce Physion, a dataset and benchmark for rigorously evaluating the ability to predict how physical scenarios will evolve over time. Our dataset features realistic simulations of a wide range of physical phenomena, including rigid and soft-body collisions, stable multi-object configurations, rolling, sliding, and projectile motion, thus providing a more comprehensive challenge than previous benchmarks. We used Physion to benchmark a suite of models varying in their architecture, learning objective, input-output structure, and training data. In parallel, we obtained precise measurements of human prediction behavior on the same set of scenarios, allowing us to directly evaluate how well any model could approximate human behavior. We found that vision algorithms that learn object-centric representations generally outperform those that do not, yet still fall far short of human performance. On the other hand, graph neural networks with direct access to physical state information both perform substantially better and make predictions that are more similar to those made by humans. These results suggest that extracting physical representations of scenes is the main bottleneck to achieving human-level and human-like physical understanding in vision algorithms. We have publicly released all data and code to facilitate the use of Physion to benchmark additional models in a fully reproducible manner, enabling systematic evaluation of progress towards vision algorithms that understand physical environments as robustly as people do.
Abstract In this paper, an optimization-based dynamic modeling method is used for human-robot lifting motion prediction. The three-dimensional (3D) human arm model has 13 degrees of freedom (DOFs) and the 3D robotic arm (Sawyer robotic arm) has 10 DOFs. The human arm and robotic arm are built in the Denavit-Hartenberg (DH) representation. In addition, the 3D box is modeled as a floating-base rigid body with 6 global DOFs. The interactions between the human arm and the box, and between the robot and the box, are modeled as a set of grasping forces, which are treated as unknowns (design variables) in the optimization formulation. Inverse dynamic optimization is used to simulate the lifting motion, where the sum of the joint torque squares of the human arm is minimized subject to physical and task constraints. The design variables are the control points of cubic B-splines of the joint angle profiles of the human arm, robotic arm, and box, and the box grasping forces at each time point. A numerical example is simulated for human-robot lifting with a 10 kg box. The human and robotic arms' joint angle, joint torque, and grasping force profiles are reported. These optimal outputs can be used as references to control the human-robot collaborative lifting task.
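The simplest version of treating grasping forces as design variables is splitting the box weight between two vertical grasping forces under a force-balance equality constraint. The toy closed-form sketch below (the function name and the weighting scheme are illustrative assumptions, not the paper's full formulation) shows why a quadratic effort objective makes the human and robot share the load.

```python
def share_load(box_mass, w_human=1.0, w_robot=1.0, g=9.81):
    """Split a box's weight W = m*g between human and robot grasping forces by
    minimizing w_human*f_h**2 + w_robot*f_r**2 subject to f_h + f_r = W.

    Lagrange conditions give 2*w_human*f_h = 2*w_robot*f_r, so the closed-form
    solution is f_h = W*w_robot/(w_human + w_robot), and symmetrically for f_r.
    """
    W = box_mass * g
    f_h = W * w_robot / (w_human + w_robot)
    f_r = W * w_human / (w_human + w_robot)
    return f_h, f_r
```

With equal effort weights each agent carries half the weight; penalizing human effort more shifts load to the robot. The actual formulation distributes the load implicitly, through the discretized grasping forces and the torque-square objective, rather than through this explicit split.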
In this paper, we meticulously examine the robustness of computer vision object detection frameworks within the intricate realm of real-world traffic scenarios, with a particular emphasis on challenging adverse weather conditions. Conventional evaluation methods often prove inadequate in addressing the complexities inherent in dynamic traffic environments, an increasingly vital consideration as global advancements in autonomous vehicle technologies persist. Our investigation delves specifically into the nuanced performance of these algorithms amidst adverse weather conditions like fog, rain, snow, sun flare, and more, acknowledging the substantial impact of weather dynamics on their precision. Significantly, we seek to underscore that an object detection framework excelling in clear weather may encounter significant challenges in adverse conditions. Our study incorporates in-depth ablation studies on dual modality architectures, exploring a range of applications including traffic monitoring, vehicle tracking, and object tracking. The ultimate goal is to elevate the safety and efficiency of transportation systems, recognizing the pivotal role of robust computer vision systems in shaping the trajectory of future autonomous and intelligent transportation technologies.
Picking an item in the presence of other objects can be challenging as it involves occlusions and partial views. Given object models, one approach is to perform object pose estimation and use the most likely candidate pose per object to pick the target without collisions. This approach, however, ignores the uncertainty of the perception process both regarding the target’s and the surrounding objects’ poses. This work proposes first a perception process for 6D pose estimation, which returns a discrete distribution of object poses in a scene. Then, an open-loop planning pipeline is proposed to return safe and effective solutions for moving a robotic arm to pick, which (a) minimizes the probability of collision with the obstructing objects; and (b) maximizes the probability of reaching the target item. The planning framework models the challenge as a stochastic variant of the Minimum Constraint Removal (MCR) problem. The effectiveness of the methodology is verified given both simulated and real data in different scenarios. The experiments demonstrate the importance of considering the uncertainty of the perception process in terms of safe execution. The results also show that the methodology is more effective than conservative MCR approaches, which avoid all possible object poses regardless of the reported uncertainty.
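The planning idea above, scoring candidate motions by the probability mass of the obstacle pose hypotheses they would collide with, can be sketched in 2-D. The disc obstacle model, waypoint paths, and function names below are illustrative assumptions, not the paper's MCR formulation.

```python
import numpy as np

def collision_probability(path, pose_hypotheses, weights, radius):
    """Probability that a 2-D path collides with an obstacle whose center is
    known only as a discrete distribution (pose hypotheses plus weights)."""
    path = np.asarray(path, dtype=float)                # (N, 2) waypoints
    centers = np.asarray(pose_hypotheses, dtype=float)  # (K, 2) candidate centers
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    # A hypothesis "collides" if any waypoint falls inside that obstacle disc.
    d = np.linalg.norm(path[:, None, :] - centers[None, :, :], axis=2)  # (N, K)
    hit = (d < radius).any(axis=0)                      # (K,)
    return float(w[hit].sum())

def pick_safest(paths, pose_hypotheses, weights, radius):
    """Return the index of the candidate path with minimum collision probability."""
    probs = [collision_probability(p, pose_hypotheses, weights, radius)
             for p in paths]
    return int(np.argmin(probs)), probs
```

A conservative MCR-style planner would treat every hypothesis as a hard obstacle; weighting hypotheses by their reported probability instead lets the planner accept a small, quantified risk when no fully clear path exists.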