
Title: Drosophibot: a fruit fly inspired bio-robot
We introduce Drosophibot, a hexapod robot whose legs are designed based on those of the common fruit fly, Drosophila melanogaster, built as a test platform for developing neural controllers. The robot models anatomical features absent from other, similar bio-robots, such as a retractable abdominal segment, insect-like dynamic scaling, and compliant foot segments, in the hope that closer biomechanical similarity will lead to more similar neural control and, in turn, more similar behaviors. By increasing the accuracy of the biomechanical model, we aim to gain further insight into the insect's nervous system, informing both the current model and subsequent neural controllers for legged robots.
Journal Name: 8th International Conference, Living Machines 2019
Sponsoring Org: National Science Foundation
More Like this
  1. Human-centered environments provide affordances for, and require the use of, two-handed, or bimanual, manipulations. Robots designed to function in, and physically interact with, these environments have not been able to meet these requirements because standard bimanual control approaches do not accommodate the diverse, dynamic, and intricate coordination between two arms that bimanual tasks demand. In this work, we enabled robots to perform bimanual tasks more effectively by introducing a bimanual shared-control method. The control method moves the robot's arms to mimic the operator's arm movements but provides on-the-fly assistance to help the user complete tasks more easily. Our method used a bimanual action vocabulary, constructed by analyzing how people perform two-hand manipulations, as the core abstraction level for reasoning about how to assist in bimanual shared autonomy. The method inferred which individual action from the bimanual action vocabulary was occurring using a sequence-to-sequence recurrent neural network architecture and turned on a corresponding assistance mode: control signals introduced into the shared-control loop to make the performance of a particular bimanual action easier or more efficient. We demonstrate the effectiveness of our method through two user studies showing that novice users could control a robot to complete a range of complex manipulation tasks more successfully with our method than with alternative approaches. We discuss the implications of our findings for real-world robot control scenarios.
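The inference-then-assist loop described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the actual method uses a sequence-to-sequence RNN over arm-motion sequences, while here a nearest-prototype classifier stands in for it, and the action names, feature choices, and assistance modes are all hypothetical.

```python
# Minimal sketch of the infer-action -> select-assistance-mode loop.
# A nearest-prototype classifier over a short window of arm-motion
# features substitutes for the paper's seq2seq RNN; every name and
# number below is an illustrative assumption.
from math import dist

# Hypothetical bimanual action vocabulary mapped to assistance modes.
ASSIST_MODE = {
    "self-handover": "align_grippers",
    "fixed-offset": "lock_relative_pose",
    "one-hand-fixed": "hold_left_arm",
}

# Prototype feature vectors (inter-wrist distance, relative speed) per action.
PROTOTYPES = {
    "self-handover": (0.1, 0.5),
    "fixed-offset": (0.4, 0.0),
    "one-hand-fixed": (0.6, 0.3),
}

def infer_action(window):
    """Classify a window of (inter-wrist distance, relative speed)
    samples by comparing its mean feature vector to each prototype."""
    n = len(window)
    mean = tuple(sum(s[i] for s in window) / n for i in range(2))
    return min(PROTOTYPES, key=lambda a: dist(mean, PROTOTYPES[a]))

def shared_control_step(window):
    """One loop iteration: infer the ongoing bimanual action and
    return the assistance mode to inject into the control loop."""
    action = infer_action(window)
    return action, ASSIST_MODE[action]
```

In use, the selected mode would modulate how literally the robot mirrors the operator, e.g. locking the arms' relative pose while both hands carry one object.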
  2. Rehabilitation of human motor function is an issue of growing significance, and human-interactive robots offer promising potential to meet the need. For the lower extremity, however, robot-aided therapy has proven challenging. To inform effective approaches to robotic gait therapy, it is important to better understand unimpaired locomotor control: its sensitivity to different mechanical contexts and its response to perturbations. The present study evaluated the behavior of 14 healthy subjects who walked on a motorized treadmill and overground while wearing an exoskeletal ankle robot. Their response to a periodic series of ankle plantar flexion torque pulses, delivered at periods different from, but sufficiently close to, their preferred stride cadence, was assessed to determine whether gait entrainment occurred, how it differed across conditions, and if the adapted motor behavior persisted after perturbation. Certain aspects of locomotor control were exquisitely sensitive to walking context, while others were not. Gaits entrained more often and more rapidly during overground walking, yet, in all cases, entrained gaits synchronized the torque pulses with ankle push-off, where they provided assistance with propulsion. Furthermore, subjects entrained to perturbation periods that required an adaptation toward a slower cadence, even though the pulses acted to accelerate gait, indicating a neural adaptation of locomotor control. Lastly, during 15 post-perturbation strides, the entrained gait period was observed to persist more frequently during overground walking. This persistence was correlated with the number of strides walked at the entrained gait period (i.e., longer exposure), which also indicated a neural adaptation. NEW & NOTEWORTHY We show that the response of human locomotion to physical interaction differs between treadmill and overground walking.
Subjects entrained to a periodic series of ankle plantar flexion torque pulses that shifted their gait cadence, synchronizing ankle push-off with the pulses (so that they assisted propulsion) even when gait cadence slowed. Entrainment was faster overground and, on removal of torque pulses, the entrained gait period persisted more prominently overground, indicating a neural adaptation of locomotor control. 
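The entrainment criterion in such experiments can be illustrated with a simple check on stride timing: the gait is considered entrained once recent stride periods have locked onto the perturbation period. This is a hedged sketch, not the study's actual analysis; the window length, tolerance, and units are assumptions.

```python
def is_entrained(stride_periods, pulse_period, k=5, tol=0.02):
    """Illustrative entrainment test: the last k stride periods (seconds)
    all match the torque-pulse period within tol seconds. The real study
    also examined the phase of the pulses relative to ankle push-off."""
    if len(stride_periods) < k:
        return False
    return all(abs(T - pulse_period) <= tol for T in stride_periods[-k:])

# Simulated cadence drifting from a 1.00 s preferred stride toward a
# 1.05 s pulse period, i.e. an adaptation toward a slower cadence.
periods = [1.00, 1.01, 1.03, 1.04, 1.05, 1.05, 1.05, 1.05, 1.05]
```

Here `is_entrained(periods, 1.05)` would report entrainment, while an early slice of the same series would not.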
  3. Soft robots have recently drawn extensive attention thanks to their unique ability to adapt to complicated environments. Soft robots are designed in a variety of shapes, aiming at many different applications. However, accurate modeling and control of soft robots is still an open problem due to the complex robot structure and uncertain interaction with the environment. In fact, there is no unified framework for the modeling and control of generic soft robots. In this paper, we present a novel data-driven machine learning method for modeling a cable-driven soft robot. This machine learning algorithm, named deterministic learning (DL), uses soft robot motion data to train a radial basis function neural network (RBFNN). The soft robot motion dynamics are then guaranteed to be accurately identified, represented, and stored as an RBFNN model with converged constant neural network weights. To validate our method, we built a simulated soft robot almost identical to our real inchworm soft robot and tested the DL algorithm in simulation. Furthermore, a neural-network weight-combining technique is used to extract and combine useful dynamics information from multiple robot motion trajectories.
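The core of deterministic learning, fitting constant RBFNN weights to motion data so the dynamics end up stored in the converged weights, can be sketched with a least-mean-squares update. This is a minimal one-dimensional illustration with assumed centers, kernel width, and learning rate; it is not the paper's algorithm or its convergence guarantee.

```python
# Sketch of RBFNN function approximation in the spirit of deterministic
# learning: train weights w so that sum_i w_i * phi_i(x) fits observed
# data, then keep the converged constant weights as the stored model.
# Centers, width, learning rate, and target function are assumptions.
from math import exp

def rbf_features(x, centers, width=0.5):
    """Gaussian radial basis features evaluated at scalar state x."""
    return [exp(-((x - c) / width) ** 2) for c in centers]

def train_rbf(samples, centers, lr=0.2, epochs=500):
    """Cyclic least-mean-squares update over (x, y) samples; along a
    recurrent trajectory the weights converge to constants."""
    w = [0.0] * len(centers)
    for _ in range(epochs):
        for x, y in samples:
            phi = rbf_features(x, centers)
            err = sum(wi * pi for wi, pi in zip(w, phi)) - y
            w = [wi - lr * err * pi for wi, pi in zip(w, phi)]
    return w

def predict(x, w, centers):
    """Evaluate the stored RBFNN model at state x."""
    return sum(wi * pi for wi, pi in zip(w, rbf_features(x, centers)))
```

For example, fitting samples of y = x² on [0, 1] and evaluating `predict(0.5, w, centers)` recovers a value close to 0.25 from the converged weights alone.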
  4. Abstract

    Shared control of mobile robots integrates manual input with auxiliary autonomous controllers to improve overall system performance. However, prior work that seeks the optimal shared control ratio needs an accurate human model, which is usually challenging to obtain. In this study, the authors develop an extended Twin Delayed Deep Deterministic Policy Gradient (TD3X)-based shared control framework that learns to optimally assist a human operator in teleoperating mobile robots. The robot's states, the shared control ratio in the previous time step, and the human's control input are used as inputs to the reinforcement learning (RL) agent, which then outputs the optimal shared control ratio between human input and the autonomous controller without knowing the human model. Noisy softmax policies are developed to make the TD3X algorithm feasible under the constraint of a shared control ratio. Furthermore, to accelerate the training process and protect the robot, a navigation demonstration policy and a safety guard are developed. A neural network (NN) structure is developed to maintain the correlation of sensor readings among heterogeneous input data and improve the learning speed. In addition, an extended DAGGER (DAGGERX) human agent is developed for training the RL agent to reduce human workload. Robot simulations and experiments with humans in the loop were conducted. The results show that the DAGGERX human agent can simulate real human inputs in the worst-case scenarios with a mean square error of 0.0039. Compared to the original TD3 agent, the TD3X-based shared control system decreased the average collision number from 387.3 to 44.4 in a simplistic environment and from 394.2 to 171.2 in a more complex environment. The maximum average return increased from 1043 to 1187 with a faster convergence speed in the simplistic environment, while performance was equally good in the complex environment because of the use of an advanced human agent.
In the human subject tests, participants' average perceived workload was significantly lower in shared control than in exclusively manual control (26.90 vs. 40.07, p = 0.013).
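The central quantity in this framework, the shared control ratio, simply blends the two command sources before the result is sent to the robot. A minimal sketch of that blending step follows; the function signature and clipping are assumptions, and in the paper the ratio itself comes from the learned TD3X policy rather than being chosen by hand.

```python
def blend(alpha, u_human, u_auto):
    """Shared control blending: the RL agent outputs a ratio alpha in
    [0, 1]; the command sent to the robot is a convex combination of
    the human's input and the autonomous controller's input."""
    alpha = min(max(alpha, 0.0), 1.0)  # enforce the ratio constraint
    return tuple(alpha * h + (1 - alpha) * a for h, a in zip(u_human, u_auto))
```

With `alpha = 0.25`, for instance, the robot follows the autonomous controller three times as strongly as the human, which is the kind of trade-off the agent adjusts on the fly.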

  5. Multi-robot cooperative control has been extensively studied using model-based distributed control methods. However, such control methods rely on sensing and perception modules in a sequential pipeline design, and the separation of perception and control may cause processing latencies and compounding errors that degrade control performance. End-to-end learning overcomes this limitation by learning directly from onboard sensing data and outputting control commands to the robots. Challenges exist in end-to-end learning for multi-robot cooperative control, and previous results are not scalable. In this article, we propose a novel decentralized cooperative control method for multi-robot formations using deep neural networks, in which inter-robot communication is modeled by a graph neural network (GNN). Our method takes LiDAR sensor data as input, and the control policy is learned from demonstrations provided by an expert controller for decentralized formation control. Although it is trained with a fixed number of robots, the learned control policy is scalable. Evaluation in a robot simulator demonstrates the triangular formation behavior of multi-robot teams of different sizes under the learned control policy.
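The neighbor-aggregation step that makes a GNN policy both decentralized and size-agnostic can be sketched as follows. The scalar mixing weights and mean aggregation here are illustrative assumptions; a trained GNN would use learned weight matrices and a nonlinearity, but the key property is the same: every robot applies identical parameters to its own feature and its neighbors' features, so team size can change without retraining.

```python
def gnn_layer(features, adjacency, weight_self, weight_neigh):
    """One decentralized GNN round: each robot mixes its own feature
    vector with the mean of its neighbors' vectors. Because the same
    (weight_self, weight_neigh) parameters are shared by every robot,
    the learned policy transfers to teams of other sizes."""
    out = []
    for i, fi in enumerate(features):
        neigh = [features[j] for j in adjacency[i]]
        if neigh:
            mean = [sum(col) / len(neigh) for col in zip(*neigh)]
        else:
            mean = [0.0] * len(fi)  # isolated robot: no communication
        out.append([weight_self * a + weight_neigh * b
                    for a, b in zip(fi, mean)])
    return out
```

For a three-robot team where robot 0 hears robots 1 and 2, one round mixes robot 0's feature with the average of its two neighbors', mimicking the message passing that replaces a centralized controller.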
