This content will become publicly available on February 25, 2026

Title: Toward autonomous event-based sensorimotor control with supervised gait learning and obstacle avoidance for robot navigation
Miniature robots are useful for disaster response and for accessing remote or unsafe areas. They need to navigate uneven terrain without supervision and under severe resource constraints such as limited compute, storage, and power budgets. Event-based sensorimotor control in edge robotics has the potential to enable fully autonomous, adaptive robot navigation systems that respond to environmental fluctuations by learning new types of motion and making real-time decisions to avoid obstacles. This work presents a novel bio-inspired framework with a hierarchical control system to address these challenges, utilizing a tunable multi-layer neural network with a hardware-friendly Central Pattern Generator (CPG) as the core coordinator governing the precise timing of periodic motion. Autonomous operation is managed by a Dynamic State Machine (DSM) at the top of the hierarchy, providing the adaptability needed to handle environmental challenges such as obstacles or uneven terrain. The multi-layer neural network uses a nonlinear neuron model that employs mixed feedback at multiple timescales to produce rhythmic patterns of bursting events that control the motors. A comprehensive study of the architecture's building blocks is presented along with a detailed analysis of the network equations. Finally, we demonstrate the proposed framework on the Petoi robot, which autonomously learns walk and crawl gaits using a supervised Spike-Timing-Dependent Plasticity (STDP) learning algorithm and transitions between the learned gaits, stored as new states, through the DSM for real-time obstacle avoidance. Measured system performance is summarized and compared with other works to highlight our unique contributions.
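The abstract does not reproduce the network equations, but the two core ideas can be illustrated with a minimal sketch: a two-unit phase-oscillator CPG whose units lock into anti-phase (a walking-like rhythm for a leg pair), and a pair-based STDP weight update. The oscillator model, all gains, and the function names are illustrative assumptions, not the paper's multi-timescale bursting network.

```python
import numpy as np

# Illustrative sketch only: a two-unit phase-oscillator CPG plus a
# pair-based STDP rule. The paper's actual network uses a nonlinear
# bursting-neuron model with mixed feedback, not shown here.

def cpg_step(phases, dt=0.01, omega=2 * np.pi, coupling=1.0, phase_lag=np.pi):
    """Advance two coupled phase oscillators by one Euler step."""
    p1, p2 = phases
    dp1 = omega + coupling * np.sin(p2 - p1 - phase_lag)
    dp2 = omega + coupling * np.sin(p1 - p2 + phase_lag)
    return np.array([p1 + dp1 * dt, p2 + dp2 * dt])

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP: potentiate if pre fires before post, else depress."""
    dt_spike = t_post - t_pre
    if dt_spike > 0:
        w += a_plus * np.exp(-dt_spike / tau)
    else:
        w -= a_minus * np.exp(dt_spike / tau)
    return float(np.clip(w, 0.0, 1.0))

# Run the CPG: the two units settle into anti-phase, a walking-like pattern.
phases = np.array([0.0, 0.1])
for _ in range(2000):
    phases = cpg_step(phases)
motor_cmds = np.sin(phases)                    # rhythmic drive for two joints
w = stdp_update(0.5, t_pre=10.0, t_post=15.0)  # pre leads post -> w grows
```

The anti-phase lock arises because the phase difference obeys a stable fixed point at pi under this coupling; in a real gait controller each oscillator would drive one leg's joint trajectory.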
Award ID(s):
2328815
PAR ID:
10636714
Author(s) / Creator(s):
; ; ;
Publisher / Repository:
Frontiers in Neuroscience
Date Published:
Journal Name:
Frontiers in Neuroscience
Volume:
19
ISSN:
1662-453X
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Agile-legged robots have proven to be highly effective in navigating and performing tasks in complex and challenging environments, including disaster zones and industrial settings. However, these applications commonly require the capability of carrying heavy loads while maintaining dynamic motion. Therefore, this article presents a novel methodology for incorporating adaptive control into a force-based control system. Recent advancements in the control of quadruped robots show that force control can effectively realize dynamic locomotion over rough terrain. By integrating adaptive control into the force-based controller, our proposed approach can maintain the advantages of the baseline framework while adapting to significant model uncertainties and unknown terrain impact models. Experimental validation was successfully conducted on the Unitree A1 robot. With our approach, the robot can carry heavy loads (up to 50% of its weight) while performing dynamic gaits such as fast trotting and bounding across uneven terrains. 
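The abstract does not give the controller itself; as a toy illustration of combining a baseline force controller with adaptation, here is a hedged sketch: a PD height controller on a 1-D mass, augmented with a gradient-based estimate of an unknown payload mass. The dynamics, gains, and adaptation law are illustrative assumptions, not the article's force-based MPC for the Unitree A1.

```python
import numpy as np

# Illustrative sketch only: adaptive gravity compensation on a 1-D
# point mass. mass_true stands in for an unknown payload; m_hat is the
# controller's online estimate.

def simulate(mass_true=12.0, mass_hat0=8.0, kp=100.0, kd=60.0,
             gamma=1.0, dt=0.001, steps=8000, g=9.81):
    """Track a constant height with a PD force law plus an adaptive term."""
    z, zd = 0.0, 0.0
    z_ref = 0.1
    m_hat = mass_hat0
    for _ in range(steps):
        e = z_ref - z
        # baseline PD force plus adaptive gravity-compensation term
        f = kp * e - kd * zd + m_hat * g
        zdd = (f - mass_true * g) / mass_true
        # gradient adaptation: tracking error drives the mass estimate
        m_hat += gamma * g * e * dt
        z += zd * dt
        zd += zdd * dt
    return z, m_hat

z_final, m_est = simulate()
```

At equilibrium the PD error can only vanish once the adaptive term cancels the true weight, so the estimate converges to the unknown mass; this is the basic mechanism behind carrying unmodeled loads without retuning the baseline controller.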
  2. Soft robots have recently drawn extensive attention thanks to their unique ability to adapt to complicated environments. They are designed in a variety of shapes, aimed at many different applications. However, accurate modeling and control of soft robots remains an open problem due to their complex structure and uncertain interaction with the environment; in fact, there is no unified framework for the modeling and control of generic soft robots. In this paper, we present a novel data-driven machine learning method for modeling a cable-driven soft robot. The algorithm, named deterministic learning (DL), uses soft robot motion data to train a radial basis function neural network (RBFNN). The soft robot motion dynamics are then guaranteed to be accurately identified, represented, and stored as an RBFNN model with converged constant neural network weights. To validate our method, we built a simulated soft robot nearly identical to our real inchworm soft robot and tested the DL algorithm in simulation. Furthermore, a neural network weight-combining technique is used to extract and combine useful dynamics information from multiple robot motion trajectories. 
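The deterministic-learning idea — fitting an RBF network to an unknown dynamics term along a recurrent trajectory, with centers fixed and only weights updated — can be sketched in one dimension. The toy target function, centers, widths, and learning rate are all illustrative assumptions, not the paper's soft-robot model.

```python
import numpy as np

# Illustrative sketch only: online weight updates fit an RBF network to
# samples of an unknown function gathered along a repeated trajectory.

centers = np.linspace(-np.pi, np.pi, 25)   # fixed RBF centers
width = 0.4                                # fixed Gaussian width

def phi(x):
    """Gaussian RBF feature vector for scalar input x."""
    return np.exp(-((x - centers) ** 2) / (2 * width ** 2))

def train(unknown_dyn, lr=0.5, passes=200):
    """Normalized gradient updates along a repeated (periodic) trajectory."""
    w = np.zeros_like(centers)
    traj = np.linspace(-np.pi, np.pi, 100)  # recurrent motion samples
    for _ in range(passes):
        for x in traj:
            feats = phi(x)
            err = w @ feats - unknown_dyn(x)
            w -= lr * err * feats / (feats @ feats + 1e-9)
    return w                                # w @ phi(x) now stores the dynamics

w = train(np.sin)                           # stand-in for unknown dynamics
max_err = max(abs(w @ phi(x) - np.sin(x)) for x in np.linspace(-3, 3, 50))
```

After convergence the constant weight vector is itself the stored model, which is what lets multiple trajectories' weights be combined later, as the abstract describes.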
  3. Abstract As artificial intelligence and industrial automation develop, human–robot collaboration (HRC) with advanced interaction capabilities has become an increasingly significant area of research. In this paper, we design and develop a real-time, multimodal HRC system using speech and gestures. A set of 16 dynamic gestures is designed for communication from a human to an industrial robot, and a data set of these dynamic gestures is constructed and will be shared with the community. A convolutional neural network recognizes the dynamic gestures in real time using the motion history image and deep learning methods. An improved open-source speech recognizer provides real-time speech recognition of the human worker. An integration strategy is proposed to combine the gesture and speech recognition results, and a software interface is designed for system visualization. A multi-threading architecture supports simultaneous operation of multiple tasks, including gesture and speech data collection and recognition, data integration, robot control, and software interface operation. These methods and algorithms are integrated into the HRC system, with a platform constructed to demonstrate system performance. The experimental results validate the feasibility and effectiveness of the proposed algorithms and the HRC system. 
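The motion history image (MHI) used as the CNN's input can be sketched directly: pixels that just moved hold a maximum value tau, and older motion fades away step by step. The frame sizes, threshold, and decay constant are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch only: one MHI update per pair of consecutive
# grayscale frames. Recent motion is bright; older motion fades.

def update_mhi(mhi, prev, frame, tau=15.0, thresh=30):
    """Set moving pixels to tau; decay non-moving pixels toward zero."""
    motion = np.abs(frame.astype(int) - prev.astype(int)) > thresh
    return np.where(motion, tau, np.maximum(mhi - 1.0, 0.0))

# A pixel that moved between frames 0 and 1 (and then stayed put) holds
# tau after that step and has decayed to tau - 1 one frame later.
h, w = 4, 4
mhi = np.zeros((h, w))
frames = [np.zeros((h, w), np.uint8) for _ in range(3)]
frames[1][1, 1] = 200   # pixel appears between frames 0 and 1
frames[2][1, 1] = 200   # then stays constant (no further motion)
prev = frames[0]
for f in frames[1:]:
    mhi = update_mhi(mhi, prev, f)
    prev = f
```

Stacking or normalizing such images gives a single 2-D input that encodes the temporal order of a dynamic gesture, which is what makes a plain CNN sufficient for recognition.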
  4. Abstract This paper develops a distributed layered control framework for the navigation, planning, and control of multi-agent quadrupedal robots subject to environments with uncertain obstacles and various disturbances. At the highest layer, a reference path for all agents is computed using artificial potential fields (APF) with a priori known obstacles. In the middle layer, we employ a distributed nonlinear model predictive control (NMPC) scheme with a one-step delay communication protocol (OSDCP), based on reduced-order linear inverted pendulum (LIP) models of the agents, to ensure gait feasibility and collision avoidance while addressing uncertainty in real time. Finally, low-level nonlinear whole-body controllers (WBCs) impose the full-order locomotion models of the agents to track the optimal reduced-order trajectories. The proposed controller is validated for effectiveness and robustness on up to four A1 quadrupedal robots in simulations and two robots in experiments. These validations demonstrate that the proposed approach effectively addresses the real-time planning and control problem. In particular, multiple A1 robots are shown to navigate various environments, maintaining collision-free distances while subject to unknown external disturbances such as pushes and rough terrain. 
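The top-layer APF computation amounts to gradient descent on an attractive goal potential plus repulsive obstacle potentials that act only within an influence radius. The gains, influence radius, and step size below are illustrative assumptions, not the paper's values.

```python
import numpy as np

# Illustrative sketch only: classic APF path generation in 2-D.
# force = -grad(U_att + U_rep), followed by a small descent step.

def apf_step(pos, goal, obstacles, k_att=1.0, k_rep=0.8, d0=1.0, step=0.05):
    """One gradient-descent step on the combined potential field."""
    force = k_att * (goal - pos)          # attractive pull toward the goal
    for obs in obstacles:
        diff = pos - obs
        d = np.linalg.norm(diff)
        if 1e-9 < d < d0:                 # repulsion only inside radius d0
            force += k_rep * (1.0 / d - 1.0 / d0) / d**3 * diff
    return pos + step * force

pos = np.array([0.0, 0.0])
goal = np.array([5.0, 0.2])
obstacles = [np.array([2.5, 0.0])]        # one a priori known obstacle
for _ in range(400):
    pos = apf_step(pos, goal, obstacles)  # path bends around the obstacle
```

In the layered framework this reference path is only a first pass; the NMPC layer then enforces gait feasibility and inter-agent collision avoidance along it.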
  5.
    Shared autonomy provides a framework in which a human and an automated system, such as a robot, jointly control the system's behavior, enabling an effective solution for various applications, including human-robot interaction and remote operation of a semi-autonomous system. However, a challenging problem in shared autonomy is safety, because the human input may be unknown and unpredictable, which affects the robot's safety constraints. If the human input is a force applied through physical contact with the robot, it also alters the behavior the robot must take to maintain safety. We address the safety issue of shared autonomy in real-time applications by proposing a two-layer control framework. In the first layer, we use the history of human input measurements to infer what the human wants the robot to do and define the robot's safety constraints according to that inference. In the second layer, we formulate a rapidly-exploring random tree of barrier pairs, each composed of a barrier function and a controller. Using the controllers in these barrier pairs, the robot can maintain safe operation under intervention from the human input. The proposed control framework allows the robot to assist the human while preventing safety violations. We demonstrate it on a simulation of a two-linkage manipulator robot. 
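A single barrier pair can be sketched in one dimension: a barrier function h(x) >= 0 defines the safe set, and the paired controller minimally overrides the human's input so that h never becomes negative. The 1-D integrator dynamics, gain alpha, and bound x_max are illustrative assumptions; the paper composes many such pairs in a rapidly-exploring random tree.

```python
# Illustrative sketch only: one barrier pair on the integrator xdot = u,
# with barrier h(x) = x_max - x. Safety requires dh/dt + alpha*h >= 0,
# i.e. u <= alpha * h, so the human input is clamped to that bound.

def safe_control(x, u_human, x_max=1.0, alpha=5.0):
    """Minimally override the human input to keep h(x) nonnegative."""
    h = x_max - x            # barrier: positive inside the safe set
    u_bound = alpha * h      # dh/dt = -u, so safety requires u <= alpha*h
    return min(u_human, u_bound)

# The human keeps pushing toward the boundary; the barrier controller
# intervenes only near x_max, and the state never leaves the safe set.
x, dt = 0.0, 0.01
for _ in range(1000):
    u = safe_control(x, u_human=2.0)
    x += u * dt
```

Far from the boundary the human's command passes through unchanged, which is the "assist without unnecessary intervention" behavior the framework is after.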