

Search results for all records where Creators/Authors contains: "Howard, Thomas M"


  1. Recent advances in data-driven models for grounded language understanding have enabled robots to interpret increasingly complex instructions. Two fundamental limitations of these methods are that most require a full model of the environment to be known a priori, and that they reason over a world representation that is flat and unnecessarily detailed, which limits scalability. Recent semantic mapping methods address partial observability by exploiting language as a sensor to infer a distribution over topological, metric, and semantic properties of the environment. However, maintaining a distribution over highly detailed maps that can support grounding of diverse instructions is computationally expensive and hinders real-time human-robot collaboration. We propose a novel framework that learns to adapt perception according to the task in order to maintain compact distributions over semantic maps. Experiments with a mobile manipulator demonstrate more efficient instruction following in a priori unknown environments.
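The "language as a sensor" idea above can be reduced to a minimal sketch: maintain a weighted set of candidate semantic maps and reweight them when an instruction implies a property of the environment (here, that a mentioned object exists somewhere). The hypotheses, likelihoods, and update rule below are illustrative assumptions, not the paper's trained model.

```python
def reweight(hypotheses, mentioned_object, p_true=0.9, p_false=0.1):
    """Bayes-style update over map hypotheses: maps containing the
    mentioned object become more likely, the rest less so.
    Each hypothesis is (set_of_objects, weight); weights are renormalized."""
    updated = []
    for objects, w in hypotheses:
        like = p_true if mentioned_object in objects else p_false
        updated.append((objects, w * like))
    total = sum(w for _, w in updated)
    return [(m, w / total) for m, w in updated]

# Two candidate semantic maps: one contains a table, one does not.
hypotheses = [({"door", "table"}, 0.5), ({"door"}, 0.5)]
# The instruction "go to the table" implies a table exists somewhere.
hypotheses = reweight(hypotheses, "table")  # first hypothesis now dominates
```

Keeping the hypothesis set small (compact maps, few classes) is what makes this kind of update cheap enough for real-time interaction.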
  2. The complexity associated with the control of highly-articulated legged robots scales quickly as the number of joints increases. Traditional approaches to the control of these robots are often impractical for many real-time applications. This work thus presents a novel sampling-based planning approach for highly-articulated robots that utilizes a probabilistic graphical model (PGM) to infer in real-time how to optimally modify goal-driven, locomotive behaviors for use in closed-loop control. Locomotive behaviors are quantified in terms of the parameters associated with a network of neural oscillators, i.e., a central pattern generator (CPG). For the first time, we show that the PGM can be used to optimally modulate different behaviors in real-time (i.e., to select an optimal choice of parameter values across the CPG model) in response to changes both in the local environment and in the desired control signal. The PGM is trained offline using a library of optimal behaviors that are generated using a gradient-free optimization framework.
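As a hedged sketch of what a CPG's free parameters look like: the snippet below models each leg's hip angle as a phase-offset sinusoid, so frequency, amplitude, and inter-leg phase offsets are exactly the kind of parameters a PGM could modulate online. The function name and the default tripod phase pattern are illustrative assumptions, not the authors' oscillator network.

```python
import math

def cpg_angles(t, freq=1.0, amp=0.4, phase_offsets=(0.0, 0.5, 0.0, 0.5, 0.0, 0.5)):
    """Return one hip angle (radians) per leg at time t (seconds).

    phase_offsets are fractions of a cycle; the default alternating
    pattern yields a tripod-like gait for a hexapod."""
    return [amp * math.sin(2.0 * math.pi * (freq * t + p)) for p in phase_offsets]

# Example: at t = 0.25 s the two tripods move in antiphase.
angles = cpg_angles(0.25)
```

Modulating a behavior then amounts to changing a handful of scalars rather than replanning every joint trajectory, which is what makes real-time closed-loop control feasible.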
  3. Sampling-based motion planning algorithms provide a means to adapt the behaviors of autonomous robots to changing or a priori unknown environmental conditions. However, as the size of the space over which a sampling-based approach needs to search is increased (perhaps due to considering robots with many degrees of freedom), the computational limits necessary for real-time operation are quickly exceeded. To address this issue, this paper presents a novel sampling-based approach to locomotion planning for highly-articulated robots wherein the parameters associated with a class of locomotive behaviors (e.g., inter-leg coordination, stride length, etc.) are inferred in real-time using a sample-efficient algorithm. More specifically, this work presents a data-based approach wherein offline-learned optimal behaviors, represented using central pattern generators (CPGs), are used to train a class of probabilistic graphical models (PGMs). The trained PGMs are then used to inform a sampling distribution of inferred walking gaits for legged hexapod robots. Simulated as well as hardware results are presented to demonstrate the successful application of the online inference algorithm.
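The "informed sampling distribution" idea can be sketched in its simplest possible form: fit a Gaussian to each gait parameter across a library of offline-optimized behaviors, then draw online candidates from that fitted distribution instead of uniformly over the full parameter range. The function names and toy library values below are illustrative assumptions standing in for the paper's trained PGMs.

```python
import random
import statistics

def fit_sampler(library):
    """library: list of dicts mapping parameter name -> optimized value.
    Returns a function that samples new parameter sets near the library."""
    params = library[0].keys()
    stats = {p: (statistics.mean([b[p] for b in library]),
                 statistics.stdev([b[p] for b in library])) for p in params}
    def sample(rng=random):
        return {p: rng.gauss(mu, sigma) for p, (mu, sigma) in stats.items()}
    return sample

# Toy library of offline-optimal gaits (stride length in m, frequency in Hz).
library = [{"stride": 0.10, "freq": 1.2},
           {"stride": 0.12, "freq": 1.0},
           {"stride": 0.11, "freq": 1.1}]
sample = fit_sampler(library)
candidate = sample()  # concentrated near previously optimal gaits
```

Because samples concentrate where good gaits were found offline, far fewer online evaluations are needed than with uniform sampling, which is the sample-efficiency claim in the abstract.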
  5. The speed and accuracy with which robots are able to interpret natural language is fundamental to realizing effective human-robot interaction. A great deal of attention has been paid to developing models and approximate inference algorithms that improve the efficiency of language understanding. However, existing methods still attempt to reason over a representation of the environment that is flat and unnecessarily detailed, which limits scalability. An open problem is then to develop methods capable of producing the most compact environment model sufficient for accurate and efficient natural language understanding. We propose a model that leverages environment-related information encoded within instructions to identify the subset of observations and perceptual classifiers necessary to perceive a succinct, instruction-specific environment representation. The framework uses three probabilistic graphical models trained from a corpus of annotated instructions to infer salient scene semantics, perceptual classifiers, and grounded symbols. Experimental results on two robots operating in different environments demonstrate that by exploiting the content and the structure of the instructions, our method learns compact environment representations that significantly improve the efficiency of natural language symbol grounding. 
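A minimal sketch of instruction-conditioned perception, under stated assumptions: only run the perceptual classifiers whose object classes the instruction refers to (here via a toy synonym table), so the resulting environment model contains just the objects the instruction needs. The vocabulary, classifier names, and scene below are hypothetical, not the paper's trained models.

```python
# Toy lexicon mapping instruction words to perceptual classifier names.
SYNONYMS = {"box": "box", "crate": "box", "ball": "ball",
            "sphere": "ball", "cone": "cone"}

def select_classifiers(instruction, available):
    """Return the subset of classifier names the instruction refers to."""
    words = instruction.lower().replace(",", " ").split()
    needed = {SYNONYMS[w] for w in words if w in SYNONYMS}
    return needed & set(available)

def perceive(scene, active):
    """Keep only detections whose class has an active classifier."""
    return [d for d in scene if d["class"] in active]

scene = [{"class": "box", "pose": (1.0, 0.0)},
         {"class": "ball", "pose": (0.5, 1.0)},
         {"class": "cone", "pose": (2.0, 2.0)}]
active = select_classifiers("pick up the crate near the ball",
                            {"box", "ball", "cone"})
compact_model = perceive(scene, active)  # the cone is never modeled
```

Grounding then runs against two detections instead of the full scene, which is the source of the efficiency gains the abstract reports.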
  7. The utility of collaborative manipulators for shared tasks is highly dependent on the speed and accuracy of communication between the human and the robot. The run-time of recently developed probabilistic inference models for situated symbol grounding of natural language instructions depends on the complexity of the representation of the environment in which they reason. As we move towards more complex bi-directional interactions, tasks, and environments, we need intelligent perception models that can selectively infer precise pose, semantics, and affordances of the objects when inferring exhaustively detailed world models is inefficient and prohibits real-time interaction with these robots. In this paper we propose a model of language and perception for the problem of adapting the configuration of the robot perception pipeline for tasks where constructing exhaustively detailed models of the environment is inefficient and inconsequential for symbol grounding. We present experimental results from a synthetic corpus of natural language instructions for robot manipulation in example environments. The results demonstrate that by adapting perception we get significant gains in terms of run-time for perception and situated symbol grounding of the language instructions without a loss in the accuracy of the latter.
  8. Approaches to autonomous navigation for unmanned ground vehicles rely on motion planning algorithms that optimize maneuvers under kinematic and environmental constraints. Algorithms that combine heuristic search with local optimization are well suited to domains where solution optimality is favored over speed and memory resources are limited as they often improve the optimality of solutions without increasing the sampling density. To address the runtime performance limitations of such algorithms, this paper introduces Predictively Adapted State Lattices, an extension of recombinant motion planning search space construction that adapts the representation by selecting regions to optimize using a learned model trained to predict the expected improvement. The model aids in prioritizing computations that optimize regions where significant improvement is anticipated. We evaluate the performance of the proposed method through statistical and qualitative comparisons to alternative State Lattice approaches for a simulated mobile robot with nonholonomic constraints. Results demonstrate an advance in the ability of recombinant motion planning search spaces to improve relative optimality at reduced runtime in varyingly complex environments. 
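The prioritization idea in this last abstract can be sketched as follows: given candidate lattice regions and a predictor of expected path-cost improvement, spend a fixed optimization budget on the highest-predicted-gain regions first. The predictor below is a stub heuristic standing in for the learned model, and all names and values are illustrative assumptions.

```python
import heapq

def predicted_gain(region):
    # Stub standing in for the learned model: assume denser obstacle regions
    # with longer local paths benefit more from edge optimization.
    return region["obstacle_density"] * region["path_length"]

def prioritize(regions, budget):
    """Return up to `budget` region names ordered by predicted improvement."""
    heap = [(-predicted_gain(r), i, r) for i, r in enumerate(regions)]
    heapq.heapify(heap)
    chosen = []
    while heap and len(chosen) < budget:
        _, _, r = heapq.heappop(heap)
        chosen.append(r["name"])
    return chosen

regions = [{"name": "A", "obstacle_density": 0.8, "path_length": 3.0},
           {"name": "B", "obstacle_density": 0.1, "path_length": 5.0},
           {"name": "C", "obstacle_density": 0.5, "path_length": 4.0}]
optimize_order = prioritize(regions, budget=2)  # -> ["A", "C"]
```

Spending the budget only where improvement is predicted is what lets the search space gain optimality without a matching increase in runtime.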