
Search for: All records

Creators/Authors contains: "Srivastava, Vaibhav"


  1. Using the context of human-supervised object collection tasks, we explore policies for a robot to seek assistance from a human supervisor while avoiding loss of human trust in the robot. We consider a human-robot interaction scenario in which a mobile manipulator chooses to collect objects either autonomously or through human assistance, while the human supervisor monitors the robot’s operation, assists when asked, or intervenes if they perceive that the robot may not accomplish its goal. We design an optimal assistance-seeking policy for the robot using a Partially Observable Markov Decision Process (POMDP) in which human trust is a hidden state and the objective is to maximize collaborative performance. We conduct two sets of human-robot interaction experiments: data from the first set are used to estimate the POMDP parameters, from which we compute the optimal assistance-seeking policy deployed in the second set. For most participants, the estimated POMDP reveals that humans are more likely to intervene when their trust is low and the robot is performing a high-complexity task, and that the robot asking for assistance in high-complexity tasks can increase human trust. Our experimental results show that the proposed trust-aware policy yields superior performance compared with an optimal trust-agnostic policy. (A simplified belief-tracking sketch follows this entry.)
    Free, publicly-accessible full text available May 31, 2024
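    The following is a minimal, self-contained sketch of the kind of trust-aware decision loop described in entry 1. It assumes a two-level hidden trust state, two robot actions (act autonomously or ask for assistance), and intervene/not-intervene observations; all transition, observation, and reward numbers are invented for illustration, and the myopic one-step-lookahead policy is only a stand-in for the full POMDP solution computed in the paper.

```python
import numpy as np

# Hidden state: human trust in {0: low, 1: high}.
# Actions: 0 = act autonomously, 1 = ask for assistance.
# Observations: 0 = human intervenes, 1 = human does not intervene.

# Hypothetical trust dynamics T[a, s, s'] under each action.
T = np.array([
    [[0.8, 0.2], [0.3, 0.7]],   # autonomous operation
    [[0.5, 0.5], [0.1, 0.9]],   # asking for help tends to raise or maintain trust
])
# Hypothetical observation model O[a, s', o]: intervention is more likely at low trust.
O = np.array([
    [[0.6, 0.4], [0.1, 0.9]],
    [[0.2, 0.8], [0.05, 0.95]],
])
# Hypothetical immediate reward R[a, s].
R = np.array([
    [1.0, 3.0],    # autonomous work pays off most when trust is high
    [2.0, 2.0],    # assistance gives a safe, moderate payoff
])

def belief_update(b, a, o):
    """Bayes filter over the hidden trust state."""
    b_pred = T[a].T @ b                 # predict trust after action a
    b_new = O[a][:, o] * b_pred         # correct with the intervention observation
    return b_new / b_new.sum()

def greedy_policy(b):
    """Myopic one-step-lookahead policy on the belief (stand-in for the POMDP solution)."""
    return int(np.argmax(R @ b))

b = np.array([0.5, 0.5])                # start uncertain about trust
for o in [0, 1, 1]:                     # a fabricated observation sequence
    a = greedy_policy(b)
    b = belief_update(b, a, o)
    print(f"action={'ask' if a else 'auto'}, belief(high trust)={b[1]:.2f}")
```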
  2. Free, publicly-accessible full text available May 31, 2024
  3. We propose the Deterministic Sequencing of Exploration and Exploitation (DSEE) algorithm, which interleaves exploration and exploitation epochs, for model-based RL problems in which the goal is to simultaneously learn the system model, i.e., a Markov decision process (MDP), and the associated optimal policy. During exploration, DSEE explores the environment and updates its estimates of the expected rewards and transition probabilities. During exploitation, the latest estimates are used to obtain a policy that is robust with high probability. We design the lengths of the exploration and exploitation epochs so that the cumulative regret grows as a sub-linear function of time. (An illustrative epoch schedule is sketched below.)
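    The epoch structure described in entry 3 can be illustrated on a toy tabular MDP. The sketch below is not the paper's algorithm or analysis: the environment, epoch lengths, and round-robin exploration rule are invented for illustration, and a plain certainty-equivalent policy (value iteration on the estimated model) stands in for the robust policy used during exploitation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy MDP (hypothetical): 3 states, 2 actions.
nS, nA, gamma = 3, 2, 0.9
P_true = rng.dirichlet(np.ones(nS), size=(nS, nA))   # true transition kernel
R_true = rng.uniform(0, 1, size=(nS, nA))            # true expected rewards

counts = np.ones((nS, nA, nS))          # transition counts (with a smoothing prior)
reward_sum = np.zeros((nS, nA))
visits = np.zeros((nS, nA))

def value_iteration(P, R, iters=200):
    """Greedy policy for the current model estimate."""
    V = np.zeros(nS)
    for _ in range(iters):
        Q = R + gamma * P @ V
        V = Q.max(axis=1)
    return Q.argmax(axis=1)

def step(s, a):
    s_next = rng.choice(nS, p=P_true[s, a])
    r = R_true[s, a] + 0.1 * rng.standard_normal()
    return s_next, r

s = 0
for epoch in range(1, 6):
    # Exploration epoch: round-robin over under-sampled actions to refine the model.
    for _ in range(10 * epoch):
        a = int(visits[s].argmin())
        s_next, r = step(s, a)
        counts[s, a, s_next] += 1
        reward_sum[s, a] += r
        visits[s, a] += 1
        s = s_next
    # Exploitation epoch: length grows geometrically, so exploration cost stays sub-linear.
    P_hat = counts / counts.sum(axis=2, keepdims=True)
    R_hat = reward_sum / np.maximum(visits, 1)
    policy = value_iteration(P_hat, R_hat)
    print(f"epoch {epoch}: exploiting policy {policy}")
    for _ in range(10 * 2 ** epoch):
        s, _ = step(s, int(policy[s]))
```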
  4. We consider a prototypical path-planning problem on a graph with uncertain cost of mobility on its edges. At a given node, the planning agent can access the true cost of the edges to its neighbors and uses a noisy simulator to estimate the cost-to-go from the neighboring nodes. The objective of the planning agent is to select a neighboring node such that, with high probability, the cost-to-go is minimized for the worst possible realization of the uncertain parameters in the simulator. Modeling the cost-to-go as a Gaussian process (GP) for every realization of the uncertain parameters, we apply a scenario approach in which we draw a fixed set of independent samples of the uncertain parameter. We present a scenario-based iterative algorithm that uses the upper confidence bound (UCB) over the fixed independent scenarios to select the neighbor to move to. We characterize the performance of the proposed algorithm in terms of a novel notion of regret defined with respect to an additional draw of the uncertain parameter, termed scenario regret under re-draw. In particular, we establish a high-probability upper bound on the regret under re-draw for any finite number of iterations of the algorithm and show that this bound tends to zero asymptotically with the number of iterations. We supplement our analysis with numerical results. (A rough sketch of the scenario-based selection loop follows below.)
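    A rough, hypothetical sketch of the scenario-based selection loop in entry 4. It replaces the GP posterior with running sample statistics and a generic UCB-style confidence width, and the simulator, scenario draws, and cost numbers are all made up; it is meant only to illustrate the worst-case-over-scenarios selection idea, not the authors' algorithm or its regret guarantees.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: 4 neighboring nodes, 5 fixed scenario draws of the
# uncertain simulator parameter, and a noisy cost-to-go simulator.
n_neighbors, n_scenarios, n_iters = 4, 5, 200
edge_cost = rng.uniform(1, 2, size=n_neighbors)        # known true edge costs
theta = rng.uniform(0.5, 1.5, size=n_scenarios)        # fixed scenario samples
base_cost_to_go = rng.uniform(3, 6, size=n_neighbors)  # unknown ground truth

def simulator(neighbor, scenario):
    """Noisy cost-to-go estimate from a neighbor under one scenario draw."""
    return theta[scenario] * base_cost_to_go[neighbor] + 0.3 * rng.standard_normal()

mean = np.zeros((n_neighbors, n_scenarios))
count = np.zeros((n_neighbors, n_scenarios))

for t in range(1, n_iters + 1):
    # Confidence width shrinks with the number of simulator queries per pair.
    width = np.sqrt(2 * np.log(t + 1) / np.maximum(count, 1))
    ucb = mean + width                          # upper confidence bound on cost-to-go
    robust_cost = edge_cost + ucb.max(axis=1)   # worst case over the fixed scenarios
    j = int(robust_cost.argmin())               # candidate neighbor
    k = int(ucb[j].argmax())                    # refine its worst-looking scenario
    y = simulator(j, k)
    count[j, k] += 1
    mean[j, k] += (y - mean[j, k]) / count[j, k]

print("selected neighbor:", int((edge_cost + mean.max(axis=1)).argmin()))
```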
  5. Designing effective rehabilitation strategies for the upper extremities, particularly the hands and fingers, calls for a computational model of human motor learning. The large number of degrees of freedom (DoFs) in these systems makes it difficult to balance the trade-off between learning full dexterity and accomplishing manipulation goals. The motor learning literature argues that humans use motor synergies to reduce the dimension of the control space. Using the low-dimensional space spanned by these synergies, we develop a computational model based on the internal model theory of motor control. We analyze the proposed model in terms of its convergence properties and fit it to data collected from human experiments. We compare the performance of the fitted model to the experimental data and show that it captures human motor-learning behavior well. (A simplified synergy-extraction and learning sketch follows below.)
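    A simplified sketch related to entry 5, assuming synergies are extracted as the leading principal components of hypothetical joint-angle data and that learning proceeds by an error-driven update of synergy activations; the data, gains, and update rule are illustrative stand-ins rather than the fitted internal-model computation from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical joint-angle data: 200 hand postures, 15 joint DoFs.
n_samples, n_dofs, n_synergies = 200, 15, 3
postures = rng.standard_normal((n_samples, n_dofs)) @ rng.standard_normal((n_dofs, n_dofs))

# Extract postural synergies as the leading principal components.
centered = postures - postures.mean(axis=0)
_, _, Vt = np.linalg.svd(centered, full_matrices=False)
W = Vt[:n_synergies].T                   # columns span the low-dimensional synergy space

# Error-driven learning of a command in synergy space (a stand-in for the
# internal-model update; the target and learning rate are made up).
target = rng.standard_normal(n_dofs)     # desired posture in joint space
u = np.zeros(n_synergies)                # synergy activations (the control variable)
eta = 0.2                                # learning rate
for trial in range(50):
    y = W @ u                            # produced posture from current activations
    error = target - y
    u += eta * (W.T @ error)             # gradient step on the squared posture error

print("residual posture error:", np.linalg.norm(target - W @ u))
```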