-
This paper introduces a strategy for satisfying basic control objectives for systems whose dynamics are almost entirely unknown. This setting is motivated by a scenario in which a system undergoes a critical failure that significantly changes its dynamics. In such a case, retaining the ability to satisfy basic control objectives such as reach-avoid is imperative. To deal with significant restrictions on our knowledge of the system dynamics, we develop a theory of myopic control. The primary goal of myopic control is, at any given time, to optimize the current direction of the system trajectory using only the limited information obtained about the system up to that time. Building on this notion, we propose a control algorithm that uses small perturbations in the control effort to learn the local system dynamics while simultaneously moving in the direction that appears optimal based on the knowledge obtained so far. We show that the algorithm produces a trajectory that is nearly optimal in the myopic sense, i.e., it moves in a direction that appears nearly the best at the given time, and we provide formal bounds on its suboptimality. We demonstrate the usefulness of the proposed algorithm on a high-fidelity simulation of a damaged Boeing 747 seeking to remain in level flight.
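A minimal sketch of the perturb-and-steer idea described in the abstract, not the paper's exact algorithm: the controller dithers its input to refit a locally linear model xdot ~= f + G u from recent samples, then chooses the control that best pushes the trajectory toward a goal state. The class name, the window-based least-squares fit, and the goal-seeking objective are illustrative assumptions.

```python
# Illustrative myopic controller: explore with small control perturbations,
# refit a local model at each step, and steer toward x_goal (assumed objective).
import numpy as np

class MyopicController:
    def __init__(self, n, m, x_goal, u_max=1.0, eps=0.05, window=20):
        self.n, self.m = n, m
        self.x_goal = x_goal
        self.u_max, self.eps, self.window = u_max, eps, window
        self.samples = []  # recent (u, xdot) pairs used to fit the local model

    def record(self, u, x_prev, x_next, dt):
        # Store the observed state derivative produced by control u.
        xdot = (x_next - x_prev) / dt
        self.samples.append((u, xdot))
        self.samples = self.samples[-self.window:]

    def _fit_local_model(self):
        # Least-squares fit of [f, G] in the assumed local model xdot ~= f + G u.
        U = np.array([np.concatenate(([1.0], u)) for u, _ in self.samples])
        Y = np.array([xd for _, xd in self.samples])
        theta, *_ = np.linalg.lstsq(U, Y, rcond=None)   # shape (m+1, n)
        f_hat, G_hat = theta[0], theta[1:].T            # f: (n,), G: (n, m)
        return f_hat, G_hat

    def control(self, x, rng):
        if len(self.samples) < self.m + 1:
            u_greedy = np.zeros(self.m)                 # not enough data yet
        else:
            _, G_hat = self._fit_local_model()
            # Myopic objective: align the controllable part of xdot with x_goal - x.
            grad = G_hat.T @ (self.x_goal - x)
            u_greedy = self.u_max * grad / (np.linalg.norm(grad) + 1e-12)
        # Small exploratory perturbation keeps the local model identifiable.
        return u_greedy + self.eps * rng.standard_normal(self.m)
```

A usage pattern under these assumptions: at each step compute u = ctrl.control(x, rng), advance the (unknown) plant one step to obtain x_next, then call ctrl.record(u, x, x_next, dt) before repeating.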
-
Standard methods for synthesizing control policies in Markov decision processes with unknown transition probabilities largely rely on a combination of exploration and exploitation. While these methods often offer theoretical guarantees on system performance, the number of time steps and samples needed to explore the environment before a well-performing control policy can be synthesized is impractically large. This paper partially alleviates that burden by incorporating a priori knowledge into learning, when such knowledge is available. Based on prior information about bounds on the differences between the transition probabilities at different states, we propose a learning approach in which the transition probabilities at a given state are learned not only from outcomes of repeatedly performing a certain action at that state, but also from outcomes of performing actions at states known to have similar transition probabilities. Since directly obtained information is more reliable for determining transition probabilities than second-hand information, i.e., information obtained from similar but potentially slightly different states, samples obtained indirectly are weighted according to the known bounds on the differences between transition probabilities. While the proposed strategy can naturally introduce errors into the learned transition probabilities, we show that, by a proper choice of weights, such errors can be reduced, and the number of steps needed to form a near-optimal control policy in the Bayesian sense can be significantly decreased.
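A small sketch of the weighted-sample idea from the abstract: transition counts for a state-action pair are combined with counts from similar pairs, with indirect counts down-weighted according to the known bound on how different the two distributions can be. The specific weighting function below is a placeholder assumption; the paper derives the weights that minimize the resulting error in the Bayesian sense.

```python
# Illustrative weighted estimation of transition probabilities from direct and
# "second-hand" samples (weight_fn is an assumed placeholder, not the paper's rule).
from collections import defaultdict
import numpy as np

def estimate_transitions(samples, similarity_bound, n_states, weight_fn=None):
    """
    samples: list of observed transitions (s, a, s_next), with s_next an index in [0, n_states).
    similarity_bound[((s, a), (s2, a2))]: known upper bound on the difference
        between P(.|s,a) and P(.|s2,a2); missing entries are treated as unbounded.
    Returns a dict mapping (s, a) -> estimated distribution over next states.
    """
    if weight_fn is None:
        # Assumed weighting: full weight for direct samples, decaying with the bound.
        weight_fn = lambda bound: 1.0 / (1.0 + 10.0 * bound)

    counts = defaultdict(lambda: np.zeros(n_states))
    for s, a, s_next in samples:
        counts[(s, a)][s_next] += 1.0

    estimates = {}
    for sa in counts:
        weighted = np.zeros(n_states)
        for sa2, c in counts.items():
            bound = 0.0 if sa2 == sa else similarity_bound.get((sa, sa2), np.inf)
            w = weight_fn(bound) if np.isfinite(bound) else 0.0
            weighted += w * c
        total = weighted.sum()
        estimates[sa] = weighted / total if total > 0 else np.full(n_states, 1.0 / n_states)
    return estimates
```

Setting every cross-pair weight to zero recovers the standard per-state empirical estimator, while larger weights trade a possible bias (bounded by the known difference) for a reduction in the number of samples needed at each individual state.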