

Search for: All records

Award ID contains: 1904394

Note: Clicking a Digital Object Identifier (DOI) link takes you to an external site maintained by the publisher. Some full-text articles may not be available without charge during the embargo period.


  1. This paper introduces a computationally efficient approach for solving Model Predictive Control (MPC) reference tracking problems with state and control constraints. The approach consists of three key components: first, a log-domain interior-point quadratic programming method that forms the basis of the overall approach; second, a method of warm-starting this optimizer using the MPC solution from the previous timestep; and third, a computational governor that bounds the suboptimality of the warm start by altering the reference command provided to the MPC problem. The closed-loop system is thereby altered so that MPC solutions can be computed using fewer optimizer iterations per timestep. In a numerical experiment, the computational governor reduces the worst-case computation time of a standard MPC implementation by 90% while maintaining good closed-loop performance.
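The warm-start-plus-governor idea can be illustrated on a toy problem. The sketch below is a hypothetical setup, not the paper's method: the log-domain interior-point solver is replaced by a plain fixed-iteration gradient method on an unconstrained tracking QP for a double integrator, and the governor uses the warm start's gradient norm as a crude suboptimality proxy, shrinking the reference step until the warm start is "good enough."

```python
import numpy as np

dt, N, rho = 0.1, 10, 1.0                       # sample time, horizon, input weight
A = np.array([[1.0, dt], [0.0, 1.0]])           # double integrator
B = np.array([[0.5 * dt ** 2], [dt]])

# Batch prediction: stacked states X = Phi x0 + Gam U.
Phi = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(N)])
Gam = np.zeros((2 * N, N))
for k in range(N):
    for j in range(k + 1):
        Gam[2 * k:2 * k + 2, j] = (np.linalg.matrix_power(A, k - j) @ B).ravel()
C = np.kron(np.eye(N), np.array([[1.0, 0.0]]))  # pick out positions

H = 2.0 * (Gam.T @ C.T @ C @ Gam + rho * np.eye(N))
step = 1.0 / np.linalg.norm(H, 2)               # safe gradient step (1 / spectral norm)

def grad(U, x0, v):
    """Gradient of sum_k (pos_k - v)^2 + rho * ||U||^2 in U."""
    e = C @ (Phi @ x0 + Gam @ U) - v
    return 2.0 * Gam.T @ C.T @ e + 2.0 * rho * U

def solve(U, x0, v, iters=15):
    """Fixed-iteration solver standing in for the paper's QP method."""
    for _ in range(iters):
        U = U - step * grad(U, x0, v)
    return U

def govern(U_warm, x0, v_prev, r, gbound=0.5):
    """Move the internal reference only as far toward r as the warm start
    can tolerate (gradient norm as a suboptimality proxy)."""
    kappa = 1.0
    for _ in range(8):
        v = np.full(N, v_prev + kappa * (r - v_prev))
        if np.linalg.norm(grad(U_warm, x0, v)) <= gbound:
            return v[0]
        kappa *= 0.5
    return v_prev                               # no admissible step: hold reference

# Closed loop: drive position from 0 toward r = 1 with warm-started solves.
x, r, v, U = np.array([0.0, 0.0]), 1.0, 0.0, np.zeros(N)
for _ in range(400):
    v = govern(U, x, v, r)
    U = solve(U, x, np.full(N, v))
    x = A @ x + B.ravel() * U[0]
    U = np.roll(U, -1); U[-1] = 0.0             # shifted warm start for next step
```

Because the governor ramps the internal reference, the warm start stays close to optimal and the fixed iteration budget per timestep suffices; the constraint handling and suboptimality certificates of the actual paper are omitted here.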
  2. This paper illustrates an approach to integrating learning into spacecraft automated rendezvous, proximity maneuvering, and docking (ARPOD) operations. Spacecraft rendezvous plays a significant role in many missions, including orbital transfers, ISS re-supply, on-orbit refueling and servicing, and debris removal. On one hand, precise modeling and prediction of spacecraft dynamics can be challenging due to uncertainties and perturbation forces in the operating environment and due to the multi-layered structure of the nominal control system. On the other hand, spacecraft maneuvers must satisfy required constraints (thrust limits, line-of-sight cone constraints, relative velocity of approach, etc.) to ensure safety and achieve ARPOD objectives. This paper considers the application of a learning-based reference governor (LRG) to enforce constraints without relying on a dynamic model of the spacecraft during the mission. Like the conventional reference governor (RG), the LRG is an add-on supervisor to a closed-loop control system, serving as a pre-filter on the command generated by the ARPOD planner: when necessary, it modifies the command to a constraint-admissible reference so that the specified constraints are enforced. The LRG is distinguished, however, by its ability to rely on learning instead of an explicit model of the system, while guaranteeing constraint satisfaction both during and after learning. Simulations of constrained spacecraft relative motion maneuvers in low Earth orbit demonstrate the effectiveness of the proposed approach.
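The pre-filter idea can be caricatured in a few lines. The sketch below is a hypothetical toy, not the paper's LRG: a governor rate-limits the reference sent to a black-box closed-loop system and learns, from episode data alone, a per-step reference increment that keeps a velocity constraint satisfied. It starts conservatively and grows the increment only after an episode stays well inside the constraint; because the growth factor (1.2) times the safety margin (0.8) is below 1, the constraint also holds during learning, under a linearity assumption on the (unknown) plant.

```python
V_MAX = 0.30     # constraint: |velocity| <= V_MAX at all times
MARGIN = 0.8     # grow only if the episode peak stays below MARGIN * V_MAX

def run_episode(delta, target=1.0, steps=300):
    """Run the governed closed loop; return the peak |velocity| observed.
    The inner PD loop stands in for the unknown nominal control system."""
    p = v = ref = 0.0
    peak = 0.0
    for _ in range(steps):
        ref = min(ref + delta, target)      # governed (rate-limited) reference
        a = 4.0 * (ref - p) - 3.0 * v       # black box to the governor
        v += 0.05 * a
        p += 0.05 * v
        peak = max(peak, abs(v))
    return peak

def learn_increment(delta=1e-3, grow=1.2, shrink=0.5, episodes=25):
    """Adapt the allowed reference increment from episode outcomes only."""
    best_safe = delta
    for _ in range(episodes):
        if run_episode(delta) <= MARGIN * V_MAX:
            best_safe = delta
            delta *= grow     # well inside the constraint: be bolder
        else:
            delta *= shrink   # too close to the limit: back off
    return best_safe

best = learn_increment()
```

The actual LRG learns a constraint-admissible set of (state, reference) pairs with formal guarantees; this cartoon only shows the shared structure, namely a supervisor that never needs the plant model and stays conservative until data justifies larger reference changes.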