We investigate the controllability of an origami system composed of Miura-ori cells. A substantial volume of research on the folding architecture, kinematic behavior, and actuation techniques of origami structures has been conducted. However, understanding their transient dynamics and constructing control models remains a formidable task, primarily due to their innate flexibility and compliance. In light of this challenge, we discretize the origami system into a network of interconnected particle masses along with bar and hinge elements. This yields a state-space representation of the system's dynamics, enabling us to characterize the system's controllability attributes. Informed by this computational framework, we explore a controllability-Gramian-based method for finding the most efficient crease line for Miura-ori cell deployment using an actuator. We demonstrate that the deployment efficiency predicted by this theoretical method agrees with empirical results on the energy consumed to deploy the origami structure. This investigation paves the way toward designing and operating efficient actuation systems for complex origami tessellations.
Free, publicly-accessible full text available September 1, 2026.
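The Gramian-based ranking described above can be illustrated with a small sketch. The chain of masses, the stiffness and damping constants, and the actuator model below are all illustrative stand-ins, not the paper's bar-and-hinge model: a controllability Gramian is computed for each candidate actuator location, and locations are ranked by trace, where a larger trace indicates less average input energy to steer the structure.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Toy stand-in for a discretized bar-and-hinge model: a chain of n
# point masses with fixed ends, coupled by springs and uniform damping.
# All values are illustrative, not taken from the paper.
n = 4
k, c = 1.0, 0.4
K = 2 * k * np.eye(n) - k * np.eye(n, k=1) - k * np.eye(n, k=-1)
A = np.block([[np.zeros((n, n)), np.eye(n)],
              [-K, -c * np.eye(n)]])          # second-order -> first-order form

def gramian_trace(node):
    """Controllability Gramian trace for a force actuator at one node."""
    B = np.zeros((2 * n, 1))
    B[n + node, 0] = 1.0                      # force enters the velocity equations
    # Solve A W + W A^T + B B^T = 0 for the infinite-horizon Gramian W.
    W = solve_continuous_lyapunov(A, -B @ B.T)
    return np.trace(W)

# Rank candidate actuator locations by Gramian trace (largest first).
ranking = sorted(range(n), key=gramian_trace, reverse=True)
print(ranking)
```

By the chain's mirror symmetry, nodes 0 and 3 (and likewise 1 and 2) score identically; only the relative ordering is meaningful for actuator placement.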
-
In this article, we study linearly constrained policy optimization over the manifold of Schur stabilizing controllers, equipped with a Riemannian metric that emerges naturally in the context of optimal control problems. We provide an extrinsic analysis of a generic constrained smooth cost function that subsequently facilitates subsuming any such constrained problem into this framework. By studying the second-order geometry of this manifold, we provide a Newton-type algorithm that relies on neither the exponential mapping nor a retraction, while ensuring local convergence guarantees. The algorithm hinges instead upon the developed stability certificate and the linear structure of the constraints. We then apply our methodology to two well-known constrained optimal control problems. Finally, several numerical examples showcase the performance of the proposed algorithm.
-
In this paper, we consider direct policy optimization for the linear-quadratic Gaussian (LQG) setting. Over the past few years, it has been recognized that the landscape of stabilizing output-feedback controllers of relevance to LQG has an intricate geometry, particularly as it pertains to the existence of spurious stationary points. In order to address such challenges, in this paper, we first adopt a Riemannian metric for the space of stabilizing full-order minimal output-feedback controllers. We then proceed to prove that the orbit space of such controllers modulo coordinate transformation admits a Riemannian quotient manifold structure. This geometric structure is then used to develop a Riemannian gradient descent for the direct LQG policy optimization. We prove a local convergence guarantee with linear rate and show the proposed approach exhibits significantly faster and more robust numerical performance as compared with ordinary gradient descent for LQG. Subsequently, we provide reasons for this observed behavior; in particular, we argue that optimizing over the orbit space of controllers is the right theoretical and computational setup for direct LQG policy optimization.
-
Control of networked systems, comprised of interacting agents, is often achieved through modeling the underlying interactions. Constructing accurate models of such interactions, however, can become prohibitive in applications. Data-driven control methods avoid such complications by directly synthesizing a controller from the observed data. In this paper, we propose an algorithm referred to as Data-driven Structured Policy Iteration (D2SPI) for synthesizing an efficient feedback mechanism that respects the sparsity pattern induced by the underlying interaction network. In particular, our algorithm uses temporary "auxiliary" communication links in order to enable the required information exchange on a (smaller) sub-network during the "learning phase"—links that will be removed subsequently for the final distributed feedback synthesis. We then proceed to show that the learned policy results in a stabilizing structured policy for the entire network. Our analysis is then followed by showing the stability and convergence of the proposed distributed policies throughout the learning phase, exploiting a construct referred to as the "patterned monoid." The performance of D2SPI is then demonstrated using representative simulation scenarios.
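Policy iteration for LQR is the textbook core on which schemes of this kind build. The sketch below shows only that centralized, model-based core (Hewer-style iteration); the data-driven evaluation, structured/sparse gains, and auxiliary communication links that constitute D2SPI itself are not reproduced, and the system matrices and initial gain are illustrative assumptions.

```python
import numpy as np

# Centralized, model-based policy iteration for discrete-time LQR.
# System matrices and the initial stabilizing gain are illustrative.
A = np.array([[1.0, 0.2], [0.0, 1.0]])
B = np.array([[0.0], [0.2]])
Q, R = np.eye(2), np.eye(1)

K = np.array([[1.0, 1.0]])            # assumed initial stabilizing gain
for _ in range(30):
    # Policy evaluation: solve X = Q + K'RK + (A-BK)' X (A-BK) by
    # fixed-point iteration (valid because A - BK is Schur stable).
    Acl = A - B @ K
    X = Q + K.T @ R @ K
    for _ in range(500):
        X = Q + K.T @ R @ K + Acl.T @ X @ Acl
    # Policy improvement: greedy gain with respect to the evaluated cost.
    K = np.linalg.solve(R + B.T @ X @ B, B.T @ X @ A)

print(K)  # converges to the optimal LQR gain
```

Each improvement step preserves stability of the closed loop, so the iteration is well defined from any stabilizing initial gain and converges to the Riccati solution's gain.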
-
In this paper, we consider policy optimization over the Riemannian submanifolds of stabilizing controllers arising from constrained Linear Quadratic Regulators (LQR), including output feedback and structured synthesis. In this direction, we provide a Riemannian Newton-type algorithm that enjoys local convergence guarantees and exploits the inherent geometry of the problem. Instead of relying on the exponential mapping or a global retraction, the proposed algorithm revolves around the developed stability certificate and the constraint structure, utilizing the intrinsic geometry of the synthesis problem. We then showcase the utility of the proposed algorithm through numerical examples.
-
Gradient-based methods have been widely used for system design and optimization in diverse application domains. Recently, there has been a renewed interest in studying theoretical properties of these methods in the context of control and reinforcement learning. This article surveys some of the recent developments on policy optimization, a gradient-based iterative approach for feedback control synthesis that has been popularized by successes of reinforcement learning. We take an interdisciplinary perspective in our exposition that connects control theory, reinforcement learning, and large-scale optimization. We review a number of recently developed theoretical results on the optimization landscape, global convergence, and sample complexity of gradient-based methods for various continuous control problems, such as the linear quadratic regulator (LQR), H∞ control, risk-sensitive control, linear quadratic Gaussian (LQG) control, and output feedback synthesis. In conjunction with these optimization results, we also discuss how direct policy optimization handles stability and robustness concerns in learning-based control, two main desiderata in control engineering. We conclude the survey by pointing out several challenges and opportunities at the intersection of learning and control.
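The simplest setting surveyed above, gradient descent directly on the LQR gain, can be sketched in a few lines. The system matrices, initial gain, and accept-any-decrease line search below are illustrative assumptions, not a particular surveyed algorithm: the gradient of J(K) = tr(P_K) uses the standard expression 2((R + BᵀP_K B)K − BᵀP_K A)Σ_K.

```python
import numpy as np

# Model-based policy gradient descent for discrete-time LQR.
# Constants are illustrative.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q, R = np.eye(2), 0.1 * np.eye(1)

def lqr_eval(K):
    """Return (cost, gradient) of J(K) = tr(P_K); (inf, None) if unstable."""
    Acl = A - B @ K
    if np.max(np.abs(np.linalg.eigvals(Acl))) >= 1.0:
        return np.inf, None
    P, S = np.eye(2), np.eye(2)     # cost-to-go and state correlation
    for _ in range(1000):           # fixed-point Lyapunov iterations
        P = Q + K.T @ R @ K + Acl.T @ P @ Acl
        S = np.eye(2) + Acl @ S @ Acl.T
    grad = 2 * ((R + B.T @ P @ B) @ K - B.T @ P @ A) @ S
    return np.trace(P), grad

K = np.array([[1.0, 1.0]])          # assumed initial stabilizing gain
J, g = lqr_eval(K)
for _ in range(200):
    step = 1e-2
    while lqr_eval(K - step * g)[0] > J:   # backtrack until cost decreases
        step /= 2
    K = K - step * g
    J, g = lqr_eval(K)
print(J)
```

The infinite-cost return for destabilizing gains is what lets the line search stay inside the set of stabilizing controllers, mirroring how the surveyed convergence results restrict iterates to that set.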
-
This tutorial paper aims to explore the role of graph theory in studying networked and multi-agent systems. The session will cover basic concepts from graph theory along with surveying its role in problems related to cooperative control and distributed decision-making. Finally, we will also introduce some advanced topics from graph theory in the hope of encouraging further discussion and exploring new research opportunities in systems and control theory.
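A canonical example of graph theory at work in cooperative control is the consensus protocol, where agents on a connected graph converge to the average of their initial states under the Laplacian flow ẋ = −Lx. The path graph and the Euler discretization below are illustrative choices:

```python
import numpy as np

# Consensus on a path graph of 5 agents: x_dot = -L x drives all
# agents to the average of the initial states when the graph is connected.
n = 5
Adj = np.eye(n, k=1) + np.eye(n, k=-1)     # adjacency matrix of a path graph
L = np.diag(Adj.sum(axis=1)) - Adj         # graph Laplacian

x = np.arange(n, dtype=float)              # initial states 0, 1, 2, 3, 4
dt = 0.01                                  # step small enough for stability
for _ in range(5000):                      # forward-Euler integration
    x = x - dt * (L @ x)

print(x)  # every entry approaches the average, 2.0
```

Because the Laplacian's rows and columns sum to zero, the average is an invariant of the flow, and the convergence rate is governed by the second-smallest Laplacian eigenvalue (the algebraic connectivity).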