Effectively balancing traffic in datacenter networks is a crucial operational goal. Most existing load balancing approaches are handcrafted for the structure of the network and/or its workloads, so new load balancing strategies are required whenever the underlying network conditions change, e.g., due to hard or grey failures, network topology evolution, or workload shifts. While we can in principle derive the optimal load balancing strategy by solving an optimization problem for given traffic and topology conditions, these problems take too long to solve, leaving the derived solution stale by the time it can be deployed. In this paper, we describe Learned Load Balancing (LLB), a general approach to finding an optimal load balancing strategy for a given network topology and workload that is fast enough in practice to deploy the inferred strategies. LLB uses deep supervised learning techniques to learn how to handle different traffic patterns and topology changes, and adapts to failures in the underlying network. LLB leverages emerging trends in network telemetry, programmable switching, and “smart” NICs. Our experiments show that LLB performs well under failures and can be extended to more complex, multi-layered network topologies. We also prototype neural network inference on smart NICs to demonstrate the practicality of LLB.
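To make the supervised-learning idea concrete, the sketch below shows one plausible shape such a model could take: a small feed-forward network that maps telemetry features (link utilizations plus a topology encoding) to per-path traffic split ratios, trained against splits produced by an offline optimizer. The architecture, feature sizes, and PyTorch framing are illustrative assumptions, not details taken from LLB.

```python
# Hypothetical sketch: telemetry features -> per-path split ratios.
# Architecture, sizes, and training targets are assumptions, not LLB's design.
import torch
import torch.nn as nn

class SplitRatioNet(nn.Module):
    def __init__(self, num_features: int, num_paths: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_paths),
        )

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # Softmax yields a valid traffic split (non-negative, sums to 1).
        return torch.softmax(self.net(features), dim=-1)

# One supervised step against labels from a slow offline optimizer.
model = SplitRatioNet(num_features=64, num_paths=8)          # placeholder sizes
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

features = torch.randn(32, 64)                               # placeholder telemetry batch
optimal_splits = torch.softmax(torch.randn(32, 8), dim=-1)   # placeholder labels

loss = nn.functional.mse_loss(model(features), optimal_splits)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```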
Top-down modeling of distributed neural dynamics for motion control
A topic of interest in neuroscience is understanding the neural circuit and network mechanisms that enable a range of motor functions, including motion and navigation. While engineers have strong mathematical conceptualizations regarding how these functions can be achieved using control-theoretic frameworks, it is far from clear whether similar strategies are embodied within neural circuits. In this work, we adopt a ‘top-down’ strategy to postulate how certain nonlinear control strategies might be achieved through the actions of a network of biophysical neurons acting on multiple time-scales. Specifically, we study how neural circuits might interact to learn and execute an optimal strategy for spatial control. Our approach is built around an optimal nonlinear control problem in which a high-level objective function encapsulates the fundamental requirements of the task at hand. We solve this optimization using an iterative method based on Pontryagin's Maximum Principle. The proposed solution methodology can be translated into the dynamics of neural populations that act to produce the optimal solutions in a distributed fashion. Importantly, we are able to provide conditions under which these networks are guaranteed to arrive at an optimal solution. In total, this work provides an iterative optimization framework that confers a novel interpretation regarding how nonlinear control can be achieved in neural circuits.
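For context, the necessary conditions that an iterative method based on Pontryagin's Maximum Principle targets have, in their generic form, roughly the structure below; the specific dynamics, cost, and constraints of the spatial-control task treated in the paper are not reproduced here, so the symbols are the standard textbook ones rather than the paper's.

```latex
% Generic minimum-principle conditions for
%   min_u  \int_0^T \ell(x,u)\,dt + \phi(x(T))   subject to   \dot{x} = f(x,u).
\begin{align}
  H(x, u, \lambda) &= \ell(x, u) + \lambda^{\top} f(x, u)
    && \text{(Hamiltonian)} \\
  \dot{x} &= f(x, u), \quad x(0) = x_0
    && \text{(state, integrated forward)} \\
  \dot{\lambda} &= -\frac{\partial H}{\partial x}, \quad
    \lambda(T) = \frac{\partial \phi}{\partial x}\big(x(T)\big)
    && \text{(costate, integrated backward)} \\
  u^{*}(t) &= \operatorname*{arg\,min}_{u}\; H\big(x^{*}(t), u, \lambda^{*}(t)\big)
    && \text{(pointwise optimality)}
\end{align}
```

Iterative schemes of this kind typically alternate a forward pass of the state equation, a backward pass of the costate equation, and an update of the control, a structure that lends itself to a distributed, population-level implementation.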
- PAR ID:
- 10284698
- Date Published:
- Journal Name:
- 2021 American Control Conference (ACC)
- Page Range / eLocation ID:
- 2757 to 2762
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
In recent years, great advances in understanding the opportunities for nonlinear vibration energy harvesting systems have been achieved, giving attention to either the structural or electrical subsystems. Yet, a notable disconnect appears in the knowledge on optimal means to integrate nonlinear energy harvesting structures with effective nonlinear rectifying and power management circuits for practical applications. Motivated to fill this knowledge gap, this research employs impedance principles to investigate power optimization strategies for a nonlinear vibration energy harvester interfaced with a bridge rectifier and a buck-boost converter. The frequency and amplitude dependence of the internal impedance of the harvester structure challenges conventional impedance matching concepts. Instead, a system-level optimization strategy is established and validated through simulations and experiments. Through careful studies, the means to optimize the electrical power with only partial information about the electrical load are revealed and verified against the full analysis. These results suggest that future study and implementation of optimal nonlinear energy harvesting systems may find effective guidance through power flow concepts built on linear theories despite the presence of nonlinearities in structures and circuits.
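For reference, the linear-theory baseline that the frequency- and amplitude-dependent harvester impedance undermines is the standard conjugate-match condition; the relations below are textbook linear circuit theory (with V_s the source voltage amplitude and Z_s = R_s + jX_s the source impedance), not results from this paper.

```latex
\begin{align}
  Z_L^{\mathrm{opt}} &= \overline{Z_s} = R_s - jX_s, &
  P_{\max} &= \frac{|V_s|^2}{8 R_s}.
\end{align}
```

When the effective source impedance itself depends on excitation amplitude and frequency, as for the nonlinear harvester and rectifier here, this pointwise condition no longer characterizes the optimum, which is what motivates the system-level strategy.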
-
This work presents a methodology for analysis and control of nonlinear fluid systems using neural networks. The approach is demonstrated in four different study cases: the Lorenz system, a modified version of the Kuramoto-Sivashinsky equation, a streamwise-periodic two-dimensional channel flow, and a confined cylinder flow. Neural networks are trained as models to capture the complex system dynamics and estimate equilibrium points through a Newton method, enabled by back-propagation. These neural network surrogate models (NNSMs) are leveraged to train a second neural network, which is designed to act as a stabilizing closed-loop controller. The training process employs a recurrent approach, whereby the NNSM and the neural network controller are chained in closed loop along a finite time horizon. By cycling through phases of combined random open-loop actuation and closed-loop control, an iterative training process is introduced to overcome the lack of data near equilibrium points. This approach improves the accuracy of the models in the most critical region for achieving stabilization. Through the use of L1 regularization within loss functions, the NNSMs can also guide optimal sensor placement, reducing the number of sensors from an initial candidate set. The data sets produced during the iterative training process are also leveraged for conducting a linear stability analysis through a modified dynamic mode decomposition approach. The results demonstrate the effectiveness of computationally inexpensive neural networks in modeling, controlling, and enabling stability analysis of nonlinear systems, providing insights into the system behavior and offering potential for stabilization of complex fluid systems.
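As an illustration of the equilibrium-finding step described above, the sketch below applies Newton's method to a trained discrete-time surrogate f, using a Jacobian obtained by automatic differentiation; the surrogate, state dimension, and tolerances are placeholder assumptions rather than the paper's setup.

```python
import torch

def find_equilibrium(surrogate, x0: torch.Tensor,
                     tol: float = 1e-8, max_iters: int = 50) -> torch.Tensor:
    """Solve f(x) = x for a discrete-time neural surrogate f via Newton's method."""
    residual = lambda z: surrogate(z) - z      # equilibrium <=> residual is zero
    x = x0.clone()
    for _ in range(max_iters):
        r = residual(x)
        if r.norm() < tol:
            break
        # Jacobian of the residual via back-propagation (automatic differentiation).
        J = torch.autograd.functional.jacobian(residual, x)
        # Newton step: solve J dx = -r and update the state estimate.
        dx = torch.linalg.solve(J, -r)
        x = x + dx
    return x

# Toy stand-in for a trained NNSM, just to show the call pattern.
toy_model = torch.nn.Sequential(torch.nn.Linear(3, 16), torch.nn.Tanh(),
                                torch.nn.Linear(16, 3))
x_eq = find_equilibrium(toy_model, x0=torch.zeros(3))
```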
-
Nonlinear optimal control problems are challenging to solve efficiently due to non-convexity. This paper introduces a trajectory optimization approach that achieves real-time performance by combining machine learning to predict optimal trajectories with refinement by quadratic optimization. First, a library of optimal trajectories is calculated offline and used to train a neural network. Online, the neural network predicts a trajectory for a novel initial state and cost function, and this prediction is further optimized by a sparse quadratic programming solver. We apply this approach to a fly-to-target movement problem for an indoor quadrotor. Experiments demonstrate that the technique calculates near-optimal trajectories in a few milliseconds, and generates agile movement that can be tracked more accurately than with existing methods.
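A minimal sketch of the predict-then-refine pipeline described above, assuming the refinement step is posed as a sparse QP and solved with OSQP; the model, solver choice, and problem matrices are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np
import torch
import osqp
from scipy import sparse

def plan_trajectory(model: torch.nn.Module, initial_state: np.ndarray,
                    P: sparse.csc_matrix, q: np.ndarray,
                    A: sparse.csc_matrix, l: np.ndarray, u: np.ndarray) -> np.ndarray:
    # Stage 1: the network, trained offline on a library of optimal trajectories,
    # predicts a full trajectory for the novel initial state.
    with torch.no_grad():
        guess = model(torch.as_tensor(initial_state, dtype=torch.float32)).numpy()

    # Stage 2: a sparse QP (here OSQP) refines the prediction, warm-started
    # from the network's output so the online step stays fast.
    solver = osqp.OSQP()
    solver.setup(P=P, q=q, A=A, l=l, u=u, verbose=False)
    solver.warm_start(x=guess.astype(np.float64))
    return solver.solve().x
```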
-
Animal brains evolved to optimize behavior in dynamic environments, flexibly selecting actions that maximize future rewards in different contexts. A large body of experimental work indicates that such optimization changes the wiring of neural circuits, appropriately mapping environmental input onto behavioral outputs. A major unsolved scientific question is how optimal wiring adjustments, which must target the connections responsible for rewards, can be accomplished when the relation of sensory inputs, actions taken, and environmental context to rewards is ambiguous. The credit assignment problem can be categorized into context-independent structural credit assignment and context-dependent continual learning. In this perspective, we survey prior approaches to these two problems and advance the notion that the brain’s specialized neural architectures provide efficient solutions. Within this framework, the thalamus, with its cortical and basal ganglia interactions, serves as a systems-level solution to credit assignment. Specifically, we propose that thalamocortical interaction is the locus of meta-learning, where the thalamus provides cortical control functions that parametrize the cortical activity association space. By selecting among these control functions, the basal ganglia hierarchically guide thalamocortical plasticity across two timescales to enable meta-learning. The faster timescale establishes contextual associations to enable behavioral flexibility, while the slower one enables generalization to new contexts.