

Title: Synchronization of complex-valued dynamic networks with intermittently adaptive coupling: A direct error method
Abstract:
This paper studies intermittently coupled complex-valued networks (ICCVNs) and reveals the mechanism of intermittent coupling, where nodes are connected only over discontinuous time intervals. Instead of the common weighted-average technique, a direct error method is proposed and piecewise Lyapunov functions are constructed, yielding several intermittently adaptive designs that update the complex-valued coupling weights. In particular, an adaptive pinning protocol is designed for ICCVNs with heterogeneous coupling weights, and synchronization is ensured by piecewise adjusting the complex-valued weights of the edges within a spanning tree. For ICCVNs with homogeneous coupling weights, an intermittently adaptive algorithm based on a connected dominating set is developed that depends only on the information of the dominating set and its neighbors. Finally, the established theoretical results are verified by two numerical examples.
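The intermittent, error-driven adaptation described in the abstract can be sketched in a toy simulation. Everything below is an illustrative assumption rather than the paper's construction: linear node dynamics, a complete-graph Laplacian, the gain `gamma`, and a quadratic adaptive law standing in for the direct error method. The point it shows is the mechanism itself: the coupling acts only during the "on" window of each period, and the coupling weight is updated directly from the synchronization error during those windows.

```python
import numpy as np

# Toy ICCVN-style simulation (assumed model, not the paper's):
# N nodes with complex states, coupled only on [k*T, k*T + delta) of each
# period T; the coupling weight c adapts from the synchronization error.
rng = np.random.default_rng(0)
N, T, delta, dt = 5, 1.0, 0.6, 1e-3
L = np.full((N, N), -1.0) + N * np.eye(N)     # Laplacian of a complete graph
x = rng.standard_normal(N) + 1j * rng.standard_normal(N)
c = 0.0 + 0.0j                                # adaptive coupling weight
gamma = 2.0                                   # adaptation gain (assumed)

for step in range(int(20 / dt)):
    t = step * dt
    active = (t % T) < delta                  # intermittent "on" window
    e = x - x.mean()                          # synchronization error
    if active:
        c += gamma * np.vdot(e, e).real * dt  # weight driven by the error
        dx = -0.5 * x - c * (L @ x)           # node dynamics + coupling
    else:
        dx = -0.5 * x                         # uncoupled during "off" window
    x += dx * dt
```

With the coupling switched on only 60% of the time, the error norm still decays and the weight `c` settles to a finite value, which is the qualitative behavior the adaptive designs guarantee.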
Award ID(s):
1917275
NSF-PAR ID:
10158885
Author(s) / Creator(s):
; ;
Date Published:
Journal Name:
Automatica
Volume:
112
ISSN:
0005-1098
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Models recently used in the literature proving residual networks (ResNets) are better than linear predictors are actually different from standard ResNets that have been widely used in computer vision. In addition to the assumptions such as scalar-valued output or single residual block, the models fundamentally considered in the literature have no nonlinearities at the final residual representation that feeds into the final affine layer. To codify such a difference in nonlinearities and reveal a linear estimation property, we define ResNEsts, i.e., Residual Nonlinear Estimators, by simply dropping nonlinearities at the last residual representation from standard ResNets. We show that wide ResNEsts with bottleneck blocks can always guarantee a very desirable training property that standard ResNets aim to achieve, i.e., adding more blocks does not decrease performance given the same set of basis elements. To prove that, we first recognize ResNEsts are basis function models that are limited by a coupling problem in basis learning and linear prediction. Then, to decouple prediction weights from basis learning, we construct a special architecture termed augmented ResNEst (A-ResNEst) that always guarantees no worse performance with the addition of a block. As a result, such an A-ResNEst establishes empirical risk lower bounds for a ResNEst using corresponding bases. Our results demonstrate ResNEsts indeed have a problem of diminishing feature reuse; however, it can be avoided by sufficiently expanding or widening the input space, leading to the above-mentioned desirable property. Inspired by the densely connected networks (DenseNets) that have been shown to outperform ResNets, we also propose a corresponding new model called Densely connected Nonlinear Estimator (DenseNEst). We show that any DenseNEst can be represented as a wide ResNEst with bottleneck blocks. Unlike ResNEsts, DenseNEsts exhibit the desirable property without any special architectural re-design. 
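The defining change, dropping the nonlinearity on the last residual representation so the prediction is an affine function of the learned features, can be sketched in a few lines of NumPy. The single residual block, the widths, and the random weights below are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

# Sketch (assumed shapes, one residual block): a ResNEst keeps nonlinearities
# *inside* residual blocks but applies none to the last representation v, so
# the output W_out @ v + b is affine in v (the "linear estimation property").
rng = np.random.default_rng(1)
relu = lambda z: np.maximum(z, 0.0)

d, width = 8, 16
W1, W2 = rng.standard_normal((width, d)), rng.standard_normal((d, width))
W_out, b = rng.standard_normal((1, d)), 0.0

def resnest(x):
    v = x + W2 @ relu(W1 @ x)      # last representation: no final ReLU
    return W_out @ v + b           # affine read-out on learned basis features

def resnet(x):
    v = x + W2 @ relu(W1 @ x)
    return W_out @ relu(v) + b     # standard ResNet: nonlinearity before read-out

x = rng.standard_normal(d)
y = resnest(x)
```

Viewing `v` as a vector of basis functions of `x` is what lets the abstract treat ResNEsts as basis function models with decoupled basis learning and linear prediction.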
  3. Summary

    This paper studies adaptive model predictive control (AMPC) of systems with time‐varying and potentially state‐dependent uncertainties. We propose an estimation and prediction architecture within the min‐max MPC framework. An adaptive estimator is presented to estimate the set‐valued measures of the uncertainty using a piecewise constant adaptive law, which can be arbitrarily accurate if the sampling period in adaptation is small enough. Based on such measures, a prediction scheme is provided that predicts the time‐varying feasible set of the uncertainty over the prediction horizon. We show that if the uncertainty and its first derivatives are locally Lipschitz, the stability of the system with AMPC can always be guaranteed under the standard assumptions for traditional min‐max MPC approaches, while the AMPC algorithm enhances the control performance by efficiently reducing the size of the feasible set of the uncertainty in the min‐max MPC setting. Copyright © 2017 John Wiley & Sons, Ltd.

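The piecewise constant adaptive law can be illustrated on a hypothetical scalar system x' = u + w(t) (assumed here for illustration, not the paper's general setting): the estimate `w_hat` is held constant over each sampling period `Ts` and reset from the accumulated prediction error, so it effectively recovers the average of the uncertainty over the last period and tracks more tightly as `Ts` shrinks.

```python
import numpy as np

# Toy piecewise-constant adaptive estimation (assumed scalar plant):
# true dynamics  x' = u + w(t) with w unknown; the predictor integrates
# x_pred' = u + w_hat, and at each sampling instant the accumulated
# prediction error is attributed to w and w_hat is reset.
Ts, dt, horizon = 0.05, 1e-3, 5.0
w = lambda t: 0.5 * np.sin(t)          # true uncertainty (unknown to estimator)
x, x_pred, w_hat = 0.0, 0.0, 0.0
errors = []
steps_per_sample = round(Ts / dt)

for k in range(round(horizon / dt)):
    t = k * dt
    if k > 0 and k % steps_per_sample == 0:
        w_hat += (x - x_pred) / Ts     # piecewise-constant update
        x_pred = x                     # restart the predictor from the state
        errors.append(abs(w_hat - w(t)))
    u = -x                             # simple stabilizing input
    x += (u + w(t)) * dt
    x_pred += (u + w_hat) * dt
```

Since the update makes `w_hat` the mean of `w` over the previous period, the estimation error is of order `Ts * max|w'|`, matching the abstract's claim that the estimate can be made arbitrarily accurate by shrinking the sampling period.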
  4. Complex network theory has focused on properties of networks with real-valued edge weights. However, in signal transfer networks, such as those representing the transfer of light across an interferometer, complex-valued edge weights are needed to represent the manipulation of the signal in both magnitude and phase. These complex-valued edge weights introduce interference into the signal transfer, but it is unknown how such interference affects network properties such as small-worldness. To address this gap, we have introduced a small-world interferometer network model with complex-valued edge weights and generalized existing network measures to define the interferometric clustering coefficient, the apparent path length, and the interferometric small-world coefficient. Using high-performance computing resources, we generated a large set of small-world interferometers over a wide range of parameters in system size, nearest-neighbor count, and edge-weight phase and computed their interferometric network measures. We found that the interferometric small-world coefficient depends significantly on the phase of the complex-valued edge weights: for small edge-weight phases, constructive interference led to a higher interferometric small-world coefficient, while larger edge-weight phases induced destructive interference that led to a lower interferometric small-world coefficient. Thus, for the small-world interferometer model, interferometric measures are necessary to capture the effect of interference on signal transfer. This model is an example of the type of problem that necessitates interferometric measures, and applies to any wave-based network including quantum networks. 
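One way the described interference can enter a clustering measure is sketched below; this is a plausible generalization, and the paper's exact definitions may differ. Triangle contributions at a node are summed as complex amplitudes before a magnitude is taken, so edge-weight phases can cancel (destructive) or reinforce (constructive).

```python
import numpy as np

def interferometric_clustering(W):
    """Assumed complex-weight clustering coefficient: coherent triangle sum.

    W is a symmetric complex adjacency matrix (W[i, j] = 0 means no edge).
    The diagonal of W @ W @ W sums closed 3-walks at each node as complex
    amplitudes, so triangle phases interfere before |.| is applied.
    """
    N = W.shape[0]
    A = W != 0                               # binary support of the graph
    tri = W @ W @ W
    coeffs = np.zeros(N)
    for i in range(N):
        k = int(A[i].sum())                  # degree of node i
        if k >= 2:
            coeffs[i] = abs(tri[i, i]) / (k * (k - 1))
    return coeffs

# Node 0 lies in two triangles (0-1-2 and 0-1-3); a pi phase (weight -1)
# on edge 1-3 makes the two triangle amplitudes cancel at node 0.
W = np.zeros((4, 4), complex)
for i, j, wgt in [(0, 1, 1), (0, 2, 1), (0, 3, 1), (1, 2, 1), (1, 3, -1)]:
    W[i, j] = W[j, i] = wgt
print(interferometric_clustering(W))  # node 0's coefficient vanishes
```

Flipping the -1 to +1 makes the two triangle amplitudes add instead of cancel, raising node 0's coefficient from 0 to 2/3, which mirrors the constructive-versus-destructive dependence on edge-weight phase reported in the abstract.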