A widely held assumption on network dynamics is that similar components are more likely to exhibit similar behavior than dissimilar ones and that generic differences among them are necessarily detrimental to synchronization. Here, we show that this assumption does not generally hold in oscillator networks when communication delays are present. We demonstrate, in particular, that random parameter heterogeneity among oscillators can consistently rescue the system from losing synchrony. This finding is supported by electrochemical-oscillator experiments performed on a multielectrode array network. Remarkably, at intermediate levels of heterogeneity, random mismatches are more effective in promoting synchronization than parameter assignments specifically designed to facilitate identical synchronization. Our results suggest that, rather than being eliminated or ignored, intrinsic disorder in technological and biological systems can be harnessed to help maintain the coherence required for function.
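As a rough intuition pump for the delay-plus-heterogeneity setting, the sketch below Euler-integrates an all-to-all Kuramoto model with a uniform coupling delay and a random frequency mismatch of adjustable spread. This is a generic toy model, not the authors' electrochemical system; the function names and all parameter values (`k`, `delay_steps`, etc.) are illustrative choices.

```python
import math
import random

def order_parameter(theta):
    """Kuramoto order parameter r in [0, 1]; r near 1 means synchrony."""
    n = len(theta)
    re = sum(math.cos(t) for t in theta) / n
    im = sum(math.sin(t) for t in theta) / n
    return math.hypot(re, im)

def simulate(n=10, k=1.5, delay_steps=20, spread=0.0, dt=0.01,
             steps=4000, seed=0):
    """Euler-integrate n all-to-all Kuramoto oscillators with a uniform
    coupling delay of delay_steps * dt and random frequency mismatch of
    half-width `spread`. Returns the final order parameter."""
    rng = random.Random(seed)
    omega = [1.0 + spread * rng.uniform(-1.0, 1.0) for _ in range(n)]
    theta = [rng.uniform(-0.1, 0.1) for _ in range(n)]
    hist = [theta[:] for _ in range(delay_steps + 1)]  # phase history buffer
    for _ in range(steps):
        delayed = hist[0]  # phases from delay_steps steps ago
        theta = [theta[i] + dt * (omega[i] + (k / n) * sum(
                     math.sin(delayed[j] - theta[i]) for j in range(n)))
                 for i in range(n)]
        hist.pop(0)
        hist.append(theta[:])
    return order_parameter(theta)
```

Sweeping `spread` at a fixed delay and comparing the resulting order parameters is the kind of numerical probe one would use to test whether random mismatch helps or hurts synchrony in a given regime.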
Spontaneous synchronization is ubiquitous in natural and man-made systems. It underlies emergent behaviors such as neuronal response modulation and is fundamental to the coordination of robot swarms and autonomous vehicle fleets. Due to its simplicity and physical interpretability, the pulse-coupled oscillator model has emerged as one of the standard models for synchronization. However, existing analytical results for this model assume ideal conditions, including homogeneous oscillator frequencies and negligible coupling delays, as well as strict requirements on the initial phase distribution and the network topology. Using reinforcement learning, we obtain an optimal pulse-interaction mechanism (encoded in a phase response function) that maximizes the probability of synchronization even in the presence of nonideal conditions. For small oscillator heterogeneities and propagation delays, we propose a heuristic formula for highly effective phase response functions that can be applied to general networks and unrestricted initial phase distributions. This allows us to bypass the need to relearn the phase response function for every new network.
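The pulse-coupled mechanism itself is easy to sketch. In the event-driven toy below, identical oscillators advance at unit rate in [0, 1); the oscillator that reaches 1 fires and resets, and every other phase jumps according to a phase response function. The PRC used here, delta(p) = eps * p, is a hand-picked illustrative choice (the paper's point is precisely to *learn* this function), and an oscillator pushed to 1 is absorbed into the firing group.

```python
def fire_and_pulse(phases, eps=0.2, firings=100):
    """Event-driven simulation of identical pulse-coupled oscillators.
    phases: initial phases in [0, 1). Returns the phases after the given
    number of firing events. The PRC delta(p) = eps * p is illustrative,
    not a learned or optimal choice."""
    phases = list(phases)
    for _ in range(firings):
        lead = max(phases)
        # advance everyone until the most advanced oscillator fires
        phases = [p + (1.0 - lead) for p in phases]
        out = []
        for p in phases:
            if p >= 1.0 - 1e-12:
                out.append(0.0)                     # fired: reset to 0
            else:
                p += eps * p                        # pulse received: PRC jump
                out.append(0.0 if p >= 1.0 else p)  # absorbed if pushed past 1
        phases = out
    return phases
```

With this phase-advancing PRC, two oscillators started half a cycle apart are eventually absorbed into a single synchronously firing group; a PRC with the opposite shape can instead stabilize anti-phase states, which is why the choice of PRC is the crucial design variable.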
- Award ID(s): 1738902
- NSF-PAR ID: 10433493
- Publisher / Repository: Oxford University Press
- Date Published:
- Journal Name: PNAS Nexus
- Volume: 2
- Issue: 4
- ISSN: 2752-6542
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Biological neural computation is inherently asynchronous due to large variations in neuronal spike timing and transmission delays. So far, most theoretical work on neural networks assumes the synchronous setting, where neurons fire simultaneously in discrete rounds. In this work we aim at understanding the barriers of asynchronous neural computation from an algorithmic perspective. We consider an extension of the widely studied model of synchronized spiking neurons [Maass, Neural Networks '97] to the asynchronous setting by taking into account edge and node delays. Edge delays: We define an asynchronous model for spiking neurons in which the latency values (i.e., transmission delays) of non-self-loop edges vary adversarially over time. This extends the recent work of [Hitron and Parter, ESA'19], in which the latency values are restricted to be fixed over time. Our first contribution is an impossibility result implying that the assumption that self-loop edges have no delays (as assumed by Hitron and Parter) is indeed necessary. Interestingly, in real biological networks self-loop edges (a.k.a. autapses) are indeed free of delays, and neuroscientists have noted that this property is crucial for network synchronization. To capture the computational challenges in this setting, we first consider the implementation of a single NOT gate. This simple function already captures the fundamental difficulties of the asynchronous setting. Our key technical results are space and time upper and lower bounds for the NOT function; our time bounds are tight. In the spirit of the distributed synchronizers [Awerbuch and Peleg, FOCS'90] and following [Hitron and Parter, ESA'19], we then provide general synchronizer machinery. Our construction is highly modular and is based on an efficient circuit implementation of threshold gates.
The complexity of our scheme is measured by the overhead in the number of neurons and the computation time, both of which are shown to be polynomial in the largest latency value and the largest incoming degree Δ of the original network. Node delays: We introduce the study of asynchronous communication due to variations in the response rates of the neurons in the network. In real brain networks, the round duration varies between different neurons. Our key result is a simulation methodology that allows one to transform the above-mentioned synchronized solution under edge delays into a synchronized solution under node delays while incurring a small overhead w.r.t. space and time.
-
Abstract The striking similarity between biological locomotion gaits and the evolution of phase patterns in coupled oscillatory networks can be traced to the role of central pattern generators located in the spinal cord. Bio-inspired robotics aims to harness this control approach to generate rhythmic patterns for synchronized limb movement. Here, we utilize the phenomena of synchronization and emergent spatiotemporal patterns arising from the interaction among coupled oscillators to generate a range of locomotion gait patterns. We experimentally demonstrate a central pattern generator network using capacitively coupled vanadium dioxide nano-oscillators. The coupled oscillators exhibit stable limit-cycle oscillations and tunable natural frequencies for real-time programmability of the phase pattern. The ultra-compact 1-transistor-1-resistor implementation of the oscillator and the bidirectional capacitive coupling allow a small footprint area and low operating power. Compared to biomimetic CMOS-based neuron and synapse models, our design simplifies on-chip implementation and real-time tunability by reducing the number of control parameters.
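The central-pattern-generator idea — locking oscillators at prescribed phase offsets to encode a gait — can be sketched with phase oscillators coupled through phase-lag terms. This is a standard CPG abstraction, not a model of the VO2 circuit; the target offsets and gains below are arbitrary illustrative values.

```python
import math

def cpg(target, theta0, k=2.0, omega=2 * math.pi, dt=0.01, steps=6000):
    """n phase oscillators, all-to-all coupled through
    sin(theta_j - theta_i - (target_j - target_i)). The stable state
    locks the network at the target phase offsets."""
    n = len(target)
    theta = list(theta0)
    for _ in range(steps):
        theta = [theta[i] + dt * (omega + (k / n) * sum(
                     math.sin(theta[j] - theta[i] - (target[j] - target[i]))
                     for j in range(n)))
                 for i in range(n)]
    return theta

# trot-like gait: diagonal legs in phase, the two diagonals half a cycle apart
trot = [0.0, math.pi, math.pi, 0.0]
phases = cpg(trot, [t + 0.3 * i for i, t in enumerate(trot)])
```

Changing the `target` vector reprograms the gait without touching the oscillators themselves, which mirrors how tuning natural frequencies and couplings reprograms the phase pattern in the hardware network.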
-
Networks of coupled oscillators are among the most studied objects in the theory of dynamical systems. Two important areas of current interest are the study of synchrony in highly disordered systems and the modeling of systems with adaptive network structures. Here, we present a single approach to both of these problems in the form of "KuraNet", a deep-learning-based system of coupled oscillators that can learn to synchronize across a distribution of disordered network conditions. The key feature of the model is the replacement of the traditionally static couplings with a coupling function which can learn optimal interactions within heterogeneous oscillator populations. We apply our approach to the eponymous Kuramoto model and demonstrate how KuraNet can learn data-dependent coupling structures that promote either global or cluster synchrony. For example, we show how KuraNet can be used to empirically explore the conditions of global synchrony in analytically impenetrable models with disordered natural frequencies, external field strengths, and interaction delays. In a sequence of cluster-synchrony experiments, we further show how KuraNet can function as a data classifier by synchronizing into coherent assemblies. In all cases, we show how KuraNet generalizes to both new data and new network scales, making it easy to work with small systems and form hypotheses about the thermodynamic limit. Our proposed learning-based approach is broadly applicable to arbitrary dynamical systems, with wide-ranging relevance to modeling in physics and systems biology.
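The objective such a learned coupling would optimize is simple to write down: simulate the disordered Kuramoto system and penalize the distance from full synchrony, 1 - r. The toy below exposes that objective with a single scalar coupling strength in place of KuraNet's neural coupling function, and replaces gradient-based training with a coarse parameter search; everything here (function names, parameter values) is an illustrative simplification, not the KuraNet architecture.

```python
import math
import random

def order_parameter(theta):
    """Kuramoto order parameter r in [0, 1]."""
    n = len(theta)
    re = sum(math.cos(t) for t in theta) / n
    im = sum(math.sin(t) for t in theta) / n
    return math.hypot(re, im)

def loss(k, n=10, dt=0.05, steps=2000, seed=3):
    """1 - r after simulating n all-to-all Kuramoto oscillators with
    disordered natural frequencies under uniform coupling strength k.
    A learned coupling function would replace the constant k here."""
    rng = random.Random(seed)
    omega = [rng.uniform(-0.5, 0.5) for _ in range(n)]
    theta = [rng.uniform(0.0, 2 * math.pi) for _ in range(n)]
    for _ in range(steps):
        theta = [theta[i] + dt * (omega[i] + (k / n) * sum(
                     math.sin(theta[j] - theta[i]) for j in range(n)))
                 for i in range(n)]
    return 1.0 - order_parameter(theta)

# coarse search over the coupling "parameter" in place of gradient descent
best = min([0.0, 0.5, 1.0, 2.0, 4.0], key=loss)
```

KuraNet's contribution is to make this objective differentiable through the dynamics and to condition the coupling on per-oscillator data, so the same trained function generalizes across disorder realizations and network sizes rather than being re-fit per instance.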