Spike-Based Winner-Take-All Computation: Fundamental Limits and Order-Optimal Circuits
Winner-take-all (WTA) refers to the neural operation that selects a (typically small) group of neurons from a large neuron pool. It is conjectured to underlie many of the brain's fundamental computational abilities. However, not much is known about the robustness of a spike-based WTA network to the inherent randomness of the input spike trains. In this work, we consider a spike-based k-WTA model wherein n randomly generated input spike trains compete with each other based on their underlying firing rates, and k winners are supposed to be selected. We slot time evenly, with each time slot of length 1 ms, and model the n input spike trains as n independent Bernoulli processes. We analytically characterize the minimum waiting time needed so that a target minimax decision accuracy (success probability) can be reached. We first derive an information-theoretic lower bound on the waiting time. We show that to guarantee a (minimax) decision error ≤ δ (where δ ∈ (0,1)), the waiting time of any WTA circuit is at least ((1 − δ) log(k(n − k) + 1) − 1) T_R, where R ⊆ (0,1) is a finite set of rates and T_R is a difficulty parameter of a WTA task with respect to the set R for independent input spike trains. Additionally, T_R is independent of δ, n, and k. We then design a simple WTA circuit whose waiting time is O((log(1/δ) + log k(n − k)) T_R), provided that the local memory of each output neuron is sufficiently long. It turns out that for any fixed δ, this decision time is order-optimal (i.e., it matches the above lower bound up to a multiplicative constant factor) in terms of its scaling in n, k, and T_R.
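As a minimal illustration of the model (not the circuit construction from the paper), the sketch below simulates n independent Bernoulli spike trains with rates from a finite set R ⊆ (0,1) and applies the simplest possible decision rule: count spikes for T one-millisecond slots and declare the k largest counts the winners. The function names and the particular rates are hypothetical; the point is only that the empirical error falls as the waiting time T grows, consistent with the scaling in T_R.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_k_wta(rates, k, T):
    """Count spikes from n Bernoulli spike trains over T one-millisecond
    slots and declare the k inputs with the largest counts the winners.
    A spike-count sketch of a k-WTA decision rule, not the paper's circuit."""
    n = len(rates)
    # Each input i emits a spike in each slot independently with
    # probability rates[i] (a Bernoulli process).
    spikes = rng.random((T, n)) < np.asarray(rates)
    counts = spikes.sum(axis=0)
    return set(np.argsort(counts)[-k:])

# Example: n = 8 inputs with rates from a finite set R; the k = 2 true
# winners are the two inputs with the highest firing rates.
rates = [0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.2, 0.2]
true_winners = {6, 7}
for T in (50, 200, 1000):
    errs = sum(simulate_k_wta(rates, 2, T) != true_winners for _ in range(500))
    print(f"T = {T:4d} slots: empirical error = {errs / 500:.3f}")
```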
 Award ID(s):
 1810758
 NSF-PAR ID:
 10161820
 Date Published:
 Journal Name:
 Neural Computation
 Volume:
 31
 Issue:
 12
 ISSN:
 0899-7667
 Format(s):
 Medium: X
 Sponsoring Org:
 National Science Foundation
More Like this


We consider the task of measuring time with probabilistic threshold gates implemented by bio-inspired spiking neurons. In the model of spiking neural networks, the network evolves in discrete rounds, where in each round neurons fire in pulses in response to a sufficiently high membrane potential. This potential is induced by spikes from neighboring neurons that fired in the previous round, which can have either an excitatory or inhibitory effect. Discovering the underlying mechanisms by which the brain perceives the duration of time is one of the largest open enigmas in computational neuroscience. To gain a better algorithmic understanding of these processes, we introduce the neural timer problem. In this problem, one is given a time parameter t, an input neuron x, and an output neuron y. It is then required to design a minimum-sized neural network (measured by the number of auxiliary neurons) in which every spike from x in a given round i makes the output y fire for the subsequent t consecutive rounds. We first consider a deterministic implementation of a neural timer and show that Θ(log t) (deterministic) threshold gates are both sufficient and necessary. This raises the question of whether randomness can be leveraged to reduce the number of neurons. We answer this question in the affirmative by considering neural timers with spiking neurons, where the neuron y is required to fire for t consecutive rounds with probability at least 1 − δ and should stop firing after at most 2t rounds with probability 1 − δ, for some input parameter δ ∈ (0,1). Our key result is a construction of a neural timer with O(log log 1/δ) spiking neurons. Interestingly, this construction uses only one spiking neuron, while the remaining neurons can be deterministic threshold gates. We complement this construction with a matching lower bound of Ω(min{log log 1/δ, log t}) neurons. This provides the first separation between deterministic and randomized constructions in the setting of spiking neural networks. Finally, we demonstrate the usefulness of compressed counting networks for synchronizing neural networks. In the spirit of distributed synchronizers [Awerbuch-Peleg, FOCS'90], we provide a general transformation (or simulation) that can take any synchronized network solution and simulate it in an asynchronous setting (where edges have arbitrary response latencies) while incurring a small overhead w.r.t. the number of neurons and computation time.
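As a rough, hedged illustration of why O(log t) bits of state suffice for the deterministic timer, the toy simulation below keeps a ⌈log₂(t+1)⌉-bit countdown register that is reloaded whenever x fires. It is a state-machine sketch of the counting idea only, not the threshold-gate construction from the paper; all names are hypothetical.

```python
import math

def neural_timer(spike_schedule, t, rounds):
    """Toy state-machine sketch of the counting idea behind the Theta(log t)
    bound: a countdown register (ceil(log2(t+1)) bits of state) is reloaded
    to t whenever input x fires; output y fires while the register is nonzero.
    This is not the threshold-gate construction itself."""
    counter = 0
    y_fires = []
    for r in range(rounds):
        y_fires.append(counter > 0)   # y fires while the register is nonzero
        if counter > 0:
            counter -= 1              # the register decrements each round
        if r in spike_schedule:       # input neuron x fires in round r
            counter = t               # reload the countdown register
    return y_fires

# x fires in rounds 0 and 10; y should fire for the t = 4 subsequent rounds.
t = 4
print("state bits needed:", math.ceil(math.log2(t + 1)))   # O(log t)
print(neural_timer({0, 10}, t=t, rounds=16))
```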

Given a metric space ℳ = (X,δ), a weighted graph G over X is a metric t-spanner of ℳ if for every u,v ∈ X, δ(u,v) ≤ δ_G(u,v) ≤ t⋅δ(u,v), where δ_G is the shortest path metric in G. In this paper, we construct spanners for finite sets in metric spaces in the online setting. Here, we are given a sequence of points (s₁, …, s_n), where the points are presented one at a time (i.e., after i steps, we have seen S_i = {s₁, …, s_i}). The algorithm is allowed to add edges to the spanner when a new point arrives; however, it is not allowed to remove any edge from the spanner. The goal is to maintain a t-spanner G_i for S_i for all i, while minimizing the number of edges and their total weight. Under the L₂-norm in ℝ^d for arbitrary constant d ∈ ℕ, we present an online (1+ε)-spanner algorithm with competitive ratio O_d(ε^{-d} log n), improving the previous bound of O_d(ε^{-(d+1)} log n). Moreover, the spanner maintained by the algorithm has O_d(ε^{1-d} log ε^{-1})⋅n edges, almost matching the (offline) optimal bound of O_d(ε^{1-d})⋅n. In the plane, a tighter analysis of the same algorithm provides an almost quadratic improvement of the competitive ratio to O(ε^{-3/2} log ε^{-1} log n), by comparing the online spanner with an instance-optimal spanner directly, bypassing the comparison to an MST (i.e., lightness). As a counterpart, we design a sequence of points that yields an Ω_d(ε^{-d}) lower bound for the competitive ratio of online (1+ε)-spanner algorithms in ℝ^d under the L₁-norm. Then we turn our attention to online spanners in general metrics. Note that it is not possible to obtain a spanner with stretch less than 3 with a subquadratic number of edges, even in the offline setting, for general metrics. We analyze an online version of the celebrated greedy spanner algorithm, dubbed ordered greedy. With stretch factor t = (2k−1)(1+ε) for k ≥ 2 and ε ∈ (0,1), we show that it maintains a spanner with O(ε^{-1} log ε^{-1})⋅n^{1+1/k} edges and O(ε^{-1} n^{1/k} log² n) lightness for a sequence of n points in a metric space. We show that these bounds cannot be significantly improved, by introducing an instance that achieves an Ω(1/k⋅n^{1/k}) competitive ratio on both sparsity and lightness. Furthermore, we establish the tradeoff among stretch, number of edges, and lightness for points in ultrametrics, showing that one can maintain a (2+ε)-spanner for ultrametrics with O(ε^{-1} log ε^{-1})⋅n edges and O(ε^{-2}) lightness.
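The following sketch (plain Python, hypothetical names) illustrates the greedy online-spanner idea described above: when a point arrives, earlier points are examined in order of increasing distance, and an edge is added only if the current spanner distance already violates the stretch bound t. It is a simplified simulation of the general approach, not the analyzed algorithm with its competitive-ratio guarantees.

```python
import heapq

def online_greedy_spanner(points, t, dist):
    """Online greedy t-spanner sketch: when point s_i arrives, consider the
    earlier points in order of increasing distance and add the edge (s_i, u)
    only if the current spanner distance exceeds t * dist(s_i, u).
    Edges are never removed."""
    adj = {0: {}}                      # adjacency map: node -> {node: weight}
    edges = []
    for i in range(1, len(points)):
        adj[i] = {}
        for j in sorted(range(i), key=lambda j: dist(points[i], points[j])):
            d = dist(points[i], points[j])
            if spanner_dist(adj, i, j) > t * d:
                adj[i][j] = adj[j][i] = d
                edges.append((i, j, d))
    return edges

def spanner_dist(adj, src, dst):
    """Dijkstra over the current spanner; returns inf if dst is unreachable."""
    dists = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            return d
        if d > dists.get(u, float("inf")):
            continue
        for v, w in adj[u].items():
            nd = d + w
            if nd < dists.get(v, float("inf")):
                dists[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf")

# Example: points on a line under the L1 metric, stretch t = 1.5.
pts = [(0,), (1,), (2,), (4,), (8,)]
l1 = lambda p, q: sum(abs(a - b) for a, b in zip(p, q))
print(online_greedy_spanner(pts, 1.5, l1))
```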


We give two new quantum algorithms for solving semidefinite programs (SDPs) providing quantum speedups. We consider SDP instances with m constraint matrices, each of dimension n, rank at most r, and sparsity s. The first algorithm assumes an input model where one is given access to an oracle to the entries of the matrices at unit cost. We show that it has run time O~(s^2 (sqrt{m} epsilon^{-10} + sqrt{n} epsilon^{-12})), with epsilon the error of the solution. This gives an optimal dependence in terms of m, n and a quadratic improvement over previous quantum algorithms (when m ≈ n). The second algorithm assumes a fully quantum input model in which the input matrices are given as quantum states. We show that its run time is O~(sqrt{m} + poly(r))⋅poly(log m, log n, B, epsilon^{-1}), with B an upper bound on the trace norm of all input matrices. In particular, the complexity depends only polylogarithmically on n and polynomially on r. We apply the second SDP solver to learn a good description of a quantum state with respect to a set of measurements: given m measurements and a supply of copies of an unknown state rho with rank at most r, we show we can find in time sqrt{m}⋅poly(log m, log n, r, epsilon^{-1}) a description of the state as a quantum circuit preparing a density matrix which has the same expectation values as rho on the m measurements, up to error epsilon. The density matrix obtained is an approximation to the maximum entropy state consistent with the measurement data, as considered in Jaynes' principle from statistical mechanics. As in previous work, we obtain our algorithm by "quantizing" classical SDP solvers based on the matrix multiplicative weight update method. One of our main technical contributions is a quantum Gibbs state sampler for low-rank Hamiltonians, given quantum states encoding these Hamiltonians, with a polylogarithmic dependence on its dimension, which is based on ideas developed in quantum principal component analysis. We also develop a "fast" quantum OR lemma with a quadratic improvement in gate complexity over the construction of Harrow et al. [Harrow et al., 2017]. We believe both techniques might be of independent interest.
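As a hedged sketch of the classical routine that these solvers "quantize", the snippet below implements the matrix multiplicative weights update, in which the algorithm plays Gibbs (maximum-entropy) density matrices of the accumulated loss matrices. The matrices and function names are illustrative only; this is the small-dimension classical computation, not the quantum Gibbs sampler.

```python
import numpy as np
from scipy.linalg import expm

def gibbs_state(H, beta=1.0):
    """Gibbs (maximum-entropy) density matrix rho = exp(-beta*H)/Tr(exp(-beta*H)).
    The quantum algorithm prepares such states for low-rank Hamiltonians; here
    we simply compute them classically for small dimensions."""
    W = expm(-beta * H)
    return W / np.trace(W)

def mmw_density_matrices(loss_matrices, eps=0.1):
    """Classical matrix multiplicative weights update: after observing loss
    matrices M_1, ..., M_t, play rho_{t+1} = exp(-eps * sum M_tau) / Tr(...).
    This is the classical subroutine that the quantum SDP solvers speed up."""
    H = np.zeros_like(loss_matrices[0], dtype=float)
    states = []
    for M in loss_matrices:
        H += M
        states.append(gibbs_state(H, beta=eps))
    return states

# Toy example: two 2x2 symmetric loss matrices.
M1 = np.array([[1.0, 0.0], [0.0, 0.0]])
M2 = np.array([[0.0, 0.5], [0.5, 1.0]])
for rho in mmw_density_matrices([M1, M2], eps=0.5):
    print(np.round(rho, 3), "trace =", round(float(np.trace(rho)), 3))
```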