Title: A Comparison of Absolute and Relative Neural Encoding Schemes in Addition and Subtraction Functional Subnetworks
As neural networks have become increasingly prolific solutions to modern problems in science and engineering, there has been a corresponding rise in the popularity of the numerical machine learning techniques used to design them. While numerical methods are highly generalizable, they also tend to produce unintuitive networks with inscrutable behavior. One solution to the problem of network interpretability is to use analytical design techniques, but these methods are relatively underdeveloped compared to their numerical alternatives. To increase the utilization of analytical techniques and eventually facilitate the symbiotic integration of both design strategies, it is necessary to improve the efficacy of analytical methods on fundamental function approximation tasks that can be used to perform more complex operations. Toward this end, this manuscript extends the design constraints of the addition and subtraction subnetworks of the functional subnetwork approach (FSA) to arbitrarily many inputs, and then derives new constraints for an alternative neural encoding/decoding scheme. This encoding/decoding scheme involves storing information in the activation ratio of a subnetwork’s neurons, rather than directly in their membrane voltages. We show that our new “relative” encoding/decoding scheme has both qualitative and quantitative advantages compared to the existing “absolute” encoding/decoding scheme, including helping to mitigate saturation and improving approximation accuracy. Our relative encoding scheme will be extended to other functional subnetworks in future work to assess its advantages on more complex operations.
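The distinction between the two schemes is easiest to see in a decoding step. The following is a minimal sketch consistent with the abstract's description, not code from the paper: it assumes a non-spiking neuron whose membrane voltage U operates over a range [0, R] above rest, with R = 20 mV as an illustrative value; the function names and the quantity range x_max are hypothetical.

```python
# Hedged sketch (not the paper's code): contrast of "absolute" and
# "relative" decoding for a non-spiking neuron whose membrane voltage
# U sits in an operating range [0, R] above rest.

R = 20e-3  # assumed operating range of 20 mV, an illustrative value

def decode_absolute(U):
    """Absolute scheme: the quantity is read directly off the voltage,
    so a neuron near saturation (U ~ R) clips the encoded value."""
    return U  # value is one-to-one with membrane voltage, in volts

def decode_relative(U, x_max):
    """Relative scheme: the quantity is carried by the activation
    ratio U/R, then rescaled to the quantity's own range [0, x_max]."""
    return (U / R) * x_max

# Example: reading out a hypothetical addition-subnetwork output.
U_sum = 15e-3                       # assumed output voltage of 15 mV
print(decode_absolute(U_sum))       # 0.015 (volts, tied to R)
print(decode_relative(U_sum, 4.0))  # 3.0 (dimensionless quantity)
```

Because the relative readout depends only on the ratio U/R, rescaling the represented quantity does not push the neuron toward saturation, which is one way to picture the saturation-mitigation advantage the abstract claims.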
Award ID(s):
2015317
PAR ID:
10517543
Author(s) / Creator(s):
Publisher / Repository:
Springer, Cham.
Date Published:
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Objective. Neural decoding is an important tool in neural engineering and neural data analysis. Of the various machine learning algorithms adopted for neural decoding, the recently introduced deep learning is promising to excel. Therefore, we sought to apply deep learning to decode movement trajectories from the activity of motor cortical neurons. Approach. In this paper, we assessed the performance of deep learning methods in three different decoding schemes: concurrent, time-delay, and spatiotemporal. In the concurrent decoding scheme, where the input to the network is the neural activity coincidental to the movement, deep learning networks including the artificial neural network (ANN) and long short-term memory (LSTM) were applied to decode movement and compared with traditional machine learning algorithms. Both ANN and LSTM were further evaluated in the time-delay decoding scheme, in which temporal delays are allowed between neural signals and movements. Lastly, in the spatiotemporal decoding scheme, we trained a convolutional neural network (CNN) to extract movement information from images representing the spatial arrangement of neurons, their activity, and connectomes (i.e. the relative strengths of connectivity between neurons) and combined CNN and ANN to develop a hybrid spatiotemporal network. To reveal the input features of the CNN in the hybrid network that deep learning discovered for movement decoding, we performed a sensitivity analysis and identified specific regions in the spatial domain. Main results. Deep learning networks (ANN and LSTM) outperformed traditional machine learning algorithms in the concurrent decoding scheme. The results of ANN and LSTM in the time-delay decoding scheme showed that including neural data from time points preceding movement enabled decoders to perform more robustly when the temporal relationship between the neural activity and movement dynamically changes over time. In the spatiotemporal decoding scheme, the hybrid spatiotemporal network containing the concurrent ANN decoder outperformed single-network concurrent decoders. Significance. Taken together, our study demonstrates that deep learning could become a robust and effective method for the neural decoding of behavior. (A sketch of the lagged-window input construction used in the time-delay scheme appears after this list.)
  2. Central pattern generators (CPGs) are ubiquitous neural circuits that contribute to an eclectic collection of rhythmic behaviors across an equally diverse assortment of animal species. Due to their prominent role in many neuromechanical phenomena, numerous bioinspired robots have been designed to both investigate and exploit the operation of these neural oscillators. In order to serve as effective tools for these robotics applications, however, it is often necessary to be able to adjust the phase alignment of multiple CPGs during operation. To achieve this goal, we present the design of our phase difference control (PDC) network using a functional subnetwork approach (FSA) wherein subnetworks that perform basic mathematical operations are assembled such that they serve to control the relative phase lead/lag of target CPGs. Our PDC network operates by first estimating the phase difference between two CPGs, then comparing this phase difference to a reference signal that encodes the desired phase difference, and finally eliminating any error by emulating a proportional controller that adjusts the CPG oscillation frequencies. The architecture of our PDC network, as well as its various parameters, are all determined via analytical design rules that allow for direct interpretability of the network behavior. Simulation results for both the complete PDC network and a selection of its various functional subnetworks are provided to demonstrate the efficacy of our methodology. (A numerical sketch of this proportional phase-control law appears after this list.)
  3. Large-scale deep neural networks are both memory- and computation-intensive, thereby posing stringent requirements on the computing platforms. Hardware accelerations of deep neural networks have been extensively investigated. Specific forms of binary neural networks (BNNs) and stochastic computing-based neural networks (SCNNs) are particularly appealing to hardware implementations since they can be implemented almost entirely with binary operations. Despite the obvious advantages in hardware implementation, these approximate computing techniques are questioned by researchers in terms of accuracy and universal applicability. It is also important to understand the relative pros and cons of SCNNs and BNNs in theory and in actual hardware implementations. In order to address these concerns, in this paper we prove that “ideal” SCNNs and BNNs satisfy the universal approximation property with probability 1 (due to the stochastic behavior), which is a new angle from the original approximation property. The proof is conducted by first proving the property for SCNNs from the strong law of large numbers, and then using SCNNs as a “bridge” to prove it for BNNs. Besides the universal approximation property, we also derive an appropriate bound for the bit length M in order to provide insights for actual neural network implementations. Based on the universal approximation property, we further prove that SCNNs and BNNs exhibit the same energy complexity. In other words, they have the same asymptotic energy consumption with the growth of network size. We also provide a detailed analysis of the pros and cons of SCNNs and BNNs for hardware implementations and conclude that SCNNs are more suitable. (A toy demonstration of stochastic-computing multiplication appears after this list.)
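For item 1, the time-delay decoding scheme pairs each movement sample with a window of neural activity from preceding time bins. The sketch below is a hedged illustration of that input construction, not the authors' code; the array shapes, bin counts, and the make_lagged_inputs helper are all assumptions for illustration.

```python
# Hedged sketch: build lagged neural-activity windows so a decoder
# (e.g. an LSTM) can use time points preceding each movement sample.
import numpy as np

def make_lagged_inputs(spikes, movement, n_lags):
    """spikes: (T, n_neurons) binned firing rates;
    movement: (T, n_dims) kinematics;
    returns X: (T - n_lags, n_lags + 1, n_neurons), y aligned to X."""
    T = spikes.shape[0]
    X = np.stack([spikes[t - n_lags : t + 1] for t in range(n_lags, T)])
    y = movement[n_lags:]
    return X, y

rng = np.random.default_rng(0)
spikes = rng.poisson(2.0, size=(1000, 50)).astype(float)  # fake data
movement = rng.normal(size=(1000, 2))                     # fake data
X, y = make_lagged_inputs(spikes, movement, n_lags=9)
print(X.shape, y.shape)  # (991, 10, 50) (991, 2): sequence-ready windows
```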
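For item 2, the PDC network is described as emulating a proportional controller that adjusts CPG oscillation frequencies to eliminate phase error. The following is a minimal numerical sketch of that control law on abstract phase variables, not the paper's neural implementation; the gain, nominal frequency, and function names are illustrative.

```python
# Hedged sketch: proportional control of the phase difference between
# two oscillators by nudging the follower's frequency.
import math

def wrap(angle):
    """Wrap a phase difference into (-pi, pi]."""
    return math.atan2(math.sin(angle), math.cos(angle))

def pdc_step(phi_lead, phi_follow, phi_desired, f_nominal, kp):
    """Return the follower's commanded frequency for one step."""
    error = wrap((phi_lead - phi_follow) - phi_desired)
    return f_nominal + kp * error  # proportional action on frequency

# Simulate: the follower starts 90 degrees behind the desired alignment.
dt, f0 = 0.01, 1.0  # time step (s) and nominal frequency (Hz), assumed
phi_a, phi_b = 0.0, -math.pi / 2
for _ in range(2000):
    f_b = pdc_step(phi_a, phi_b, phi_desired=0.0, f_nominal=f0, kp=0.5)
    phi_a += 2 * math.pi * f0 * dt
    phi_b += 2 * math.pi * f_b * dt
print(round(wrap(phi_a - phi_b), 3))  # ~0.0 once the phases lock
```

The proportional term makes the phase error decay exponentially toward the desired offset, which mirrors the estimate-compare-correct loop the abstract describes.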
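For item 3, the universal-approximation argument for SCNNs rests on the strong law of large numbers: a value p in [0, 1] is encoded as a random bitstream with P(bit = 1) = p, and simple logic gates then compute arithmetic on the encoded probabilities. Below is a toy demonstration of the core idea (an AND gate multiplying two independent streams), showing the decoded value tightening around the true product as the bit length M grows; the encode/decode helpers are illustrative, not from the paper.

```python
# Hedged sketch: stochastic-computing multiplication via an AND gate.
import random

def encode(p, M, rng):
    """Encode p in [0, 1] as an M-bit stream with P(bit = 1) = p."""
    return [1 if rng.random() < p else 0 for _ in range(M)]

def decode(bits):
    """Decode a bitstream back to a probability: the fraction of 1s."""
    return sum(bits) / len(bits)

rng = random.Random(42)
a, b = 0.8, 0.5
for M in (16, 256, 4096):
    # AND of independent streams has P(1) = a * b, so it multiplies.
    stream = [x & y for x, y in zip(encode(a, M, rng), encode(b, M, rng))]
    print(M, round(decode(stream), 3))  # converges to a*b = 0.4 as M grows
```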