Title: The Small-World Effect for Interferometer Networks
Complex network theory has focused on properties of networks with real-valued edge weights. However, in signal transfer networks, such as those representing the transfer of light across an interferometer, complex-valued edge weights are needed to represent the manipulation of the signal in both magnitude and phase. These complex-valued edge weights introduce interference into the signal transfer, but it is unknown how such interference affects network properties such as small-worldness. To address this gap, we have introduced a small-world interferometer network model with complex-valued edge weights and generalized existing network measures to define the interferometric clustering coefficient, the apparent path length, and the interferometric small-world coefficient. Using high-performance computing resources, we generated a large set of small-world interferometers over a wide range of parameters in system size, nearest-neighbor count, and edge-weight phase and computed their interferometric network measures. We found that the interferometric small-world coefficient depends significantly on the amount of phase on complex-valued edge weights: for small edge-weight phases, constructive interference led to a higher interferometric small-world coefficient, while larger edge-weight phases induced destructive interference, which led to a lower interferometric small-world coefficient. Thus, for the small-world interferometer model, interferometric measures are necessary to capture the effect of interference on signal transfer. This model is an example of the type of problem that necessitates interferometric measures and applies to any wave-based network, including quantum networks.
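As an illustrative Python sketch only (the formal definitions of the interferometric clustering coefficient, apparent path length, and interferometric small-world coefficient are not reproduced in this record, and all names and parameter values below are placeholders), the following builds a Watts-Strogatz-style ring lattice whose edges carry the complex weight exp(i*phi) and contrasts coherent, complex-valued signal transfer with magnitude-only transfer, which is where interference enters:

import numpy as np

def small_world_interferometer(n, k, phi, p_rewire, rng):
    """Ring lattice on n nodes with k neighbors per side, every edge weighted
    exp(1j*phi), and Watts-Strogatz-style random rewiring with probability p_rewire."""
    W = np.zeros((n, n), dtype=complex)
    for i in range(n):
        for offset in range(1, k + 1):
            j = (i + offset) % n
            if rng.random() < p_rewire:          # rewire this edge to a random node
                j = int(rng.integers(n))
                while j == i or W[i, j] != 0:
                    j = int(rng.integers(n))
            W[i, j] = W[j, i] = np.exp(1j * phi)
    return W

rng = np.random.default_rng(0)
W = small_world_interferometer(n=64, k=3, phi=np.pi / 3, p_rewire=0.1, rng=rng)

# Accumulate walks of length 1 to 3 between two nodes.  Walks of different length
# pick up different phases (phi, 2*phi, 3*phi), so their amplitudes can cancel.
coherent = abs(sum(np.linalg.matrix_power(W, m)[0, 4] for m in (1, 2, 3)))
incoherent = sum(np.linalg.matrix_power(np.abs(W), m)[0, 4] for m in (1, 2, 3))
print(f"coherent transfer {coherent:.3f} vs incoherent transfer {incoherent:.3f}")

With the edge-weight phase set to zero the two quantities coincide, which is why real-weighted networks need no interferometric generalization of path-based measures.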
Award ID(s):
1839232
NSF-PAR ID:
10488290
Author(s) / Creator(s):
Publisher / Repository:
arXiv.org
Date Published:
Journal Name:
arXiv.org
ISSN:
2331-8422
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Binary neural networks (BNNs) replace complex arithmetic operations with simple bit-wise operations. The binarized weights and activations in BNNs can drastically reduce memory requirements and energy consumption, making them attractive for edge ML applications with limited resources. However, the severe memory capacity and energy constraints of low-power edge devices call for further reduction of BNN models beyond binarization. Weight pruning is a proven solution for reducing the size of many neural network (NN) models, but the binary nature of BNN weights makes it difficult to identify insignificant weights to remove. In this paper, we present a pruning method based on latent weights with layer-level pruning sensitivity analysis, which reduces the over-parameterization of BNNs, allowing for accuracy gains while drastically reducing the model size. Our method advocates a heuristic that distinguishes weights by their latent weights, i.e., the real-valued weights used to compute the pseudo-gradient during backpropagation (a minimal illustrative sketch of latent-weight pruning appears after this list). It is tested using three different convolutional NNs on the MNIST, CIFAR-10, and Imagenette datasets, with results indicating a 33%-46% reduction in operation count with no accuracy loss, improving upon previous works in accuracy, model size, and total operation count.
  2. Neurons in real brains are complex computational units, capable of input-specific damping, inter-trial memory, and context-dependent signal processing. Artificial neurons, on the other hand, are usually implemented as simple weighted sums. Here we explore whether increasing the computational power of individual neurons can yield more powerful neural networks. Specifically, we introduce Deep Artificial Neurons (DANs): small neural networks with shared, learnable parameters embedded within a larger network. DANs act as filters between nodes in the network; namely, they receive vectorized inputs from multiple neurons in the previous layer, condense these signals into a single output, and then send this processed signal to the neurons in the subsequent layer (a minimal illustrative sketch appears after this list). We demonstrate that it is possible to meta-learn shared parameters for the various DANs in the network in order to facilitate continual and transfer learning during deployment. Specifically, we present experimental results on (1) incremental non-linear regression tasks and (2) unsupervised class-incremental image reconstruction that show that DANs allow a single network to update its synapses (i.e., regular weights) over time with minimal forgetting. Notably, our approach uses standard backpropagation, does not require experience replay, and does not need separate wake/sleep phases.
  3. Radio Frequency Interference (RFI) is an ever-present limiting factor among radio telescopes, even in the most remote observing locations. When looking to retain the maximum amount of sensitivity and reduce contamination for Epoch of Reionization studies, the identification and removal of RFI is especially important. In addition to improved RFI identification, we must also take into account the computational efficiency of the RFI-identification algorithm as radio interferometer arrays such as the Hydrogen Epoch of Reionization Array (HERA) grow larger in number of receivers. To address this, we present a Deep Fully Convolutional Neural Network (DFCN) that is comprehensive in its use of interferometric data, where both amplitude and phase information are used jointly for identifying RFI (a minimal sketch of this joint amplitude-and-phase input appears after this list). We train the network using simulated HERA visibilities containing mock RFI, yielding a known “ground truth” dataset for evaluating the accuracy of various RFI algorithms. Evaluation of the DFCN model is performed on observations from the 67-dish build-out, HERA-67, and achieves a data throughput of 1.6 × 10^5 HERA time-ordered, 1024-channel visibilities per hour per GPU. We determine that, relative to an amplitude-only network, including visibility phase adds important adjacent time-frequency context, which increases discrimination between RFI and non-RFI. The inclusion of phase when predicting achieves a Recall of 0.81, Precision of 0.58, and F2 score of 0.75 as applied to our HERA-67 observations.
  4. Traditionally, a high-performance microscope with a large numerical aperture is required to acquire high-resolution images. However, such images are typically very large, so they are not conveniently managed or transferred across a computer network or stored in a limited computer storage system. As a result, image compression is commonly used to reduce image size, at the cost of image resolution. Here, we demonstrate custom convolutional neural networks (CNNs) for both super-resolution enhancement of low-resolution images and characterization of both cells and nuclei from hematoxylin and eosin (H&E) stained breast cancer histopathological images, using a combination of generator and discriminator networks, a super-resolution generative adversarial network based on aggregated residual transformation (SRGAN-ResNeXt), to facilitate cancer diagnosis in low-resource settings. The results show a large enhancement in image quality, with the peak signal-to-noise ratio and structural similarity of our network results exceeding 30 dB and 0.93, respectively. This performance is superior to the results obtained from both bicubic interpolation and the well-known SRGAN deep-learning method. In addition, another custom CNN is used to perform image segmentation on the high-resolution breast cancer images generated by our model, with an average Intersection over Union of 0.869 and an average Dice similarity coefficient of 0.893 for the H&E image segmentation results. Finally, we propose jointly trained SRGAN-ResNeXt and Inception U-net models, which use the weights from the individually trained SRGAN-ResNeXt and Inception U-net models as pre-trained weights for transfer learning (a minimal sketch of this joint fine-tuning setup appears after this list). The jointly trained model's results improve progressively and are promising. We anticipate these custom CNNs can help compensate for the inaccessibility of advanced microscopes or whole slide imaging (WSI) systems by recovering high-resolution images from low-performance microscopes located in remote, resource-constrained settings.
  5. Motivation

    Network alignment (NA) aims to find a node mapping that conserves similar regions between compared networks. NA is applicable to many fields, including computational biology, where NA can guide the transfer of biological knowledge from well- to poorly-studied species across aligned network regions. Existing NA methods can only align static networks. However, most complex real-world systems evolve over time and should thus be modeled as dynamic networks. We hypothesize that aligning dynamic network representations of evolving systems will produce superior alignments compared to aligning the systems’ static network representations, as is currently done.

    Results

    For this purpose, we introduce the first-ever dynamic NA method, DynaMAGNA++. This proof-of-concept dynamic NA method is an extension of a state-of-the-art static NA method, MAGNA++. Even though both MAGNA++ and DynaMAGNA++ optimize edge as well as node conservation across the aligned networks, MAGNA++ conserves static edges and similarity between static node neighborhoods, while DynaMAGNA++ conserves dynamic edges (events) and similarity between evolving node neighborhoods. For this purpose, we introduce the first-ever measure of dynamic edge conservation and rely on our recent measure of dynamic node conservation (a toy illustration of temporal edge conservation appears after this list). Importantly, the two dynamic conservation measures can be optimized with any state-of-the-art NA method and not just MAGNA++. We confirm our hypothesis that dynamic NA is superior to static NA on synthetic and real-world networks, in computational biology and social domains. DynaMAGNA++ is parallelized and has a user-friendly graphical interface.

    Availability and implementation

    http://nd.edu/~cone/DynaMAGNA++/.

    Supplementary information

    Supplementary data are available at Bioinformatics online.

     
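As a minimal illustration of the latent-weight pruning idea in item 1 (the paper's exact heuristic and layer-level sensitivity analysis are not given above; the function name and the pruning fraction are placeholders):

import numpy as np

def prune_binary_layer(latent_w, prune_frac):
    """Binarize one layer's weights, but zero out (prune) the connections whose
    real-valued latent weights have the smallest magnitude, on the heuristic
    that a small latent magnitude marks an insignificant binary weight."""
    binary_w = np.sign(latent_w)                      # +1 / -1 binarized weights
    threshold = np.quantile(np.abs(latent_w), prune_frac)
    mask = np.abs(latent_w) > threshold               # keep only the larger latent weights
    return binary_w * mask, mask

rng = np.random.default_rng(1)
latent = rng.normal(size=(128, 256))                  # latent weights of one layer
pruned, mask = prune_binary_layer(latent, prune_frac=0.4)  # fixed fraction is a placeholder
print(f"kept {mask.mean():.0%} of the weights")

A layer-level sensitivity analysis, as described in the item, would choose a different pruning fraction for each layer rather than the fixed 0.4 used here.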
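As a minimal sketch of the Deep Artificial Neuron idea in item 2 (the actual DAN sizes, activations, and meta-learning procedure are not specified above; all shapes and names below are illustrative):

import numpy as np

class DANLayer:
    """Every node in the layer applies the SAME small two-stage network (shared
    parameters) to the vector of signals arriving from the previous layer and
    emits a single scalar for the next layer."""
    def __init__(self, fan_in, hidden, n_nodes, rng):
        self.W1 = rng.normal(scale=1 / np.sqrt(fan_in), size=(fan_in, hidden))  # shared
        self.w2 = rng.normal(scale=1 / np.sqrt(hidden), size=(hidden,))         # shared
        self.synapses = rng.normal(size=(n_nodes, fan_in))   # per-node regular weights

    def forward(self, x):                        # x: (fan_in,) activations from previous layer
        per_node_inputs = self.synapses * x      # (n_nodes, fan_in) vectorized inputs
        h = np.tanh(per_node_inputs @ self.W1)   # shared DAN, first stage
        return np.tanh(h @ self.w2)              # one output per node

rng = np.random.default_rng(2)
layer = DANLayer(fan_in=8, hidden=4, n_nodes=16, rng=rng)
y = layer.forward(rng.normal(size=8))            # (16,) outputs for the subsequent layer

In the item's setup, W1 and w2 correspond to the shared, meta-learned DAN parameters, while the per-node synapses play the role of the regular weights that the network updates over time.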
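As a sketch of the joint amplitude-and-phase input described in item 3 (the DFCN architecture itself is not reproduced, and the log-amplitude scaling and array sizes are assumptions):

import numpy as np

def to_amp_phase_channels(vis):
    """vis: complex visibilities of shape (time, frequency) for one baseline.
    Returns a (2, time, frequency) array, channel 0 = log-amplitude and
    channel 1 = phase, so a convolutional network sees both jointly."""
    amplitude = np.log10(np.abs(vis) + 1e-6)     # compress the dynamic range
    phase = np.angle(vis)                        # radians in [-pi, pi]
    return np.stack([amplitude, phase], axis=0)

rng = np.random.default_rng(3)
vis = rng.normal(size=(60, 1024)) + 1j * rng.normal(size=(60, 1024))  # mock visibilities
x = to_amp_phase_channels(vis)                   # DFCN input, shape (2, 60, 1024)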
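As a PyTorch-flavored sketch of the joint fine-tuning setup in item 4 (the tiny convolutional stand-ins, the commented-out checkpoint names, and the loss choice are placeholders, not the authors' SRGAN-ResNeXt or Inception U-net models):

import torch
import torch.nn as nn

# Stand-ins for the (much larger) SRGAN-ResNeXt generator and Inception U-net.
generator = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1))   # placeholder "super-resolver"
segmenter = nn.Sequential(nn.Conv2d(3, 1, 3, padding=1))   # placeholder "segmenter"

# In the item's setup both models are first trained individually; loading those
# weights would happen here (the paths are hypothetical).
# generator.load_state_dict(torch.load("srgan_resnext_pretrained.pt"))
# segmenter.load_state_dict(torch.load("inception_unet_pretrained.pt"))

optimizer = torch.optim.Adam(list(generator.parameters()) + list(segmenter.parameters()), lr=1e-4)
loss_fn = nn.BCEWithLogitsLoss()

lr_image = torch.randn(1, 3, 64, 64)             # mock low-resolution H&E patch
target = torch.rand(1, 1, 64, 64)                # mock segmentation mask

optimizer.zero_grad()
sr_image = generator(lr_image)                   # super-resolve the low-resolution input
prediction = segmenter(sr_image)                 # segment the super-resolved image
loss = loss_fn(prediction, target)
loss.backward()                                  # gradients flow through both models
optimizer.step()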
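As a toy illustration for item 5 only (this is not DynaMAGNA++'s dynamic edge conservation measure, which is not specified above; it merely conveys that conservation on dynamic networks must account for when edges, i.e. events, are active under a node mapping):

def overlap(a, b):
    """Length of the temporal overlap between two (start, end) intervals."""
    return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))

def dynamic_conservation(events1, events2, mapping):
    """events1, events2: dicts from a node pair (u, v) to a (start, end) activity
    interval.  Sums the temporal overlap between each event in network 1 and its
    image under the node mapping in network 2."""
    score = 0.0
    for (u, v), interval in events1.items():
        image = tuple(sorted((mapping[u], mapping[v])))
        if image in events2:
            score += overlap(interval, events2[image])
    return score

g1 = {("a", "b"): (0.0, 5.0), ("b", "c"): (4.0, 9.0)}
g2 = {("x", "y"): (1.0, 6.0), ("y", "z"): (8.0, 12.0)}
print(dynamic_conservation(g1, g2, {"a": "x", "b": "y", "c": "z"}))  # 4.0 + 1.0 = 5.0

A static alignment score would treat both mapped edges as equally conserved; the temporal overlap distinguishes the pair that is active at the same time from the pair that barely co-occurs.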