

Search for: All records

Creators/Authors contains: "Jia, Z."


  1. Neural networks are powerful tools, and applying them in computer systems (operating systems, databases, and networked systems) has attracted much attention. However, neural networks are complicated black boxes that may produce unexpected results. To train networks with well-defined behaviors, we introduce Ouroboros, a system that constructs verified neural networks: networks that satisfy user-defined safety properties, known as specifications. Ouroboros builds verified networks with a training-verification loop that alternates deep learning training and neural network verification. The system employs multiple techniques to bridge the gap between today's verification capabilities and the properties that systems require, and it accelerates the training-verification loop through spec-aware learning. Our experiments show that Ouroboros can train verified networks for the five applications we study, with a 2.8× average speedup over the vanilla training-verification loop. (A minimal sketch of such a loop appears after this list.)
  2. Neural circuit function is shaped both by the cell types that comprise the circuit and the connections between those cell types [1]. Neural cell types have previously been defined by morphology [2,3], electrophysiology [4,5], transcriptomic expression [6–8], connectivity [9–13], or a combination of such modalities [14–16]. More recently, the Patch-seq technique has enabled the characterization of morphology (M), electrophysiology (E), and transcriptomic (T) properties from individual cells [17–20]. Using this technique, these properties were integrated to define 28 inhibitory, multimodal MET-types in mouse primary visual cortex [21]. However, it is unknown how these MET-types connect within the broader cortical circuitry. Here we show that we can predict the MET-type identity of inhibitory cells within a large-scale electron microscopy (EM) dataset, and that these MET-types have distinct ultrastructural features and synapse connectivity patterns. We found that EM Martinotti cells, a well-defined morphological cell type [22,23] known to be somatostatin-positive (Sst+) [24,25], were successfully predicted to belong to Sst+ MET-types. Each identified MET-type had distinct axon myelination patterns and synapsed onto specific excitatory targets. Our results demonstrate that morphological features can be used to link cell-type identities across imaging modalities, which enables further comparison of connectivity in relation to transcriptomic or electrophysiological properties. Furthermore, our results show that MET-types have distinct connectivity patterns, supporting the use of MET-types and connectivity to meaningfully define cell types. (An illustrative sketch of morphology-based type prediction appears after this list.)
  3. Graph Neural Networks (GNNs) are based on repeated aggregations of information from nodes' neighbors in a graph. Because nodes share many neighbors, however, a naive implementation performs the same aggregations repeatedly, adding significant computational overhead. Here we propose Hierarchically Aggregated computation Graphs (HAGs), a new GNN representation that explicitly avoids this redundancy by managing intermediate aggregation results hierarchically, eliminating repeated computation and unnecessary data transfers in GNN training and inference. HAGs perform the same computations and produce the same models and accuracy as traditional GNNs, but in much less time. To identify redundant computations, we introduce an accurate cost function and a novel search algorithm that finds optimized HAGs. Experiments show that the HAG representation significantly outperforms the standard GNN, increasing end-to-end training throughput by up to 2.8× and reducing the aggregations and data transfers in GNN training by up to 6.3× and 5.6× respectively, with only 0.1% memory overhead. Overall, our results represent an important advance in speeding up and scaling up GNNs without any loss in model predictive performance. (A toy illustration of shared-aggregation reuse appears after this list.)
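For record 1, here is a minimal sketch of the kind of training-verification loop the abstract describes. The function names (`train_one_epoch`, `verify`, `counterexamples_to_data`) are hypothetical placeholders standing in for a trainer, a neural-network verifier, and a counterexample-to-data converter; this is not Ouroboros's actual API.

```python
def train_one_epoch(model, data):
    ...  # placeholder: one epoch of ordinary gradient-based training
    return model

def verify(model, spec):
    ...  # placeholder: run a verifier; return a list of spec violations
    return []

def counterexamples_to_data(violations):
    ...  # placeholder: convert violations into extra training examples
    return list(violations)

def train_verified(model, data, spec, max_rounds=100):
    """Alternate training and verification until the spec is satisfied."""
    for _ in range(max_rounds):
        model = train_one_epoch(model, data)
        violations = verify(model, spec)
        if not violations:
            return model  # no violations found: the network is verified
        # Spec-aware idea: fold counterexamples back into the training set
        # so later rounds push the network toward satisfying the spec.
        data = data + counterexamples_to_data(violations)
    raise RuntimeError("specification not verified within the round budget")
```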
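For record 2, this is an illustrative, heavily simplified sketch of predicting a cell's MET-type from morphological features with an off-the-shelf classifier. The synthetic features, label count, and random-forest choice are all assumptions made for the sketch, not the paper's actual model.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Synthetic stand-ins for morphological features (e.g. branch counts,
# laminar depths) of 60 Patch-seq cells with known MET-type labels.
X = rng.normal(size=(60, 5))
y = rng.integers(0, 3, size=60)  # three toy MET-type labels

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, X, y, cv=5))  # held-out accuracy per fold

# A classifier trained this way could then assign MET-type labels to
# inhibitory cells reconstructed from the EM volume, given the same
# morphological feature set.
```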
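For record 3, a toy illustration of the redundancy HAGs exploit: when two nodes share neighbors, the shared partial aggregate can be computed once and reused. The graph and feature sizes are made up for the example; the paper's cost function and search algorithm are not shown.

```python
import numpy as np

# Toy graph: nodes 2 and 3 both have the neighbor set {0, 1}.
features = np.random.rand(4, 8)  # one 8-dim feature vector per node
neighbors = {2: [0, 1], 3: [0, 1]}

# Naive GNN aggregation: each node sums its neighbors' features
# independently, so the partial sum features[0] + features[1]
# is recomputed once per node.
naive = {v: sum(features[u] for u in nbrs) for v, nbrs in neighbors.items()}

# HAG-style aggregation: compute the shared partial aggregate once,
# then reuse it for every node whose neighbor set contains {0, 1}.
shared = features[0] + features[1]  # aggregated once
hag = {2: shared, 3: shared}        # reused, no recomputation

# Both strategies produce identical results; HAGs only change the cost.
assert all(np.allclose(naive[v], hag[v]) for v in (2, 3))
```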