Comparison between a novel compressed sensing-based neural network and traditional neural network approaches for electrical impedance tomography reconstruction
- Award ID(s):
- 2138756
- PAR ID:
- 10575635
- Editor(s):
- Rizzo, Piervincenzo; Su, Zhongqing; Ricci, Fabrizio; Peters, Kara J
- Publisher / Repository:
- SPIE
- Date Published:
- ISBN:
- 9781510672086
- Page Range / eLocation ID:
- 54
- Format(s):
- Medium: X
- Location:
- Long Beach, United States
- Sponsoring Org:
- National Science Foundation
More Like this
-
Abstract: Machine learning influences numerous aspects of modern society, empowers new technologies, from AlphaGo to ChatGPT, and increasingly materializes in consumer products such as smartphones and self-driving cars. Despite the vital role and broad applications of artificial neural networks, we lack systematic approaches, such as network science, to understand their underlying mechanism. The difficulty is rooted in many possible model configurations, each with different hyper-parameters and weighted architectures determined by noisy data. We bridge the gap by developing a mathematical framework that maps the neural network's performance to the network characters of the line graph governed by the edge dynamics of stochastic gradient descent differential equations. This framework enables us to derive a neural capacitance metric to universally capture a model's generalization capability on a downstream task and predict model performance using only early training results. The numerical results on 17 pre-trained ImageNet models across five benchmark datasets and one NAS benchmark indicate that our neural capacitance metric is a powerful indicator for model selection based only on early training results and is more efficient than state-of-the-art methods.
-
Abstract: This paper investigates the ability of artificial neural networks to judge the grammatical acceptability of a sentence, with the goal of testing their linguistic competence. We introduce the Corpus of Linguistic Acceptability (CoLA), a set of 10,657 English sentences labeled as grammatical or ungrammatical from published linguistics literature. As baselines, we train several recurrent neural network models on acceptability classification, and find that our models outperform unsupervised models by Lau et al. (2016) on CoLA. Error analysis on specific grammatical phenomena reveals that both Lau et al.'s models and ours learn systematic generalizations like subject-verb-object order. However, all models we test perform far below human level on a wide range of grammatical constructions.