

Title: A theory of high dimensional regression with arbitrary correlations between input features and target functions: sample complexity, multiple descent curves and a hierarchy of phase transitions
The performance of neural networks depends on precise relationships between four distinct ingredients: the architecture, the loss function, the statistical structure of inputs, and the ground truth target function. Much theoretical work has focused on understanding the role of the first two ingredients under highly simplified models of random uncorrelated data and target functions. In contrast, performance likely relies on a conspiracy between the statistical structure of the input distribution and the structure of the function to be learned. To understand this better we revisit ridge regression in high dimensions, which corresponds to an exceedingly simple architecture and loss function, but we analyze its performance under arbitrary correlations between input features and the target function. We find a rich mathematical structure that includes: (1) a dramatic reduction in sample complexity when the target function aligns with data anisotropy; (2) the existence of multiple descent curves; (3) a sequence of phase transitions in the performance, loss landscape, and optimal regularization as a function of the amount of data that explains the first two effects.
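The setting can be pictured with a small numerical sketch (an illustration under assumed parameters, not the paper's analytical results): ridge regression on anisotropic Gaussian inputs, comparing a target aligned with the high-variance input directions against one concentrated in the low-variance directions. The dimension, covariance spectrum, noise level, and regularization below are all illustrative assumptions.

```python
# Minimal sketch (not the paper's derivation): ridge regression on anisotropic
# Gaussian data, comparing a target aligned with high-variance input directions
# against one aligned with low-variance directions. All sizes are assumptions.
import numpy as np

rng = np.random.default_rng(0)
d = 200                                                        # input dimension
spectrum = np.concatenate([np.full(20, 10.0), np.full(d - 20, 0.1)])  # covariance eigenvalues

def test_error(w_star, n, lam=1e-3, n_test=2000):
    """Fit ridge regression on n noisy samples and return test MSE."""
    X = rng.normal(size=(n, d)) * np.sqrt(spectrum)            # inputs with covariance diag(spectrum)
    y = X @ w_star + 0.1 * rng.normal(size=n)                  # noisy linear target
    w_hat = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
    X_te = rng.normal(size=(n_test, d)) * np.sqrt(spectrum)
    return np.mean((X_te @ (w_hat - w_star)) ** 2)

# Target aligned with the top (high-variance) directions vs. the bottom ones.
w_aligned = np.zeros(d); w_aligned[:20] = 1.0 / np.sqrt(20)
w_misaligned = np.zeros(d); w_misaligned[-20:] = 1.0 / np.sqrt(20)

for n in [25, 50, 100, 200, 400]:
    print(n, test_error(w_aligned, n), test_error(w_misaligned, n))
```

In such a sweep, the aligned target typically reaches low test error with far fewer samples than the misaligned one, which is the sample-complexity reduction the abstract refers to; sweeping n more finely at small regularization can also expose non-monotonic, descent-like behavior near n ≈ d.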
Award ID(s):
1845166
NSF-PAR ID:
10293707
Author(s) / Creator(s):
Date Published:
Journal Name:
International Conference on Machine Learning
Volume:
139
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract

    Nonlinear response history analysis (NLRHA) is generally considered to be a reliable and robust method to assess the seismic performance of buildings under strong ground motions. While NLRHA is fairly straightforward for evaluating individual structures under a select set of ground motions at a specific building site, it becomes less practical for performing large numbers of analyses to evaluate either (1) multiple models of alternative design realizations with a site-specific set of ground motions, or (2) individual archetype building models at multiple sites with multiple sets of ground motions. In this regard, surrogate models offer an alternative to running repeated NLRHAs for variable design realizations or ground motions. In this paper, a recently developed surrogate modeling technique, called probabilistic learning on manifolds (PLoM), is presented to estimate structural seismic response. Essentially, the PLoM method provides an efficient stochastic model to develop mappings between random variables, which can then be used to efficiently estimate the structural responses for systems with variations in design/modeling parameters or ground motion characteristics. The PLoM algorithm is introduced and then used in two case studies of 12-story buildings for estimating probability distributions of structural responses. The first example focuses on the mapping between variable design parameters of a multidegree-of-freedom analysis model and its peak story drift and acceleration responses. The second example applies the PLoM technique to estimate structural responses for variations in site-specific ground motion characteristics. In both examples, training data sets are generated for orthogonal input parameter grids, and test data sets are developed for input parameters with prescribed statistical distributions. Validation studies are performed to examine the accuracy and efficiency of the PLoM models. Overall, both examples show good agreement between the PLoM model estimates and verification data sets. Moreover, in contrast to other common surrogate modeling techniques, the PLoM model is able to preserve the correlation structure between peak responses. Parametric studies are conducted to understand the influence of different PLoM tuning parameters on its prediction accuracy.
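As a rough illustration of the surrogate workflow described above, the sketch below uses an off-the-shelf Gaussian-process regressor as a stand-in (it is not an implementation of PLoM): train on an orthogonal grid of design parameters, then estimate response statistics for inputs drawn from a prescribed distribution. The parameter names, ranges, and the synthetic "response" function are hypothetical.

```python
# Stand-in sketch of the surrogate workflow (Gaussian-process regressor, NOT PLoM):
# train on a grid of design parameters, then estimate response statistics for
# inputs sampled from a prescribed distribution. All names/ranges are hypothetical.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)

# Training design: an orthogonal grid over two hypothetical design parameters
# (e.g., normalized stiffness and damping ratio).
p1, p2 = np.meshgrid(np.linspace(0.5, 1.5, 10), np.linspace(0.02, 0.08, 10))
X_train = np.column_stack([p1.ravel(), p2.ravel()])

# Hypothetical "peak drift" response standing in for running the NLRHA.
def peak_drift(x):
    return 0.02 / (x[:, 0] * (1 + 20 * x[:, 1])) + 0.001 * rng.normal(size=len(x))

y_train = peak_drift(X_train)

gp = GaussianProcessRegressor(kernel=RBF([0.3, 0.02]) + WhiteKernel(1e-6),
                              normalize_y=True).fit(X_train, y_train)

# Test inputs drawn from a prescribed distribution of design parameters.
X_test = np.column_stack([rng.normal(1.0, 0.15, 5000),
                          rng.uniform(0.02, 0.08, 5000)])
drift_pred = gp.predict(X_test)
print("estimated mean / 90th-percentile peak drift:",
      drift_pred.mean(), np.quantile(drift_pred, 0.9))
```

Note that a plain single-output regressor like this predicts each response quantity separately and does not preserve the correlation structure between peak responses, which is one advantage the abstract attributes to PLoM.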

     
  2. We propose a novel family of connectionist models based on kernel machines and consider the problem of learning layer by layer a compositional hypothesis class (i.e., a feedforward, multilayer architecture) in a supervised setting. In terms of the models, we present a principled method to “kernelize” (partly or completely) any neural network (NN). With this method, we obtain a counterpart of any given NN that is powered by kernel machines instead of neurons. In terms of learning, when learning a feedforward deep architecture in a supervised setting, one needs to train all the components simultaneously using backpropagation (BP) since there are no explicit targets for the hidden layers (Rumelhart, Hinton, & Williams, 1986). We consider without loss of generality the two-layer case and present a general framework that explicitly characterizes a target for the hidden layer that is optimal for minimizing the objective function of the network. This characterization then makes possible a purely greedy training scheme that learns one layer at a time, starting from the input layer. We provide instantiations of the abstract framework under certain architectures and objective functions. Based on these instantiations, we present a layer-wise training algorithm for an l-layer feedforward network for classification, where l≥2 can be arbitrary. This algorithm can be given an intuitive geometric interpretation that makes the learning dynamics transparent. Empirical results are provided to complement our theory. We show that the kernelized networks, trained layer-wise, compare favorably with classical kernel machines as well as other connectionist models trained by BP. We also visualize the inner workings of the greedy kernelized models to validate our claim on the transparency of the layer-wise algorithm. 
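A toy rendering of the greedy, one-layer-at-a-time idea (a sketch under simplifying assumptions, not the authors' algorithm or their optimal hidden-layer target): a kernel feature map is fit and frozen as the "hidden layer", and a linear classifier is then trained on that fixed representation, with no backpropagation through the stack. The dataset and hyperparameters are arbitrary choices for illustration.

```python
# Toy greedy layer-wise sketch (NOT the paper's optimal-hidden-target scheme):
# layer 1 is a kernel feature map fit once and frozen, layer 2 is a classifier
# fit on that fixed representation; no backpropagation through the stack.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.kernel_approximation import Nystroem
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_moons(n_samples=1000, noise=0.2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# "Layer 1": a kernel feature map, frozen after fitting.
layer1 = Nystroem(kernel="rbf", gamma=1.0, n_components=100, random_state=0).fit(X_tr)
H_tr, H_te = layer1.transform(X_tr), layer1.transform(X_te)

# "Layer 2": trained on the frozen hidden representation only.
layer2 = LogisticRegression(max_iter=1000).fit(H_tr, y_tr)
print("layer-wise test accuracy:", layer2.score(H_te, y_te))
```

Here the hidden layer is fixed after an unsupervised fit, whereas the paper derives an explicit optimal target for the hidden layer; the sketch only conveys the one-layer-at-a-time training flow.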
  3. By mimicking biological synaptic processes, artificial intelligence (AI) has achieved astounding success in applications such as driving automation, big data analysis, and natural-language processing.[1-4] Due to the large quantity of data transferred between the separate memory and logic units, the classical computing system with von Neumann architecture consumes excessive energy and incurs significant processing delay.[5] Furthermore, the speed difference between the two units causes extra delay, which is referred to as the memory wall.[6, 7] To keep pace with the rapid growth of AI applications, enhanced hardware that is both energy-efficient and high-speed needs to be secured. The novel neuromorphic computing system, an in-memory architecture with low power consumption, has been suggested as an alternative to the conventional system. Memristors with analog-type resistive switching behavior are a promising candidate for implementing the neuromorphic computing system, since these devices can modulate their conductance over cycles to act as synaptic weights that process input signals and store information.[8, 9]

    The memristor has sparked tremendous interest due to its simple two-terminal structure, consisting of a top electrode (TE), a bottom electrode (BE), and an intermediate resistive switching (RS) layer. Many oxide materials, including HfO2, Ta2O5, and IGZO, have been studied extensively as the RS layer of memristors. Silicon dioxide (SiO2) features 3D structural conformity with conventional CMOS technology and high wafer-scale homogeneity, which has benefited modern microelectronic devices in the form of dielectric and/or passivation layers. Therefore, the use of SiO2 as a memristor RS layer for neuromorphic computing is expected to be compatible with current Si technology with minimal processing and material-related complexities.

    In this work, we proposed a SiO2-based memristor and investigated its switching behaviors when metallized with electrodes of different reduction potentials, applying pure Cu, pure Ag, and their alloys at varied ratios. Heavily doped p-type silicon was chosen as the BE in order to exclude any effects of BE ions on the memristor performance. We previously reported that the selection of the TE is crucial for achieving a high memory window and stable switching performance. Guided by that study, which compares the roles of Cu (switching stabilizer) and Ag (large switching window) TEs for oxide memristors, we selected the TE materials and their alloys to engineer the characteristics of the SiO2-based memristor. The Ag TE leads to a larger memory window in the SiO2 memristor, but the device shows relatively large variation and less reliability. On the other hand, the Cu TE device presents uniform gradual switching behavior, in line with our previous report that Cu can serve as a stabilizer, but with a small on/off ratio.[9] These distinct performances with Cu and Ag metallization led us to utilize a Cu/Ag alloy as the TE. Various Cu/Ag compositions were examined to optimize the memristor TEs. With a Cu/Ag alloy TE at the optimized ratio, our SiO2-based memristor demonstrates uniform switching behavior and a memory window suitable for analog switching applications. It also shows ideal potentiation and depression synaptic behavior under positive/negative spikes (pulse trains).
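The potentiation/depression behavior mentioned above can be visualized with a simple phenomenological conductance-update model (an illustrative sketch only; it is not fitted to the measured SiO2 devices, and all constants are assumed).

```python
# Phenomenological sketch of synaptic potentiation/depression under pulse trains
# (illustrative only; not fitted to the measured SiO2 devices, constants assumed).
import numpy as np

G_min, G_max = 1e-6, 1e-4     # assumed conductance bounds (S)
alpha = 0.08                  # assumed nonlinearity of the weight update

def apply_pulses(G0, n_pulses, potentiate=True):
    """Return the conductance trace over a train of identical voltage pulses."""
    G = G0
    trace = [G]
    for _ in range(n_pulses):
        if potentiate:   # positive spike: conductance grows toward G_max
            G += alpha * (G_max - G)
        else:            # negative spike: conductance decays toward G_min
            G -= alpha * (G - G_min)
        trace.append(G)
    return np.array(trace)

up = apply_pulses(G_min, 50, potentiate=True)      # potentiation train
down = apply_pulses(up[-1], 50, potentiate=False)  # depression train
print("dynamic range reached:", up[-1] / G_min)
```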

    In conclusion, SiO2 memristors with different metallization schemes were established. To tune the properties of the RS layer, the sputtering conditions of the RS film were varied. To investigate the influence of TE selection on the switching performance of the memristor, we integrated Cu, Ag, and Cu/Ag alloy TEs and compared the switching characteristics. Our encouraging results clearly demonstrate that SiO2 with a Cu/Ag TE yields a promising memristor device with synaptic switching behavior for neuromorphic computing applications.

    Acknowledgement

    This work was supported by the U.S. National Science Foundation (NSF) Award No. ECCS-1931088. S.L. and H.W.S. acknowledge the support from the Improvement of Measurement Standards and Technology for Mechanical Metrology (Grant No. 22011044) by KRISS.

    References

    [1] Young et al., IEEE Computational Intelligence Magazine, vol. 13, no. 3, pp. 55-75, 2018.

    [2] Hadsell et al., Journal of Field Robotics, vol. 26, no. 2, pp. 120-144, 2009.

    [3] Najafabadi et al., Journal of Big Data, vol. 2, no. 1, p. 1, 2015.

    [4] Zhao et al., Applied Physics Reviews, vol. 7, no. 1, 2020.

    [5] Zidan et al., Nature Electronics, vol. 1, no. 1, pp. 22-29, 2018.

    [6] Wulf et al., SIGARCH Comput. Archit. News, vol. 23, no. 1, pp. 20-24, 1995.

    [7] Wilkes, SIGARCH Comput. Archit. News, vol. 23, no. 4, pp. 4-6, 1995.

    [8] Ielmini et al., Nature Electronics, vol. 1, no. 6, pp. 333-343, 2018.

    [9] Chang et al., Nano Letters, vol. 10, no. 4, pp. 1297-1301, 2010.

    [10] Qin et al., Physica Status Solidi (RRL) - Rapid Research Letters, pssr.202200075R1, in press, 2022.

     
  4. The traditional von Neumann architecture limits gains in computing efficiency and results in massive power consumption in modern computers due to the separation of the storage and processing units. The novel neuromorphic computing system, an in-memory computing architecture with low power consumption, aims to break this bottleneck and meet the needs of the next generation of artificial intelligence (AI) systems. Thus, it is urgent to find a memory technology with which to implement the neuromorphic computing nanosystem. Silicon-based flash memory currently dominates the non-volatile memory market; however, it faces challenges in meeting the requirements of future data storage devices due to drawbacks such as scaling limits, relatively slow operation speed, and high voltages for program/erase operations. The emerging resistive random-access memory (RRAM) has prompted extensive research owing to its simple two-terminal structure, consisting of a top electrode (TE) layer, a bottom electrode (BE) layer, and an intermediate resistive switching (RS) layer. It utilizes a temporary and reversible dielectric breakdown to produce resistive switching between a high resistance state (HRS) and a low resistance state (LRS). RRAM is expected to outperform conventional memory devices, with advantages that notably include low-voltage operation, short programming time, great cyclic stability, and good scalability. Among the candidate materials for the RS layer, indium gallium zinc oxide (IGZO) has shown attractive prospects owing to its abundance, the high atomic diffusivity of its oxygen atoms, and its transparency. Additionally, its electrical properties can be easily modulated by controlling the stoichiometric ratio of indium and gallium as well as the oxygen potential in the sputter gas. Moreover, since IGZO can be applied to both the thin-film transistor (TFT) channel and the RS layer, it has great potential for fully integrated transparent electronics applications.

    In this work, we proposed amorphous transparent IGZO-based RRAMs and investigated the switching behaviors of memory cells prepared with different top electrodes. First, ITO was chosen to serve as both TE and BE to achieve high transmittance. A multi-target magnetron sputtering system was employed to deposit all three layers (TE, RS, and BE) on a glass substrate. I-V characteristics were evaluated by a semiconductor parameter analyzer, and the bipolar RS feature of our RRAM devices was demonstrated by typical butterfly curves. Optical transmission analysis was carried out via a UV-Vis spectrometer, and the average transmittance was around 80% across the devices in the visible-light wavelength range, implying high transparency. We adjusted the oxygen partial pressure during the sputtering of IGZO to optimize this property because the oxygen vacancy concentration governs the RS performance. Electrode selection is crucial and can impact the performance of the whole device. Thus, a Cu TE was chosen for our second type of device because the diffusion of Cu ions can be beneficial for the formation of the conductive filament (CF). A ~5 nm SiO2 barrier layer was employed between the TE and RS layers to confine the diffusion of Cu into the RS layer. At the same time, this SiO2 inserting layer provides an additional interfacial series resistance in the device to lower the off current and, consequently, improve the on/off ratio and overall performance. Finally, the oxygen-affine metal Ti was selected as the TE for our third type of device because the concentration of oxygen atoms can be shifted toward the Ti electrode, which provides an oxygen-gettering activity near the Ti metal. This process may in turn lead to the formation of a sub-stoichiometric region in the neighboring oxide that is believed to be the origin of the improved performance.

    In conclusion, transparent amorphous IGZO-based RRAMs were established. To tune the properties of the RS layer, the sputtering conditions were varied. To investigate the influence of TE selection on the switching performance of the RRAMs, we integrated a set of TE materials and a barrier layer on the IGZO-based RRAM and compared the switching characteristics. Our encouraging results clearly demonstrate that IGZO is a promising material for RRAM applications and for breaking the bottleneck of current memory technologies.
  5. Introduction: Computed tomography perfusion (CTP) imaging requires injection of an intravenous contrast agent and increased exposure to ionizing radiation. This process can be lengthy, costly, and potentially dangerous to patients, especially in emergency settings. We propose MAGIC, a multitask, generative adversarial network-based deep learning model to synthesize an entire CTP series from only a non-contrasted CT (NCCT) input.

    Materials and Methods: NCCT and CTP series were retrospectively retrieved from 493 patients at UF Health with IRB approval. The data were deidentified and all images were resized to 256x256 pixels. The collected perfusion data were analyzed using the RapidAI CT Perfusion analysis software (iSchemaView, Inc., CA) to generate each CTP map. For each subject, 10 CTP slices were selected. Each slice was paired with one NCCT slice at the same location and two NCCT slices at a predefined vertical offset, resulting in 4.3K CTP images and 12.9K NCCT images used for training. Incorporating a spatial offset into the NCCT input allows MAGIC to more accurately synthesize cerebral perfusive structures, increasing the quality of the generated images. The studies included a variety of indications, including healthy tissue, mild infarction, and severe infarction. The proposed MAGIC model incorporates a novel multitask architecture, allowing for the simultaneous synthesis of four CTP modalities: mean transit time (MTT), cerebral blood flow (CBF), cerebral blood volume (CBV), and time to peak (TTP). We propose a novel Physicians-in-the-loop module in the model's architecture, acting as a tunable layer that allows physicians to manually adjust the amount of anatomic detail present in the synthesized CTP series. Additionally, we propose two novel loss terms: a multi-modal connectivity loss and an extrema loss. The multi-modal connectivity loss leverages the multitask nature of the model to assert that the mathematical relationship between MTT, CBF, and CBV is satisfied. The extrema loss aids in learning regions of elevated and decreased activity in each modality, allowing MAGIC to accurately learn the characteristics of diagnostic regions of interest. Corresponding NCCT and CTP slices were paired along the vertical axis. The model was trained for 100 epochs on an NVIDIA TITAN X GPU.

    Results and Discussion: The MAGIC model's performance was evaluated on a sample of 40 patients from the UF Health dataset. Across all CTP modalities, MAGIC produced images with high structural agreement between the synthesized and clinical perfusion images (mean SSIM = 0.801, mean UQI = 0.926). MAGIC was able to synthesize CTP images that accurately characterize cerebral circulatory structures and identify regions of infarcted tissue, as shown in Figure 1. A blind binary evaluation was conducted to assess the presence of cerebral infarction in both the synthesized and clinical perfusion images; the synthesized images correctly predicted the presence of cerebral infarction with 87.5% accuracy.

    Conclusions: We proposed the MAGIC model, whose novel deep learning structures and loss terms enable high-quality synthesis of CTP maps and characterization of circulatory structures solely from NCCT images, potentially eliminating the requirement for injection of an intravenous contrast agent and the elevated radiation exposure during perfusion imaging. This makes MAGIC a beneficial tool in clinical scenarios, increasing the overall safety, accessibility, and efficiency of cerebral perfusion imaging and facilitating better patient outcomes.

    Acknowledgements: This work was partially supported by the National Science Foundation, IIS-1908299 III: Small: Modeling Multi-Level Connectivity of Brain Dynamics + REU Supplement, to the University of Florida.
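The multi-modal connectivity loss described above could, for example, take the following form, assuming the central-volume relation CBV ≈ CBF × MTT (up to units and scaling); this is a sketch, not the authors' exact loss term.

```python
# Hedged sketch of a "multi-modal connectivity" penalty, assuming the
# central-volume relation CBV ~ CBF * MTT (up to units/scaling); this is an
# illustration, not the authors' exact loss term.
import torch

def connectivity_loss(mtt, cbf, cbv, eps=1e-6):
    """Penalize synthesized maps that violate CBV = CBF * MTT (elementwise)."""
    return torch.mean(torch.abs(cbv - cbf * mtt) / (cbv.abs() + eps))

# Example with dummy normalized maps shaped like a training batch.
mtt = torch.rand(4, 1, 256, 256)
cbf = torch.rand(4, 1, 256, 256)
cbv = cbf * mtt + 0.01 * torch.randn(4, 1, 256, 256)
print(connectivity_loss(mtt, cbf=cbf, cbv=cbv))
```

In practice the three maps would need consistent normalization and units before such a constraint is meaningful.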