-
Abstract: Choices made by individuals have widespread impacts; for instance, people choose between political candidates to vote for, between social media posts to share, and between brands to purchase. Moreover, data on these choices are increasingly abundant. Discrete choice models are a key tool for learning individual preferences from such data. Additionally, social factors like conformity and contagion influence individual choice. Traditional methods for incorporating these factors into choice models do not account for the entire social network and require hand-crafted features. To overcome these limitations, we use graph learning to study choice in networked contexts. We identify three ways in which graph learning techniques can be used for discrete choice: learning chooser representations, regularizing choice model parameters, and directly constructing predictions from a network. We design methods in each category and test them on real-world choice datasets, including county-level 2016 US election results and Android app installation and usage data. We show that incorporating social network structure can improve the predictions of the standard econometric choice model, the multinomial logit. We provide evidence that app installations are influenced by social context, but we find no such effect on app usage among the same participants, which instead is habit-driven. In the election data, we highlight the additional insights a discrete choice framework provides over classification or regression, the typical approaches. On synthetic data, we demonstrate the sample complexity benefit of using social information in choice models.
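To make the "regularizing choice model parameters" idea concrete, here is a minimal sketch (not the paper's implementation) of a multinomial logit whose per-chooser utility weights are smoothed along a social network by a Laplacian-style penalty. All names, dimensions, the penalty weight, and the synthetic data below are assumptions for illustration.

# Hypothetical sketch: graph-regularized multinomial logit (not the paper's code).
# Each chooser i has utility weights theta[i]; a Laplacian-style penalty pulls
# the weights of network neighbors toward each other.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
n_choosers, n_items, n_feats, n_obs = 30, 5, 4, 200

X = torch.randn(n_obs, n_items, n_feats)           # item features per observation (synthetic)
choosers = torch.randint(0, n_choosers, (n_obs,))  # which chooser made each observation
choices = torch.randint(0, n_items, (n_obs,))      # index of the chosen item
edges = torch.randint(0, n_choosers, (2, 60))      # stand-in social network (edge list)

theta = torch.zeros(n_choosers, n_feats, requires_grad=True)
opt = torch.optim.Adam([theta], lr=0.05)
lam = 1.0  # strength of the network smoothness penalty (assumed)

for step in range(200):
    opt.zero_grad()
    utils = torch.einsum('oif,of->oi', X, theta[choosers])  # utility of each item
    nll = F.cross_entropy(utils, choices, reduction='sum')  # multinomial logit likelihood
    src, dst = edges
    penalty = ((theta[src] - theta[dst]) ** 2).sum()        # neighbors get similar weights
    (nll + lam * penalty).backward()
    opt.step()

Setting lam = 0 recovers independent per-chooser logits; a larger lam shares statistical strength across the network, which is where a sample-complexity benefit of the kind the abstract reports would come from.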
-
Abstract: Here we assess the applicability of graph neural networks (GNNs) for predicting the grain-scale elastic response of polycrystalline metallic alloys. Using GNN surrogate models, grain-averaged stresses during uniaxial elastic tension are predicted in low solvus high-refractory (LSHR) Ni superalloy and Ti-7 wt% Al (Ti-7Al), as example face-centered cubic and hexagonal close-packed alloys, respectively. A transfer learning approach is taken in which GNN surrogate models are trained using crystal elasticity finite element method (CEFEM) simulations, and the trained surrogate models are then used to predict the mechanical response of microstructures measured using high-energy X-ray diffraction microscopy (HEDM). The performance of various microstructural and micromechanical descriptors as input nodal features to the GNNs is explored through comparisons to traditional mean-field theory predictions, reserved full-field CEFEM data, and measured far-field HEDM data. The effects of elastic anisotropy on GNN model performance and outlooks for extending the framework are discussed.
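To illustrate the surrogate-model setup, below is a minimal, hypothetical GNN regressor over a grain-adjacency graph, assuming PyTorch Geometric, assumed per-grain nodal features (e.g., orientation and size descriptors), and grain-averaged stress targets; it is a sketch of the general technique, not the authors' architecture.

# Hypothetical sketch of a GNN surrogate for grain-averaged stress (not the paper's model).
# Nodes are grains, edges connect adjacent grains, and the target is one stress value per grain.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

class GrainStressGNN(torch.nn.Module):
    def __init__(self, in_dim, hidden=64):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)     # message passing over grain adjacency
        self.conv2 = GCNConv(hidden, hidden)
        self.head = torch.nn.Linear(hidden, 1)   # per-grain stress prediction

    def forward(self, x, edge_index):
        h = F.relu(self.conv1(x, edge_index))
        h = F.relu(self.conv2(h, edge_index))
        return self.head(h).squeeze(-1)

# Assumed usage: x holds per-grain descriptors, edge_index lists adjacent grain
# pairs, and y holds grain-averaged stresses from CEFEM simulations.
model = GrainStressGNN(in_dim=8)
x = torch.randn(100, 8)                       # 100 grains, 8 assumed features each
edge_index = torch.randint(0, 100, (2, 400))  # stand-in adjacency
y = torch.randn(100)
loss = F.mse_loss(model(x, edge_index), y)

In the transfer-learning setting the abstract describes, a model of this shape would be fit on CEFEM simulations and then evaluated on HEDM-measured microstructures.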
-
Koyejo, S.; Mohamed, S.; Agarwal, A.; Belgrave, D.; Cho, K.; Oh, A. (Eds.)
Abstract: After training complex deep learning models, a common task is to compress the model to reduce compute and storage demands. When compressing, it is desirable to preserve the original model's per-example decisions (e.g., to go beyond top-1 accuracy or preserve robustness), maintain the network's structure, automatically determine per-layer compression levels, and eliminate the need for fine-tuning. No existing compression method simultaneously satisfies these criteria; we introduce a principled approach that does, by leveraging interpolative decompositions. Our approach simultaneously selects and eliminates channels (analogously, neurons), then constructs an interpolation matrix that propagates a correction into the next layer, preserving the network's structure. Consequently, our method achieves good performance even without fine-tuning and admits theoretical analysis. Our theoretical generalization bound for a one-layer network lends itself naturally to a heuristic that allows our method to automatically choose per-layer sizes for deep networks. We demonstrate the efficacy of our approach with strong empirical performance on a variety of tasks, models, and datasets, from simple one-hidden-layer networks to deep networks on ImageNet.
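The channel-selection-plus-correction step can be illustrated with a generic interpolative-decomposition sketch: column-pivoted QR on a matrix of layer activations picks representative channels, and a least-squares interpolation matrix carries the correction into the next layer. The shapes and the helper interpolative_select below are assumptions for illustration, not the authors' released code.

# Hypothetical sketch of channel pruning via an interpolative decomposition
# (illustrative only, not the paper's implementation).
import numpy as np
from scipy.linalg import qr, lstsq

def interpolative_select(A, k):
    """A: (n_examples, n_channels) activations; keep k channels."""
    _, _, piv = qr(A, mode='economic', pivoting=True)  # column-pivoted QR
    keep = np.sort(piv[:k])                            # indices of retained channels
    T, *_ = lstsq(A[:, keep], A)                       # so that A is approx. A[:, keep] @ T
    return keep, T

rng = np.random.default_rng(0)
A = rng.standard_normal((512, 64))       # activations of the layer being pruned
W_next = rng.standard_normal((64, 32))   # next layer's weights (channels x outputs)

keep, T = interpolative_select(A, k=16)
W_next_corrected = T @ W_next            # propagate the correction into the next layer
# The pruned forward pass A[:, keep] @ W_next_corrected approximates A @ W_next:
err = np.linalg.norm(A[:, keep] @ W_next_corrected - A @ W_next) / np.linalg.norm(A @ W_next)

Because the correction is folded directly into the next layer's weights, the pruned network keeps an ordinary layer structure, which is the property the abstract emphasizes.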