
Title: Improving Results by Improving Densities: Density-Corrected Density Functional Theory
Award ID(s): 1856165
PAR ID: 10330288
Author(s) / Creator(s): ; ; ;
Date Published:
Journal Name: Journal of the American Chemical Society
Volume: 144
Issue: 15
ISSN: 0002-7863
Page Range / eLocation ID: 6625 to 6639
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. ABSTRACT We present LyMAS2, an improved version of the ‘Lyman-α Mass Association Scheme’ aimed at predicting the large-scale 3D clustering statistics of the Lyman-α forest (Lyα) from moderate-resolution simulations of the dark matter (DM) distribution, with prior calibrations from high-resolution hydrodynamical simulations of smaller volumes. In this study, calibrations are derived from the Horizon-AGN suite of simulations ((100 Mpc h−1)3 comoving volume) using Wiener filtering, combining information from the DM density and velocity fields (i.e. velocity dispersion, vorticity, line-of-sight 1D divergence, and 3D divergence). All new predictions are made at z = 2.5 in redshift space, adopting the spectral resolution of the SDSS-III BOSS survey and different DM smoothing scales (0.3, 0.5, and 1.0 Mpc h−1 comoving). We have tried different combinations of DM fields and find that LyMAS2, applied to the Horizon-noAGN DM fields, significantly improves the predictions of the Lyα 3D clustering statistics, especially when the DM overdensity is combined with the velocity-dispersion or vorticity fields. Compared with the hydrodynamical-simulation trends, the two-point correlation functions of pseudo-spectra generated with LyMAS2 are recovered with relative differences of ∼5 per cent even at high angles, the flux 1D power spectrum (along the line of sight) to ∼2 per cent, and the flux 1D probability distribution function exactly. Finally, we have produced several large mock sets of BOSS spectra (1.0 and 1.5 Gpc h−1 boxes) that are expected to lead to much more reliable and accurate theoretical predictions.
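The Wiener-filtering step mentioned in the abstract can be illustrated with a minimal 1D sketch: given a calibration pair of fields, build the per-mode filter W(k) = P_td(k)/P_dd(k) from the cross- and auto-power spectra, then apply it in Fourier space to a new input field. This is not the authors' LyMAS2 code; the function name and setup are hypothetical, and a single training realisation makes the measured spectra noisy in practice.

```python
import numpy as np

def wiener_predict(d_train, t_train, d_new):
    """Predict a target field from an input field using a Wiener filter
    W(k) = P_td(k) / P_dd(k) measured on a calibration (training) pair.
    Illustrative 1D sketch only."""
    Dk = np.fft.rfft(d_train)
    Tk = np.fft.rfft(t_train)
    p_td = Tk * np.conj(Dk)        # cross-power per mode
    p_dd = np.abs(Dk) ** 2         # auto-power per mode
    # Guard against empty modes; zero the filter where there is no power.
    W = np.where(p_dd > 0, p_td / np.maximum(p_dd, 1e-30), 0.0)
    return np.fft.irfft(W * np.fft.rfft(d_new), n=len(d_new))
```

When the target is an exact linear, shift-invariant transform of the input, applying the filter back to the training field reproduces the target to machine precision; on independent fields it gives the minimum-variance linear estimate under the measured spectra.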
  2. Sparsity is a central aspect of interpretability in machine learning. Typically, sparsity is measured in terms of the size of a model globally, such as the number of variables it uses. However, this notion of sparsity is not particularly relevant for decision-making; someone subjected to a decision does not care about variables that do not contribute to the decision. In this work, we dramatically expand a notion of decision sparsity called the Sparse Explanation Value (SEV) so that its explanations are more meaningful. SEV considers movement along a hypercube towards a reference point. By allowing flexibility in that reference and by considering how distances along the hypercube translate to distances in feature space, we can derive sparser and more meaningful explanations for various types of function classes. We present cluster-based SEV and its variant tree-based SEV, introduce a method that improves credibility of explanations, and propose algorithms that optimize decision sparsity in machine learning models. 
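One reading of the hypercube movement described here can be sketched as follows: starting from the reference point, find the smallest number of features that must be set to the query's values before the model reproduces the query's prediction. This is a hypothetical brute-force illustration of the idea, not the authors' SEV algorithm or its variants, and it is exponential in the number of features.

```python
from itertools import combinations

def decision_sparsity(predict, x, reference):
    """Smallest number of features of `reference` that must be flipped to
    the query's values so that `predict` returns the query's prediction.
    Brute-force sketch: tries every subset, smallest first."""
    target = predict(list(x))
    n = len(x)
    for k in range(n + 1):
        for idx in combinations(range(n), k):
            probe = list(reference)
            for i in idx:
                probe[i] = x[i]       # move along the hypercube edge for feature i
            if predict(probe) == target:
                return k
    return None  # unreachable: with all features flipped, probe == x
```

For example, with a model that predicts 1 when the first two features sum above 1, a query [1, 1, 0] against reference [0, 0, 0] needs two flips, so its decision sparsity is 2.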