Abstract Integrated hydrological modeling is an effective method for understanding interactions between parts of the hydrologic cycle, quantifying water resources, and furthering knowledge of hydrologic processes. However, these models depend on robust, accurate datasets that physically represent spatial characteristics as model inputs. This study evaluates multiple data‐driven approaches for estimating hydraulic conductivity and subsurface properties at the continental scale, constructed from existing subsurface dataset components. Each subsurface configuration represents upper (unconfined) hydrogeology, lower (confined) hydrogeology, and the presence of a vertical flow barrier. Configurations are tested in two large‐scale U.S. watersheds using an integrated model. Model results are compared to observed streamflow and steady‐state water table depth (WTD). We provide model results for a range of configurations and show that both WTD and surface water partitioning are important indicators of performance. We also show that geology data source, total subsurface depth, anisotropy, and inclusion of a vertical flow barrier are the most important considerations for subsurface configurations. While a range of configurations proved viable, we provide a recommended Selected National Configuration 1 km resolution subsurface dataset for use in distributed large‐ and continental‐scale hydrologic modeling.
State of the Art in Time‐Dependent Flow Topology: Interpreting Physical Meaningfulness Through Mathematical Properties
Abstract We present a state‐of‐the‐art report on time‐dependent flow topology. We survey representative papers in visualization and provide a taxonomy of existing approaches that generalize flow topology from time‐independent to time‐dependent settings. The approaches are classified into four categories: tracking of steady topology, reference‐frame adaptation, pathline classification or clustering, and generalization of critical points. Our unique contributions include introducing a set of desirable mathematical properties for interpreting the physical meaningfulness of time‐dependent flow visualization, inferring the mathematical properties associated with selected research papers, and using those properties for classification. The five most important properties identified in the existing literature are coincidence with the steady case, induction of a partition of the domain, Lagrangian invariance, objectivity, and Galilean invariance.
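One of these properties, Galilean invariance, can be checked numerically for a candidate feature quantity. The sketch below is an illustration not taken from the paper: it uses a hypothetical analytic saddle field and verifies that the material acceleration a = ∂v/∂t + (∇v)v is Galilean invariant (it transforms with the moving frame), while the velocity field itself is not.

```python
import numpy as np

def v(x, t):
    # hypothetical steady saddle field v(x) = (x0, -x1), for illustration only
    return np.array([x[0], -x[1]])

def galilean(vfield, c):
    # observe vfield from a frame moving with constant velocity c:
    # w(x, t) = v(x + c t, t) - c
    def w(x, t):
        return vfield(x + c * t, t) - c
    return w

def acceleration(vfield, x, t, h=1e-5):
    # material derivative a = dv/dt + (grad v) v, via central differences
    dvdt = (vfield(x, t + h) - vfield(x, t - h)) / (2 * h)
    J = np.column_stack([
        (vfield(x + h * e, t) - vfield(x - h * e, t)) / (2 * h)
        for e in np.eye(2)
    ])  # J[i, j] = d v_i / d x_j
    return dvdt + J @ vfield(x, t)

c = np.array([0.3, -0.7])
w = galilean(v, c)
x, t = np.array([1.0, 2.0]), 0.5
# Galilean invariance of acceleration: a_w(x, t) equals a_v at the
# corresponding point x + c t of the original frame
print(np.allclose(acceleration(w, x, t), acceleration(v, x + c * t, t)))  # → True
```

The velocity itself fails the same check, since w differs from v by the constant c; this is why velocity-based feature definitions (e.g. classical critical points) are not Galilean invariant without a reference-frame adaptation.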
- PAR ID: 10378539
- Publisher / Repository: Wiley-Blackwell
- Date Published:
- Journal Name: Computer Graphics Forum
- Volume: 39
- Issue: 3
- ISSN: 0167-7055
- Page Range / eLocation ID: p. 811-835
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Abstract Neurodegenerative diseases, like Alzheimer’s, are associated with the presence of neurofibrillary lesions formed by tau protein filaments in the cerebral cortex. While it is known that different morphologies of tau filaments characterize different neurodegenerative diseases, there are few metrics of global and local structural complexity that allow their structural diversity to be quantified rigorously. In this manuscript, we employ for the first time mathematical topology and geometry to classify neurodegenerative diseases by using cryo-electron microscopy structures of tau filaments that are available in the Protein Data Bank. By employing mathematical topology metrics (Gauss linking integral, writhe and second Vassiliev measure) we achieve a consistent but more refined classification of tauopathies than was previously obtained through visual inspection. Our results reveal a hierarchy of classification from global to local topology and geometry characteristics. In particular, we find that tauopathies can be classified with respect to the handedness of their global conformations and the handedness of the relative orientations of their repeats. Progressive supranuclear palsy is identified as an outlier, with a more complex structure than the rest, reflected by a small but observable knotoid structure (a diagrammatic structure representing non-trivial topology). This topological characteristic can be attributed to a pattern at the beginning of the R3 repeat that is present in all tauopathies, but to different extents. Moreover, by comparing single-filament to paired-filament structures within tauopathies, we find a consistent change in the side-chain orientations with respect to the alpha carbon atoms at the area of interaction.
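The handedness classification by global conformation can be made concrete with a discrete estimate of the writhe via the Gauss linking integral. The sketch below is a midpoint-rule approximation on hypothetical curves, not the authors' implementation (the second Vassiliev measure is omitted); it returns exactly 0 for a planar curve and changes sign under mirror reflection, which is the sense in which writhe encodes handedness.

```python
import numpy as np

def writhe(points):
    """Midpoint-rule estimate of the writhe of a closed polygonal curve
    via the Gauss double integral
    Wr = (1/4pi) oint oint (r1 - r2) . (dr1 x dr2) / |r1 - r2|^3."""
    seg = np.roll(points, -1, axis=0) - points   # edge vectors
    mid = points + 0.5 * seg                     # edge midpoints
    n, total = len(points), 0.0
    for i in range(n):
        for j in range(i + 1, n):                # unordered pairs: factor 2
            r = mid[i] - mid[j]
            d = np.linalg.norm(r)
            total += np.dot(r, np.cross(seg[i], seg[j])) / d**3
    return total / (2 * np.pi)

t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
# a trefoil-like (3,2) torus curve, chosen only for illustration
trefoil = np.stack([(2 + np.cos(3 * t)) * np.cos(2 * t),
                    (2 + np.cos(3 * t)) * np.sin(2 * t),
                    np.sin(3 * t)], axis=1)
circle = np.stack([np.cos(t), np.sin(t), 0 * t], axis=1)

print(abs(writhe(circle)) < 1e-9)                                  # → True (planar: writhe 0)
print(np.isclose(writhe(trefoil * [1, 1, -1]), -writhe(trefoil)))  # → True (mirror flips sign)
```

For a planar curve every cross product is perpendicular to the plane containing the chord vectors, so each term vanishes; reflecting the curve negates every term, so the mirror image has writhe of opposite sign.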
The Flow matrix is a novel method to describe and extrapolate transitions among categories. The Flow matrix extrapolates a constant transition size per unit of time on a time continuum with a maximum of one incident per observation during the extrapolation. The Flow matrix extrapolates linearly until the persistence of a category shrinks to zero. The Flow matrix has concepts and mathematics that are more straightforward than the Markov matrix. However, many scientists apply the Markov matrix by default because popular software packages offer no alternative to the Markov matrix, despite the conceptual and mathematical challenges that the Markov matrix poses. The Markov matrix extrapolates a constant transition proportion per time interval during whole-number multiples of the duration of the calibration time interval. The Markov extrapolation allows at most one incident per observation during each time interval but allows repeated incidents per observation through sequential time intervals. Many Markov extrapolations approach a steady state asymptotically through time as each category size approaches a constant. We use case studies concerning land change to illustrate the characteristics of the Flow and Markov matrices. The Flow and Markov extrapolations both deviate from the reference data during a validation time interval, implying there is no reason to prefer one matrix to the other in terms of correspondence with the processes that we analyzed. The two matrices differ substantially in terms of their underlying concepts and mathematical behaviors. Scientists should consider the ease of use and interpretation for each matrix when extrapolating transitions among categories.
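The contrast between the two extrapolations can be illustrated with a small numerical sketch. The two-category land-cover numbers below are hypothetical and not from the case studies: the Markov matrix applies a constant transition proportion per interval, so the donor category shrinks geometrically toward a steady state, while the Flow matrix subtracts a constant transition size per interval until the donor's persistence reaches zero.

```python
import numpy as np

# Hypothetical calibration: sizes of [forest, urban] at time 0, and the
# observed forest -> urban transition size during one calibration interval.
size0 = np.array([60.0, 40.0])
flow = np.array([[0.0, 10.0],
                 [0.0, 0.0]])

def markov_extrapolate(size0, flow, steps):
    """Constant transition *proportion* per interval: category sizes
    approach a steady state asymptotically."""
    M = flow / size0[:, None]                 # off-diagonal proportions
    np.fill_diagonal(M, 1.0 - M.sum(axis=1))  # persistence on the diagonal
    sizes = [size0]
    for _ in range(steps):
        sizes.append(sizes[-1] @ M)
    return np.array(sizes)

def flow_extrapolate(size0, flow, steps):
    """Constant transition *size* per interval: linear change until a
    donor category's persistence shrinks to zero, then the flow stops."""
    sizes = [size0]
    for _ in range(steps):
        cur = sizes[-1]
        loss = flow.sum(axis=1)
        # scale each donor row so it never gives away more than it has
        scale = np.where(loss > 0, np.minimum(1.0, cur / np.maximum(loss, 1e-12)), 1.0)
        out = flow * scale[:, None]
        sizes.append(cur - out.sum(axis=1) + out.sum(axis=0))
    return np.array(sizes)

markov = markov_extrapolate(size0, flow, steps=12)
linear = flow_extrapolate(size0, flow, steps=12)
print(np.round(markov[1]))  # [50. 50.]   forest loses 1/6 of itself each interval
print(linear[6])            # [  0. 100.] forest loses 10 per interval, reaching zero at step 6
```

Both extrapolations conserve the total of the category sizes; the difference is that the Markov forest size decays as 60·(5/6)^k and never reaches zero, while the Flow forest size declines linearly and stops changing once its persistence is exhausted.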
Abstract
Motivation: Cryo-Electron Tomography (cryo-ET) is a 3D imaging technology that enables the visualization of subcellular structures in situ at near-atomic resolution. Cellular cryo-ET images help in resolving the structures of macromolecules and determining their spatial relationships within a single cell, which has broad significance in cell and structural biology. Subtomogram classification and recognition constitute a primary step in the systematic recovery of these macromolecular structures. Supervised deep learning methods have been proven to be highly accurate and efficient for subtomogram classification, but suffer from limited applicability due to the scarcity of annotated data. While generating simulated data for training supervised models is a potential solution, a sizeable difference in image intensity distribution between generated and real experimental data will cause the trained models to perform poorly in predicting classes on real subtomograms.
Results: In this work, we present Cryo-Shift, a fully unsupervised domain adaptation and randomization framework for deep learning-based cross-domain subtomogram classification. We use unsupervised multi-adversarial domain adaptation to reduce the domain shift between features of simulated and experimental data. We develop a network-driven domain randomization procedure with ‘warp’ modules to alter the simulated data and help the classifier generalize better on experimental data. We do not use any labeled experimental data to train our model, whereas some existing alternative approaches require labeled experimental samples for cross-domain classification. Nevertheless, Cryo-Shift outperforms these alternatives in cross-domain subtomogram classification in the extensive evaluation studies demonstrated herein, using both simulated and experimental data.
Availability and implementation: https://github.com/xulabs/aitom
Supplementary information: Supplementary data are available at Bioinformatics online.
Visualization and topic modeling are widely used approaches for text analysis. Traditional visualization methods find low-dimensional representations of documents in the visualization space (typically 2D or 3D) that can be displayed using a scatterplot. In contrast, topic modeling aims to discover topics from text, but for visualization, one needs to perform a post-hoc embedding using dimensionality reduction methods. Recent approaches propose using a generative model to jointly find topics and visualization, allowing the semantics to be infused in the visualization space for a meaningful interpretation. A major challenge that prevents these methods from being used practically is the scalability of their inference algorithms. We present, to the best of our knowledge, the first fast Auto-Encoding Variational Bayes based inference method for jointly inferring topics and visualization. Since our method is black-box, it can handle model changes efficiently with little mathematical rederivation effort. We demonstrate the efficiency and effectiveness of our method on real-world large datasets and compare it with existing baselines.
