Abstract. Depressions – inwardly draining regions of digital elevation models – present difficulties for terrain analysis and hydrological modeling. Analogous “depressions” also arise in image processing and morphological segmentation, where they may represent noise, features of interest, or both. Here we provide a new data structure – the depression hierarchy – that captures the full topologic and topographic complexity of depressions in a region. We treat depressions as networks in a way that is analogous to surface-water flow paths, in which individual sub-depressions merge together to form meta-depressions in a process that continues until they begin to drain externally. This hierarchy can be used to selectively fill or breach depressions or to accelerate dynamic models of hydrological flow. Complete, well-commented, open-source code and correctness tests are available on GitHub and Zenodo.
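To make the structure concrete, a minimal sketch in Python is given below. The flat-list layout and all field names are hypothetical simplifications introduced here for illustration; they are not the published implementation. The sketch only conveys the core idea that leaves are elementary pit depressions and internal nodes are meta-depressions created where two sub-depressions meet at a shared spill elevation.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Depression:
    """One node of a simplified depression hierarchy (hypothetical layout).

    Leaves are elementary pit depressions; an internal node is the
    meta-depression created when two sub-depressions meet at a shared
    spill (saddle) elevation.
    """
    dep_id: int
    spill_elevation: float                 # elevation at which this depression overflows
    volume: float                          # marginal water volume this node adds above its children
    parent: Optional[int] = None           # enclosing meta-depression, if any
    children: List[int] = field(default_factory=list)  # sub-depressions that merged to form this node
    drains_externally: bool = False        # True if overflow here leaves the region (e.g., to the ocean)

def leaves(hierarchy: List[Depression]) -> List[Depression]:
    """Elementary (unmerged) depressions are the nodes with no children."""
    return [d for d in hierarchy if not d.children]
```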
Computing water flow through complex landscapes – Part 3: Fill–Spill–Merge: flow routing in depression hierarchies
Abstract. Depressions – inwardly draining regions – are common to many landscapes. When there is sufficient moisture, depressions take the form of lakes and wetlands; otherwise, they may be dry. Hydrological flow models used in geomorphology, hydrology, planetary science, soil and water conservation, and other fields often eliminate depressions through filling or breaching; however, this can produce unrealistic results. Models that retain depressions, on the other hand, are often undesirably expensive to run. In previous work we began to address this by developing a depression hierarchy data structure to capture the full topographic complexity of depressions in a region. Here, we extend this work by presenting the Fill–Spill–Merge algorithm that utilizes our depression hierarchy data structure to rapidly process and distribute runoff. Runoff fills depressions, which then overflow and spill into their neighbors. If both a depression and its neighbor fill, they merge. We provide a detailed explanation of the algorithm and results from two sample study areas. In these case studies, the algorithm runs 90–2600 times faster (with a reduction in compute time of 2000–63 000 times) than the commonly used Jacobi iteration and produces a more accurate output. Complete, well-commented, open-source code with 97 % test coverage is available on GitHub and Zenodo.
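As a rough illustration of the fill–spill–merge idea, the sketch below pushes runoff up a depression hierarchy of the kind sketched earlier: each depression stores what its marginal volume allows, and the excess spills into its parent, merging upward until an externally draining root is reached. This is a simplified reading of the abstract, not the published algorithm, which also routes water across the landscape, spills laterally between neighbouring depressions, and redistributes the stored water back onto the DEM.

```python
def fill_spill_merge_sketch(hierarchy, runoff, node_id):
    """Toy bottom-up water accounting on a depression hierarchy.

    `hierarchy` maps dep_id -> Depression (see the earlier sketch) and
    `runoff` maps dep_id -> water volume initially delivered to that
    depression.  Returns (water_stored_in_subtree, overflow_out_of_subtree).
    """
    node = hierarchy[node_id]
    water = runoff.get(node_id, 0.0)

    stored = 0.0
    for child_id in node.children:
        child_stored, child_overflow = fill_spill_merge_sketch(hierarchy, runoff, child_id)
        stored += child_stored
        water += child_overflow        # children that fill spill upward and effectively merge

    kept = min(water, node.volume)     # this node's marginal capacity above its children
    overflow = water - kept
    if node.drains_externally:
        overflow = 0.0                 # excess leaves the region instead of ponding
    return stored + kept, overflow
```

Calling the function on the root of each tree in the hierarchy yields the total ponded volume in that tree and the volume exported from the region.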
- Award ID(s): 1903606
- PAR ID: 10263903
- Date Published:
- Journal Name: Earth Surface Dynamics
- Volume: 9
- Issue: 1
- ISSN: 2196-632X
- Page Range / eLocation ID: 105 to 121
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Abstract. Calculating flow routing across a landscape is a routine process in geomorphology, hydrology, planetary science, and soil and water conservation. Flow-routing calculations often require a preprocessing step to remove depressions from a DEM to create a “flow-routing surface” that can host a continuous, integrated drainage network. However, real landscapes contain natural depressions that trap water. These are an important part of the hydrologic system and should be represented in flow-routing surfaces. Historically, depressions (or “pits”) in DEMs have been viewed as data errors, but the rapid expansion of high-resolution, high-precision DEM coverage increases the likelihood that depressions are real-world features. To address this long-standing problem of emerging significance, we developed FlowFill, an algorithm that routes a prescribed amount of runoff across the surface in order to flood depressions if enough water is available. This mass-conserving approach typically floods smaller depressions and those in wet areas, integrating drainage across them, while permitting internal drainage and disruptions to hydrologic connectivity. We present results from two sample study areas to which we apply a range of uniform initial runoff depths and report the resulting filled and unfilled depressions, the drainage network structure, and the required compute time. For the reach- to watershed-scale examples that we ran, FlowFill compute times ranged from approximately 1 to 30 min, with compute times per cell of 0.0001 to 0.006 s.
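A toy, mass-conserving reading of this approach is sketched below: a prescribed depth of runoff is placed on every cell and water is repeatedly moved toward the lowest adjacent water surface until nothing moves. The function name, the serial neighbour sweep, and the closed grid boundaries are assumptions made here for illustration; the published FlowFill implementation differs in detail.

```python
import numpy as np

def flowfill_sketch(elevation, runoff_depth, max_iters=1000):
    """Toy mass-conserving runoff routing in the spirit of FlowFill.

    A uniform depth of water is placed on every cell; each sweep moves
    water from a cell toward its lowest-water-surface neighbour until
    nothing moves.  Water cannot leave the grid in this simplification.
    """
    elevation = np.asarray(elevation, dtype=float)
    water = np.full(elevation.shape, float(runoff_depth))
    rows, cols = elevation.shape
    for _ in range(max_iters):
        moved = False
        for r in range(rows):
            for c in range(cols):
                if water[r, c] <= 0.0:
                    continue
                here = elevation[r, c] + water[r, c]
                # Find the neighbour with the lowest water surface.
                best, best_surface = None, here
                for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < rows and 0 <= cc < cols:
                        surface = elevation[rr, cc] + water[rr, cc]
                        if surface < best_surface:
                            best, best_surface = (rr, cc), surface
                if best is None:
                    continue
                # Move just enough water to level the two surfaces (or all of it).
                transfer = min(water[r, c], (here - best_surface) / 2.0)
                water[r, c] -= transfer
                water[best] += transfer
                moved = True
        if not moved:
            break
    return water  # per-cell water depth; depressions with enough supply end up flooded
```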
Abstract. Glacial kettles are surficial depressions that form in formerly glaciated terrain when buried stagnant ice melts within pro‐glacial sediments, often deposited by meltwater streams. Kettles, like other glacial landforms, provide insight into the impact of climate on landscape evolution, such as the extent and timing of glaciations. The geometry of kettle features is variable, but existing theory does not explain the range of observed morphologies. Our study aims to establish a quantitative relationship between the depth of ice burial and the resulting morphology of terrain collapse in kettle depressions. To do so, we simulated kettle formation in the laboratory by burying ice spheres of four sizes in well‐sorted coarse sand at four different depths. As the spheres melt at room temperature, a glacial kettle analog forms at the surface. We scanned the resulting kettle topography with a portable LiDAR scanner to produce 3D digital elevation models of each depression, from which we measured each depression's depth and width and, in one instance, the time series of kettle formation. Using this data, we quantified the relationship between the sphere diameter, burial depth and resulting dimensions of the kettle by developing a set of equations, which we then applied to full‐scale features. Our results indicate that ice burial deeper than one sphere diameter corresponds to a decrease in depression depth and an increase in depression width. This application offers insight into the interdependence of ice burial depth and kettle geometry.
The prevalence of mobile phones and wearable devices enables the passive capturing and modeling of human behavior at an unprecedented resolution and scale. Past research has demonstrated the capability of mobile sensing to model aspects of physical health, mental health, education, and work performance, etc. However, most of the algorithms and models proposed in previous work follow a one-size-fits-all (i.e., population modeling) approach that looks for common behaviors amongst all users, disregarding the fact that individuals can behave very differently, resulting in reduced model performance. Further, black-box models are often used that do not allow for interpretability and human behavior understanding. We present a new method to address the problems of personalized behavior classification and interpretability, and apply it to depression detection among college students. Inspired by the idea of collaborative filtering, our method is a type of memory-based learning algorithm. It leverages the relevance of mobile-sensed behavior features among individuals to calculate personalized relevance weights, which are used to impute missing data and select features according to a specific modeling goal (e.g., whether the student has depressive symptoms) in different time epochs, i.e., times of the day and days of the week. It then compiles features from epochs using majority voting to obtain the final prediction. We apply our algorithm on a depression detection dataset collected from first-year college students with low data-missing rates and show that our method outperforms the state-of-the-art machine learning model by 5.1% in accuracy and 5.5% in F1 score. We further verify the pipeline-level generalizability of our approach by achieving similar results on a second dataset, with an average improvement of 3.4% across performance metrics. Beyond achieving better classification performance, our novel approach is further able to generate personalized interpretations of the models for each individual. These interpretations are supported by existing depression-related literature and can potentially inspire automated and personalized depression intervention design in the future.
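The sketch below illustrates the general flavour of such a memory-based, epoch-wise approach: similarity-derived relevance weights between the target user and the training users produce a weighted vote within each epoch, and a majority vote across epochs gives the final label. The function name and the inverse-distance weighting are hypothetical; the paper's pipeline additionally uses the relevance weights for missing-data imputation and feature selection, which this sketch omits.

```python
import numpy as np

def predict_with_epoch_voting(target_epochs, train_epochs, train_labels):
    """Toy memory-based classifier with per-epoch weighting and majority voting.

    target_epochs: dict epoch_name -> 1-D feature vector for the target user
    train_epochs:  dict epoch_name -> (n_users, n_features) array
    train_labels:  (n_users,) array of 0/1 labels (e.g., depressive symptoms or not)
    """
    train_labels = np.asarray(train_labels, dtype=float)
    epoch_votes = []
    for epoch, x in target_epochs.items():
        X = np.asarray(train_epochs[epoch], dtype=float)
        # Personalized relevance weights: closer training users count more.
        weights = 1.0 / (np.linalg.norm(X - np.asarray(x, dtype=float), axis=1) + 1e-9)
        weights /= weights.sum()
        # Weighted vote among training users within this epoch.
        epoch_votes.append(int(weights @ train_labels >= 0.5))
    # Majority voting across epochs yields the final prediction.
    return int(np.mean(epoch_votes) >= 0.5)
```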
Attribute hierarchy, the underlying prerequisite relationship among attributes, plays an important role in applying cognitive diagnosis models (CDM) for designing efficient cognitive diagnostic assessments. However, there are limited statistical tools to directly estimate attribute hierarchy from response data. In this study, we proposed a Bayesian formulation for attribute hierarchy within the CDM framework and developed an efficient Metropolis within Gibbs algorithm to estimate the underlying hierarchy along with the specified CDM parameters. Our proposed estimation method is flexible and can be adapted to a general class of CDMs. We demonstrated our proposed method via a simulation study, the results of which show that the proposed method can fully recover or estimate at least a subgraph of the underlying structure across various conditions under a specified CDM model. The real data application indicates the potential of learning attribute structure from data using our algorithm and validating the existing attribute hierarchy specified by content experts.
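For readers unfamiliar with the sampler family mentioned here, the block below is a generic Metropolis within Gibbs skeleton in which each coordinate is updated in turn with a random-walk Metropolis step conditional on the current values of the others. It is a generic illustration, not the authors' sampler, which operates on attribute-hierarchy and CDM parameters rather than an unconstrained real-valued vector.

```python
import numpy as np

def metropolis_within_gibbs(log_posterior, init, step_sizes, n_iter=5000, seed=0):
    """Generic Metropolis-within-Gibbs skeleton for a real-valued parameter vector.

    log_posterior: callable returning the joint log posterior of a parameter vector
    init:          starting parameter vector
    step_sizes:    per-coordinate random-walk proposal standard deviations
    """
    rng = np.random.default_rng(seed)
    theta = np.asarray(init, dtype=float).copy()
    current_lp = log_posterior(theta)
    samples = np.empty((n_iter, theta.size))
    for it in range(n_iter):
        for j in range(theta.size):
            proposal = theta.copy()
            proposal[j] += step_sizes[j] * rng.standard_normal()
            proposal_lp = log_posterior(proposal)
            # Accept or reject this coordinate's move with the Metropolis ratio.
            if np.log(rng.uniform()) < proposal_lp - current_lp:
                theta, current_lp = proposal, proposal_lp
        samples[it] = theta
    return samples
```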