Title: Circuitscape in Julia: Empowering Dynamic Approaches to Connectivity Assessment
The conservation field is experiencing a rapid increase in the amount, variety, and quality of spatial data that can help us understand species movement and landscape connectivity patterns. As interest grows in more dynamic representations of movement potential, modelers are often limited by the capacity of their analytic tools to handle these datasets. Technology developments in software and high-performance computing are rapidly emerging in many fields, but uptake within conservation may lag, as our tools or our choice of computing language can constrain our ability to keep pace. We recently updated Circuitscape, a widely used connectivity analysis tool developed by Brad McRae and Viral Shah, by implementing it in Julia, a high-performance computing language. In this initial re-code (Circuitscape 5.0) and later updates, we improved computational efficiency and parallelism, achieving major speed improvements and enabling assessments across larger extents or with higher-resolution data. Here, we reflect on the benefits to conservation of strengthening collaborations with computer scientists, and we draw examples from a collection of 572 Circuitscape applications to illustrate how, through a decade of repeated investment in the software, applications have been many, varied, and increasingly dynamic. Beyond empowering continued innovations in dynamic connectivity, we expect that faster run times will play an important role in facilitating co-production of connectivity assessments with stakeholders, increasing the likelihood that connectivity science will be incorporated in land use decisions.
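For orientation, running an analysis with the Julia implementation follows the same configuration-file workflow as earlier versions of Circuitscape. The sketch below is a minimal illustration; the INI file name is a placeholder, and the scenario details (input resistance raster, focal nodes, solver, and parallel options) are assumed to live in that configuration file.

    # Minimal sketch of invoking Circuitscape from Julia.
    # "my_scenario.ini" is a placeholder path; the INI file would name
    # the resistance raster, focal nodes, solver, and parallel settings.
    using Circuitscape
    compute("my_scenario.ini")  # runs the scenario and writes outputs to disk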
Award ID(s): 1835443
NSF-PAR ID: 10308796
Author(s) / Creator(s):
Date Published:
Journal Name: Land
Volume: 10
Issue: 3
ISSN: 2073-445X
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like This
1. Computer modeling, simulation and optimization are powerful tools that have seen increased use in biomechanics research. Dynamic optimizations can be categorized as either data-tracking or predictive problems. The data-tracking approach has been used extensively to address human movement problems of clinical relevance. The predictive approach also holds great promise, but has seen limited use in clinical applications. Enhanced software tools would facilitate the application of predictive musculoskeletal simulations to clinically relevant research. The open-source software OpenSim provides tools for generating tracking simulations but not predictive simulations. However, OpenSim includes an extensive application programming interface that permits extending its capabilities with scripting languages such as MATLAB. In the work presented here, we combine the computational tools provided by MATLAB with the musculoskeletal modeling capabilities of OpenSim to create a framework for generating predictive simulations of musculoskeletal movement based on direct collocation optimal control techniques. In many cases, the direct collocation approach can be used to solve optimal control problems considerably faster than traditional shooting methods. Cyclical and discrete movement problems were solved using a simple 1-degree-of-freedom musculoskeletal model and a model of the human lower limb, respectively. The problems could be solved in reasonable amounts of time (several seconds to 1–2 hours) using the open-source IPOPT solver. The problems could also be solved using the fmincon solver that is included with MATLAB, but the computation times were excessively long for all but the smallest of problems. The performance advantage for IPOPT was derived primarily from exploiting sparsity in the constraint Jacobian. The framework presented here provides a powerful and flexible approach for generating optimal control simulations of musculoskeletal movement using OpenSim and MATLAB. This should allow researchers to more readily use predictive simulation as a tool to address clinical conditions that limit human mobility.
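As an illustration of the direct collocation idea described above (not the authors' MATLAB/OpenSim framework), the sketch below transcribes a toy 1-degree-of-freedom minimum-effort problem into a nonlinear program in Julia, using the JuMP modeling layer with the same open-source IPOPT solver: states and controls are discretized on a time grid, and the dynamics become algebraic constraints between neighboring nodes.

    # Illustrative direct collocation sketch: a unit point mass must move
    # from rest at x = 0 to rest at x = 1 in time T, minimizing effort.
    using JuMP, Ipopt

    N = 50               # number of collocation nodes
    T = 1.0              # movement duration (s)
    h = T / (N - 1)      # time step

    model = Model(Ipopt.Optimizer)
    @variable(model, x[1:N])                 # position
    @variable(model, v[1:N])                 # velocity
    @variable(model, -10 <= u[1:N] <= 10)    # actuator force (bounded)

    # Boundary conditions: start and end at rest
    @constraint(model, x[1] == 0); @constraint(model, v[1] == 0)
    @constraint(model, x[N] == 1); @constraint(model, v[N] == 0)

    # Trapezoidal collocation: dynamics enforced as algebraic constraints
    for i in 1:N-1
        @constraint(model, x[i+1] == x[i] + h/2 * (v[i] + v[i+1]))
        @constraint(model, v[i+1] == v[i] + h/2 * (u[i] + u[i+1]))
    end

    @objective(model, Min, h * sum(u[i]^2 for i in 1:N))  # minimize effort
    optimize!(model)

Because each constraint couples only adjacent grid points, the constraint Jacobian is sparse, which is exactly the structure the abstract notes IPOPT exploits.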

     
2. Connectivity has long played a central role in ecological and evolutionary theory and is increasingly emphasized for conserving biodiversity. Nonetheless, connectivity assessments often focus on individual species even though understanding and preserving connectivity for entire communities is urgently needed. Here we derive and test a framework that harnesses the well-known allometric scaling of animal movement to predict community-level connectivity across protected area networks. We used a field translocation experiment involving 39 species of southern African birds to quantify movement capacity, scaled this relationship to realized dispersal distances determined from ring-recovery banding data, and used allometric scaling equations to quantify community-level connectivity based on multilayer network theory. The translocation experiment explained observed dispersal distances from ring-recovery data and emphasized allometric scaling of dispersal based on morphology. Our community-level networks predicted that larger-bodied species had a relatively high potential for connectivity, while small-bodied species had lower connectivity. These community networks explained substantial variation in observed bird diversity across protected areas. Our results highlight that harnessing allometric scaling can be an effective way of determining large-scale community connectivity. We argue that this trait-based framework founded on allometric scaling provides a means to predict connectivity for entire communities, which can foster empirical tests of community theory and contribute to biodiversity conservation strategies aimed at mitigating the effects of environmental change.
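To make the allometric idea concrete, here is a hypothetical sketch in Julia (the power-law coefficients, patch coordinates, and body masses are invented for illustration, not the fitted values from the study): dispersal distance scales as a power law of body mass, and each species contributes one layer, an adjacency matrix over the same set of protected areas, to a multilayer network.

    # Hypothetical allometric dispersal model: D = a * M^b,
    # with placeholder coefficients a and b.
    dispersal_km(mass_g; a = 2.0, b = 0.35) = a * mass_g^b

    # Hypothetical protected-area coordinates (km)
    patches = [(0.0, 0.0), (10.0, 0.0), (0.0, 25.0), (40.0, 40.0)]
    dist(p, q) = hypot(p[1] - q[1], p[2] - q[2])

    # One adjacency matrix per species = one layer of a multilayer network:
    # two patches are linked if they lie within the species' dispersal range.
    function layer(mass_g)
        d = dispersal_km(mass_g)
        n = length(patches)
        [i != j && dist(patches[i], patches[j]) <= d for i in 1:n, j in 1:n]
    end

    small_bird = layer(15.0)    # e.g., a 15 g passerine: few links
    large_bird = layer(250.0)   # a larger-bodied species reaches more patches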

     
3. Vision Transformers (ViTs) have achieved state-of-the-art performance on various vision tasks. However, ViTs' self-attention module is still arguably a major bottleneck, limiting their achievable hardware efficiency and more extensive applications to resource-constrained platforms. Meanwhile, existing accelerators dedicated to natural language processing (NLP) Transformers are not optimal for ViTs. This is because there is a large difference between ViTs and NLP Transformers: ViTs have a relatively fixed number of input tokens, whose attention maps can be pruned by up to 90% even with fixed sparse patterns, without severely hurting the model accuracy (e.g., ≤1.5% accuracy drop under a 90% pruning ratio); while NLP Transformers need to handle input sequences of varying numbers of tokens and rely on on-the-fly predictions of dynamic sparse attention patterns for each input to achieve a decent sparsity (e.g., ≥50%). To this end, we propose a dedicated algorithm and accelerator co-design framework dubbed ViTCoD for accelerating ViTs. Specifically, on the algorithm level, ViTCoD prunes and polarizes the attention maps to have either denser or sparser fixed patterns for regularizing two levels of workloads without hurting the accuracy, largely reducing the attention computations while leaving room for alleviating the remaining dominant data movements; on top of that, we further integrate a lightweight and learnable auto-encoder module to enable trading the dominant high-cost data movements for lower-cost computations. On the hardware level, we develop a dedicated accelerator to simultaneously coordinate the aforementioned enforced denser and sparser workloads for boosted hardware utilization, while integrating on-chip encoder and decoder engines to leverage ViTCoD's algorithm pipeline for much reduced data movements. Extensive experiments and ablation studies validate that ViTCoD largely reduces the dominant data movement costs, achieving speedups of up to 235.3×, 142.9×, and 86.0× over general computing platforms (CPUs, edge GPUs, and GPUs, respectively), and of up to 10.1× and 6.8× over the prior-art Transformer accelerators SpAtten and Sanger, under an attention sparsity of 90%. Our code implementation is available at https://github.com/GATECH-EIC/ViTCoD.
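The fixed-pattern pruning step can be pictured with a small sketch (illustrative only, written in Julia rather than taken from the authors' codebase, with an invented random matrix standing in for a calibration-averaged attention map): keep the top 10% of entries as a reusable mask, then split rows into denser and sparser workloads in the spirit of ViTCoD's polarization.

    # Illustrative fixed-pattern pruning: keep the top 10% of attention
    # entries (i.e., a 90% pruning ratio) and reuse the mask for every input.
    using Statistics

    A = rand(16, 16)                 # stand-in for an averaged attention map
    k = ceil(Int, 0.10 * length(A))  # number of entries to keep
    thresh = partialsort(vec(A), k; rev = true)  # k-th largest entry
    mask = A .>= thresh              # fixed sparsity pattern

    # Polarize rows into denser vs. sparser workloads by kept-entry count
    kept_per_row = vec(sum(mask; dims = 2))
    dense_rows  = findall(kept_per_row .>= mean(kept_per_row))
    sparse_rows = findall(kept_per_row .<  mean(kept_per_row))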
4. Landscape connectivity is increasingly promoted as a conservation tool to combat the negative effects of habitat loss, fragmentation, and climate change. Given its importance as a key conservation strategy, connectivity science is a rapidly growing discipline. However, most landscape connectivity models consider connectivity for only a single snapshot in time, despite the widespread recognition that landscapes and ecological processes are dynamic. In this paper, we discuss the emergence of dynamic connectivity and the importance of including dynamism in connectivity models and assessments. We outline dynamic processes for both structural and functional connectivity at multiple spatiotemporal scales and provide examples of modeling approaches at each of these scales. We highlight the unique challenges that accompany the adoption of dynamic connectivity for conservation management and planning in the context of traditional conservation prioritization approaches. With the increased availability of time series and species movement data, computational capacity, and an expanding number of empirical examples in the literature, incorporating dynamic processes into connectivity models is more feasible than ever. Here, we articulate how dynamism is an intrinsic component of connectivity and integral to the future of connectivity science.
5. Efficient exploitation of exascale architectures requires rethinking of the numerical algorithms used in many large-scale applications. These architectures favor algorithms that expose ultra-fine-grained parallelism and maximize the ratio of floating point operations to energy-intensive data movement. One of the few viable approaches to achieve high efficiency in the area of PDE discretizations on unstructured grids is to use matrix-free/partially assembled high-order finite element methods, since these methods can increase the accuracy and/or lower the computational time due to reduced data motion. In this paper, we provide an overview of the research and development activities in the Center for Efficient Exascale Discretizations (CEED), a co-design center in the Exascale Computing Project that is focused on the development of next-generation discretization software and algorithms to enable a wide range of finite element applications to run efficiently on future hardware. CEED is a research partnership involving more than 30 computational scientists from two US national labs and five universities, including members of the Nek5000, MFEM, MAGMA, and PETSc projects. We discuss the CEED co-design activities based on targeted benchmarks, miniapps, and discretization libraries and our work on performance optimizations for large-scale GPU architectures. We also provide a broad overview of research and development activities in areas such as unstructured adaptive mesh refinement algorithms, matrix-free linear solvers, and high-order data visualization, and we list examples of collaborations with several ECP and external applications.
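The matrix-free principle can be seen in miniature with the following Julia sketch (a deliberately simple 1D Laplacian with homogeneous Dirichlet boundaries, not CEED/MFEM code): the operator is applied by recomputing the stencil on the fly, so an n-by-n matrix is never assembled or streamed from memory, which is the data-motion saving that high-order methods scale up via techniques such as sum factorization.

    # Matrix-free operator application: compute y = A*x for the 1D
    # Laplacian without ever forming A.
    function laplacian_apply!(y, x, h)
        n = length(x)
        @inbounds for i in 1:n
            left  = i > 1 ? x[i-1] : 0.0   # Dirichlet boundary: u = 0
            right = i < n ? x[i+1] : 0.0
            y[i] = (2x[i] - left - right) / h^2
        end
        return y
    end

    n = 1_000_000
    x = rand(n); y = similar(x)
    laplacian_apply!(y, x, 1.0 / (n + 1))  # no n-by-n matrix ever formed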