Title: Simulating 3-D Stellar Hydrodynamics using PPM and PPB Multifluid Gas Dynamics on CPU and CPU+GPU Nodes
The special computational challenges of simulating 3-D hydrodynamics in deep stellar interiors are discussed, and the numerical algorithmic responses to them are described. Results of recent simulations carried out at scale on the NSF's Blue Waters machine at the University of Illinois are presented, with a special focus on the computational challenges they address. Prospects for future work using GPU-accelerated nodes, such as those of the DOE's new Summit machine at Oak Ridge National Laboratory, are described, with a focus on the numerical algorithmic accommodations that we believe will be necessary.
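To make the reconstruction idea behind PPM concrete, here is a minimal sketch of a single piecewise-parabolic advection step in 1-D, assuming a constant wind speed and periodic boundaries. It follows the standard Colella-Woodward construction (fourth-order interface values, monotonicity limiters, and fluxes from averaging each cell's parabola over its domain of dependence); it illustrates only the scheme's core and is not the authors' multifluid PPM/PPB solver.

```python
# Minimal 1-D PPM advection step for u_t + a u_x = 0 with a > 0 and
# periodic boundaries. Illustrative sketch only, not the paper's code.
import numpy as np

def ppm_advect_step(u, c):
    """Advance u one step; c = a*dt/dx is the CFL number, 0 < c <= 1."""
    # Fourth-order interface values u_{i+1/2}
    um1, up1, up2 = np.roll(u, 1), np.roll(u, -1), np.roll(u, -2)
    uface = (7.0 * (u + up1) - (um1 + up2)) / 12.0
    uL = np.roll(uface, 1)   # value at left face of cell i
    uR = uface.copy()        # value at right face of cell i

    # Colella-Woodward monotonicity limiters
    extremum = (uR - u) * (u - uL) <= 0.0
    uL[extremum] = u[extremum]      # flatten local extrema
    uR[extremum] = u[extremum]
    d = uR - uL
    mid = u - 0.5 * (uL + uR)
    steepL = d * mid > d * d / 6.0  # parabola overshoots on the left
    steepR = -d * d / 6.0 > d * mid # parabola overshoots on the right
    uL[steepL] = 3.0 * u[steepL] - 2.0 * uR[steepL]
    uR[steepR] = 3.0 * u[steepR] - 2.0 * uL[steepR]

    # Average each cell's parabola over the domain of dependence of its
    # right face (upwind for a > 0), then apply the conservative update.
    du, u6 = uR - uL, 6.0 * (u - 0.5 * (uL + uR))
    flux = uR - 0.5 * c * (du - (1.0 - 2.0 * c / 3.0) * u6)
    return u - c * (flux - np.roll(flux, 1))

x = np.linspace(0.0, 1.0, 200, endpoint=False)
u = np.exp(-200.0 * (x - 0.3) ** 2)   # Gaussian pulse
for _ in range(100):
    u = ppm_advect_step(u, 0.5)       # advects the pulse 0.25 of the domain
```

The limiter logic is what keeps the reconstruction non-oscillatory near steep gradients, which is the property that makes PPM attractive for the stellar convection problems the abstract describes.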
Award ID(s):
1814181 1413548 1713200
PAR ID:
10101327
Author(s) / Creator(s):
; ; ; ;
Date Published:
Journal Name:
Journal of Physics: Conference Series
Volume:
1225
ISSN:
1742-6596
Page Range / eLocation ID:
012020
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. The connectivity of networks has been widely studied in many high-impact applications, ranging from immunization, critical infrastructure analysis, and social network mining to bioinformatic system studies. Regardless of the end application domain, connectivity minimization has always been a fundamental task in effectively controlling the functioning of the underlying system. The combinatorial nature of the connectivity minimization problem imposes an exponential computational complexity to find the optimal solution, which is intractable in large systems. To tackle this computational barrier, greedy algorithms are extensively used to ensure a near-optimal solution by exploiting the diminishing-returns property of the problem. Despite the empirical success, the theoretical and algorithmic challenges of the problem remain wide open. On the theoretical side, the intrinsic hardness and the approximability of the general connectivity minimization problem are still unknown except for a few special cases. On the algorithmic side, existing algorithms are hard to balance between optimization quality and computational efficiency. In this article, we address the two challenges by (1) proving that the general connectivity minimization problem is NP-hard and that (1-1/e) is the best approximation ratio for any polynomial-time algorithm, and (2) proposing the algorithm CONTAIN and its variant CONTAIN+ that can well balance optimization effectiveness and computational efficiency for eigen-function-based connectivity minimization problems in large networks (a small illustrative sketch of such a greedy, eigen-based scheme follows this abstract).
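The following is a hedged sketch of the greedy, eigenvalue-based flavor of connectivity minimization described above: it repeatedly deletes the edge whose removal most reduces the leading eigenvalue of the adjacency matrix, scored by the first-order perturbation u_i * u_j, with u the leading eigenvector. The scoring rule and function names here are illustrative of this family of heuristics, not the CONTAIN algorithm itself.

```python
# Greedy edge deletion to shrink the leading adjacency eigenvalue,
# scored by first-order eigenvalue perturbation. Illustrative sketch.
import numpy as np

def greedy_edge_deletion(A, k):
    """Remove k edges from a symmetric 0/1 adjacency matrix A (copied)."""
    A = A.copy().astype(float)
    removed = []
    for _ in range(k):
        vals, vecs = np.linalg.eigh(A)
        u = np.abs(vecs[:, -1])                  # leading (Perron) eigenvector
        ii, jj = np.triu_indices_from(A, 1)
        edges = [(i, j) for i, j in zip(ii, jj) if A[i, j] > 0]
        i, j = max(edges, key=lambda e: u[e[0]] * u[e[1]])  # best first-order drop
        A[i, j] = A[j, i] = 0.0
        removed.append((i, j))
    return A, removed

# Toy example: a small random graph
rng = np.random.default_rng(0)
A = (rng.random((30, 30)) < 0.2).astype(float)
A = np.triu(A, 1); A = A + A.T
before = np.linalg.eigvalsh(A)[-1]
A2, cuts = greedy_edge_deletion(A, 5)
after = np.linalg.eigvalsh(A2)[-1]
print(f"leading eigenvalue: {before:.3f} -> {after:.3f}")
```

Each deletion re-scores the remaining edges, which is exactly where the diminishing-returns structure the abstract mentions comes into play.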
  2. Graphs are a ubiquitous type of data that appears in many real-world applications, including social network analysis, recommendation, and financial security. Accordingly, decades of research have produced a wealth of computational models for mining graphs. Despite this prosperity, concerns about potential algorithmic discrimination have grown recently. Algorithmic fairness on graphs, which aims to mitigate bias introduced or amplified during the graph mining process, is an attractive yet challenging research topic. The first challenge is theoretical: the non-IID nature of graph data may not only invalidate the basic assumption behind many existing studies in fair machine learning, but also motivate new fairness definitions based on the inter-correlation between nodes rather than the existing definitions in fair machine learning. The second challenge is algorithmic: understanding how to balance the trade-off between model accuracy and fairness (a generic sketch of such a trade-off follows this abstract). This tutorial aims to (1) comprehensively review the state-of-the-art techniques for enforcing algorithmic fairness on graphs and (2) highlight the open challenges and future directions. We believe this tutorial will benefit researchers and practitioners from the areas of data mining, artificial intelligence, and social science.
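To make the accuracy-fairness trade-off tangible, here is a generic fair-ML construction (not a method from the tutorial itself): a logistic-regression loss augmented with a demographic-parity penalty, where a hypothetical weight lambda_fair controls the balance between the two objectives.

```python
# Cross-entropy loss plus a demographic-parity regularizer: the absolute
# gap in mean predicted score between two sensitive groups. Generic sketch.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fair_loss(w, X, y, s, lambda_fair=1.0):
    """Accuracy term + lambda_fair * |mean score of group 1 - group 0|."""
    p = sigmoid(X @ w)
    ce = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    parity_gap = abs(p[s == 1].mean() - p[s == 0].mean())
    return ce + lambda_fair * parity_gap

# Toy data: s is a binary sensitive attribute correlated with one feature.
rng = np.random.default_rng(2)
X = rng.normal(size=(500, 3))
s = (X[:, 0] > 0).astype(int)
y = (X @ np.array([1.0, 0.5, -0.5]) + 0.3 * rng.normal(size=500) > 0).astype(int)
print(f"loss at w=0: {fair_loss(np.zeros(3), X, y, s):.3f}")
```

On graphs the penalty term is typically rebuilt around node embeddings and edge structure, which is precisely where the non-IID complications described above arise.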
  3. Network connectivity optimization, which aims to manipulate network connectivity by changing the underlying topology, is a fundamental task behind a wealth of high-impact data mining applications, ranging from immunization, critical infrastructure construction, social collaboration mining, and bioinformatics analysis to intelligent transportation system design. To tackle its exponential computational complexity, greedy algorithms have been extensively used for network connectivity optimization by exploiting its diminishing-returns property. Despite the empirical success, two key challenges largely remain open. First, on the theoretical side, the hardness as well as the approximability of the general network connectivity optimization problem remain nascent except for a few special instances. Second, on the algorithmic side, current algorithms are often hard to balance between optimization quality and computational efficiency. In this paper, we systematically address these two challenges for the network connectivity optimization problem. First, we reveal some fundamental limits by proving that, for a wide range of network connectivity optimization problems, (1) they are NP-hard and (2) (1-1/e) is the optimal approximation ratio for any polynomial-time algorithm (the classic greedy bound behind this ratio is restated after this abstract). Second, we propose an effective, scalable, and general algorithm (CONTAIN) to carefully balance optimization quality and computational efficiency.
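For reference, the (1-1/e) factor in the abstract above is the classic Nemhauser-Wolsey-Fisher guarantee for greedily maximizing a monotone submodular (diminishing-returns) set function F with F(∅) = 0 under a cardinality budget k:

```latex
% Greedy guarantee for a monotone submodular objective F, F(\emptyset)=0,
% under the cardinality constraint |S| \le k:
F(S_{\mathrm{greedy}}) \;\ge\; \left(1 - \frac{1}{e}\right) \max_{|S| \le k} F(S)
```

The abstract's hardness result states that this same factor is also the best any polynomial-time algorithm can guarantee for these connectivity objectives, so the greedy strategy is essentially optimal there.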
  4. Computational biology has made powerful advances. Among these, trends in human health have been uncovered through heterogeneous ‘big data’ integration, and disease-associated genes have been identified and classified. Along a different front, the dynamic organization of chromatin is being elucidated to gain insight into the fundamental question of genome regulation. Powerful conformational sampling methods have also been developed to yield a detailed molecular view of cellular processes. When these methods are combined with advances in the modeling of supramolecular assemblies, including those at the membrane, we are finally able to get a glimpse into how cells' actions are regulated. Perhaps most intriguingly, a major thrust is under way to decipher the mystery of how the brain is coded. Here, we aim to provide a broad yet concise sketch of modern aspects of computational biology, with a special focus on computational structural biology. We attempt to forecast the areas that computational structural biology will embrace in the future and the challenges it may face. We skirt details, highlight successes, note failures, and map directions.
  5. When an optical beam propagates through a turbulent medium such as the atmosphere or ocean, the beam becomes distorted. It is then natural to seek the best or optimal beam, the one distorted least under some metric such as intensity or scintillation. We seek to maximize the light intensity at the receiver, using the paraxial wave equation under the weak-fluctuation approximation as the model. In contrast to classical results, which typically confine the original laser beam to a special class, we allow the beam to be general, which leads to an eigenvalue problem for a large matrix whose every entry is a multi-dimensional integral. This is an expensive, and sometimes infeasible, computational task in many practically reasonable settings. To overcome this expense, in a change from past calculations of optimal beams, we transform the calculation from physical space to Fourier space. Since the structure of the turbulence is commonly described in Fourier space, the computational cost is significantly reduced. This also allows us to incorporate some optional turbulence assumptions, such as the homogeneous-statistics assumption, the small-length-scale cutoff assumption, and the Markov assumption, to further reduce the dimension of the numerical integral. The proposed methods provide a computational strategy that is numerically feasible, and results are demonstrated in several numerical examples. These results provide further evidence that special beams can be designed to have small beam divergence (a minimal sketch of the resulting eigenproblem follows this abstract).
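Below is a minimal sketch of the linear-algebra core described above: expand the transmitted beam in N basis modes, so the mean received intensity becomes a quadratic form c^H M c with M Hermitian positive semidefinite; the optimal beam is then the leading eigenvector, i.e. the Rayleigh-quotient maximizer. The kernel here is a synthetic stand-in, not the paper's paraxial/Fourier propagation integrals.

```python
# Optimal beam as the leading eigenvector of a Hermitian mode-coupling
# matrix M. M here is synthetic; in the paper its entries come from
# multi-dimensional turbulence integrals evaluated in Fourier space.
import numpy as np

rng = np.random.default_rng(1)
N = 64                                    # number of basis modes (assumed)

# Synthetic Hermitian positive-semidefinite stand-in for the physics.
B = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
M = B @ B.conj().T

vals, vecs = np.linalg.eigh(M)            # eigenvalues in ascending order
c_opt = vecs[:, -1]                       # optimal mode coefficients
I_opt = vals[-1]                          # maximized intensity (model units)

# Sanity check: no unit-norm coefficient vector beats the leading eigenvector.
c_rand = rng.normal(size=N) + 1j * rng.normal(size=N)
c_rand /= np.linalg.norm(c_rand)
assert (c_rand.conj() @ M @ c_rand).real <= I_opt + 1e-9
print(f"optimal intensity (model units): {I_opt:.3f}")
```

The abstract's contribution is making the entries of M cheap enough to compute: moving to Fourier space, where the turbulence statistics live, collapses the dimension of the integrals that define them.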