Abstract The sulfate-reducing bacterium Candidatus Desulforudis audaxviator (CDA) was originally discovered in deep fracture fluids accessed via South African gold mines and has since been found in geographically widespread deep subsurface locations. To constrain models of subsurface microbial evolution, we compared CDA genomes from Africa, North America, and Eurasia using single-cell genomics. Unexpectedly, 126 partial single amplified genomes from the three continents, a complete genome from an isolate from Eurasia, and metagenome-assembled genomes from Africa and Eurasia shared >99.2% average nucleotide identity, a low frequency of SNPs, and near-perfectly conserved prophages and CRISPRs. Our analyses reject sample cross-contamination, recent natural dispersal, and unusually strong purifying selection as likely explanations for these unexpected results. We therefore conclude that the analyzed CDA populations underwent only minimal evolution since their physical separation, potentially as far back as the breakup of Pangea between 165 and 55 Ma. High-fidelity DNA replication and repair mechanisms are the most plausible explanation for the highly conserved genome of CDA. CDA presents a stark contrast to the current model organisms of microbial evolutionary studies, which often develop adaptive traits over far shorter periods of time.
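The average nucleotide identity (ANI) figure above summarizes how similar two genomes are at the base-pair level. As a toy illustration only (real ANI pipelines such as fastANI or BLAST-based methods average identity over many aligned genome fragments, not one global alignment), percent identity of two pre-aligned sequences can be computed as:

```python
def nucleotide_identity(seq_a, seq_b):
    """Percent identity of two pre-aligned, equal-length sequences.

    A toy stand-in for average nucleotide identity (ANI): count matching
    positions and express them as a percentage of the alignment length.
    """
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to equal length")
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return 100.0 * matches / len(seq_a)
```

On this scale, the >99.2% identity reported across continents means fewer than 8 differing bases per 1000 aligned positions.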
Revisiting Credit Distribution Algorithms for Distributed Termination Detection
This paper revisits distributed termination detection algorithms in the context of High-Performance Computing (HPC) applications. We introduce an efficient variant of the Credit Distribution Algorithm (CDA) and compare it to the original algorithm (HCDA) as well as to its two primary competitors: the Four Counters algorithm (4C) and the Efficient Delay-Optimal Distributed algorithm (EDOD). We analyze the behavior of each algorithm for some simplified task-based kernels and show the superiority of CDA in terms of the number of control messages.
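The core idea behind credit-distribution termination detection can be sketched in a few lines. The following is a toy, sequential simulation (names and structure are illustrative, not the paper's CDA/HCDA variants): each active task carries a share of one unit of credit, spawning splits the parent's credit among parent and children, and a finishing task returns its credit to the controller. Termination is detected exactly when the controller has recovered the full unit.

```python
from collections import deque
from fractions import Fraction

def run_with_credit_detection(root_task, spawn_children):
    """Toy simulation of credit-based termination detection.

    Credits are exact Fractions so the conservation invariant
    (credit in flight + credit at controller == 1) holds exactly.
    """
    controller = Fraction(0)                    # credit returned so far
    ready = deque([(root_task, Fraction(1))])   # (task, credit) pairs
    while ready:
        task, credit = ready.popleft()
        children = spawn_children(task)
        if children:
            share = credit / (len(children) + 1)
            for child in children:
                ready.append((child, share))    # each child gets a share
            credit -= share * len(children)     # parent keeps the remainder
        controller += credit                    # task done: return its credit
    return controller                           # == 1 iff all work completed
```

In a real distributed setting the credit returns travel as control messages; the algorithms compared in the paper differ in how many such messages they require.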
- Award ID(s): 1931384
- PAR ID: 10301753
- Date Published:
- Journal Name: IEEE
- Page Range / eLocation ID: 611 to 620
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Abstract This article studies two particular algorithms, a relaxation least-squares algorithm and a relaxation Newton iteration scheme, for reconstructing unknown parameters in dissipative dynamical systems. Both algorithms are based on a continuous data assimilation (CDA) algorithm for state reconstruction of Azouani et al (2014 J. Nonlinear Sci. 24 277–304). Owing to their CDA origins, these schemes reconstruct the unknown state and parameters simultaneously and on the fly, as data is collected. It is shown how both algorithms give rise to a robust general framework for simultaneous state and parameter estimation. In particular, we develop a general theory, applicable to a large class of dissipative dynamical systems, which identifies structural and algorithmic conditions under which the proposed algorithms achieve reconstruction of the true parameters. The algorithms are implemented on a high-dimensional two-layer Lorenz 96 model, where the theoretical conditions of the general framework are explicitly verifiable. They are also implemented on the two-dimensional Rayleigh–Bénard convection system to demonstrate the applicability of the algorithms beyond the finite-dimensional setting. In each case, systematic numerical experiments probe the efficacy of the proposed algorithms, as well as their respective benefits and drawbacks.
The continuous double auction (CDA) is the predominant mechanism in modern securities markets. Many agent-based analyses of CDA environments rely on simple non-adaptive trading strategies like Zero Intelligence (ZI), which (as their name suggests) are quite limited. We examine the viability of this reliance through empirical game-theoretic analysis in a plausible market environment. Specifically, we evaluate the strategic stability of equilibria defined over a small set of ZI traders with respect to strategies found by reinforcement learning (RL) applied over a much larger policy space. RL can indeed find beneficial deviations from equilibria of ZI traders, by conditioning on signals of the likelihood a trade will execute or the favorability of the current bid and ask. Nevertheless, the surplus earned by well-calibrated ZI policies is empirically observed to be nearly as great as what the adaptive strategies can earn, despite their much more expressive policy space. Our findings generally support the use of equilibrated ZI traders in CDA studies.
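A Zero Intelligence trader is easy to state precisely. The classic budget-constrained variant (ZI-C, after Gode and Sunder; the price bounds below are illustrative) ignores all market state and quotes a uniformly random price that can never produce a losing trade:

```python
import random

def zi_constrained_quote(is_buyer, limit_value,
                         price_floor=0.0, price_ceiling=200.0, rng=random):
    """One ZI-C quote: no conditioning on market state whatsoever.

    A buyer with private valuation v bids uniformly in [price_floor, v];
    a seller with cost c asks uniformly in [c, price_ceiling], so neither
    side can trade at a loss.
    """
    if is_buyer:
        return rng.uniform(price_floor, limit_value)
    return rng.uniform(limit_value, price_ceiling)

def crosses(bid, ask):
    # in a CDA, a trade executes whenever the best bid meets the best ask
    return bid >= ask
```

The RL strategies studied in the paper enrich exactly this decision point, conditioning the quote on signals such as execution likelihood and the current bid-ask favorability.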
The effective resistance between a pair of nodes in a weighted undirected graph is defined as the potential difference induced between them when a unit current is injected at the first node and extracted at the second, treating edge weights as conductance values. The effective resistance is a key quantity in many applications and fields, including solving linear systems, Markov chains, and continuous-time averaging networks. We develop an efficient, linearly convergent distributed algorithm for computing effective resistances and demonstrate its performance through numerical studies. We also apply our algorithm to the consensus problem, where the aim is to compute the average of node values in a distributed manner. We show that the distributed algorithm developed for effective resistances can considerably accelerate the convergence of the classical consensus iterations, by a factor depending on the network structure.
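The definition above has a standard closed form via the graph Laplacian: R_ij = (e_i - e_j)^T L^+ (e_i - e_j), where L^+ is the Moore-Penrose pseudoinverse. The following centralized sketch computes that formula (the paper's contribution is a distributed algorithm for the same quantity, which this does not attempt to reproduce):

```python
import numpy as np

def effective_resistance(weights, i, j):
    """Effective resistance between nodes i and j.

    weights: symmetric (n, n) array of edge conductances (0 = no edge).
    Computes R_ij = (e_i - e_j)^T L^+ (e_i - e_j) with the graph
    Laplacian L = D - W and its Moore-Penrose pseudoinverse.
    """
    L = np.diag(weights.sum(axis=1)) - weights   # graph Laplacian
    Lp = np.linalg.pinv(L)                       # pseudoinverse (L is singular)
    e = np.zeros(len(weights))
    e[i], e[j] = 1.0, -1.0                       # unit current in at i, out at j
    return float(e @ Lp @ e)
```

Sanity checks match circuit intuition: two unit resistors in series give resistance 2, and a triangle of unit resistors gives 2/3 between any pair (1 ohm directly, in parallel with 2 ohms around the other side).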
Hierarchical clustering is a fundamental tool in data mining, machine learning, and statistics. Popular hierarchical clustering algorithms include top-down divisive approaches such as bisecting k-means, k-median, and k-center, and bottom-up agglomerative approaches such as single-linkage, average-linkage, and centroid-linkage. Unfortunately, only a few scalable hierarchical clustering algorithms are known, mostly based on the single-linkage algorithm. So, as datasets increase in size every day, there is a pressing need to scale the other popular methods as well. We introduce efficient distributed algorithms for bisecting k-means, k-median, and k-center, as well as centroid-linkage. In particular, we first formalize a notion of closeness for a hierarchical clustering algorithm, and then use this notion to design new scalable distributed methods with strong worst-case bounds on the running time and the quality of the solutions. Finally, we show experimentally that the introduced algorithms are efficient and close to their sequential variants in practice.
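Bisecting k-means, the first of the divisive approaches named above, can be sketched sequentially in a few lines (this is the textbook algorithm with random initialization, not the distributed version the paper introduces): repeatedly take the largest cluster and split it in two with a small Lloyd's-style 2-means until k clusters remain.

```python
import numpy as np

def bisecting_kmeans(X, k, iters=20, seed=0):
    """Top-down bisecting k-means: returns k index arrays partitioning X."""
    rng = np.random.default_rng(seed)
    clusters = [np.arange(len(X))]
    while len(clusters) < k:
        clusters.sort(key=len)
        idx = clusters.pop()               # split the largest cluster
        pts = X[idx]
        # two random points as initial centers for the 2-means subproblem
        c = pts[rng.choice(len(pts), 2, replace=False)].astype(float)
        for _ in range(iters):             # Lloyd's iterations for 2-means
            dists = np.linalg.norm(pts[:, None] - c[None], axis=2)
            lab = dists.argmin(axis=1)
            for m in (0, 1):
                if (lab == m).any():
                    c[m] = pts[lab == m].mean(axis=0)
        clusters += [idx[lab == 0], idx[lab == 1]]
    return clusters
```

Each split touches only one cluster's points, which is part of what makes the divisive family attractive to distribute; the paper's contribution is doing such splits at scale with provable closeness to the sequential variant.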