The viviparity-driven conflict hypothesis postulates that the evolution of matrotrophy (post-fertilization maternal provisioning) will result in a shift from pre- to postcopulatory mate choice and thus accelerate the evolution of postcopulatory reproductive isolation. Here, we perform artificial insemination experiments on Heterandria formosa, a matrotrophic poeciliid fish, to probe for evidence of postcopulatory female choice. We established laboratory populations from the Wacissa River (WR) and Lake Jackson (LJ). WR females normally produce larger offspring than LJ females. We artificially inseminated females with sperm from each population or from both populations simultaneously. When LJ females were inseminated with sperm from both WR and LJ males, they allocated fewer resources to WR-sired offspring than when they were inseminated with WR sperm alone. LJ females carrying developing offspring sired by males from different populations were thus able to discriminate against non-resident males when allocating resources to developing young. WR females, in contrast, did not discriminate among males from different localities. These findings provide insight into the ability of females from one population to exercise a form of postcopulatory mate choice.
Machine learning is an important tool in the study of phase behavior from molecular simulations. In this work, we use unsupervised machine learning methods to study the phase behavior of two off-lattice models, a binary Lennard-Jones (LJ) mixture and the Widom–Rowlinson (WR) non-additive hard-sphere mixture. Most previous work has focused on lattice models, such as the 2D Ising model, where the values of the spins are used as the feature vector input to the machine learning algorithm, with considerable success. For these two off-lattice models, we find that the choice of the feature vector is crucial to the ability of the algorithm to predict a phase transition, and that this depends on the particular model system being studied. We consider two feature vectors: one whose elements are the distances of the particles of a given species from a probe (distance-based feature), and one whose elements are +1 if there is an excess of particles of the same species within a cutoff distance and −1 otherwise (affinity-based feature). We use principal component analysis and t-distributed stochastic neighbor embedding to investigate the phase behavior at a critical composition. We find that the choice of the feature vector is the key to the success of the unsupervised machine learning algorithm in predicting the phase behavior, and that the sophistication of the machine learning algorithm is of secondary importance. For the LJ mixture, both feature vectors are adequate to accurately predict the critical point; for the WR mixture, the affinity-based feature vector provides accurate estimates of the critical point, whereas the distance-based feature vector does not provide a clear signature of the phase transition. This study suggests that physical insight into the choice of input features is an important aspect of implementing machine learning methods.
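To make the abstract's affinity-based feature concrete, here is a minimal sketch: it builds the ±1 per-particle feature vector for a set of synthetic binary-mixture configurations (mixed vs. demixed, as stand-ins for configurations above and below the transition) and applies PCA via SVD. The cutoff, box size, and particle counts are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def affinity_features(positions, species, box, cutoff):
    """Per-particle feature: +1 if same-species neighbors within `cutoff`
    outnumber other-species neighbors, else -1 (as described in the abstract)."""
    n = len(positions)
    feats = np.empty(n)
    for i in range(n):
        d = positions - positions[i]
        d -= box * np.round(d / box)          # minimum-image convention
        r = np.linalg.norm(d, axis=1)
        mask = (r < cutoff) & (r > 0)         # neighbors, excluding self
        same = np.sum(species[mask] == species[i])
        other = np.sum(mask) - same
        feats[i] = 1.0 if same > other else -1.0
    return feats

rng = np.random.default_rng(0)
box, n_half = 10.0, 50
configs = []
for k in range(40):
    if k < 20:   # "mixed" configurations: both species uniform in the box
        pos = rng.uniform(0, box, size=(2 * n_half, 3))
    else:        # "demixed": species segregated into halves along x
        a = rng.uniform(0, box, size=(n_half, 3)); a[:, 0] *= 0.5
        b = rng.uniform(0, box, size=(n_half, 3)); b[:, 0] = 0.5 * box + 0.5 * b[:, 0]
        pos = np.vstack([a, b])
    spec = np.array([0] * n_half + [1] * n_half)
    configs.append(affinity_features(pos, spec, box, cutoff=2.0))

X = np.array(configs)
X -= X.mean(axis=0)
# PCA via SVD: project each configuration onto the first principal component.
U, S, Vt = np.linalg.svd(X, full_matrices=False)
pc1 = X @ Vt[0]
print(pc1[:20].mean(), pc1[20:].mean())  # mixed vs. demixed group means
```

In this toy setting the first principal component cleanly separates mixed from demixed configurations, which is the kind of signature the abstract refers to.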
Award ID(s): 1856595
NSF-PAR ID: 10370673
Publisher / Repository: American Institute of Physics
Date Published:
Journal Name: The Journal of Chemical Physics
Volume: 157
Issue: 9
ISSN: 0021-9606
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this

Abstract 
Studies in atomic-scale modeling of surface phase equilibria often focus on temperatures near zero Kelvin due to the challenges in calculating the free energy of surfaces at finite temperatures. The Bayesian-inference-based nested sampling (NS) algorithm allows for modeling phase equilibria at arbitrary temperatures by directly and efficiently calculating the partition function, whose relationship with free energy is well known. This work extends NS to calculate adsorbate phase diagrams, incorporating all relevant configurational contributions to the free energy. We apply NS to the adsorption of Lennard-Jones (LJ) gas particles on low-index and vicinal LJ solid surfaces and construct the canonical partition function from these recorded energies to calculate ensemble averages of thermodynamic properties, such as the constant-volume heat capacity and order parameters that characterize the structure of adsorbate phases. Key results include determining the nature of phase transitions of adsorbed LJ particles on flat and stepped LJ surfaces, which typically feature an enthalpy-driven condensation at higher temperatures and an entropy-driven reordering process at lower temperatures, and the effect of surface geometry on the presence of triple points in the phase diagrams. Overall, we demonstrate the ability and potential of NS for surface modeling.
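The post-processing step this abstract describes, rebuilding the canonical partition function from the energies a nested-sampling run records, can be sketched as follows. The geometric phase-space-volume weights are the standard NS estimate; the energy sequence here is a synthetic stand-in, and the number of live points K is an assumed setting, not the paper's.

```python
import numpy as np

def canonical_averages(energies, K, beta):
    """Canonical <E> and C_V (in units of k_B) from a nested-sampling energy
    sequence. Weights w_i ~ X_{i-1} - X_i with X_i = (K/(K+1))**i estimate the
    phase-space volume removed at iteration i for K live points."""
    i = np.arange(1, len(energies) + 1)
    X = (K / (K + 1.0)) ** i
    w = np.append(1.0, X[:-1]) - X
    # log-sum-exp for numerical stability of Z(beta) = sum_i w_i exp(-beta E_i)
    logt = np.log(w) - beta * energies
    logZ = np.logaddexp.reduce(logt)
    p = np.exp(logt - logZ)                    # canonical weight of each sample
    E_mean = np.sum(p * energies)
    E2_mean = np.sum(p * energies ** 2)
    Cv = (E2_mean - E_mean ** 2) * beta ** 2   # fluctuation formula
    return E_mean, Cv

# Toy run: energies decrease monotonically, as nested sampling records them.
energies = np.linspace(5.0, -5.0, 2000)
betas = np.linspace(0.1, 5.0, 50)
cv = [canonical_averages(energies, K=100, beta=b)[1] for b in betas]
print(f"C_V range: {min(cv):.3f} to {max(cv):.3f}")
```

For a realistic energy sequence, a peak in C_V(beta) is the kind of signature used to locate the transitions discussed above; this toy sequence only demonstrates the bookkeeping.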

Singh, M.; Williamson, D. (Eds.) Birkhoff's representation theorem defines a bijection between elements of a distributive lattice L and the family of upper sets of an associated poset B. When elements of L are the stable matchings in an instance of Gale and Shapley's marriage model, Irving et al. showed how to use B to devise a combinatorial algorithm for maximizing a linear function over the set of stable matchings. In this paper, we introduce a general property of distributive lattices, which we term affine representability, and show its role in efficiently solving linear optimization problems over the elements of a distributive lattice, as well as describing the convex hull of the characteristic vectors of lattice elements. We apply this concept to the stable matching model with path-independent quota-filling choice functions, thus giving efficient algorithms and a compact polyhedral description for this model. To the best of our knowledge, this model generalizes all models from the literature for which similar results were known, and our paper is the first that proposes efficient algorithms for stable matchings with choice functions, beyond extension of the Deferred Acceptance algorithm.
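Birkhoff's correspondence, which this abstract builds on, can be illustrated on a toy poset: the upper (up-closed) sets, ordered by inclusion, form a distributive lattice with meet = intersection and join = union, so a linear objective over lattice elements can be optimized by working with upper sets. The poset and weights below are illustrative inventions, not from the paper.

```python
from itertools import combinations

# Poset on {0,1,2,3} with relations 0<2, 1<2, 1<3 (already transitively closed).
below = {(0, 2), (1, 2), (1, 3)}
elements = range(4)

def is_upper_set(S):
    """S is an upper set if whenever x is in S and x < y, y is also in S."""
    return all(y in S for (x, y) in below if x in S)

upper_sets = [frozenset(c)
              for r in range(len(list(elements)) + 1)
              for c in combinations(elements, r)
              if is_upper_set(frozenset(c))]

# These 8 upper sets form a distributive lattice under union/intersection.
# Maximize a linear weight function by scanning them (small instances only;
# the paper's point is doing this efficiently for exponentially large lattices).
weights = {0: 3.0, 1: -1.0, 2: 2.0, 3: -4.0}
best = max(upper_sets, key=lambda S: sum(weights[x] for x in S))
print(sorted(best))  # → [0, 2]
```

Here the maximizer {0, 2} must include 2 because it includes 0, exactly the closure constraint that upper sets encode.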

Implicit solvent models divide solvation free energies into polar and nonpolar additive contributions, whereas polar and nonpolar interactions are inseparable and nonadditive. We present a feature functional theory (FFT) framework to break this ad hoc division. The essential ideas of FFT are as follows: (i) representability assumption: there exists a microscopic feature vector that can uniquely characterize and distinguish one molecule from another; (ii) feature-function relationship assumption: the macroscopic features of a molecule, including solvation free energy, are functionals of microscopic feature vectors; and (iii) similarity assumption: molecules with similar microscopic features have similar macroscopic properties, such as solvation free energies. Based on these assumptions, solvation free energy prediction is carried out in the following protocol. First, we construct a molecular microscopic feature vector that is efficient in characterizing the solvation process using quantum mechanics and Poisson–Boltzmann theory. Microscopic feature vectors are combined with macroscopic features, that is, physical observables, to form extended feature vectors. Additionally, we partition a solvation dataset into queries according to molecular compositions. Moreover, for each target molecule, we adopt a machine learning algorithm for its nearest neighbor search, based on the selected microscopic feature vectors. Finally, from the extended feature vectors of the obtained nearest neighbors, we construct a functional of solvation free energy, which is employed to predict the solvation free energy of the target molecule. The proposed FFT model has been extensively validated via a large dataset of 668 molecules. The leave-one-out test gives an optimal root-mean-square error (RMSE) of 1.05 kcal/mol. FFT predictions of the SAMPL0, SAMPL1, SAMPL2, SAMPL3, and SAMPL4 challenge sets deliver RMSEs of 0.61, 1.86, 1.64, 0.86, and 1.14 kcal/mol, respectively. Using a test set of 94 molecules and its associated training set, the present approach was carefully compared with a classic solvation model based on weighted solvent accessible surface area. © 2017 Wiley Periodicals, Inc.
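The final steps of the protocol above (nearest-neighbor search on feature vectors, then a functional of the neighbors' known free energies) can be sketched minimally as follows. The inverse-distance-weighted average is an assumed, simple choice of functional, not necessarily the paper's; the "molecules" are synthetic feature vectors with a known linear relation to a fake free energy.

```python
import numpy as np

def predict_solvation(target_feat, train_feats, train_dG, k=3):
    """Predict a free energy from the k nearest training molecules
    in feature space, weighted by inverse distance."""
    d = np.linalg.norm(train_feats - target_feat, axis=1)
    nn = np.argsort(d)[:k]                    # indices of k nearest neighbors
    w = 1.0 / (d[nn] + 1e-12)                 # inverse-distance weights
    return np.sum(w * train_dG[nn]) / np.sum(w)

rng = np.random.default_rng(1)
# Synthetic "training set": features linearly related to dG plus small noise.
train_feats = rng.normal(size=(200, 8))
true_w = rng.normal(size=8)
train_dG = train_feats @ true_w + 0.1 * rng.normal(size=200)

target = rng.normal(size=8)
pred = predict_solvation(target, train_feats, train_dG)
print(f"predicted dG = {pred:.2f} (reference value {target @ true_w:.2f})")
```

The quality of such a prediction hinges entirely on how well the feature vector captures the solvation process, which is the point of assumptions (i)-(iii).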
Traditional network embedding primarily focuses on learning a continuous vector representation for each node, preserving network structure and/or node content information, such that off-the-shelf machine learning algorithms can be easily applied to the vector-format node representations for network analysis. However, the learned continuous vector representations are inefficient for large-scale similarity search, which often involves finding nearest neighbors measured by distance or similarity in a continuous vector space. In this article, we propose a search-efficient binary network embedding algorithm called BinaryNE to learn a binary code for each node, by simultaneously modeling node context relations and node attribute relations through a three-layer neural network. BinaryNE learns binary node representations using a stochastic gradient descent-based online learning algorithm. The learned binary encoding not only reduces the memory needed to represent each node, but also allows fast bitwise comparisons to support faster node similarity search than using Euclidean or other distance measures. Extensive experiments and comparisons demonstrate that BinaryNE not only delivers more than 25 times faster search speed, but also provides comparable or better search quality than traditional continuous-vector-based network embedding methods. The binary codes learned by BinaryNE also render competitive performance on node classification and node clustering tasks. The source code of the BinaryNE algorithm is available at https://github.com/daokunzhang/BinaryNE.
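The bitwise comparison this abstract relies on can be shown in a few lines: Hamming distance between binary codes packed into integers is one XOR plus a popcount, with no floating-point work at all. The random codes below are stand-ins for learned BinaryNE codes, and the code length is an arbitrary assumption.

```python
import random

def hamming(a, b):
    """Hamming distance between two binary codes packed into Python ints:
    XOR the codes, then count the set bits."""
    return bin(a ^ b).count("1")

random.seed(0)
BITS = 128
codes = [random.getrandbits(BITS) for _ in range(10_000)]  # one code per node
query = codes[42] ^ 0b1011          # a near-duplicate of node 42 (3 bits flipped)

# Linear scan: one XOR + popcount per node.
nearest = min(range(len(codes)), key=lambda i: hamming(codes[i], query))
print(nearest, hamming(codes[nearest], query))  # → 42 3
```

In a real system the scan would additionally exploit hardware popcount instructions and packed-word storage; the asymptotics, however, are already visible here, since each comparison is constant-time integer arithmetic rather than a d-dimensional float loop.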