The modeling of multi-scale and multi-physics complex systems typically involves the use of scientific software that can optimally leverage extreme-scale computing. Despite major developments in recent years, these simulations continue to be computationally intensive and time consuming. Here we explore the use of AI to accelerate the modeling of complex systems at a fraction of the computational cost of classical methods, and present the first application of physics-informed neural operators (PINOs) to model 2D incompressible magnetohydrodynamics (MHD) simulations. Our AI models incorporate tensor Fourier neural operators as their backbone, which we implemented with the TensorLY package.
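The abstract names the backbone but not its mechanics. As a rough illustration only: a Fourier neural operator layer transforms the field to Fourier space, linearly mixes channels on a truncated set of low-frequency modes, and transforms back; a PINO additionally penalizes the PDE residual (here, the incompressible MHD equations) in the training loss. The sketch below is a minimal 2D spectral-convolution layer assuming PyTorch; the names `SpectralConv2d`, `modes1`/`modes2`, and the initialization are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class SpectralConv2d(nn.Module):
    """Minimal 2D Fourier layer: FFT -> mix channels on low modes -> inverse FFT."""
    def __init__(self, in_ch, out_ch, modes1, modes2):
        super().__init__()
        scale = 1.0 / (in_ch * out_ch)
        # Complex weights for the retained low-frequency modes (positive/negative kx).
        self.w1 = nn.Parameter(scale * torch.randn(in_ch, out_ch, modes1, modes2,
                                                   dtype=torch.cfloat))
        self.w2 = nn.Parameter(scale * torch.randn(in_ch, out_ch, modes1, modes2,
                                                   dtype=torch.cfloat))
        self.modes1, self.modes2 = modes1, modes2

    def forward(self, x):                          # x: (batch, in_ch, nx, ny)
        x_ft = torch.fft.rfft2(x)                  # (batch, in_ch, nx, ny//2 + 1)
        out = torch.zeros(x.size(0), self.w1.size(1), x.size(2), x.size(3) // 2 + 1,
                          dtype=torch.cfloat, device=x.device)
        m1, m2 = self.modes1, self.modes2
        out[:, :, :m1, :m2] = torch.einsum("bixy,ioxy->boxy",
                                           x_ft[:, :, :m1, :m2], self.w1)
        out[:, :, -m1:, :m2] = torch.einsum("bixy,ioxy->boxy",
                                            x_ft[:, :, -m1:, :m2], self.w2)
        return torch.fft.irfft2(out, s=x.shape[-2:])

# Example: one layer acting on a batch of 2-channel 64x64 fields.
y = SpectralConv2d(2, 2, modes1=12, modes2=12)(torch.randn(4, 2, 64, 64))
```

A "tensor" Fourier neural operator factorizes these weight tensors (e.g., a Tucker or CP decomposition) to cut the parameter count; the dense weights above are the unfactorized baseline.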
- NSF-PAR ID: 10429345
- Publisher / Repository: IOP Publishing
- Date Published:
- Journal Name: Machine Learning: Science and Technology
- Volume: 4
- Issue: 3
- ISSN: 2632-2153
- Page Range / eLocation ID: Article No. 035002
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Abstract: The transport of particles and fluids through multichannel microfluidic networks is influenced by the details of the channels. Because channels have micro-scale textures and macro-scale geometries, this transport can differ from the case of ideally smooth channels. Surfaces of real channels have irregular boundary conditions to which streamlines adapt and with which particles interact. In low-Reynolds-number flows, particles may experience inertial forces that result in trans-streamline movement and the reorganization of particle distributions. Such transport is intrinsically 3D, and an accurate measurement must capture movement in all directions. To measure the effects of non-ideal surface textures on particle transport through complex networks, we developed an extended field-of-view 3D macroscope for high-resolution tracking across large volumes ($$25\,\hbox {mm} \times 25\,\hbox {mm} \times 2\,\hbox {mm}$$) and investigated a model multichannel microfluidic network. A topographical profile of the microfluidic surfaces provided lattice Boltzmann simulations with a detailed feature map to precisely reconstruct the experimental environment. Particle distributions from the simulations closely reproduced those observed experimentally, and both measurements were sensitive to the effects of surface roughness. Under the conditions studied, inertial focusing organized large particles into an annular distribution that limited their transport throughout the network, while small particles were transported uniformly to all regions.
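The lattice Boltzmann approach above is only named, not shown. As a hedged sketch of its core loop, here is a minimal D2Q9 BGK collide-and-stream step in NumPy, with bounce-back at solid nodes standing in for a measured rough-wall mask; the variable names, the single-relaxation-time (BGK) choice, and the toy texture are illustrative assumptions, not the authors' solver.

```python
import numpy as np

# D2Q9 lattice: discrete velocities, weights, and opposite directions (for bounce-back).
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
opp = [0, 3, 4, 1, 2, 7, 8, 5, 6]

def equilibrium(rho, ux, uy):
    cu = c[:, 0, None, None]*ux + c[:, 1, None, None]*uy       # (9, nx, ny)
    return w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*(ux**2 + uy**2))

def lbm_step(f, solid, tau=0.6):
    """One BGK collide-and-stream step; `solid` marks wall/texture nodes."""
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f = f - (f - equilibrium(rho, ux, uy)) / tau               # collision
    for i in range(9):                                          # streaming
        f[i] = np.roll(np.roll(f[i], c[i, 0], axis=0), c[i, 1], axis=1)
    f[:, solid] = f[opp][:, solid]                              # bounce-back at solids
    return f

# Tiny demo: a channel with a crudely textured bottom wall.
nx, ny = 64, 32
solid = np.zeros((nx, ny), dtype=bool)
solid[:, 0] = True            # smooth wall...
solid[::7, 1] = True          # ...plus sparse bumps as a stand-in for roughness
f = equilibrium(np.ones((nx, ny)), np.full((nx, ny), 0.05), np.zeros((nx, ny)))
for _ in range(100):
    f = lbm_step(f, solid)
```

In the study itself the feature map came from profilometry of the real surfaces; the point of the sketch is only that surface texture enters the solver as the set of bounce-back nodes.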
-
Cosmological simulations of galaxy formation are limited by finite computational resources. We draw from the ongoing rapid advances in artificial intelligence (AI; specifically deep learning) to address this problem. Neural networks have been developed to learn from high-resolution (HR) image data and then make accurate superresolution (SR) versions of different low-resolution (LR) images. We apply such techniques to LR cosmological N-body simulations, generating SR versions. Specifically, we are able to enhance the simulation resolution by generating 512 times more particles and predicting their displacements from the initial positions. Therefore, our results can be viewed as simulation realizations themselves, rather than projections, e.g., to their density fields. Furthermore, the generation process is stochastic, enabling us to sample the small-scale modes conditioning on the large-scale environment. Our model learns from only 16 pairs of small-volume LR-HR simulations and is then able to generate SR simulations that successfully reproduce the HR matter power spectrum to percent level up to $$k \simeq 16\,h\,\textrm{Mpc}^{-1}$$ and the HR halo mass function to within $$\sim$$10% down to $$10^{11}\,M_\odot$$. We successfully deploy the model in a box 1,000 times larger than the training simulation box, showing that high-resolution mock surveys can be generated rapidly. We conclude that AI assistance has the potential to revolutionize modeling of small-scale galaxy-formation physics in large cosmological volumes.
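The percent-level power-spectrum comparison cited above is a standard FFT diagnostic. As an illustrative sketch (not the authors' pipeline), the following bins $$|\delta(k)|^2$$ from a gridded overdensity field and takes the SR/HR ratio; the mock fields and all parameter choices are stand-ins.

```python
import numpy as np

def power_spectrum(delta, boxsize, nbins=32):
    """Spherically averaged P(k) of an overdensity grid delta with shape (n, n, n)."""
    n = delta.shape[0]
    dk = np.fft.rfftn(delta) * (boxsize / n)**3                 # FT, volume-normalized
    k1 = np.fft.fftfreq(n, d=boxsize / n) * 2 * np.pi
    k3 = np.fft.rfftfreq(n, d=boxsize / n) * 2 * np.pi
    kx, ky, kz = np.meshgrid(k1, k1, k3, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2).ravel()
    pk = (np.abs(dk)**2).ravel() / boxsize**3                   # P(k) = |delta_k|^2 / V
    edges = np.linspace(kmag[kmag > 0].min(), kmag.max(), nbins + 1)
    idx = np.digitize(kmag, edges)
    ps = np.array([pk[idx == i].mean() if np.any(idx == i) else np.nan
                   for i in range(1, nbins + 1)])
    return 0.5 * (edges[1:] + edges[:-1]), ps

# Mock "HR" field and a nearly identical "SR" stand-in, then the ratio check.
rng = np.random.default_rng(0)
hr = rng.standard_normal((64, 64, 64))
sr = hr + 0.01 * rng.standard_normal((64, 64, 64))
k, p_hr = power_spectrum(hr, boxsize=100.0)
_, p_sr = power_spectrum(sr, boxsize=100.0)
print(np.nanmax(np.abs(p_sr / p_hr - 1)))   # fractional deviation per k bin
```

The halo-mass-function comparison is analogous: bin halo counts by mass in the SR and HR boxes and compare the two histograms bin by bin.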
-
Abstract: Mass measurements from low-mass black hole X-ray binaries (LMXBs) and radio pulsars have been used to identify a gap between the most massive neutron stars (NSs) and the least massive black holes (BHs). BH mass measurements in LMXBs are typically only possible for transient systems: outburst periods enable detection via all-sky X-ray monitors, while quiescent periods enable radial velocity measurements of the low-mass donor. We quantitatively study selection biases due to the requirement of transient behavior for BH mass measurements. Using rapid population-synthesis simulations (COSMIC), detailed binary stellar-evolution models (MESA), and the disk instability model of transient behavior, we demonstrate that transient-LMXB selection effects introduce observational biases and can suppress mass-gap BHs in the observed sample. However, we find that a population of transient LMXBs with mass-gap BHs forms through accretion-induced collapse of an NS during the LMXB phase, which is inconsistent with observations. These results are robust against variations of binary-evolution prescriptions. The significance of this accretion-induced-collapse population depends upon the maximum NS birth mass. To reflect the observed dearth of low-mass BHs, the COSMIC and MESA models favor lower values of the maximum NS birth mass. In the absence of further observational biases against LMXBs with mass-gap BHs, our results indicate the need for additional physics connected to the modeling of LMXB formation and evolution.
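The core of the argument is a selection effect: masses are only measured for transients, and transience correlates with system properties that track BH mass. As a toy illustration only (not COSMIC, MESA, or the disk instability model, and with invented numbers), a Monte Carlo shows how a mass-dependent measurement probability suppresses mass-gap BHs in the "observed" sample:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy intrinsic BH mass function (M_sun), flat across and above the 3-5 M_sun gap.
m_bh = rng.uniform(2.5, 15.0, size=100_000)

# Toy selection: assume the probability of showing transient behavior (and hence
# of getting a dynamical mass) rises with BH mass. Purely illustrative numbers.
p_measured = np.clip((m_bh - 2.5) / 7.5, 0.05, 1.0)
observed = m_bh[rng.random(m_bh.size) < p_measured]

in_gap = lambda m: ((m > 3.0) & (m < 5.0)).mean()
print(f"intrinsic fraction in 3-5 Msun range: {in_gap(m_bh):.3f}")
print(f"observed fraction in 3-5 Msun range: {in_gap(observed):.3f}")
```

The study's point is subtler: even after this suppression, accretion-induced collapse refills the gap in the models, so selection alone cannot reconcile the simulations with the observed dearth of low-mass BHs.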
-
Abstract: We investigate the stellar mass–black hole mass ($$M_*$$–$$M_{\rm BH}$$) relation with type 1 active galactic nuclei (AGNs) down to $$M_{\rm BH} \sim 10^7\,M_\odot$$, corresponding to a ≃ −21 absolute magnitude in rest-frame ultraviolet, at z = 2–2.5. Exploiting the deep and large-area spectroscopic survey of the Hobby–Eberly Telescope Dark Energy Experiment (HETDEX), we identify 66 type 1 AGNs with $$M_{\rm BH}$$ ranging from $$10^7$$ to $$10^{10}\,M_\odot$$, measured with the single-epoch virial method using the C IV emission lines detected in the HETDEX spectra. The stellar masses $$M_*$$ of the host galaxies are estimated from optical to near-infrared photometric data taken with Spitzer, the Wide-field Infrared Survey Explorer, and ground-based 4–8 m class telescopes by CIGALE spectral energy distribution (SED) fitting. We further assess the validity of the SED fitting in two cases by host-nuclear decomposition performed through surface-brightness-profile fitting on spatially resolved host galaxies with the James Webb Space Telescope/NIRCam CEERS data. We obtain the $$M_*$$–$$M_{\rm BH}$$ relation covering the unexplored low-mass range of $$M_{\rm BH}$$, and conduct forward modeling to fully account for the selection biases and observational uncertainties. The intrinsic $$M_*$$–$$M_{\rm BH}$$ relation at z ∼ 2 has a moderate positive offset of 0.52 ± 0.14 dex from the local relation, suggestive of more efficient black hole growth at higher redshift even in the low-mass regime of $$M_{\rm BH} \sim 10^7\,M_\odot$$. Our $$M_*$$–$$M_{\rm BH}$$ relation is inconsistent with the suppression at the low-$$M_*$$ regime predicted by recent hydrodynamic simulations at a 98% confidence level, suggesting that feedback in low-mass systems may be weaker than that produced in hydrodynamic simulations.
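The single-epoch virial method named above turns one spectrum into a BH mass by combining a broad-line width with a continuum luminosity. As a sketch, here is the widely used C IV calibration of Vestergaard & Peterson (2006) in code; whether this HETDEX work adopts exactly these coefficients is an assumption of the example, not a statement from the abstract.

```python
import numpy as np

def mbh_civ_vp06(fwhm_kms, lam_l1350_erg_s):
    """Single-epoch virial BH mass from C IV (Vestergaard & Peterson 2006):
    log10(M_BH / M_sun) = 6.66 + 2 log10(FWHM / 1000 km/s)
                               + 0.53 log10(lambda L_1350 / 1e44 erg/s)
    """
    log_m = (6.66
             + 2.0 * np.log10(fwhm_kms / 1.0e3)
             + 0.53 * np.log10(lam_l1350_erg_s / 1.0e44))
    return 10.0**log_m

# Example: a 4000 km/s C IV line with lambda*L_1350 = 3e45 erg/s gives ~4e8 M_sun.
print(f"M_BH ~ {mbh_civ_vp06(4000.0, 3.0e45):.2e} M_sun")
```

The forward modeling described in the abstract then treats masses like these as noisy, selection-biased draws from an underlying $$M_*$$–$$M_{\rm BH}$$ relation and fits the relation's parameters rather than the raw points.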
-
Abstract: We prove that $${{\,\textrm{poly}\,}}(t) \cdot n^{1/D}$$-depth local random quantum circuits with two-qudit nearest-neighbor gates on a D-dimensional lattice with n qudits are approximate t-designs in various measures. These include the “monomial” measure, meaning that the monomials of a random circuit from this family have expectation close to the value that would result from the Haar measure. Previously, the best bound was $${{\,\textrm{poly}\,}}(t)\cdot n$$, due to Brandão–Harrow–Horodecki (Commun Math Phys 346(2):397–434, 2016), for $$D=1$$. We also improve the “scrambling” and “decoupling” bounds for spatially local random circuits due to Brown and Fawzi (Scrambling speed of random quantum circuits, 2012). One consequence of our result is that, assuming the polynomial hierarchy ($${{\,\mathrm{\textsf{PH}}\,}}$$) is infinite and that certain counting problems are $$\#{\textsf{P}}$$-hard “on average”, sampling within total variation distance from these circuits is hard for classical computers. Previously, exact sampling from the outputs of even constant-depth quantum circuits was known to be hard for classical computers under these assumptions. However, the standard strategy for extending this hardness result to approximate sampling requires the quantum circuits to have a property called “anti-concentration”, meaning roughly that the output has near-maximal entropy. Unitary 2-designs have the desired anti-concentration property. Our result improves the required depth for this level of anti-concentration from linear depth to a sub-linear value, depending on the geometry of the interactions. This is relevant to a recent experiment by the Google Quantum AI group to perform such a sampling task with 53 qubits on a two-dimensional lattice (Arute et al. in Nature 574(7779):505–510, 2019; Boixo et al. in Nat Phys 14(6):595–600, 2018) (and related experiments by USTC), and confirms their conjecture that $$O(\sqrt{n})$$ depth suffices for anti-concentration. The proof is based on a previous construction of t-designs by Brandão et al. (2016), an analysis of how approximate designs behave under composition, and an extension of the quasi-orthogonality of permutation operators developed by Brandão et al. (2016). Different versions of the approximate-design condition correspond to different norms, and part of our contribution is to introduce the norm corresponding to anti-concentration and to establish equivalence between these various norms for low-depth circuits. For random circuits with long-range gates, we use different methods to show that anti-concentration happens at circuit size $$O(n\ln ^2 n)$$, corresponding to depth $$O(\ln ^3 n)$$. We also show a lower bound of $$\Omega (n \ln n)$$ for the size of such circuits in this case. We also prove that anti-concentration is possible in depth $$O(\ln n \ln \ln n)$$ (size $$O(n \ln n \ln \ln n)$$) using a different model.
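Anti-concentration has a concrete numerical face: for an anti-concentrated (near-Haar) output distribution, the collision probability $$Z = \sum_x p(x)^2$$ approaches the Porter–Thomas value $$2/(2^n+1)$$, only about twice the uniform-distribution minimum $$2^{-n}$$. A small self-contained check against a Haar-random state (an illustration of the property, not the paper's proof technique):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 10                         # qubits
dim = 2**n

def collision_probability(psi):
    p = np.abs(psi)**2         # Born-rule output distribution
    return float(np.sum(p**2))

# Haar-random pure state: normalized complex Gaussian vector.
psi = rng.standard_normal(dim) + 1j * rng.standard_normal(dim)
psi /= np.linalg.norm(psi)

print(f"Z for a Haar-random state : {collision_probability(psi):.3e}")
print(f"Haar average 2/(d+1)      : {2/(dim + 1):.3e}")   # anti-concentrated benchmark
print(f"uniform distribution 1/d  : {1/dim:.3e}")          # maximal-entropy floor
```

A random circuit anti-concentrates once its $$Z$$ is within a constant factor of this benchmark; the abstract's results concern how little depth suffices for that to happen.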