The transport of particles and fluids through multichannel microfluidic networks is influenced by the details of the channels. Because channels have microscale textures and macroscale geometries, this transport can differ from the case of ideally smooth channels. Surfaces of real channels have irregular boundary conditions to which streamlines adapt and with which particles interact. In low-Reynolds-number flows, particles may experience inertial forces that result in trans-streamline movement and the reorganization of particle distributions. Such transport is intrinsically 3D, and an accurate measurement must capture movement in all directions. To measure the effects of non-ideal surface textures on particle transport through complex networks, we developed an extended field-of-view 3D macroscope for high-resolution tracking across large volumes ($25\,\mathrm{mm} \times 25\,\mathrm{mm} \times 2\,\mathrm{mm}$)
The modeling of multiscale and multiphysics complex systems typically involves the use of scientific software that can optimally leverage extreme-scale computing. Despite major developments in recent years, these simulations continue to be computationally intensive and time consuming. Here we explore the use of AI to accelerate the modeling of complex systems at a fraction of the computational cost of classical methods, and present the first application of physics-informed neural operators (PINOs) to model 2D incompressible magnetohydrodynamics (MHD) simulations. Our AI models incorporate tensor Fourier neural operators as their backbone, which we implemented with the
NSF-PAR ID: 10429345
Publisher / Repository: IOP Publishing
Date Published:
Journal Name: Machine Learning: Science and Technology
Volume: 4
Issue: 3
ISSN: 2632-2153
Page Range / eLocation ID: Article No. 035002
Format(s): Medium: X
Sponsoring Org: National Science Foundation
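The tensor Fourier neural operators mentioned in the abstract above are built from spectral-convolution layers. As a rough NumPy sketch of that core idea — not the authors' implementation; the input field, mode count, and weights below are invented stand-ins — one such layer transforms the field to frequency space, keeps only low-frequency modes, multiplies them by learned complex weights, and transforms back:

```python
import numpy as np

def spectral_conv2d(u, weights, modes):
    """One Fourier-layer core: FFT, truncate to the lowest `modes`
    frequencies, multiply by learned complex weights, inverse FFT.
    u has shape (nx, ny); weights has shape (2, modes, modes)."""
    u_hat = np.fft.rfft2(u)                       # (nx, ny//2 + 1), complex
    out_hat = np.zeros_like(u_hat)
    # Mix only the retained low-frequency corners of the half-spectrum.
    out_hat[:modes, :modes] = u_hat[:modes, :modes] * weights[0]
    out_hat[-modes:, :modes] = u_hat[-modes:, :modes] * weights[1]
    return np.fft.irfft2(out_hat, s=u.shape)      # back to a real field

rng = np.random.default_rng(0)
nx = ny = 64
modes = 12
u = rng.standard_normal((nx, ny))                 # stand-in input field
weights = (rng.standard_normal((2, modes, modes))
           + 1j * rng.standard_normal((2, modes, modes)))
v = spectral_conv2d(u, weights, modes)
print(v.shape)  # (64, 64)
```

In a full operator, several such layers are interleaved with pointwise nonlinearities, and a physics-informed loss penalizes the residual of the governing PDE.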
More Like this

Abstract We used this macroscope to investigate a model multichannel microfluidic network. A topographical profile of the microfluidic surfaces provided lattice Boltzmann simulations with a detailed feature map to precisely reconstruct the experimental environment. Particle distributions from the simulations closely reproduced those observed experimentally, and both measurements were sensitive to the effects of surface roughness. Under the conditions studied, inertial focusing organized large particles into an annular distribution that limited their transport throughout the network, while small particles were transported uniformly to all regions.
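The lattice Boltzmann method referenced above can be illustrated with a toy two-dimensional D2Q9 solver. This is not the study's solver: the channel geometry, the "roughness" bump pattern, the body force, and all parameters here are invented for illustration only.

```python
import numpy as np

# Toy D2Q9 lattice Boltzmann sketch: forced flow in a periodic channel
# whose top wall carries a crude bump map; solids use full bounce-back.
nx, ny, tau, nsteps = 100, 32, 0.8, 200
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])    # lattice velocities
w = np.array([4/9] + [1/9]*4 + [1/36]*4)              # lattice weights
opp = [0, 3, 4, 1, 2, 7, 8, 5, 6]                     # opposite directions

solid = np.zeros((nx, ny), bool)
solid[:, 0] = solid[:, -1] = True                     # channel walls
solid[::10, -2] = True                                # toy roughness bumps

f = np.ones((9, nx, ny)) * w[:, None, None]           # start at rest
for _ in range(nsteps):
    rho = f.sum(0)
    ux = (f * c[:, 0, None, None]).sum(0) / rho + 1e-5  # tiny body force
    uy = (f * c[:, 1, None, None]).sum(0) / rho
    for i in range(9):                                # BGK collision
        cu = c[i, 0]*ux + c[i, 1]*uy
        feq = w[i]*rho*(1 + 3*cu + 4.5*cu**2 - 1.5*(ux**2 + uy**2))
        f[i] += -(f[i] - feq) / tau
    for i in range(9):                                # streaming
        f[i] = np.roll(np.roll(f[i], c[i, 0], axis=0), c[i, 1], axis=1)
    boundary = f[:, solid]                            # bounce-back on solids
    f[:, solid] = boundary[opp]

print(ux[~solid].mean() > 0)  # flow develops along the channel: True
```

Replacing the `solid` mask with a measured topographical feature map is, in spirit, how a simulation can reconstruct a real rough surface.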
Cosmological simulations of galaxy formation are limited by finite computational resources. We draw from the ongoing rapid advances in artificial intelligence (AI; specifically deep learning) to address this problem. Neural networks have been developed to learn from high-resolution (HR) image data and then make accurate super-resolution (SR) versions of different low-resolution (LR) images. We apply such techniques to LR cosmological N-body simulations, generating SR versions. Specifically, we are able to enhance the simulation resolution by generating 512 times more particles and predicting their displacements from the initial positions. Therefore, our results can be viewed as simulation realizations themselves, rather than projections, e.g., to their density fields. Furthermore, the generation process is stochastic, enabling us to sample the small-scale modes conditioning on the large-scale environment. Our model learns from only 16 pairs of small-volume LR-HR simulations and is then able to generate SR simulations that successfully reproduce the HR matter power spectrum to percent level up to $16\,h^{-1}\,\mathrm{Mpc}$ and the HR halo mass function to within $10\%$ down to $10^{11}\,M_{\odot}$. We successfully deploy the model in a box 1,000 times larger than the training simulation box, showing that high-resolution mock surveys can be generated rapidly. We conclude that AI assistance has the potential to revolutionize modeling of small-scale galaxy-formation physics in large cosmological volumes.
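The percent-level power-spectrum agreement claimed above can be checked with a simple binned FFT estimator. The sketch below uses random stand-in fields rather than simulation data, and the normalization is arbitrary since it cancels in the SR/HR ratio:

```python
import numpy as np

def binned_power(delta):
    """Spherically averaged |delta_k|^2 in integer-k bins (arbitrary norm).
    Normalization cancels when comparing two fields on the same grid."""
    n = delta.shape[0]
    dk = np.abs(np.fft.fftn(delta))**2
    kf = np.fft.fftfreq(n) * n                       # integer wavenumbers
    kx, ky, kz = np.meshgrid(kf, kf, kf, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2).round().astype(int)
    nbins = n // 2
    pk = np.bincount(kmag.ravel(), weights=dk.ravel(), minlength=nbins)[:nbins]
    counts = np.bincount(kmag.ravel(), minlength=nbins)[:nbins]
    return pk / np.maximum(counts, 1)

rng = np.random.default_rng(1)
hr = rng.standard_normal((32, 32, 32))               # stand-in "HR" field
sr = hr + 0.01 * rng.standard_normal((32, 32, 32))   # stand-in "SR" field
ratio = binned_power(sr)[1:] / binned_power(hr)[1:]  # skip the k=0 mean mode
print(np.allclose(ratio, 1.0, atol=0.05))  # percent-level agreement: True
```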
Abstract Mass measurements from low-mass black hole X-ray binaries (LMXBs) and radio pulsars have been used to identify a gap between the most massive neutron stars (NSs) and the least massive black holes (BHs). BH mass measurements in LMXBs are typically only possible for transient systems: outburst periods enable detection via all-sky X-ray monitors, while quiescent periods enable radial velocity measurements of the low-mass donor. We quantitatively study selection biases due to the requirement of transient behavior for BH mass measurements. Using rapid population synthesis simulations (COSMIC), detailed binary stellar-evolution models (MESA), and the disk instability model of transient behavior, we demonstrate that transient LMXB selection effects introduce observational biases and can suppress mass-gap BHs in the observed sample. However, we find a population of transient LMXBs with mass-gap BHs formed through accretion-induced collapse of an NS during the LMXB phase, which is inconsistent with observations. These results are robust against variations of binary evolution prescriptions. The significance of this accretion-induced-collapse population depends upon the maximum NS birth mass $M_{\mathrm{NS,birth}}^{\max}$. To reflect the observed dearth of low-mass BHs, COSMIC and MESA models favor $M_{\mathrm{NS,birth}}^{\max} \lesssim 2\,M_{\odot}$. In the absence of further observational biases against LMXBs with mass-gap BHs, our results indicate the need for additional physics connected to the modeling of LMXB formation and evolution.
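The accretion-induced-collapse channel described above amounts to simple bookkeeping: an NS accreting from its donor collapses to a (mass-gap) BH once its mass exceeds an assumed maximum NS mass. The sketch below is a toy illustration only; the masses, accretion rate, retention efficiency, and thresholds are invented, not COSMIC or MESA values.

```python
# Toy AIC bookkeeping: step an accreting NS forward in time and flag
# collapse to a black hole. All numbers are illustrative assumptions.
def evolve_ns(m_ns, m_donor, mdot, dt_myr, m_ns_max=2.2, beta=0.5):
    """Return (final mass, 'NS' or 'BH'). Masses in solar masses,
    mdot in solar masses per Myr; fraction beta of transfer is retained."""
    t = 0.0
    while m_donor > 0.1 and t < 1e4:          # until donor is exhausted
        dm = min(mdot * dt_myr, m_donor - 0.1)
        m_ns += beta * dm                      # retained accreted mass
        m_donor -= dm
        t += dt_myr
        if m_ns > m_ns_max:                    # AIC: NS -> mass-gap BH
            return m_ns, "BH"
    return m_ns, "NS"

m, kind = evolve_ns(m_ns=1.9, m_donor=1.0, mdot=1e-4, dt_myr=100.0)
print(kind)  # BH — the NS crossed the assumed maximum mass while accreting
```

Raising or lowering `m_ns_max` (standing in for the maximum NS birth mass plus accretion) directly controls how many such mass-gap BHs a synthetic population produces.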
Abstract We investigate the stellar mass–black hole mass ($M_{*}$–$M_{\mathrm{BH}}$) relation with type 1 active galactic nuclei (AGNs) down to $M_{\mathrm{BH}} = 10^{7}\,M_{\odot}$, corresponding to a ≃ −21 absolute magnitude in rest-frame ultraviolet, at $z = 2$–2.5. Exploiting the deep and large-area spectroscopic survey of the Hobby–Eberly Telescope Dark Energy Experiment (HETDEX), we identify 66 type 1 AGNs with $M_{\mathrm{BH}}$ ranging from $10^{7}$–$10^{10}\,M_{\odot}$ that are measured with the single-epoch virial method using C IV emission lines detected in the HETDEX spectra. $M_{*}$ of the host galaxies are estimated from optical to near-infrared photometric data taken with Spitzer, the Wide-field Infrared Survey Explorer, and ground-based 4–8 m class telescopes by CIGALE spectral energy distribution (SED) fitting. We further assess the validity of SED fitting in two cases by host–nuclear decomposition performed through surface brightness profile fitting on spatially resolved host galaxies with the James Webb Space Telescope/NIRCam CEERS data. We obtain the $M_{*}$–$M_{\mathrm{BH}}$ relation covering the unexplored low-mass ranges of $M_{\mathrm{BH}} \sim 10^{7}$–$10^{8}\,M_{\odot}$, and conduct forward modeling to fully account for the selection biases and observational uncertainties. The intrinsic $M_{*}$–$M_{\mathrm{BH}}$ relation at $z \sim 2$ has a moderate positive offset of 0.52 ± 0.14 dex from the local relation, suggestive of more efficient black hole growth at higher redshift even in the low-mass regime of $M_{\mathrm{BH}} \sim 10^{7}$–$10^{8}\,M_{\odot}$. Our $M_{*}$–$M_{\mathrm{BH}}$ relation is inconsistent with the $M_{\mathrm{BH}}$ suppression at the low-$M_{*}$ regime predicted by recent hydrodynamic simulations at a 98% confidence level, suggesting that feedback in the low-mass systems may be weaker than that produced in hydrodynamic simulations.
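The forward modeling of selection biases mentioned above can be illustrated with a toy Monte Carlo: draw an intrinsic $M_{*}$–$M_{\mathrm{BH}}$ relation with scatter, "observe" only sources above a luminosity cut, and measure the resulting offset. The slopes, scatters, and cut below are invented for illustration, not the survey's values.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000
log_mstar = rng.uniform(9.0, 11.5, n)               # host stellar masses
# Toy intrinsic relation: log M_BH = -2.5 + log M_* with 0.5 dex scatter.
log_mbh = -2.5 + 1.0 * log_mstar + rng.normal(0, 0.5, n)
# Toy luminosities: M_BH times a scattered Eddington-like ratio.
log_lum = log_mbh + rng.normal(-1.0, 0.5, n)
selected = log_lum > 7.5                            # flux-limited "survey"
# Mean offset of the observed sample from the intrinsic relation:
bias = (log_mbh[selected] - (-2.5 + log_mstar[selected])).mean()
print(bias > 0)  # True: the cut preferentially keeps overmassive BHs
```

This is exactly why an apparent positive offset must be forward-modeled before it can be attributed to genuinely more efficient black hole growth.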
Abstract We prove that $\mathrm{poly}(t)\cdot n^{1/D}$-depth local random quantum circuits with two-qudit nearest-neighbor gates on a $D$-dimensional lattice with $n$ qudits are approximate $t$-designs in various measures. These include the “monomial” measure, meaning that the monomials of a random circuit from this family have expectation close to the value that would result from the Haar measure. Previously, the best bound was $\mathrm{poly}(t)\cdot n$ due to Brandão–Harrow–Horodecki (Commun. Math. Phys. 346(2):397–434, 2016) for $D = 1$. We also improve the “scrambling” and “decoupling” bounds for spatially local random circuits due to Brown and Fawzi (Scrambling speed of random quantum circuits, 2012). One consequence of our result is that assuming the polynomial hierarchy ($\mathrm{PH}$) is infinite and that certain counting problems are $\#\mathrm{P}$-hard “on average”, sampling within total variation distance from these circuits is hard for classical computers. Previously, exact sampling from the outputs of even constant-depth quantum circuits was known to be hard for classical computers under these assumptions. However, the standard strategy for extending this hardness result to approximate sampling requires the quantum circuits to have a property called “anticoncentration”, meaning roughly that the output has near-maximal entropy. Unitary 2-designs have the desired anticoncentration property. Our result improves the required depth for this level of anticoncentration from linear depth to a sublinear value, depending on the geometry of the interactions.
This is relevant to a recent experiment by the Google Quantum AI group to perform such a sampling task with 53 qubits on a two-dimensional lattice (Arute et al. in Nature 574(7779):505–510, 2019; Boixo et al. in Nat. Phys. 14(6):595–600, 2018) (and related experiments by USTC), and confirms their conjecture that $O(\sqrt{n})$ depth suffices for anticoncentration. The proof is based on a previous construction of $t$-designs by Brandão et al. (2016), an analysis of how approximate designs behave under composition, and an extension of the quasi-orthogonality of permutation operators developed by Brandão et al. (2016). Different versions of the approximate design condition correspond to different norms, and part of our contribution is to introduce the norm corresponding to anticoncentration and to establish equivalence between these various norms for low-depth circuits. For random circuits with long-range gates, we use different methods to show that anticoncentration happens at circuit size $O(n \ln^2 n)$, corresponding to depth $O(\ln^3 n)$. We also show a lower bound of $\Omega(n \ln n)$ for the size of such circuits in this case. We also prove that anticoncentration is possible in depth $O(\ln n \ln \ln n)$ (size $O(n \ln n \ln \ln n)$) using a different model.
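Anticoncentration can be illustrated numerically on a small brickwork circuit: a scrambled state's collision probability $Z = \sum_x p(x)^2$ approaches the Porter–Thomas value $\approx 2/2^n$ rather than concentrating on few outcomes. The sketch below is a toy statevector simulation with invented sizes and thresholds, not an experiment from the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

def haar_unitary(d, rng):
    """Haar-random d x d unitary via QR of a complex Gaussian matrix."""
    z = (rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))   # fix column phases

def apply_two_qubit(state, u, i, n):
    """Apply a 4x4 unitary to qubits (i, i+1) of an n-qubit statevector."""
    t = state.reshape(2**i, 4, 2**(n - i - 2))
    return np.einsum("ab,ibj->iaj", u, t).reshape(-1)

n, depth = 8, 12
state = np.zeros(2**n, complex)
state[0] = 1.0
for layer in range(depth):                          # brickwork of random gates
    for i in range(layer % 2, n - 1, 2):
        state = apply_two_qubit(state, haar_unitary(4, rng), i, n)

p = np.abs(state)**2
collision = (p**2).sum()                            # anticoncentration measure
print(abs(p.sum() - 1) < 1e-9, collision < 0.05)    # normalized, spread out
```

A concentrated output (e.g. the initial state) would give $Z = 1$; the deep random circuit drives $Z$ down toward $2/2^n \approx 0.008$ here, which is the behavior the design/anticoncentration bounds quantify.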