Thin film evaporation is a widely used thermal management solution for micro/nano devices with high energy densities. Local measurements of the evaporation rate at a liquid–vapor interface, however, are limited. We present a continuous profile of the evaporation heat transfer coefficient ($h_{\text{evap}}$) …
The differential cross section for the quasi-free photoproduction reaction …
 NSF-PAR ID: 10472580
 Publisher / Repository: Springer Science + Business Media
 Date Published:
 Journal Name: The European Physical Journal A
 Volume: 59
 Issue: 11
 ISSN: 1434-601X
 Format(s): Medium: X
 Sponsoring Org: National Science Foundation
More Like this

Abstract  Thin film evaporation is a widely used thermal management solution for micro/nano devices with high energy densities. Local measurements of the evaporation rate at a liquid–vapor interface, however, are limited. We present a continuous profile of the evaporation heat transfer coefficient ($h_{\text{evap}}$) in the submicron thin film region of a water meniscus, obtained through local measurements interpreted by a machine-learned surrogate of the physical system. Frequency-domain thermoreflectance (FDTR), a non-contact laser-based method with micrometer lateral resolution, is used to induce and measure the meniscus evaporation. A neural network is then trained using finite element simulations to extract the $h_{\text{evap}}$ profile from the FDTR data. For a substrate superheat of 20 K, the maximum $h_{\text{evap}}$ is $1.0_{-0.3}^{+0.5}$ MW/(m$^2$ K) at a film thickness of $15_{-3}^{+29}$ nm. This ultrahigh $h_{\text{evap}}$ value is two orders of magnitude larger than the heat transfer coefficient for single-phase forced convection or evaporation from a bulk liquid. Under the assumption of constant wall temperature, our profiles of $h_{\text{evap}}$ and meniscus thickness suggest that 62% of the heat transfer comes from the region lying 0.1–1 μm from the meniscus edge, whereas just 29% comes from the next 100 μm.
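As a rough sanity check (an illustrative calculation, not from the paper), the peak local heat flux implied by the reported best-fit values follows directly from $q'' = h_{\text{evap}}\,\Delta T$:

```python
# Illustrative arithmetic only: peak local heat flux implied by the
# reported best-fit values. Assumes the full 20 K substrate superheat
# acts as the driving temperature difference, which is a simplification.
h_evap = 1.0e6   # W/(m^2 K), reported maximum h_evap (best-fit value)
dT = 20.0        # K, substrate superheat

q_peak = h_evap * dT  # W/m^2
print(f"peak local heat flux ~ {q_peak / 1e6:.0f} MW/m^2")  # 20 MW/m^2
```

A flux of this magnitude at the meniscus edge is consistent with the abstract's point that most of the heat transfer is concentrated in the first micrometer of the film.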
Abstract  In a Merlin–Arthur proof system, the proof verifier (Arthur) accepts valid proofs (from Merlin) with probability 1, and rejects invalid proofs with probability arbitrarily close to 1. The running time of such a system is defined to be the length of Merlin's proof plus the running time of Arthur. We provide new Merlin–Arthur proof systems for some key problems in fine-grained complexity. In several cases our proof systems have optimal running time. Our main results include:
Certifying that a list of n integers has no 3SUM solution can be done in Merlin–Arthur time $\tilde{O}(n)$. Previously, Carmosino et al. [ITCS 2016] showed that the problem has a nondeterministic algorithm running in $\tilde{O}(n^{1.5})$ time (that is, there is a proof system with proofs of length $\tilde{O}(n^{1.5})$ and a deterministic verifier running in $\tilde{O}(n^{1.5})$ time).

Counting the number of k-cliques with total edge weight equal to zero in an n-node graph can be done in Merlin–Arthur time $\tilde{O}(n^{\lceil k/2\rceil})$ (where $k\ge 3$). For odd k, this bound can be further improved for sparse graphs: for example, counting the number of zero-weight triangles in an m-edge graph can be done in Merlin–Arthur time $\tilde{O}(m)$. Previous Merlin–Arthur protocols by Williams [CCC'16] and Björklund and Kaski [PODC'16] could only count k-cliques in unweighted graphs, and had worse running times for small k.

Computing the All-Pairs Shortest Distances matrix for an n-node graph can be done in Merlin–Arthur time $\tilde{O}(n^2)$. Note this is optimal, as the matrix can have $\Omega(n^2)$ nonzero entries in general. Previously, Carmosino et al. [ITCS 2016] showed that this problem has an $\tilde{O}(n^{2.94})$ nondeterministic time algorithm.

Certifying that an n-variable k-CNF is unsatisfiable can be done in Merlin–Arthur time $2^{n/2 - n/O(k)}$. We also observe an algebrization barrier for the previous $2^{n/2}\cdot \textrm{poly}(n)$-time Merlin–Arthur protocol of R. Williams [CCC'16] for $\#$SAT: in particular, his protocol algebrizes, and we observe there is no algebrizing protocol for k-UNSAT running in $2^{n/2}/n^{\omega(1)}$ time. Therefore we have to exploit non-algebrizing properties to obtain our new protocol.

Certifying that a Quantified Boolean Formula is true can be done in Merlin–Arthur time $2^{4n/5}\cdot \textrm{poly}(n)$. Previously, the only nontrivial result known along these lines was an Arthur–Merlin–Arthur protocol (where Merlin's proof depends on some of Arthur's coins) running in $2^{2n/3}\cdot \textrm{poly}(n)$ time.

Due to the centrality of these problems in fine-grained complexity, our results have consequences for many other problems of interest. For example, our work implies that certifying there is no Subset Sum solution to n integers can be done in Merlin–Arthur time $2^{n/3}\cdot \textrm{poly}(n)$, improving on the previous best protocol by Nederlof [IPL 2017] which took $2^{0.49991n}\cdot \textrm{poly}(n)$ time.
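To make the first result concrete, here is the classical baseline the Merlin–Arthur protocol improves on (a standard textbook check, not the paper's protocol): deciding whether any triple of the n integers sums to zero takes roughly quadratic time deterministically, whereas the abstract's protocol certifies the *absence* of a solution with an $\tilde{O}(n)$-length proof and verifier.

```python
def has_3sum(nums):
    """Deterministic O(n^2) check for a 3SUM solution (a + b + c == 0).

    This is only the classical baseline; the Merlin-Arthur protocol in
    the abstract certifies that no solution exists in O~(n) time.
    """
    n = len(nums)
    for i in range(n):
        seen = set()
        for j in range(i + 1, n):
            # A triple (nums[i], b, nums[j]) sums to zero iff the value
            # b = -(nums[i] + nums[j]) was already seen between i and j.
            if -(nums[i] + nums[j]) in seen:
                return True
            seen.add(nums[j])
    return False

print(has_3sum([3, -1, -2, 8]))  # True: 3 + (-1) + (-2) == 0
print(has_3sum([1, 2, 3, 4]))    # False: all-positive input
```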
Abstract  The search for neutrino events in correlation with gravitational wave (GW) events for three observing runs (O1, O2 and O3) from 09/2015 to 03/2020 has been performed using the Borexino dataset of the same period. We have searched for signals of neutrino–electron scattering and inverse beta-decay (IBD) within a time window of $\pm\,1000$ s centered at the detection moment of a particular GW event. The search was done with three visible energy thresholds of 0.25, 0.8 and 3.0 MeV. Two types of incoming neutrino spectra were considered: the mono-energetic line and the supernova-like spectrum. GW candidates originated by merging binaries of black holes (BHBH), neutron stars (NSNS), and a neutron star and a black hole (NSBH) were analyzed separately. Additionally, the subset of the most intensive BHBH mergers at closer distances and with larger radiative mass than the rest was considered. In total, follow-ups of 74 out of 93 gravitational waves reported in the GWTC-3 catalog were analyzed and no statistically significant excess over the background was observed. As a result, the strongest upper limits on GW-associated neutrino and antineutrino fluences for all flavors ($\nu_e, \nu_\mu, \nu_\tau$) at the level of $10^9$–$10^{15}~\textrm{cm}^{-2}\,\textrm{GW}^{-1}$ have been obtained in the 0.5–5 MeV neutrino energy range.
Abstract  Let $\phi$ be a positive map from the $n\times n$ matrices $\mathcal{M}_n$ to the $m\times m$ matrices $\mathcal{M}_m$. It is known that $\phi$ is 2-positive if and only if for all $K\in \mathcal{M}_n$ and all strictly positive $X\in \mathcal{M}_n$, $\phi(K^*X^{-1}K) \geqslant \phi(K)^*\phi(X)^{-1}\phi(K)$. This inequality is not generally true if $\phi$ is merely a Schwarz map. We show that the corresponding tracial inequality $\textrm{Tr}[\phi(K^*X^{-1}K)] \geqslant \textrm{Tr}[\phi(K)^*\phi(X)^{-1}\phi(K)]$ holds for a wider class of positive maps that is specified here. We also comment on the connections of this inequality with various monotonicity statements that have found wide use in mathematical physics, and apply it, and a close relative, to obtain some new, definitive results.
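As an illustration (a numerical sketch, not from the paper), the operator inequality for 2-positive maps can be checked with NumPy for the completely positive — hence 2-positive — map $\phi(A) = V^*AV$, with all matrices chosen at random:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 3

# A completely positive (hence 2-positive) map phi(A) = V* A V taking
# 4x4 matrices to 3x3 matrices. A random V has full column rank with
# probability 1, so phi(X) below is strictly positive and invertible.
V = rng.standard_normal((n, m)) + 1j * rng.standard_normal((n, m))
phi = lambda A: V.conj().T @ A @ V

K = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

# A strictly positive X: B* B + I for a random B.
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
X = B.conj().T @ B + np.eye(n)

lhs = phi(K.conj().T @ np.linalg.inv(X) @ K)
rhs = phi(K).conj().T @ np.linalg.inv(phi(X)) @ phi(K)

# 2-positivity guarantees lhs - rhs is positive semidefinite, so its
# smallest eigenvalue is >= 0 up to floating-point rounding error.
gap = np.linalg.eigvalsh(lhs - rhs).min()
print(gap >= -1e-9)  # True
```

Replacing `phi` with a map that is positive but not 2-positive (the transpose map is the standard example) can make the smallest eigenvalue genuinely negative, which is the failure mode the abstract's tracial inequality circumvents.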
Abstract  It has been recently established in David and Mayboroda (Approximation of Green functions and domains with uniformly rectifiable boundaries of all dimensions, arXiv:2010.09793) that on uniformly rectifiable sets the Green function is almost affine in the weak sense, and moreover, in some scenarios such Green function estimates are equivalent to the uniform rectifiability of a set. The present paper tackles a strong analogue of these results, starting with the "flagship" degenerate operators $L_{\beta,\gamma} = -\,\textrm{div}\, D^{d+1+\gamma-n}\nabla$ on sets with lower-dimensional boundaries. We consider the elliptic operators $L_{\beta,\gamma}$ associated to a domain $\Omega \subset \mathbb{R}^n$ with a uniformly rectifiable boundary $\Gamma$ of dimension $d < n-1$, the now usual distance to the boundary $D = D_\beta$ given by $D_\beta(X)^{-\beta} = \int_{\Gamma} |X-y|^{-d-\beta}\, d\sigma(y)$ for $X \in \Omega$, where $\beta > 0$ and $\gamma \in (-1,1)$. In this paper we show that the Green function $G$ for $L_{\beta,\gamma}$, with pole at infinity, is well approximated by multiples of $D^{1-\gamma}$, in the sense that the function $\big|\, D\nabla \big(\ln \big( \tfrac{G}{D^{1-\gamma}} \big)\big)\big|^2$ satisfies a Carleson measure estimate on $\Omega$. We underline that the strong and the weak results are different in nature and, of course, at the level of the proofs: the latter extensively used compactness arguments, while the present paper relies on some intricate integration by parts and the properties of the "magical" distance function from David et al. (Duke Math J, to appear).
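To see why $D_\beta$ deserves the name "distance to the boundary," one can carry out the defining integral in the simplest flat case $\Gamma = \mathbb{R}^d \subset \mathbb{R}^n$ (a standard observation illustrating the definition, not a claim from this paper):

```latex
% For \Gamma = \mathbb{R}^d with \sigma the d-dimensional Lebesgue
% measure, write \delta = \operatorname{dist}(X, \Gamma) and let x' be
% the projection of X onto \Gamma. Substituting y = x' + \delta z,
% so that |X - y| = \delta (1 + |z|^2)^{1/2} and dy = \delta^d\, dz:
D_\beta(X)^{-\beta}
  = \int_{\mathbb{R}^d} |X - y|^{-d-\beta} \, dy
  = \delta^{-\beta} \int_{\mathbb{R}^d}
      (1 + |z|^2)^{-\frac{d+\beta}{2}} \, dz
  = c_{d,\beta}\, \delta^{-\beta},
% where c_{d,\beta} is finite because \beta > 0. Hence
% D_\beta(X) = c_{d,\beta}^{-1/\beta} \operatorname{dist}(X, \Gamma):
% in the flat case D_\beta is an exact constant multiple of the
% Euclidean distance to the boundary.
```

For a general uniformly rectifiable $\Gamma$ the same quantity is only comparable to the distance, which is what makes the smoothness of $D_\beta$ valuable in the Carleson estimates above.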