Melt pool dynamics in metal additive manufacturing (AM) are critical to process stability, microstructure formation, and the final properties of printed materials. Physics-based simulation, including computational fluid dynamics (CFD), is the dominant approach to predicting melt pool dynamics; however, it suffers from the inherent issue of very high computational cost. This paper provides a physics-informed machine learning method that integrates conventional neural networks with the governing physical laws to predict melt pool dynamics, such as temperature, velocity, and pressure, without using any training data on velocity and pressure. The approach avoids solving the nonlinear Navier–Stokes equations numerically, which significantly reduces the computational cost (when the cost of velocity data generation is included). Hard-to-determine parameter values in the governing equations can also be inferred through data-driven discovery. In addition, the physics-informed neural network (PINN) architecture has been optimized for efficient model training. The data efficiency of the PINN model is attributed to the extra penalty terms that incorporate the governing PDEs, initial conditions, and boundary conditions into the model's loss.
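The composite loss idea described in the abstract can be sketched as follows. This is an illustrative toy, not the paper's implementation: it uses the 1D heat equation as a stand-in for the melt-pool governing equations, approximates derivatives with finite differences on a grid (a real PINN would use automatic differentiation through the network), and all names and weights are assumptions.

```python
import numpy as np

def pinn_style_loss(u, u_data, x, t, alpha, lam_pde=1.0, lam_bc=1.0):
    """Data misfit + PDE-residual penalty + boundary-condition penalty
    for the toy heat equation u_t = alpha * u_xx.

    u      : (len(t), len(x)) grid of predicted temperatures
    u_data : sparse "measurements" at every 40th time / 20th space index
    """
    dx, dt = x[1] - x[0], t[1] - t[0]
    # interior residual of u_t - alpha * u_xx via central differences
    u_t = (u[2:, 1:-1] - u[:-2, 1:-1]) / (2.0 * dt)
    u_xx = (u[1:-1, 2:] - 2.0 * u[1:-1, 1:-1] + u[1:-1, :-2]) / dx**2
    loss_pde = np.mean((u_t - alpha * u_xx) ** 2)
    # Dirichlet boundary conditions u(0, t) = u(1, t) = 0
    loss_bc = np.mean(u[:, 0] ** 2) + np.mean(u[:, -1] ** 2)
    # sparse data term on temperature only, mirroring the setting of
    # training without any velocity or pressure data
    loss_data = np.mean((u[::40, ::20] - u_data) ** 2)
    return loss_data + lam_pde * loss_pde + lam_bc * loss_bc

alpha = 0.1
x = np.linspace(0.0, 1.0, 101)
t = np.linspace(0.0, 0.5, 201)
X, T = np.meshgrid(x, t)                       # shapes (201, 101)
u_exact = np.exp(-alpha * np.pi**2 * T) * np.sin(np.pi * X)
u_data = u_exact[::40, ::20]                   # sparse synthetic measurements

loss_exact = pinn_style_loss(u_exact, u_data, x, t, alpha)  # near zero
loss_zero = pinn_style_loss(np.zeros_like(u_exact), u_data, x, t, alpha)
```

The exact solution drives all three penalty terms to (near) zero, while a wrong field is penalized by the data term even where no PDE residual exists; this is the sense in which the physics terms act as extra supervision and reduce the amount of labeled data needed.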

Free, publicly accessible full text available August 1, 2025
Self-supervised learning through contrastive representations is an emergent and promising avenue, aiming to alleviate the need for labeled data. Recent research in the field also demonstrates its viability for several downstream tasks, leading to works that implement the contrastive principle through innovative loss functions and methods. However, despite achieving impressive progress, most methods depend on prohibitively large batch sizes and compute requirements for good performance. In this work, we propose AUC-Contrastive Learning, a new approach to contrastive learning that demonstrates robust and competitive performance in compute-limited regimes. We incorporate the contrastive objective within the AUC-maximization framework, noting that the AUC metric is maximized by enhancing the probability of the network's binary prediction difference between positive and negative samples, which inspires adequate embedding-space arrangements in representation learning. Unlike standard contrastive methods, when performing stochastic optimization, our method maintains unbiased stochastic gradients and is thus more robust to batch size than standard stochastic optimization problems. Remarkably, with a batch size of 256, our method outperforms several state-of-the-art methods that may need much larger batch sizes (e.g., 4096) on ImageNet and other standard datasets. Experiments on transfer learning and few-shot learning tasks also demonstrate the downstream viability of our method. Code is available at AUCCL.
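The AUC-maximization idea in this abstract can be sketched with a minimal surrogate. This is an illustrative reconstruction under stated assumptions, not the released AUCCL code: AUC is the probability that a positive pair scores higher than a negative pair, so a differentiable surrogate penalizes every (positive, negative) score pair whose margin is violated; the squared-hinge form and margin value here are assumptions.

```python
import numpy as np

def auc_surrogate_loss(pos_scores, neg_scores, margin=1.0):
    """Mean squared-hinge penalty over all positive/negative score pairs.

    Because this is a plain mean over independent pairs, a mini-batch of
    pairs yields an unbiased gradient estimate, unlike softmax-normalized
    contrastive losses whose denominator couples all negatives in a batch.
    """
    diff = pos_scores[:, None] - neg_scores[None, :]   # s_pos - s_neg per pair
    return float(np.mean(np.maximum(0.0, margin - diff) ** 2))

def empirical_auc(pos_scores, neg_scores):
    """Fraction of (positive, negative) pairs ranked correctly; ties count half."""
    diff = pos_scores[:, None] - neg_scores[None, :]
    return float(np.mean((diff > 0) + 0.5 * (diff == 0)))

pos = np.array([2.5, 3.0])   # similarity scores of positive pairs
neg = np.array([0.0, 1.0])   # similarity scores of negative pairs
```

When every positive pair outscores every negative pair by at least the margin, the surrogate loss is zero and the empirical AUC is 1; the loss grows as the ranking degrades, which is the embedding-space arrangement the objective encourages.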
Abstract: A search for the fully reconstructed $B_s^0 \to \mu^+\mu^-\gamma$ decay is performed at the LHCb experiment using proton-proton collisions at $\sqrt{s} = 13$ TeV, corresponding to an integrated luminosity of 5.4 fb$^{-1}$. No significant signal is found, and upper limits on the branching fraction in intervals of the dimuon mass are set:

$$\begin{array}{ll}
\mathcal{B}\left(B_s^0 \to \mu^+\mu^-\gamma\right) < 4.2 \times 10^{-8}, & m\left(\mu^+\mu^-\right) \in \left[2m_\mu,\, 1.70\right]\ \mathrm{GeV}/c^2,\\
\mathcal{B}\left(B_s^0 \to \mu^+\mu^-\gamma\right) < 7.7 \times 10^{-8}, & m\left(\mu^+\mu^-\right) \in \left[1.70,\, 2.88\right]\ \mathrm{GeV}/c^2,\\
\mathcal{B}\left(B_s^0 \to \mu^+\mu^-\gamma\right) < 4.2 \times 10^{-8}, & m\left(\mu^+\mu^-\right) \in \left[3.92,\, m_{B_s^0}\right]\ \mathrm{GeV}/c^2,
\end{array}$$

at 95% confidence level. Additionally, upper limits are set on the branching fraction in the $\left[2m_\mu, 1.70\right]\ \mathrm{GeV}/c^2$ dimuon mass region excluding the contribution from the intermediate $\phi(1020)$ meson, and in the region combining all dimuon-mass intervals.

Free, publicly accessible full text available July 1, 2025