<?xml version="1.0" encoding="UTF-8"?><rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:dcq="http://purl.org/dc/terms/"><records count="1" morepages="false" start="1" end="1"><record rownumber="1"><dc:product_type>Journal Article</dc:product_type><dc:title>Variationally correct neural residual regression for parametric PDEs: on the viability of controlled accuracy</dc:title><dc:creator>Bachmayr, Markus; Dahmen, Wolfgang; Oster, Mathias</dc:creator><dc:corporate_author/><dc:editor/><dc:description>&lt;title&gt;Abstract&lt;/title&gt; &lt;p&gt;This paper is about learning the parameter-to-solution map for systems of partial differential equations (PDEs) that depend on a potentially large number of parameters, covering all PDE types for which a stable variational formulation (SVF) can be found. A central constituent is the notion of a variationally correct residual loss function, meaning that its value is always uniformly proportional to the squared solution error in the norm determined by the SVF, hence facilitating rigorous a posteriori accuracy control. It is based on a single variational problem, associated with the family of parameter-dependent fibre problems, employing the notion of direct integrals of Hilbert spaces. Since in its original form the loss function is given as a dual test norm of the residual, a central objective is to develop equivalent computable expressions. A first critical role is played by hybrid hypothesis classes, whose elements are piecewise polynomial in (low-dimensional) spatio-temporal variables with parameter-dependent coefficients that can be represented, for example, by neural networks. 
Second, working with first-order SVFs, we distinguish two scenarios: (i) the test space can be chosen as an $L_{2}$-space (such as for elliptic or parabolic problems) so that residuals can be evaluated directly as elements of $L_{2}$; (ii) when trial and test spaces for the fibre problems depend on the parameters (as for transport equations), we use ultra-weak formulations. In combination with discontinuous Petrov–Galerkin concepts, the hybrid format is then instrumental in arriving at variationally correct computable residual loss functions. Our findings are illustrated by numerical experiments representing (i) and (ii), namely elliptic boundary value problems with piecewise constant diffusion coefficients and pure transport equations with parameter-dependent convection fields.&lt;/p&gt;</dc:description><dc:publisher>IMA Journal of Numerical Analysis</dc:publisher><dc:date>2025-10-02</dc:date><dc:nsf_par_id>10643356</dc:nsf_par_id><dc:journal_name>IMA Journal of Numerical Analysis</dc:journal_name><dc:journal_volume/><dc:journal_issue/><dc:page_range_or_elocation>1-50</dc:page_range_or_elocation><dc:issn>0272-4979</dc:issn><dc:isbn/><dc:doi>https://doi.org/10.1093/imanum/draf073</dc:doi><dcq:identifierAwardId>2038080; 2245097; 2012469</dcq:identifierAwardId><dc:subject>parametric partial differential equations</dc:subject><dc:subject>solution manifolds</dc:subject><dc:subject>physics-informed learning</dc:subject><dc:subject>deep neural networks</dc:subject><dc:subject>stable variational formulations</dc:subject><dc:subject>least-squares variational formulations</dc:subject><dc:subject>discontinuous Petrov–Galerkin methods</dc:subject><dc:version_number/><dc:location/><dc:rights/><dc:institution/><dc:sponsoring_org>National Science Foundation</dc:sponsoring_org></record></records></rdf:RDF>