<?xml-model href='http://www.tei-c.org/release/xml/tei/custom/schema/relaxng/tei_all.rng' schematypens='http://relaxng.org/ns/structure/1.0'?><TEI xmlns="http://www.tei-c.org/ns/1.0">
	<teiHeader>
		<fileDesc>
			<titleStmt><title level='a'>Adaptive parameter selection in nudging based data assimilation</title></titleStmt>
			<publicationStmt>
				<publisher>Computer Methods in Applied Mechanics and Engineering</publisher>
				<date>01/01/2025</date>
			</publicationStmt>
			<sourceDesc>
				<bibl> 
					<idno type="par_id">10603935</idno>
					<idno type="doi">10.1016/j.cma.2024.117526</idno>
					<title level='j'>Computer Methods in Applied Mechanics and Engineering</title>
<idno type="issn">0045-7825</idno>
<biblScope unit="volume">433</biblScope>
<biblScope unit="issue">PB</biblScope>					

					<author>Aytekin Çıbık</author><author>Rui Fang</author><author>William Layton</author><author>Farjana Siddiqua</author>
				</bibl>
			</sourceDesc>
		</fileDesc>
		<profileDesc>
			<abstract><ab><![CDATA[Data assimilation combines (imperfect) knowledge of a flow's physical laws with (noisy, time-lagged, and otherwise imperfect) observations to produce a more accurate prediction of flow statistics. Assimilation by nudging (from 1964), while non-optimal, is easy to implement and its analysis is clear and well-established. Nudging's uniform-in-time accuracy has even been established under conditions on the nudging parameter and the density of observational locations, Larios et al. (2019). One remaining issue is that nudging requires the user to select a key parameter. The conditions required for this parameter, derived through a priori (worst case) analysis, are severe (Section 2.1 herein) and far beyond those found to be effective in computational experience. One resolution, developed herein, is self-adaptive parameter selection. This report develops, analyzes, tests, and compares two methods of self-adaptation of nudging parameters. One combines analysis and response to local flow behavior. The other is based only on response to flow behavior. The comparison finds both are easily implemented and yield effective values of the nudging parameter much smaller than those of a priori analysis.]]></ab></abstract>
		</profileDesc>
	</teiHeader>
	<text><body xmlns="http://www.tei-c.org/ns/1.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xlink="http://www.w3.org/1999/xlink">
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Predictions of the future state of a flow, here internal 2D or 3D flow of an incompressible viscous fluid in a domain Ω,</p><p>u_t + u·∇u − ν∆u + ∇p = f(x), and ∇·u = 0, in Ω, 0 &lt; t ≤ T, (1.1)</p><p>u = 0 on ∂Ω and u(x, 0) = u_0(x), (1.2)</p><p>are improved, Kalnay <ref type="bibr">[1]</ref>, by incorporating / assimilating observations of the flow u(x, t) for 0 &lt; t ≤ T.</p><p>The goal of data assimilation is to combine incomplete, sparse, noisy, and possibly time-delayed observations with incomplete and approximate knowledge of a flow's dynamic laws to produce a more accurate prediction of flow statistics. Nudging-based assimilation, dating from the 1964 work of Luenberger <ref type="bibr">[2]</ref>, is amenable both to analysis, e.g. Biswas and Price <ref type="bibr">[3]</ref>, Azouani, Olson and Titi <ref type="bibr">[4]</ref>, Cao, Giorgini, Jolly and Pakzad <ref type="bibr">[5]</ref>, Larios, Rebholz, and Zerfas <ref type="bibr">[6]</ref> (among hundreds of papers), and to straightforward implementation. However, nudging is non-optimal since no criterion is minimized, Lakshmivarahan and Lewis <ref type="bibr">[7]</ref>. Uniform-in-time convergence has even been proven <ref type="bibr">[6]</ref> under parameter conditions reviewed in Section 2. A remaining difficulty is that users must select effective nudging parameters, and the analytical theory places parameter restrictions, see Section 2.1, far beyond the ranges found effective in computational experience, summarized in Kalnay <ref type="bibr">[1]</ref>. Since a priori analysis develops conditions from a series of worst-case estimates, these restrictions may be pessimistic, and self-adaptive parameter selection, developed herein, is natural.</p><p>To develop adaptive methods, we adopt the simplest interesting setting. 
Let the true velocity be denoted u(x, t) and let its observed values u_H = I_H(u) be an L² projection of the true values onto a finite-dimensional subspace (associated with a length-scale denoted H). The continuum, nudged approximation v(x, t) satisfies: select a parameter χ &gt; 0, set v(x, 0) = v_0(x) and solve</p><p>v_t + v·∇v − ν∆v − χ I_H(u − v) + ∇q = f(x), ∇·v = 0, in Ω. (1.3)</p><p>The initial and boundary conditions are u = 0 on ∂Ω, v = 0 on ∂Ω, u(x, 0) = u_0(x), and v(x, 0) = v_0(x).</p><p>To support the necessity of adapting χ, in Section 2 we summarize error analysis (inspired by the analysis in <ref type="bibr">[6]</ref>) showing that for H small enough and χ large enough, nudging is accurate uniformly in time. Specifically, the conditions from this analysis are</p><p>H-condition: ν − χ c_0² H² ≥ 0, (1.4)</p><p>2D χ-condition: χ/2 − C ν^{-1} sup_{t&gt;0} ‖∇u‖² ≥ α_0 &gt; 0, (1.5)</p><p>3D χ-condition: χ/2 − (2048/19683) ν^{-3} sup_{t&gt;0} ‖∇u‖⁴ ≥ α_0 &gt; 0. (1.6)</p><p>With explicit time discretization, the nudging term alters the CFL stability condition and a third parameter constraint emerges, detailed in Farhat, Larios, Martinez, and Whitehead [8]. For high Reynolds number flows in 2D and 3D, in Section 2.1 we investigate what turbulence phenomenology suggests χ large enough and H small enough in (1.4)-(1.6) mean. There results</p><p>in 2D: χ ≳ O(Re^{3/2}) and (H/L)² ≲ O(Re^{-5/2}), (1.7)</p><p>in 3D: χ ≳ O(Re^5) and H/L ≲ O(Re^{-3}). (1.8)</p><p>While sufficient (in theory), these are impractical (in practice). Extensive practical experience, Kalnay <ref type="bibr">[1]</ref>, shows that the nudging parameter is best chosen of intermediate size, balancing data errors with model errors and numerical errors. Thus the parameter requirements (1.4)-(1.6) are far beyond parameter values in problems where assimilation is used. The gap is likely because conditions (1.4)-(1.6) emerge from analysis through a series of worst-case estimates which may not occur simultaneously, and (1.7), (1.8) emerge from the assumption that the flow is fully developed turbulence. 
To address this issue, in Section 3 we present two low-complexity algorithms for self-adaptive parameter selection. These act to find a smaller effective value of χ in response to the local flow behavior and are only slightly more expensive in operations and memory than a priori selection.</p><p>Adaptivity exploits χ-sensitivity to improve approximations. It is based on three principles: localization (of the global criteria), estimation (of the deviation from a desired state), and decision (of how to choose the next parameter value). The global condition that ‖u(·, t) − v(·, t)‖ is decreasing is localized to requiring a decrease every time step. The estimator is ‖I_H(u − v)‖ and the decision tree is to increase χ for large estimates and decrease χ for very small estimates. We compare two decision strategies: one based on derived analytical criteria for decrease, and one based on simply asking whether a decrease is observed.</p><p>Section 4 presents tests of the adaptive nudging methods for problems with model errors with both a priori and self-adaptive parameter selections. We find both adaptive methods produce effective χ values smaller than (1.7)-(1.8) and accurate velocities. We also observe that an H-condition seems to be necessary for accurate, long-time approximations. Addressing the H-condition is an open problem. In many applications, H depends on the density of sensors.</p></div>
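As a concrete illustration of the observation operator, I_H can be realized as cell averaging over a coarse grid of scale H. The following 1D sketch is our own illustration (not the paper's finite element implementation); the paper only requires that I_H be an L² projection with O(H) accuracy:

```python
import numpy as np

def I_H(w, H, x):
    """Cell-average the samples w(x) over coarse cells of width H: the L^2
    projection onto piecewise constants, a simple stand-in for I_H."""
    n = max(int(round((x[-1] - x[0]) / H)), 1)
    idx = np.minimum(np.floor((x - x[0]) / H).astype(int), n - 1)
    sums = np.bincount(idx, weights=w, minlength=n)
    counts = np.bincount(idx, minlength=n)
    return (sums / counts)[idx]

# Projection error decreases roughly like O(H) for a smooth velocity field.
x = np.linspace(0.0, 1.0, 2001)
u = np.sin(2.0 * np.pi * x)
err_coarse = float(np.sqrt(np.mean((u - I_H(u, 0.20, x)) ** 2)))
err_fine = float(np.sqrt(np.mean((u - I_H(u, 0.05, x)) ** 2)))
```

Shrinking H shrinks the error proportionally, mirroring the approximation property ‖w − I_H(w)‖ ≤ c_0 H ‖∇w‖ assumed in Section 2.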
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.1.">Related work</head><p>Nudging for data assimilation stems from the 1964 work of Luenberger <ref type="bibr">[2]</ref> and has been extensively tested in geophysical flow simulations, e.g., <ref type="bibr">[1,</ref><ref type="bibr">7,</ref><ref type="bibr">[9]</ref><ref type="bibr">[10]</ref><ref type="bibr">[11]</ref><ref type="bibr">[12]</ref>. Its form also arises under other names, such as various types of damping and time relaxation, e.g., <ref type="bibr">[13]</ref>, with large numbers of papers for each realization. The analytical work on nudging is similarly enormous; we mention three papers that we (and most others) build upon. Azouani, Olson, and Titi <ref type="bibr">[4]</ref> gave an early, and possibly even the first, complete convergence analysis of a nudging algorithm. Their analysis produced a χ-condition similar to the one herein in their Theorem 2.1, p. 254. Biswas and Price <ref type="bibr">[3]</ref> gave a remarkable analysis in 3D without assuming a priori strong solutions. Their analysis recovered conditions similar to those herein on χ (their equation <ref type="bibr">(3.27)</ref>) and on H (their (3.8)). Biswas and Price <ref type="bibr">[3]</ref> also provided an early algorithm for adaptation of χ which retained a Reynolds number dependent lower bound on χ in their equation <ref type="bibr">(4.4)</ref>. In 2019 Larios, Rebholz, and Zerfas <ref type="bibr">[6]</ref> proved uniform-in-time error bounds in a way that narrowed the gap between analysis and computational tests. In Zerfas, Rebholz, Schneier, and Iliescu <ref type="bibr">[14]</ref>, the nudging parameter (here χ) is adapted when the nudged system's kinetic energy is out of the bounds that the NSE energy inequality imposes. See also García-Archilla and Novo <ref type="bibr">[15]</ref> for discrete-time error analysis.</p><p>None of these papers sought an optimal χ. Nudging emerged from Luenberger's work on control theory and was then developed in independent directions. 
Along this path, several works reconnected nudging to control and optimization, deriving methods for optimization of the nudging parameter. For work on this approach (not considered herein) involving solving the adjoint problem, see Hoke and Anthes <ref type="bibr">[9]</ref>, Stauffer and Bao <ref type="bibr">[11]</ref>, Zou, Navon, and LeDimet <ref type="bibr">[12]</ref> and Navon <ref type="bibr">[10]</ref>. Carlson, Hudson, and Larios <ref type="bibr">[16]</ref> and Martinez <ref type="bibr">[17]</ref> have studied recovering the unknown viscosity parameter for the 2D Navier-Stokes equations (NSE). Parameter estimates for the Lorenz equations are studied in Carlson, Hudson, Larios, Martinez, Ng, and Whitehead <ref type="bibr">[18]</ref>. Finally, after this report's submission, two interesting reports exploring other approaches to adapting χ have become available on arXiv, Carlson, Farhat, Martinez, and Victor <ref type="bibr">[19,</ref><ref type="bibr">20]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.2.">Preliminaries</head><p>The notation for the L²(Ω) norm and inner product is ‖ · ‖ and (·, ·), respectively. Using ‖ · ‖_X, we indicate the X(Ω) norm. The numerical tests use a standard finite element discretization, see <ref type="bibr">[21]</ref> for details. We also use the inequalities derived by Ladyzhenskaya <ref type="bibr">[22]</ref>.</p><p>Theorem 1.1 (The Ladyzhenskaya Inequalities, See Ladyzhenskaya <ref type="bibr">[22]</ref>). For any vector function u : R^d → R^d with compact support and with the indicated norms finite,</p><p>‖u‖_{L⁴} ≤ C ‖u‖^{1/2} ‖∇u‖^{1/2}, (d = 2),</p><p>‖u‖_{L⁴} ≤ C ‖u‖^{1/4} ‖∇u‖^{3/4}, (d = 3).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Continuum nudging</head><p>To motivate the necessity of adaptivity, and for completeness, we review the (now standard since <ref type="bibr">[3,</ref><ref type="bibr">4,</ref><ref type="bibr">6]</ref>) proof that the error → 0 uniformly in t as χ → ∞ and uniformly in χ as t → ∞. The (direct) proof shows how the conditions H small enough and χ large enough emerge organically from the structure of the NSE, and it motivates one adaptive algorithm of Section 3. These conditions are interpreted under 2D and 3D turbulence phenomenology in Section 2.1 below, leading to (1.7). We assume here that I_H is an L² projection and satisfies</p><p>‖w − I_H(w)‖ ≤ c_0 H ‖∇w‖. (2.1)</p><p>For the 3D case, we assume¹ that sup_{t&gt;0} ‖∇u‖ &lt; ∞. This assumption is not necessary (since the work of Biswas and Price <ref type="bibr">[3]</ref>) but shortens the analysis.</p><p>Proposition 2.1. Let e = u − v. Suppose I_H is an L² projection with the approximation property (2.1) above. In 2D let the parameter conditions (1.4), (1.5) hold and in 3D (1.4), <ref type="bibr">(1.6)</ref>. Then, the error e → 0 uniformly in t as χ → ∞ and uniformly in χ as t → ∞.</p><p>Proof. The 2D case: By subtracting (1.3) from (1.1) and taking the inner product with e, there follows an energy inequality for ‖e‖². The nonlinear and nudging terms are bounded by the 2D Ladyzhenskaya and arithmetic-geometric mean inequalities. Provided the term in the first bracket is non-negative, which is the H-condition (1.4), and provided (1.5), an integrating factor argument gives exponential decay, and the error e → 0 uniformly in t as χ → ∞ and uniformly in χ as t → ∞.</p><p>¹ One can assume a stronger condition, such as ‖∇u‖ ∈ L^∞(0, ∞), and obtain an estimate that superficially appears better behaved concerning the Reynolds number, or a weaker condition, such as Prodi-Serrin, and obtain an estimate that has a worse explicit dependence. This is because some dependence is subsumed within the assumption.</p><p>The 3D case: Starting from the same error equation, by the 3D Ladyzhenskaya inequality and an arithmetic-geometric mean inequality (with exponents 4 and 4/3)², the same argument yields, under (1.4) and (1.6), exponential decay with rate α_0 &gt; 0. □</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1.">Interpreting the conditions</head><p>In the analysis, conditions on χ and H arise naturally for uniform-in-time convergence. The question we now investigate for higher Reynolds number, turbulent flows is: How severe are these conditions for practical problems? Let the large scale length and velocity be denoted L, U, so that L has units of length and U of length per time. In 2D, the conditions are ν − χ c_0² H² ≥ 0 and χ/2 − C ν^{-1} sup_{t&gt;0} ‖∇u‖² ≥ α_0 &gt; 0.</p><p>For fully developed turbulent flows in 2D with periodic boundary conditions, Alexakis and Doering <ref type="bibr">[23]</ref> have proven the following bound on the time-averaged energy dissipation rate, consistent with phenomenology,</p><p>⟨ε⟩ ≤ C (U³/L) Re^{-1/2},</p><p>where Re = UL/ν and the average energy-input wave number k_f ∼ 1/L. Since ‖∇u‖² ∼ ⟨ε⟩L²/ν, in 2D phenomenology interprets the χ-condition to mean</p><p>χ ≳ O(Re^{3/2}).</p><p>The H-condition, ν − χ c_0² H² ≥ 0, can be rearranged to read</p><p>(H/L)² ≤ ν / (χ c_0² L²).</p><p>Since χ ≳ O(Re^{3/2}), the H-condition becomes (H/L)² ≤ O(Re^{-5/2}). These conditions on χ and H are severe. In 3D, the scaling becomes worse. For fully developed turbulent flows in 3D with periodic boundary conditions, the following has been proved by Doering and Foias <ref type="bibr">[24]</ref>, again consistent with turbulent phenomenology in the large Re limit,</p><p>⟨ε⟩ ≤ C U³/L.</p><p>Under the (often called a spherical cow) assumption that the supremum in (1.6) scales like the corresponding time average, we then estimate ‖∇u‖² ∼ ⟨ε⟩L³/ν.</p><p>² The equality (u·∇u − v·∇v, e) = (e·∇u, e) requires skew symmetry of the trilinear form (w·∇y, z). After space discretization, an analogous equality holds with explicitly skew-symmetrized trilinear forms or when divergence-free elements are used.</p><p>The condition (1.6), χ big enough, has the interpretation</p><p>χ ≳ O(Re⁵).</p><p>The H small enough condition then becomes</p><p>H/L ≲ O(Re^{-3}).</p><p>The 3D conditions χ ≳ O(Re⁵) and H/L ≲ O(Re^{-3}) are severe restrictions. 
For problems with smooth solutions (e.g., test problems constructed by the method of manufactured solutions) the χ-condition can be altered as follows. For a vector function w(x) define the length-scale L(w) as</p><p>L(w) := ‖w‖ / ‖∇w‖.</p><p>Here L(w) represents an average length-scale of the solution, analogous to the Taylor micro-scale (which would also involve time averaging). Then ‖∇u‖ ≤ L(u)^{-1} ‖u‖. The term that gives rise to the χ-condition is now estimated by</p><p>It can now be subsumed into the χ-condition (in 3D here). This gives long time estimates when H is small with respect to L(u) (rather than Re) from the inequality.</p></div>
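The severity of (1.7)-(1.8) is easy to tabulate. A small sketch of our own (order-of-one constants dropped, χ measured in units of U/L):

```python
def apriori_bounds(Re, dim):
    """Phenomenological parameter restrictions of Section 2.1, constants dropped.
    Returns (lower bound on chi, upper bound on H/L)."""
    if dim == 2:
        # chi >~ Re^{3/2} and (H/L)^2 <~ Re^{-5/2}, i.e. H/L <~ Re^{-5/4}
        return Re ** 1.5, Re ** -1.25
    if dim == 3:
        # chi >~ Re^5 and H/L <~ Re^{-3}
        return Re ** 5, Re ** -3.0
    raise ValueError("dim must be 2 or 3")

# Even at a modest Re = 10^4, the 3D theory asks for chi ~ 10^20 and an
# observation spacing H/L ~ 10^-12: far beyond anything used in practice.
chi3d, h3d = apriori_bounds(1.0e4, 3)
```

This arithmetic is exactly why the a priori conditions, while sufficient in theory, are impractical in computations.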
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Self-adaptive parameter selection</head><p>The severe constraints, χ ≳ O(Re^{3/2}) in 2D and χ ≳ O(Re⁵) in 3D, can possibly be improved since the analysis gives a sufficient (but possibly not sharp), worst-case lower bound. It is therefore interesting to develop methods for self-adaptive parameter selection that respond to the local state of the flow. The condition on H is linked to the condition on χ; improving the latter improves the former. Further, changing H requires changing observation locations rather than just picking a new user-defined parameter. We thus focus on adapting χ. Reformulating nudging to improve the H-condition is an open problem.</p><p>The first algorithm is not grounded in the a priori analysis other than in the results that a decrease of ‖u − v‖ is possible and that ‖I_H(u − v)‖ ≤ ‖u − v‖. It uses the fact that ‖I_H(u − v)‖ is computable, in a reasonable but heuristic way, as follows.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Algorithm 1</head><p>Choose χ_0 &gt; 0. Set an upper safety factor, Factor ≥ 1, and a lower tolerance, 0 &lt; Tol &lt; 1. At each step, double χ when the observed error ‖I_H(u − v)‖ has grown by more than the factor Factor, and halve χ when it has fallen below Tol times its previous value.</p><p>The second algorithm is grounded in the a priori analysis that led to conditions (1.5), <ref type="bibr">(1.6)</ref>. We first develop it in 2D. Changing the parameter values at each time step means the nudging term is no longer autonomous. Thus we set χ = χ(t). Since we are not addressing the H-condition, the analysis is shortened by setting I_H = I. Alternatively, the analysis below could proceed (with more terms) with the H-condition present and assumed satisfied. For the tests in Section 4, I_H ≠ I. We begin with the error equation for e = u − v.</p><p>For the nonlinear term we rearrange and decompose based on (u·∇u − v·∇v, e) = (e·∇v, e) rather than (e·∇u, e), since v is computed. Following the subsequent steps (making this one change) we obtain a bound with ‖∇v‖ in place of ‖∇u‖.</p><p>Denote α(t) := 2χ(t) − ν^{-1}‖∇v‖² and 0 ≤ y(t) = ‖e‖²; then y′ + α(t) y ≤ 0. Thus, the error at time t satisfies ‖e(t)‖² ≤ ‖e(0)‖² exp(−∫₀ᵗ α(s) ds), and the contribution to the evolution of the error of one time step of size Δt is the factor exp(−∫_{t_n}^{t_n+Δt} α(s) ds).</p><p>This suggests the following algorithm with piecewise constant χ.</p></div>
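In scalar form, the decision step of Algorithm 1 can be sketched as below. This is our paraphrase: the doubling/halving responses, Factor, and Tol follow the text, while the cap and floor on χ are our own safeguards (the tests of Section 4 cap χ at 10⁶):

```python
def algorithm1_update(chi, est_new, est_old, factor=1.3, tol=0.2,
                      chi_min=1.0e-8, chi_max=1.0e6):
    """One decision step of an Algorithm 1 style adaptation. est_new, est_old
    are the observed errors ||I_H(u - v)|| at the current and previous steps."""
    if est_new > factor * est_old:
        return min(2.0 * chi, chi_max)  # error grew too much: strengthen nudging
    if est_new < tol * est_old:
        return max(0.5 * chi, chi_min)  # error collapsing: relax nudging
    return chi                          # otherwise keep the current chi
```

With Factor = 1.3 and Tol = 0.2 (the values used in Section 4.1), χ is left unchanged whenever the observed error changes only moderately between steps.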
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Algorithm 2</head><p>Choose χ_0 &gt; 0. Given χ_n such that</p><p>∫_{t_n}^{t_{n+1}} (2χ_n − ν^{-1}‖∇v(t)‖²) dt ≥ α_0 &gt; 0,</p><p>proceed to the next step after resetting χ_{n+1} = χ_n / 2.</p><p>[Too small value] If</p><p>∫_{t_n}^{t_{n+1}} (2χ_n − ν^{-1}‖∇v(t)‖²) dt &lt; α_0,</p><p>repeat the step after resetting χ_n ← 2χ_n.</p><p>In 3D the only change is to replace ν^{-1}‖∇v(t)‖² by ν^{-3}‖∇v(t)‖⁴.</p><p>In both cases, the integrals over t_n &lt; t &lt; t_{n+1} are evaluated by quadrature using a rule with accuracy compatible with the time discretization employed. In our tests the trapezoidal approximation is used with the approximate velocities at t_n, t_{n+1}. The 3D multiplier ν^{-3} reflects the greater complexity of 3D turbulence, Section 2.1. Users can define a maximum number of retries, denoted as M, for both algorithms. In our tests in Section 4, we set M = 5.</p></div>
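The [Too small value] branch of Algorithm 2 can be sketched as follows. This is a scalar paraphrase of our own: the step integral of 2χ − ν^{-1}‖∇v‖² is approximated by the trapezoidal rule with the velocities at t_n and t_{n+1}, and χ is doubled up to M retries, as in the text:

```python
def algorithm2_check(chi, grad_v_old, grad_v_new, dt, nu,
                     alpha0=0.1, max_retries=5, dim=2):
    """Double chi until the decrease criterion
        int_{t_n}^{t_{n+1}} (2*chi - nu^{-1}||grad v||^2) dt >= alpha0 * dt
    holds (in 3D, nu^{-1}||grad v||^2 becomes nu^{-3}||grad v||^4).
    Returns the accepted chi and whether the criterion was met within M retries."""
    def damping(g):
        return g ** 2 / nu if dim == 2 else g ** 4 / nu ** 3
    for _ in range(max_retries + 1):
        # trapezoidal quadrature over one step of size dt
        integral = 0.5 * dt * ((2.0 * chi - damping(grad_v_old)) +
                               (2.0 * chi - damping(grad_v_new)))
        if integral >= alpha0 * dt:
            return chi, True   # criterion holds: accept this chi
        chi *= 2.0             # [Too small value]: double chi and retry
    return chi, False          # give up after max_retries doublings
```

The retry cap mirrors M = 5 from the text; the value alpha0 = 0.1 is an illustrative placeholder, not a value from the paper.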
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Numerical tests</head><p>In this section, we conduct three tests to validate the adaptive algorithms. In the first test, we calculate the rate of convergence on a test problem with an exact solution. Then we test the adaptive algorithms on a complex flow between offset cylinders from <ref type="bibr">[25]</ref> with a higher Reynolds number. In the third test, we use the test case from <ref type="bibr">[26]</ref>, which involves flow past a flat plate with a Reynolds number of 50.</p><p>We note that the two adaptive strategies have similar but different aims. The first aims to enforce ‖I_H(e(t + Δt))‖ ≤ ‖I_H(e(t))‖. The second aims to enforce ‖e(t + Δt)‖ ≤ ‖e(t)‖. Since ‖I_H(e)‖ ≤ ‖e‖ we expect the first to yield smaller χ-values than the second.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1.">Test of accuracy</head><p>In this numerical example, we verify the theoretical temporal convergence results with the BDF2 (second-order backward differentiation formula) time discretization described and analyzed in Section 3 of Larios, Rebholz, and Zerfas <ref type="bibr">[6]</ref>. The following exact solution for the NSE is considered in the domain Ω = (0, 1) × (0, 1). The velocity and the pressure are as follows:</p><p>u(x, y, t) = (cos y, sin x), and p(x, y, t) = (x − y)(1 + t).</p><p>We insert these in the NSE to calculate the body force f(x, t). We choose ν = 1. We create a fine mesh resolution that consists of 55554 degrees of freedom (dof) to isolate the temporal convergence rates by making the spatial errors small. We run the code on the time interval [0, 2]. We use a Scott-Vogelius finite element pair with a barycenter refined mesh. We observed similar results with Taylor-Hood elements and a skew-symmetrized nonlinearity. The errors were measured in the L²(Ω) norm. In Algorithm 1, we set the safety factor and the lower tolerance as follows. The factor is user-defined to avoid χ growing too big when the projection error grows slightly:</p><p>double χ when ‖I_H(e(t + Δt))‖ &gt; Factor ‖I_H(e(t))‖.</p><p>We set Factor = 1.3. Similarly, χ is halved when ‖I_H(e(t + Δt))‖ &lt; Tol ‖I_H(e(t))‖, where Tol is also defined by the user; Tol = 0.2 in this test.</p><p>In Table <ref type="table">1</ref>, we observe second-order convergence in Δt for the velocity. This is an optimal rate and is evidence that the implementation is correct. We also present the maximum value of χ through adaptation to understand how large the generated χ values become. The small χ values seem to be in response to the smoothness of the solution. According to the results of this accuracy test, theoretical expectations are compatible with numerical results, except that, oddly, the χ-values of Algorithm 1 are larger than those of Algorithm 2. This effect reverses over a longer time. Due to this reversal of the expected χ values, we monitor the behavior of the relative errors and χ values for a longer time, final time T = 10, see Figs. <ref type="figure">1</ref> and <ref type="figure">2</ref>. The relative errors are O(10^{-6}) for a long time. Then they start growing, but the growth speed slows and the errors appear to saturate, similar to logistic growth. Logistic growth is a long-accepted model of error growth in fluid dynamics, e.g., Lorenz <ref type="bibr">[27]</ref>. For Algorithm 1, the adapted χ value stays below 50, while for Algorithm 2, χ grows until it reaches the maximum value 10⁶. That Algorithm 1 provides smaller χ-values is consistent with expectations, based on ‖I_H(e)‖ ≤ ‖e‖.</p></div>
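For reference, observed rates as in Table 1 are computed in the usual way from the errors at successive time step sizes; a minimal sketch (the error values below are synthetic second-order data, not the table's entries):

```python
import math

def observed_rate(err_coarse, err_fine, dt_coarse, dt_fine):
    """Observed temporal order of convergence between two runs:
    rate = log(e_coarse / e_fine) / log(dt_coarse / dt_fine)."""
    return math.log(err_coarse / err_fine) / math.log(dt_coarse / dt_fine)

# Errors behaving like C*dt^2 (as for BDF2) give an observed rate of 2.
rate = observed_rate(4.0e-4, 1.0e-4, 0.02, 0.01)
```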
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2.">Flow between offset cylinders</head><p>In this example, we test the adaptive nudging algorithms on a higher Reynolds number flow with complex features and without an exact solution. The domain is a disk with a smaller off-center obstacle inside. Let the outer circle have radius r_1 = 1 and the inner circle a smaller radius r_2. The computational domain with its triangulation is presented in Fig. <ref type="figure">3</ref>. A counterclockwise rotation drives the flow with the body force given below:</p><p>with no-slip boundary conditions. The outer circle remains stationary. The Delaunay algorithm generates the mesh with 75 mesh points on the outer circle and 60 mesh points on the inner circle. The final time is T = 10, the time step size Δt = 0.01, and ν = 10^{-3}. The initial condition is v(x, y, 0) = 0 and the Dirichlet boundary condition v = 0 on ∂Ω. The relative error is defined as ‖u − v‖ / ‖u‖ starting at t* = 1. In Fig. <ref type="figure">6</ref>, we see that the adaptive algorithms and the constant χ nudging effectively decrease the relative error from t = 1 to t = 2. After t = 2, the relative error grows fast until it reaches O(1) as in the previous test, again suggesting H-condition violation. In Fig. <ref type="figure">7</ref>, the χ values of Algorithms 1 and 2 grow with time until they reach the maximum value. We observe that the χ of Algorithm 1 grows fast at first, then its growth rate gets smaller, while for Algorithm 2 χ barely changes at the beginning and grows sharply after t = 2. These two tests suggest that both a χ-condition and an H-condition are essential for long-time convergence. This test also suggests that data from a turbulence model does not alleviate non-satisfaction of nudging's H-condition.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.3.">Flow over a flat obstacle</head><p>We adopt this test from <ref type="bibr">[26]</ref>. The computational domain is a [-7, 20] × [0, 20] rectangular channel with a 0.125 × 1 flat plate obstacle, so that |Ω| ≈ 540. The statistics are relative to solution size, which also makes them area independent. The inflow velocity is set with u = ⟨1, 0⟩ and no body force is applied. No-slip boundary conditions are applied on the walls and the plate, whereas a weak zero-traction boundary condition is enforced on the outflow boundary as in <ref type="bibr">[26]</ref>. We choose Re = 50, ν = 1/Re, Δt = 0.02 and the final time T = 81, three times the turnover time. The flow is at rest at t = 0. We approximate the exact velocity via a finer mesh, as seen in Fig. <ref type="figure">8</ref>, and solve the nudged equation on a coarser mesh. The total dof for the fine mesh is 27373 and for the coarse mesh is 15037. Since this test is a through-flow problem, it interrogates errors over one through-flow time, not over longer times. Still, it is a complex flow with many interesting features and an accepted test problem.</p><p>We compare Algorithms 1, 2 and the constant χ with different initial values χ_0 = 1, 10, 100, and 1000. We set Factor = 1 and Tol = 0.2, and the maximum χ = 10⁶. We define the relative error ‖u − v‖ / ‖u‖, and the relative projection error analogously with I_H(u − v) in place of u − v. In Figs. 10-16, both the relative error and the relative projection error decrease with larger χ values for Algorithm 1 and the constant χ. Algorithm 1 performs better when the initial χ value is small because it effectively adapts χ to reduce the projection error. On the other hand, Algorithm 2 is robust to initial χ values, adjusting χ to obtain the optimal value for smaller errors at each time step. We observe that the projection error is smaller than the error. 
Furthermore, the χ value for Algorithm 1 is smaller than that of Algorithm 2 in the first turnover time, and it is bigger than that of Algorithm 2 over long times (see Fig. <ref type="figure">9</ref>). We obtain smaller relative errors and projection errors for Algorithm 2 than for Algorithm 1 and the constant χ. We point out that if u and v are solved on the same mesh (as in, e.g., <ref type="bibr">[26]</ref>) instead of the fine mesh/coarse mesh convention used here, the error is negligible, O(machine epsilon).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Conclusion</head><p>Two adaptive strategies were derived and tested: enforcing a decrease in the observed error, ‖I_H(u − v)‖, and enforcing a computable sufficient condition for a decrease of the full error, ‖u − v‖. The second, based on a stronger condition, generally yielded larger χ values. The tests at high Re have the adaptive algorithms steadily increasing χ. This means the large-χ error analysis in Diegel, Li, and Rebholz <ref type="bibr">[32]</ref> is of special interest. In one test (see Fig. <ref type="figure">6</ref>) the second strategy responded more quickly to transient behavior and resulted in significantly smaller errors. Adaptive selection of the nudging parameter improves performance. It does not, however, eliminate the necessity of the H-condition related to the density of observations. This means revising the standard formulation of nudging to improve the H-condition is an important open problem. Other critical open problems include the effect of time delays in the nudging control term and the effectiveness of nudging for the correction of model errors. The precise realization of Algorithm 2 and its supporting discrete-time analysis is also an open problem for higher-order linear multi-step methods.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Declaration of competing interest</head><p>The authors declare the following financial interests/personal relationships which may be considered as potential competing interests: William Layton reports financial support was provided by National Science Foundation. Rui Fang reports financial support was provided by National Science Foundation. Farjana Siddiqua reports financial support was provided by National Science Foundation. Aytekin Cibik reports financial support was provided by TUBITAK. If there are other authors, they declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.</p></div>
		</body>
		</text>
</TEI>
