We present an adjoint-based optimization method to invert for stress and frictional parameters used in earthquake modeling. The forward problem is linear elastodynamics with nonlinear rate-and-state frictional faults. The misfit functional quantifies the difference between simulated and measured particle displacements or velocities at receiver locations. The misfit may include windowing or filtering operators. We derive the corresponding adjoint problem, which is linear elasticity with linearized rate-and-state friction and, for forward problems involving fault normal stress changes, nonzero fault opening, with time-dependent coefficients derived from the forward solution. The gradient of the misfit is efficiently computed by convolving forward and adjoint variables on the fault. The method thus extends the framework of full-waveform inversion to include frictional faults with rate-and-state friction. In addition, we present a space-time dual-consistent discretization of a dynamic rupture problem with a rough fault in antiplane shear, using high-order accurate summation-by-parts finite differences in combination with explicit Runge–Kutta time integration. The dual consistency of the discretization ensures that the discrete adjoint-based gradient is the exact gradient of the discrete misfit functional as well as a consistent approximation of the continuous gradient. Our theoretical results are corroborated by inversions with synthetic data. We anticipate that adjoint-based inversion of seismic and/or geodetic data will be a powerful tool for studying earthquake source processes; it can also be used to interpret laboratory friction experiments.
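The gradient-by-convolution pattern described above — a forward solve, a backward-in-time adjoint solve with coefficients from the forward solution, then a pairing of the two states — can be sketched on a toy scalar problem. The ODE du/dt = -m·u, the receiver misfit, and every name below are hypothetical stand-ins for the paper's elastodynamic fault model, not its actual equations:

```python
import numpy as np

def forward(m, u0, dt, N):
    # Forward Euler for du/dt = -m*u (a toy stand-in for elastodynamics).
    u = np.empty(N + 1)
    u[0] = u0
    for k in range(N):
        u[k + 1] = (1.0 - dt * m) * u[k]
    return u

def misfit_and_gradient(m, d, u0, dt):
    N = len(d) - 1
    u = forward(m, u0, dt, N)
    r = u - d                      # residual at the "receivers" (every step)
    J = 0.5 * np.sum(r ** 2)
    # Discrete adjoint: march backward with the transposed linearized operator.
    lam = np.empty(N + 1)
    lam[N] = r[N]
    for k in range(N - 1, -1, -1):
        lam[k] = (1.0 - dt * m) * lam[k + 1] + r[k]
    # Gradient = pairing ("convolution") of adjoint and forward states.
    grad = np.sum(lam[1:] * (-dt * u[:N]))
    return J, grad

# Synthetic data from a "true" parameter, then a finite-difference check.
dt, N, u0 = 0.01, 200, 1.0
d = forward(0.7, u0, dt, N)
m = 0.4
J, g = misfit_and_gradient(m, d, u0, dt)
eps = 1e-6
Jp, _ = misfit_and_gradient(m + eps, d, u0, dt)
Jm, _ = misfit_and_gradient(m - eps, d, u0, dt)
print(g, (Jp - Jm) / (2 * eps))   # adjoint gradient matches finite differences
```

Because the adjoint here is derived from the discretized forward recursion, the computed gradient is the exact gradient of the discrete misfit — the same exactness property that dual consistency provides in the paper's setting.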
This content will become publicly available on August 1, 2026
Geometry of Continuous Adjoint Newton's Method for Bivariate Quadratics
Newton's method is a classical iterative approach for computing solutions to nonlinear equations. To overcome some of its drawbacks, one often considers a continuous adjoint form of Newton's method. This paper investigates the geometric structure of the trajectories produced by the continuous adjoint Newton's method for bivariate quadratics, i.e. systems of two quadratic polynomials in two variables, via eigenanalysis at the equilibrium points of the flow. The main ideas are illustrated using plots generated by a Maple program.
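One common reading of the continuous adjoint Newton's method is the Jacobian-transpose flow x′ = −J(x)ᵀF(x), i.e. the gradient flow of ½‖F(x)‖²; whether this matches the paper's exact formulation is an assumption here. The sketch below applies it to an illustrative circle–hyperbola quadratic pair (our choice, not the paper's) and performs the eigenanalysis at the computed equilibrium:

```python
import numpy as np

# Illustrative bivariate quadratic system F(x, y) = 0:
#   F1 = x^2 + y^2 - 4  (a circle),  F2 = x*y - 1  (a hyperbola)
def F(p):
    x, y = p
    return np.array([x**2 + y**2 - 4.0, x * y - 1.0])

def J(p):
    x, y = p
    return np.array([[2 * x, 2 * y], [y, x]])

def adjoint_newton_flow(p0, h=1e-3, steps=20000):
    # Jacobian-transpose ("adjoint") Newton flow p' = -J(p)^T F(p),
    # i.e. the gradient flow of (1/2)||F(p)||^2, integrated with RK4.
    p = np.asarray(p0, dtype=float)
    f = lambda q: -J(q).T @ F(q)
    for _ in range(steps):
        k1 = f(p)
        k2 = f(p + 0.5 * h * k1)
        k3 = f(p + 0.5 * h * k2)
        k4 = f(p + h * k3)
        p = p + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return p

p_star = adjoint_newton_flow([1.8, 0.5])
print("equilibrium:", p_star, "residual:", F(p_star))

# Eigenanalysis at the equilibrium: at a root of F the linearization of the
# flow is -J^T J, a symmetric negative-semidefinite matrix, so its
# eigenvalues are real and non-positive (attracting when J is nonsingular).
print("eigenvalues:", np.linalg.eigvalsh(-J(p_star).T @ J(p_star)))
```

The eigenvalues of the linearized flow classify each equilibrium — strictly negative eigenvalues at a simple root mean nearby trajectories converge to it, which is the kind of geometric structure the paper studies.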
- PAR ID: 10652988
- Publisher / Repository: Maple Transactions
- Date Published:
- Journal Name: Maple Transactions
- Volume: 5
- Issue: 3
- ISSN: 2564-3029
- Page Range / eLocation ID: 22493
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
This paper considers a set of multiple independent control systems that are each connected over a nonstationary wireless channel. The goal is to maximize control performance over all the systems through the allocation of transmitting power within a fixed budget. This can be formulated as a constrained optimization problem examined using Lagrangian duality. By taking samples of the unknown wireless channel at each time instant, the resulting problem takes the form of empirical risk minimization, a well-studied problem in machine learning. Because wireless channels are nonstationary, optimal allocations must be continuously learned and updated as the channel evolves. The quadratic convergence property of Newton's method motivates its use in learning approximately optimal power allocation policies over the sampled dual function as the channel evolves over time. Conditions are established under which Newton's method learns approximate solutions with a single update, and the resulting suboptimality of the control problem is further characterized. Numerical simulations illustrate the near-optimal performance of the method and the resulting stability on a wireless control problem.
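As a concrete, hypothetical stand-in for this setting (classic water-filling for a sum-rate power budget, rather than the paper's control-aware formulation), the sketch below applies a single Newton update per time step to the sampled scalar dual of the allocation problem as the channel gains drift:

```python
import numpy as np

rng = np.random.default_rng(0)

def waterfill_residual(lam, h, P):
    # phi(lam) = total power used at water level 1/lam, minus the budget P.
    p = np.maximum(0.0, 1.0 / lam - 1.0 / h)
    return p.sum() - P, p > 0

def newton_update(lam, h, P):
    # One Newton step on the scalar dual equation phi(lam) = 0.
    phi, active = waterfill_residual(lam, h, P)
    n_act = active.sum()
    if n_act == 0:                         # water level below every channel
        return 0.5 * lam
    dphi = -n_act / lam**2                 # derivative of sum(1/lam - 1/h_i)
    return lam - phi / dphi

# Slowly drifting channel gains: one Newton update per time step tracks the
# optimal dual variable (and hence the power allocation) closely.
h = np.abs(rng.normal(1.0, 0.3, size=8)) + 0.1
P = 4.0
lam = 0.5
for t in range(200):
    h *= np.exp(rng.normal(0.0, 0.01, size=8))   # nonstationary drift
    lam = newton_update(lam, h, P)
    lam = max(lam, 1e-6)                         # keep the dual variable positive
phi, _ = waterfill_residual(lam, h, P)
print("budget violation after single-update tracking:", abs(phi))
```

Because the dual residual is smooth in 1/λ on a fixed active set, one Newton step per sample keeps the iterate within a second-order-small distance of the drifting optimum — the mechanism behind the single-update result described in the abstract.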
SUMMARY This paper revisits and extends the adjoint theory for glacial isostatic adjustment (GIA) of Crawford et al. (2018). Rotational feedbacks are now incorporated, and the application of the second-order adjoint method is described for the first time. The first-order adjoint method provides an efficient means of computing sensitivity kernels for a chosen objective functional, while the second-order adjoint method provides second-derivative information in the form of Hessian kernels. These latter kernels are required by efficient Newton-type optimization schemes and by methods for quantifying uncertainty in non-linear inverse problems. Most importantly, the entire theory has been reformulated to simplify its implementation by others within the GIA community. In particular, the rate formulation for the GIA forward problem introduced by Crawford et al. (2018) has been replaced with the conventional equations for modelling GIA in laterally heterogeneous earth models. Implementing the first- and second-order adjoint problems should be relatively easy within both existing and new GIA codes, with only the inclusion of more general force terms required.
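The division of labor between the two adjoint orders — gradients (sensitivity kernels) from the first-order method, Hessian-vector products (Hessian kernels) from the second — can be illustrated on a toy least-squares functional. The quadratic map F below is a made-up stand-in, not a GIA model:

```python
import numpy as np

# Toy objective J(m) = 0.5 * ||F(m)||^2 with an illustrative quadratic F.
def F(m):
    return np.array([m[0]**2 + m[1] - 3.0, m[0] * m[1] - 1.0])

def JacF(m):
    return np.array([[2 * m[0], 1.0], [m[1], m[0]]])

HESS_F1 = np.array([[2.0, 0.0], [0.0, 0.0]])   # constant Hessian of F_1
HESS_F2 = np.array([[0.0, 1.0], [1.0, 0.0]])   # constant Hessian of F_2

def gradient(m):
    # First-order adjoint: the sensitivity kernel g = JacF(m)^T F(m).
    return JacF(m).T @ F(m)

def hessian_vector(m, v):
    # Second-order adjoint: the Hessian kernel applied to a direction v,
    #   H v = JacF^T JacF v + sum_i F_i(m) * (Hess F_i) v.
    Jm, f = JacF(m), F(m)
    return Jm.T @ (Jm @ v) + f[0] * (HESS_F1 @ v) + f[1] * (HESS_F2 @ v)

m = np.array([1.2, 0.4])
v = np.array([0.3, -0.7])
hv = hessian_vector(m, v)
eps = 1e-6
fd = (gradient(m + eps * v) - gradient(m - eps * v)) / (2 * eps)
print(hv, fd)   # the Hessian-vector product matches a finite-difference check
```

Newton-type schemes need exactly this Hessian-vector action (never the full Hessian), which is why the second-order adjoint method pairs naturally with them and with Hessian-based uncertainty quantification.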
In parallel simulation, convergence and parallelism are often seen as inherently conflicting objectives. Improved parallelism typically entails lighter local computation and weaker coupling, which unavoidably slow global convergence. This paper presents a novel GPU algorithm that achieves convergence rates comparable to full-space Newton's method while maintaining good parallelizability, just like the Jacobi method. Our approach is built on a key insight into the phenomenon of overshoot. Overshoot occurs when a local solver aggressively minimizes its local energy without accounting for the global context, resulting in a local update that undermines global convergence. To address this, we derive a theoretically second-order optimal solution to mitigate overshoot. Furthermore, we adapt this solution into a pre-computable form. Leveraging Cubature sampling, our runtime cost is only marginally higher than that of the Jacobi method, yet our algorithm converges nearly quadratically, like Newton's method. We also introduce a novel full-coordinate formulation for more efficient pre-computation. Our method integrates seamlessly with the incremental potential contact method and achieves second-order convergence for both stiff and soft materials. Experimental results demonstrate that our approach delivers high-quality simulations and outperforms state-of-the-art GPU methods with 50× to 100× better convergence.
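A minimal, hypothetical illustration of overshoot (not the paper's GPU algorithm): on a strongly coupled quadratic energy, letting each coordinate solver exactly minimize its own local energy — the plain Jacobi iteration — diverges, while damping the local update restores convergence:

```python
import numpy as np

# Quadratic energy E(x) = 0.5 x^T A x - b^T x with strong off-diagonal
# coupling: A is SPD, but the Jacobi iteration matrix has spectral radius 1.4.
A = np.array([[1.0, 0.7, 0.7],
              [0.7, 1.0, 0.7],
              [0.7, 0.7, 1.0]])
b = np.array([1.0, -1.0, 0.5])
D = np.diag(np.diag(A))
d = np.diag(A)

# Each local solver exactly minimizes its own coordinate energy, ignoring
# the global context: x_i <- (b_i - sum_{j != i} A_ij x_j) / A_ii.
x_jac = np.zeros(3)
for _ in range(50):
    x_jac = (b - (A - D) @ x_jac) / d
print("undamped Jacobi has overshot and diverged, |x| =", np.linalg.norm(x_jac))

# Blending the aggressive local update with the current state mitigates
# overshoot (a crude stand-in for the paper's second-order optimal correction).
x_dj = np.zeros(3)
for _ in range(200):
    local = (b - (A - D) @ x_dj) / d
    x_dj = 0.5 * x_dj + 0.5 * local
print("damped Jacobi error:", np.linalg.norm(x_dj - np.linalg.solve(A, b)))
```

The fully parallel local solves are exactly what makes the undamped iteration cheap, and exactly what makes it overshoot; tempering the local update trades a little per-step progress for global convergence.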
Newton's method is usually preferred when solving optimization problems due to its superior convergence properties compared to gradient-based or derivative-free optimization algorithms. However, deriving and computing the second-order derivatives needed by Newton's method is often not trivial and, in some cases, not possible. In such cases quasi-Newton algorithms are a great alternative. In this paper, we provide a new derivation of well-known quasi-Newton formulas in an infinite-dimensional Hilbert space setting. It is known that quasi-Newton update formulas are solutions to certain variational problems over the space of symmetric matrices. In this paper, we formulate similar variational problems over the space of bounded symmetric operators in Hilbert spaces. By changing the constraints of the variational problem we obtain updates (for the Hessian and Hessian inverse) not only for the Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton method but also for Davidon-Fletcher-Powell (DFP), Symmetric Rank One (SR1), and Powell-Symmetric-Broyden (PSB). In addition, for an inverse problem governed by a partial differential equation (PDE), we derive DFP and BFGS "structured" secant formulas that explicitly use the derivative of the regularization and only approximate the second derivative of the misfit term. We show numerical results that demonstrate the desired mesh-independence property and the superior performance of the resulting quasi-Newton methods.
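In the finite-dimensional case, the BFGS inverse-Hessian update that solves the variational problem described above has a familiar closed form, sketched here on a model quadratic (the quadratic and its parameters are illustrative choices, not from the paper):

```python
import numpy as np

def bfgs_inverse_update(H, s, y):
    # BFGS update of the inverse-Hessian approximation: among symmetric
    # matrices satisfying the secant equation H_new @ y = s, it is the
    # closest to H in a weighted Frobenius norm (the variational view).
    rho = 1.0 / (y @ s)
    I = np.eye(len(s))
    V = I - rho * np.outer(s, y)
    return V @ H @ V.T + rho * np.outer(s, s)

rng = np.random.default_rng(1)
n = 5
M = rng.normal(size=(n, n))
A = 0.05 * (M @ M.T) + 0.5 * np.eye(n)   # SPD Hessian of a model quadratic
b = rng.normal(size=n)

# Quasi-Newton iteration with unit steps on f(x) = 0.5 x^T A x - b^T x.
x, H = np.zeros(n), np.eye(n)
for _ in range(15):
    g = A @ x - b                        # gradient at x
    s = -H @ g                           # quasi-Newton step
    y = A @ (x + s) - b - g              # change in gradient: y = A @ s
    if y @ s > 1e-12:                    # curvature guard near convergence
        H = bfgs_inverse_update(H, s, y)
    x = x + s

print("error vs direct solve:", np.linalg.norm(x - np.linalg.solve(A, b)))
print("secant residual |H y - s|:", np.linalg.norm(H @ y - s))
```

The secant equation holds exactly after each update by construction; changing which norm or constraint the variational problem uses yields the DFP, SR1, and PSB formulas instead, which is the unifying viewpoint the paper carries over to Hilbert spaces.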
