In this expository paper, we survey recent developments in the analysis of multidimensional transonic shock waves and the corresponding free boundary problems for the compressible Euler equations and related nonlinear partial differential equations (PDEs) of mixed type. The nonlinear PDEs under our analysis include the steady Euler equations for potential flow, the steady full Euler equations, the unsteady Euler equations for potential flow, and related nonlinear PDEs of mixed elliptic–hyperbolic type. The transonic shock problems include the problem of steady transonic flow past solid wedges, the von Neumann problem for shock reflection–diffraction, and the Prandtl–Meyer problem for unsteady supersonic flow onto solid wedges. We first show how these longstanding multidimensional transonic shock problems can be formulated as free boundary problems for the compressible Euler equations and related nonlinear PDEs of mixed type, and then present an effective nonlinear method, together with related ideas and techniques, to solve them. The method, ideas, and techniques should also be useful for analyzing other longstanding and newly emerging free boundary problems for nonlinear PDEs.
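For orientation, the steady potential-flow model mentioned in the abstract is, in its standard textbook form (not quoted from the surveyed paper), a second-order quasilinear equation of mixed type for the velocity potential:

```latex
% Steady potential flow: velocity potential \varphi, adiabatic exponent \gamma > 1,
% density determined from Bernoulli's law (standard normalization).
\operatorname{div}\bigl(\rho(|\nabla\varphi|^{2})\,\nabla\varphi\bigr)=0,
\qquad
\rho(q^{2})=\Bigl(1-\tfrac{\gamma-1}{2}\,q^{2}\Bigr)^{\frac{1}{\gamma-1}}.
```

The equation is elliptic where the flow is subsonic (|∇φ| below the sonic speed) and hyperbolic where it is supersonic; a transonic shock is precisely a free boundary across which the type changes, which is why these problems are naturally cast as free boundary problems.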
                    
                            
PARCv2: Physics-aware Recurrent Convolutional Neural Networks for Spatiotemporal Dynamics Modeling
Modeling unsteady, fast-transient, and advection-dominated physics problems is a pressing challenge for physics-aware deep learning (PADL). The physics of complex systems is governed by large systems of partial differential equations (PDEs) and ancillary constitutive models with nonlinear structures, as well as evolving state fields exhibiting sharp gradients and rapidly deforming material interfaces. Here, we investigate an inductive bias approach that is versatile and generalizable for modeling generic nonlinear field evolution problems. Our study focuses on the recent physics-aware recurrent convolutions (PARC), which incorporate a differentiator-integrator architecture that inductively models the spatiotemporal dynamics of generic physical systems. We extend the capabilities of PARC to simulate unsteady, transient, and advection-dominated systems. The extended model, referred to as PARCv2, is equipped with differential operators to model advection-reaction-diffusion equations, as well as a hybrid integral solver for stable, long-time predictions. PARCv2 is tested on standard benchmark problems in fluid dynamics, namely the Burgers and Navier-Stokes equations, and then applied to more complex shock-induced reaction problems in energetic materials. We evaluate the behavior of PARCv2 in comparison to other physics-informed and learning-bias models and demonstrate its potential for modeling unsteady and advection-dominated dynamics regimes.
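The differentiator-integrator split that PARC and PARCv2 build on can be illustrated with a plain numerical analogue. This is a minimal sketch assuming a periodic 1-D Burgers-type field; in the actual architecture, learned convolutional operators replace the hand-coded finite differences in `diff_op`/`rhs`, and a hybrid scheme replaces the plain Euler integrator.

```python
# Sketch of a differentiator-integrator time step (illustrative only).

def diff_op(u, dx):
    """Differentiator: central finite-difference du/dx on a periodic 1-D grid."""
    n = len(u)
    return [(u[(i + 1) % n] - u[(i - 1) % n]) / (2 * dx) for i in range(n)]

def rhs(u, dx, nu=0.05):
    """Advection-diffusion right-hand side: -u*u_x + nu*u_xx (Burgers-like)."""
    n = len(u)
    ux = diff_op(u, dx)
    uxx = [(u[(i + 1) % n] - 2 * u[i] + u[(i - 1) % n]) / dx**2
           for i in range(n)]
    return [-u[i] * ux[i] + nu * uxx[i] for i in range(n)]

def step(u, dx, dt):
    """Integrator: explicit Euler update u_{t+1} = u_t + dt * rhs(u_t)."""
    f = rhs(u, dx)
    return [u[i] + dt * f[i] for i in range(len(u))]
```

Recurrent application of `step` gives the spatiotemporal rollout; the learning problem is then to fit the differentiator and integrator from data rather than prescribe them.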
        
    
- Award ID(s): 2203580
- PAR ID: 10545319
- Publisher / Repository: Proceedings of Machine Learning Research
- Date Published:
- Format(s): Medium: X
- Location: Vienna, Austria
- Sponsoring Org: National Science Foundation
More Like this
- Abstract: The modeling of nonlinear dynamical systems subject to strong and evolving nonsmooth nonlinearities is typically approached via integer-order differential equations. In this study, we present a possible application of variable-order (VO) fractional operators to a class of nonlinear lumped-parameter models of great practical relevance in mechanics and dynamics. Fractional operators are intrinsically multiscale operators that can act on both space- and time-dependent variables. Unlike their integer-order counterparts, fractional operators can have either fixed or variable order; in the latter case, the order can be a function of either independent or state variables. We show that when VO equations are used to describe the response of dynamical systems, the order can evolve as a function of the response itself, allowing a natural and seamless transition between widely dissimilar dynamics. This intriguing characteristic allows defining governing equations for dynamical systems that are evolutionary in nature. Within this context, we present a physics-driven strategy to define VO operators capable of capturing complex and evolutionary phenomena. Specific examples include hysteresis in discrete oscillators and contact problems. Despite using simplified models to illustrate the applications of VO operators, we show numerical evidence of their unique modeling capabilities as well as their connection to more complex dynamical systems.
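The variable-order idea can be made concrete with a Grünwald-Letnikov discretization, in which the order passed to the operator may itself depend on the current state. This is an illustrative sketch; `vo_derivative` and its sampling convention are assumptions, not the authors' implementation.

```python
# Variable-order fractional derivative via Grünwald-Letnikov weights.

def gl_weights(alpha, n):
    """Weights w_k = (-1)^k * binom(alpha, k), via the recurrence
    w_0 = 1, w_k = w_{k-1} * (k - 1 - alpha) / k."""
    w = [1.0]
    for k in range(1, n + 1):
        w.append(w[-1] * (k - 1 - alpha) / k)
    return w

def vo_derivative(f_hist, alpha, h):
    """Fractional derivative of order alpha at the latest sample of
    f_hist (oldest first, uniform step h). In a variable-order model,
    alpha is re-evaluated each step, e.g. as a function of the state."""
    n = len(f_hist) - 1
    w = gl_weights(alpha, n)
    return sum(w[k] * f_hist[-1 - k] for k in range(n + 1)) / h**alpha
```

For alpha = 1 the weights reduce to (1, -1, 0, ...), recovering the backward difference, and for alpha = 0 the operator is the identity, so sweeping alpha between 0 and 1 interpolates continuously between dissimilar dynamics, which is the transition mechanism the abstract describes.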
- We develop data-driven methods incorporating geometric and topological information to learn parsimonious representations of nonlinear dynamics from observations. The approaches learn nonlinear state-space models of the dynamics for general manifold latent spaces using training strategies related to Variational Autoencoders (VAEs). Our methods are referred to as Geometric Dynamic (GD) Variational Autoencoders (GD-VAEs). We learn encoders and decoders for the system states and evolution based on deep neural network architectures that include general Multilayer Perceptrons (MLPs), Convolutional Neural Networks (CNNs), and other architectures. Motivated by problems arising in parameterized PDEs and physics, we investigate the performance of our methods on tasks for learning reduced-dimensional representations of the nonlinear Burgers equations, constrained mechanical systems, and spatial fields of reaction-diffusion systems. GD-VAEs provide methods that can be used to obtain representations in manifold latent spaces for diverse learning tasks involving dynamics.
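A toy illustration of a manifold latent space of the kind GD-VAEs use, here the unit circle S^1 as the simplest non-Euclidean example. All names and weights are hypothetical; a trained encoder network would replace the linear map.

```python
import math
import random

def encode(x, W):
    """Toy linear 'encoder' to a 2-D pre-latent (W stands in for a
    trained MLP/CNN encoder)."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def project_to_circle(z):
    """Constrain the latent to the manifold S^1 by radial projection."""
    r = math.hypot(z[0], z[1])
    return [z[0] / r, z[1] / r]

def sample_latent(x, W, sigma, rng):
    """VAE-style reparameterized sample: perturb the pre-latent with
    Gaussian noise, then project back onto the manifold."""
    z = encode(x, W)
    z_noisy = [zi + sigma * rng.gauss(0.0, 1.0) for zi in z]
    return project_to_circle(z_noisy)
```

The point of the projection step is that latent codes always satisfy the manifold constraint, so a decoder trained on S^1-valued codes never sees off-manifold inputs, which is useful for systems with periodic or otherwise topologically nontrivial state spaces.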
- (Matni, Nikolai; Morari, Manfred; Pappas, George J., Eds.) Many dynamical systems—from robots interacting with their surroundings to large-scale multi-physics systems—involve a number of interacting subsystems. Toward the objective of learning composite models of such systems from data, we present i) a framework for compositional neural networks, ii) algorithms to train these models, iii) a method to compose the learned models, iv) theoretical results that bound the error of the resulting composite models, and v) a method to learn the composition itself, when it is not known a priori. The end result is a modular approach to learning: neural network submodels are trained on trajectory data generated by relatively simple subsystems, and the dynamics of more complex composite systems are then predicted without requiring additional data generated by the composite systems themselves. We achieve this compositionality by representing the system of interest, as well as each of its subsystems, as a port-Hamiltonian neural network (PHNN)—a class of neural ordinary differential equations that uses the port-Hamiltonian systems formulation as inductive bias. We compose collections of PHNNs by using the system's physics-informed interconnection structure, which may be known a priori, or may itself be learned from data. We demonstrate the novel capabilities of the proposed framework through numerical examples involving interacting spring-mass-damper systems. Models of these systems, which include nonlinear energy dissipation and control inputs, are learned independently. Accurate compositions are learned using an amount of training data that is negligible in comparison with that required to train a new model from scratch. Finally, we observe that the composite PHNNs enjoy properties of port-Hamiltonian systems, such as cyclo-passivity, a property that is useful for control purposes.
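The port-Hamiltonian inductive bias can be sketched directly from its defining equation dx/dt = (J - R) grad H(x) + G u. In a PHNN the Hamiltonian gradient comes from a learned network; here it is analytic for a spring-mass-damper, which is an assumed example rather than the paper's code.

```python
def phs_rhs(x, grad_H, J, R, G, u):
    """Port-Hamiltonian dynamics: dx/dt = (J - R) grad_H(x) + G * u,
    with J skew-symmetric (energy routing) and R symmetric positive
    semidefinite (dissipation)."""
    g = grad_H(x)
    n = len(x)
    return [sum((J[i][j] - R[i][j]) * g[j] for j in range(n)) + G[i] * u
            for i in range(n)]

# Spring-mass-damper: state x = (q, p), H = p^2/(2m) + k*q^2/2.
m, k, c = 1.0, 1.0, 0.1
grad_H = lambda x: [k * x[0], x[1] / m]
J = [[0.0, 1.0], [-1.0, 0.0]]
R = [[0.0, 0.0], [0.0, c]]
G = [0.0, 1.0]
```

Because J is skew-symmetric and R is positive semidefinite, the stored energy H cannot increase faster than the power supplied through u, which is the passivity-type structure the abstract says the composite PHNNs inherit.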
- The inception of physics-constrained or physics-informed machine learning represents a paradigm shift, addressing the challenges associated with data scarcity and enhancing model interpretability. This innovative approach incorporates the fundamental laws of physics as constraints, guiding the training process of machine learning models. In this work, the physics-constrained convolutional recurrent neural network is further extended for solving spatial-temporal partial differential equations with arbitrary boundary conditions. Two notable advancements are introduced: the implementation of boundary conditions as soft constraints through finite difference-based differentiation, and the establishment of an adaptive weighting mechanism for the optimal allocation of weights to various losses. These enhancements significantly augment the network's ability to manage intricate boundary conditions and expedite the training process. The efficacy of the proposed model is validated through its application to two-dimensional phase transition, fluid dynamics, and reaction-diffusion problems, which are pivotal in materials modeling. Compared to traditional physics-constrained neural networks, the physics-constrained convolutional recurrent neural network demonstrates a tenfold increase in prediction accuracy within a similar computational budget. Moreover, the model's exceptional performance in extrapolating solutions for the Burgers' equation underscores its utility. Therefore, this research establishes the physics-constrained recurrent neural network as a viable surrogate model for sophisticated spatial-temporal PDE systems, particularly beneficial in scenarios plagued by sparse and noisy datasets.
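One way a soft boundary penalty and an adaptive loss weighting can look in practice. This is a simplified sketch: the inverse-magnitude weighting rule and the Dirichlet-edge penalty are assumptions for illustration, not necessarily the paper's exact scheme.

```python
def bc_residual(u, target):
    """Soft Dirichlet boundary penalty: squared mismatch of the field's
    edge values against the prescribed boundary value (1-D slice)."""
    return (u[0] - target) ** 2 + (u[-1] - target) ** 2

def adaptive_weights(losses, eps=1e-8):
    """Adaptive weighting: scale each loss by the inverse of its current
    magnitude so all terms contribute comparably, then normalize."""
    inv = [1.0 / (l + eps) for l in losses]
    s = sum(inv)
    return [w / s for w in inv]

def total_loss(losses):
    """Weighted sum of the individual loss terms (PDE residual,
    boundary penalty, data misfit, ...)."""
    w = adaptive_weights(losses)
    return sum(wi * li for wi, li in zip(w, losses))
```

Treating boundary conditions as a penalty term, rather than hard-coding them into the architecture, is what allows arbitrary boundary conditions; the adaptive weights then keep that penalty from being drowned out (or from dominating) as the individual losses change scale during training.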
 An official website of the United States government