- Award ID(s):
- 1720297
- NSF-PAR ID:
- 10157969
- Date Published:
- Journal Name:
- Computational Methods in Applied Mathematics
- Volume:
- 19
- Issue:
- 3
- ISSN:
- 1609-4840
- Page Range / eLocation ID:
- 431 to 464
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
The phase field method is becoming the de facto choice for the numerical analysis of complex problems that involve multiple initiating, propagating, interacting, branching and merging fractures. However, within the context of finite element modelling, the method requires a fine mesh in regions where fractures will propagate, in order to capture sharp variations in the phase field representing the fractured/damaged regions. This means that the method can become computationally expensive when the fracture propagation paths are not known a priori. This paper presents a 2D hp-adaptive discontinuous Galerkin finite element method for phase field fracture that includes a posteriori error estimators for both the elasticity and phase field equations, which drive mesh adaptivity for static and propagating fractures. This combination makes it possible to reliably and efficiently solve phase field fracture problems with arbitrary initial meshes, irrespective of the initial geometry or loading conditions. This ability is demonstrated on several example problems, which are solved using a light-BFGS (Broyden–Fletcher–Goldfarb–Shanno) quasi-Newton algorithm. The examples highlight the importance of driving mesh adaptivity using both the elasticity and phase field errors for physically meaningful, yet computationally tractable, results. They also reveal the importance of including p-refinement, which is typically not included in existing phase field literature. The above features provide a powerful and general tool for modelling fracture propagation with controlled errors and degree-of-freedom optimised meshes.
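As a rough illustration of how combined error indicators can drive mesh adaptivity of the kind described above, the sketch below marks elements for refinement using a Doerfler (bulk) criterion applied to the sum of per-element elasticity and phase field indicators. The function name, the additive combination and the marking rule are illustrative assumptions for a generic adaptive loop, not the paper's exact estimator.

```python
def mark_elements(elasticity_err, phase_err, theta=0.5):
    """Return indices of elements to refine.

    elasticity_err, phase_err: per-element squared error indicators
    from the two equations (hypothetical inputs for this sketch).
    theta: Doerfler bulk parameter in (0, 1].
    """
    # Combine the two indicator fields additively (an assumption).
    combined = [eu + ed for eu, ed in zip(elasticity_err, phase_err)]
    total = sum(combined)
    # Sort elements by decreasing combined indicator and take the
    # smallest set accounting for at least theta * total error.
    order = sorted(range(len(combined)), key=lambda i: -combined[i])
    marked, acc = [], 0.0
    for i in order:
        marked.append(i)
        acc += combined[i]
        if acc >= theta * total:
            break
    return marked
```

In an hp-adaptive setting, each marked element would then be assigned either h-refinement (splitting) or p-refinement (raising the polynomial degree) by a further smoothness indicator, which this sketch omits.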
-
We are concerned with free boundary problems arising from the analysis of multidimensional transonic shock waves for the Euler equations in compressible fluid dynamics. In this expository paper, we survey some recent developments in the analysis of multidimensional transonic shock waves and corresponding free boundary problems for the compressible Euler equations and related nonlinear partial differential equations (PDEs) of mixed type. The nonlinear PDEs under our analysis include the steady Euler equations for potential flow, the steady full Euler equations, the unsteady Euler equations for potential flow, and related nonlinear PDEs of mixed elliptic–hyperbolic type. The transonic shock problems include the problem of steady transonic flow past solid wedges, the von Neumann problem for shock reflection–diffraction, and the Prandtl–Meyer problem for unsteady supersonic flow onto solid wedges. We first show how these longstanding multidimensional transonic shock problems can be formulated as free boundary problems for the compressible Euler equations and related nonlinear PDEs of mixed type. Then we present an effective nonlinear method and related ideas and techniques to solve these free boundary problems. The method, ideas, and techniques should be useful to analyze other longstanding and newly emerging free boundary problems for nonlinear PDEs.
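As one concrete instance of the mixed-type equations surveyed here, the steady Euler equations for potential flow (a standard formulation in this literature, stated only for orientation) read

```latex
\nabla \cdot \bigl( \rho(|\nabla \varphi|^{2})\, \nabla \varphi \bigr) = 0,
\qquad
\rho(q^{2}) = \Bigl( 1 - \frac{\gamma - 1}{2}\, q^{2} \Bigr)^{\frac{1}{\gamma - 1}},
```

with velocity potential $\varphi$ and adiabatic exponent $\gamma > 1$. The equation is elliptic where the flow is subsonic and hyperbolic where it is supersonic, and a transonic shock appears as a free boundary separating the two regimes, which is what makes the free boundary formulation natural.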
-
Abstract Based on an a posteriori error estimator with hierarchical bases, an adaptive weak Galerkin finite element method (WGFEM) is proposed for the elliptic problem with mixed boundary conditions. For the a posteriori error estimator, we only need to solve a linear algebraic system whose diagonal entries correspond to the degrees of freedom, which significantly reduces the computational cost. The upper and lower bounds of the error estimator are shown to establish the reliability and efficiency of the adaptive approach. Numerical simulations are provided to demonstrate the effectiveness and robustness of the proposed method.
-
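The computational appeal of a diagonal estimator system, as in the WGFEM abstract above, is that every unknown decouples into an independent scalar solve. A minimal sketch, assuming hypothetical residuals and diagonal entries and a generic energy-norm estimator (not the paper's exact formulation):

```python
import math

def hierarchical_estimator(residuals, diag):
    """Solve the diagonal system D e = r entry by entry and return
    the error coefficients together with a global estimator value.

    residuals: hierarchical-basis residuals r_i (assumed given).
    diag: positive diagonal entries d_i of the estimator system.
    """
    # Each unknown decouples: e_i = r_i / d_i, so the cost is linear
    # in the number of degrees of freedom.
    e = [r / d for r, d in zip(residuals, diag)]
    # One common global estimator is the energy norm sqrt(sum d_i e_i^2);
    # this choice is an illustrative assumption.
    eta = math.sqrt(sum(d * ei * ei for d, ei in zip(diag, e)))
    return e, eta
```

A full system with off-diagonal coupling would need a linear solver here; the diagonal structure is exactly what removes that cost.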
Abstract Much attention in constructionism has focused on designing tools and activities that support learners in designing fully finished and functional applications and artefacts to be shared with others. But helping students learn to debug their applications often takes on a surprisingly more instructionist stance: giving them checklists, teaching them strategies or providing them with test programmes. The idea of designing bugs for learning, or debugging by design, makes learners agents of their own learning and, more importantly, of making and solving mistakes. In this paper, we report on our implementation of ‘Debugging by Design’ activities in a high school classroom over a period of 8 hours as part of an electronic textiles unit. Students were tasked with crafting electronic textile artefacts containing problems or bugs for their peers to solve. Drawing on observations and interviews, we answer the following research questions: (1) How did students participate in making bugs for others? (2) What did students gain from designing and solving bugs for others? In the discussion, we address the opportunities and challenges that designing personally and socially meaningful failure artefacts provides for becoming objects‐to‐think‐with and objects‐to‐share‐with in student learning and promoting new directions in constructionism.
Practitioner notes
What is already known about this topic
There is substantial evidence for the benefits of learning programming and debugging in the context of constructing personally relevant and complex artefacts, including electronic textiles.
Related, work on productive failure has demonstrated that providing learners with strategically difficult problems (in which they ‘fail’) equips them to better handle subsequent challenges.
What this paper adds
In this paper, we argue that designing bugs or ‘failure artefacts’ is as much a constructionist approach to learning as is designing fully functional artefacts.
We consider how ‘failure artefacts’ can be both objects‐to‐learn‐with and objects‐to‐share‐with.
We introduce the concept of ‘Debugging by Design’ (DbD) as a means to expand the application of constructionism to the context of developing ‘failure artefacts’.
Implications for practice and/or policy
We conceptualise a new way to enable and empower students in debugging—by designing creative, multimodal buggy projects for others to solve.
The DbD approach may support students in near‐transfer of debugging and the beginning of a more systematic approach to debugging in later projects and should be explored in other domains beyond e‐textiles.
New studies should explore learning, design and teaching that empower students to design bugs in projects in mischievous and creative ways.
-
Locally Decodable Codes (LDCs) are error-correcting codes for which individual message symbols can be quickly recovered despite errors in the codeword. LDCs for Hamming errors have been studied extensively in the past few decades, where a major goal is to understand the amount of redundancy that is necessary and sufficient to decode from large amounts of error, with small query complexity. Despite exciting progress, we still don't have satisfactory answers in several important parameter regimes. For example, in the case of 3-query LDCs, the gap between existing constructions and lower bounds is superpolynomial in the message length. In this work we study LDCs for insertion and deletion errors, called Insdel LDCs. Their study was initiated by Ostrovsky and Paskin-Cherniavsky (Information Theoretic Security, 2015), who gave a reduction from Hamming LDCs to Insdel LDCs with a small blowup in the code parameters. On the other hand, the only known lower bounds for Insdel LDCs come from those for Hamming LDCs, so there is no known separation between them. Here we prove new, strong lower bounds for Insdel LDCs. In particular, we show that 2-query linear Insdel LDCs do not exist, and give an exponential lower bound on the length of all q-query Insdel LDCs with constant q. For q ≥ 3 our bounds are exponential in the existing lower bounds for Hamming LDCs. Furthermore, our exponential lower bounds continue to hold for adaptive decoders, and even in private-key settings where the encoder and decoder share secret randomness. This exhibits a strict separation between Hamming LDCs and Insdel LDCs. Our strong lower bounds also hold for the related notion of Insdel LCCs (except in the private-key setting), due to an analogue, for the Insdel setting, of the reduction from Hamming LCCs to Hamming LDCs.
Our techniques are based on a delicate design and analysis of hard distributions of insertion and deletion errors, which depart significantly from typical techniques used in analyzing Hamming LDCs.
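For intuition about local decoding with few queries (the Hamming setting that the lower bounds above are contrasted against), the classical Hadamard code gives a 2-query Hamming LDC: every message bit can be recovered from two codeword positions. This is a textbook example included here for illustration, not a construction from the paper.

```python
from itertools import product

def hadamard_encode(m):
    """Encode bit-vector m as all inner products <m, r> mod 2
    over r in {0,1}^n; the codeword has length 2^n."""
    n = len(m)
    return {r: sum(mi & ri for mi, ri in zip(m, r)) % 2
            for r in product((0, 1), repeat=n)}

def decode_bit(code, n, i):
    """Recover m[i] from pairs code[r] XOR code[r ^ e_i]: each pair
    is a 2-query local decoding; taking a majority over all r
    tolerates a small constant fraction of corrupted positions."""
    e_i = tuple(1 if j == i else 0 for j in range(n))
    votes = 0
    for r in product((0, 1), repeat=n):
        flipped = tuple(rj ^ ej for rj, ej in zip(r, e_i))
        votes += 1 if (code[r] ^ code[flipped]) else -1
    return 1 if votes > 0 else 0
```

A real local decoder queries only one random pair (r, r ^ e_i); the majority vote above simulates all such pairs to make the error tolerance concrete. Insdel errors break this picture because insertions and deletions shift positions, which is part of why the lower bounds proved in the paper are so much stronger.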