-
Deep‐focus earthquakes at 350–660 km are presumably caused by the olivine‐spinel phase transformation (PT). This cannot, however, explain the observed high seismic strain rate, which requires the PT to complete within seconds, while metastable olivine does not transform for over a million years. Recent theory quantitatively describes how severe plastic deformations (SPD) can resolve this dilemma but lacks experimental proof. Here, we introduce a dynamic rotational diamond anvil cell with rough diamond anvils to impose SPD on San Carlos olivine. While olivine never transformed to spinel at room temperature, we obtained a reversible olivine‐ringwoodite PT under SPD at 15–28 GPa within tens of seconds. The PT pressure decreases with increasing dislocation density, microstrain, and plastic strain, and with decreasing crystallite size. The results demonstrate a new strain‐induced PT mechanism, distinct from the conventional pressure/temperature‐induced one. Combined with SPD during olivine subduction, this mechanism can accelerate the olivine‐ringwoodite PT from millions of years to timescales relevant to earthquakes.
-
Garnet is an important mineral phase in the upper mantle: it is both a key component of bulk mantle rocks and a primary phase at high pressure within subducted basalt. Here, we focus on the strength of garnet and the texture that develops within garnet as it accommodates differential deformational strain. We use X-ray diffraction in a radial geometry to analyze texture development in situ in three garnet compositions under pressure at 300 K: a natural garnet (Prp60Alm37) to 30 GPa, and two synthetic majorite-bearing compositions (Prp59Maj41 and Prp42Maj58) to 44 GPa. All three garnets develop a modest (100) texture at elevated pressure under axial compression. Elasto-viscoplastic self-consistent (EVPSC) modeling suggests that two slip systems are active in the three garnet compositions at all pressures studied: {110}½⟨111⟩ and {001}⟨110⟩. We determine a flow strength of ~5 GPa at pressures between 10 and 15 GPa for all three garnets; these values are higher than yield strengths previously measured on natural and majoritic garnets. Strengths calculated from the experimental lattice strains differ from those obtained from the EVPSC modeling. Prp67Alm33, Prp59Maj41, and Prp42Maj58 are of comparable strength to each other at room temperature, which indicates that majorite substitution does not greatly affect the strength of garnet. Additionally, all three garnets are similar in strength to lower-mantle phases such as bridgmanite and ferropericlase, suggesting that garnet may not be notably stronger than the surrounding lower mantle/deep upper mantle phases at the base of the upper mantle.
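As a concrete illustration of how a flow strength like the ~5 GPa value above is commonly extracted from radial-diffraction data, here is a minimal sketch using the standard Singh lattice-strain relations, d_m = d_p[1 + (1 − 3cos²ψ)Q(hkl)] and t ≈ (6/α)G⟨Q(hkl)⟩. This is a generic textbook-style estimate, not the EVPSC analysis used in the study, and the Q(hkl) values and shear modulus in the example are hypothetical placeholders rather than the paper's data.

```python
import numpy as np

def q_hkl(d_measured, psi_deg, d_hydrostatic):
    """Fit Q(hkl) from d-spacings measured at azimuths psi, using the
    Singh lattice-strain relation d_m = d_p * [1 + (1 - 3*cos^2(psi)) * Q(hkl)]."""
    d_m = np.asarray(d_measured, dtype=float)
    psi = np.radians(np.asarray(psi_deg, dtype=float))
    x = 1.0 - 3.0 * np.cos(psi) ** 2
    y = d_m / d_hydrostatic - 1.0
    return float(np.dot(x, y) / np.dot(x, x))   # least-squares slope through the origin

def differential_stress(q_values, shear_modulus_gpa, alpha=1.0):
    """t = sigma3 - sigma1 ~ (6 / alpha) * G * <Q(hkl)>; alpha = 1 is the
    iso-stress (Reuss) limit commonly quoted as the supported flow strength."""
    return 6.0 * shear_modulus_gpa * float(np.mean(q_values)) / alpha

# Hypothetical numbers for illustration only (not data from the study):
q_vals = [0.008, 0.010, 0.009]                               # Q(hkl) for three reflections
print(differential_stress(q_vals, shear_modulus_gpa=92.0))   # -> ~5 GPa
```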
-
This work considers the sample and computational complexity of obtaining an $\epsilon$-optimal policy in a discounted Markov Decision Process (MDP), given only access to a generative model. In this model, the learner accesses the underlying transition model via a sampling oracle that provides a sample of the next state when given any state-action pair as input. We are interested in a basic and unresolved question in model-based planning: is the naïve "plug-in" approach, where we build the maximum likelihood estimate of the MDP's transition model from observations and then find an optimal policy in this empirical MDP, non-asymptotically minimax optimal? Our main result answers this question positively. With regard to computation, our result provides a simpler approach towards minimax-optimal planning: in comparison to prior model-free results, we show that using any high-accuracy, black-box planning oracle in the empirical model suffices to obtain the minimax error rate. The key proof technique uses a leave-one-out analysis, in a novel "absorbing MDP" construction, to decouple the statistical dependency issues that arise in the analysis of model-based planning; this construction may be helpful more generally.
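To make the plug-in approach concrete, here is a minimal sketch for a tabular MDP with known rewards, with plain value iteration standing in for the black-box planning oracle; the function and variable names are illustrative, not taken from the paper.

```python
import numpy as np

def plug_in_policy(counts, rewards, gamma=0.95, iters=1000, tol=1e-8):
    """Plug-in (certainty-equivalent) planning with a generative model:
    form the maximum-likelihood transition model from sampled next states,
    then plan in the resulting empirical MDP.

    counts:  (S, A, S) array, counts[s, a, s'] = times the oracle returned s'
             when queried at the state-action pair (s, a)
    rewards: (S, A) array of known rewards
    """
    n_sa = counts.sum(axis=2, keepdims=True)
    p_hat = counts / np.maximum(n_sa, 1)          # ML estimate of P(s' | s, a)

    v = np.zeros(counts.shape[0])
    for _ in range(iters):
        q = rewards + gamma * (p_hat @ v)         # Bellman backup in the empirical MDP
        v_new = q.max(axis=1)
        if np.max(np.abs(v_new - v)) < tol:
            v = v_new
            break
        v = v_new
    return q.argmax(axis=1), v                    # greedy policy and its value estimate
```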
-
Modern deep learning methods provide effective means to learn good representations. However, is a good representation itself sufficient for sample-efficient reinforcement learning? This question has largely been studied only with respect to (worst-case) approximation error, in the more classical approximate dynamic programming literature. From the statistical viewpoint, the question is largely unexplored: the extant literature mainly focuses on conditions that permit sample-efficient reinforcement learning, with little understanding of which conditions are necessary. This work shows that, from the statistical viewpoint, the situation is far subtler than the traditional approximation viewpoint suggests: the requirements on a representation that suffice for sample-efficient RL are considerably more stringent. Our main results provide sharp thresholds for reinforcement learning methods, showing that there are hard limitations on what constitutes good function approximation (in terms of the dimensionality of the representation), where we focus on natural representational conditions relevant to value-based, model-based, and policy-based learning. These lower bounds highlight that having a good (value-based, model-based, or policy-based) representation is, in and of itself, insufficient for efficient reinforcement learning unless the quality of this approximation passes certain hard thresholds. Furthermore, our lower bounds imply exponential separations in sample complexity between 1) value-based learning with a perfect representation and value-based learning with a good-but-not-perfect representation, 2) value-based learning and policy-based learning, 3) policy-based learning and supervised learning, and 4) reinforcement learning and imitation learning.
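As a concrete instance of value-based learning with a given representation, here is a minimal sketch of least-squares value iteration with a linear feature map φ (shown with exact transition probabilities to keep it short). The construction is a standard one rather than the paper's specific protocol; the point of the lower bounds above is that even when φ approximates the optimal Q-function well, a procedure of this kind need not be sample-efficient unless the approximation quality passes the stated thresholds.

```python
import numpy as np

def linear_lsvi(phi, rewards, transitions, gamma=0.95, rounds=50):
    """Least-squares value iteration with a linear representation
    Q(s, a) ~ phi(s, a) . theta: every Bellman backup is projected onto
    span{phi}, so accuracy is capped by how well phi represents Q-functions.

    phi:         (S, A, d) feature map
    rewards:     (S, A) rewards
    transitions: (S, A, S) transition probabilities (exact, for brevity)
    """
    S, A, d = phi.shape
    X = phi.reshape(S * A, d)
    theta = np.zeros(d)
    for _ in range(rounds):
        q = phi @ theta                                      # current Q estimate, (S, A)
        v = q.max(axis=1)                                    # greedy state values
        target = (rewards + gamma * (transitions @ v)).reshape(-1)
        theta, *_ = np.linalg.lstsq(X, target, rcond=None)   # project backup onto features
    return (phi @ theta).argmax(axis=1)                      # greedy policy from learned Q
```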