Title: Progressive and Punctuated Magnetic Mineral Diagenesis: The Rock Magnetic Record of Multiple Fluid Inputs and Progressive Pyritization in a Volcano‐Bounded Basin, IODP Site U1437, Izu Rear Arc
Award ID(s): 1642268
PAR ID: 10129473
Author(s) / Creator(s): ; ;
Date Published:
Journal Name: Journal of Geophysical Research: Solid Earth
Volume: 124
Issue: 6
ISSN: 2169-9313
Page Range / eLocation ID: 5357 to 5378
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. A single seller faces a sequence of buyers with unit demand. The buyers are forward‐looking and long‐lived. Each buyer has private information about his arrival time and his valuation, the latter evolving according to a geometric Brownian motion. Any incentive‐compatible mechanism has to induce truth‐telling about both the arrival time and the evolution of the valuation. We establish that the optimal stationary allocation policy can be implemented by a simple posted price. The truth‐telling constraint regarding the arrival time can be represented as an optimal stopping problem that determines the first time at which the buyer participates in the mechanism. The optimal mechanism thus induces progressive participation by each buyer: he either participates immediately or at a future random time. (A schematic of the valuation dynamics and the stopping problem is sketched after this list.)
  2. As machine learning methods see greater adoption in high-stakes applications such as medical image diagnosis, the need for model interpretability and explanation has become more critical. Classical approaches that assess feature importance (e.g., saliency maps) do not explain how and why a particular region of an image is relevant to the prediction. We propose a method that explains the outcome of a black-box classifier by gradually exaggerating the semantic effect of a given class. Given a query input to a classifier, our method produces a progressive set of plausible variations of that query, which gradually shift the posterior probability from the original class to its negation. These counterfactually generated samples preserve features unrelated to the classification decision, so a user can employ our method as a "tuning knob" to traverse a data manifold while crossing the decision boundary. Our method is model agnostic and only requires the output value and the gradient of the predictor with respect to its input. (A minimal code sketch of such a gradient-driven traversal follows this list.)
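For the mechanism-design abstract in item 1, the two objects it names (the geometric-Brownian-motion valuation and the arrival-time constraint cast as an optimal stopping problem) can be written compactly as follows. This is only a schematic under standard assumptions: the drift \mu, volatility \sigma, discount rate r, and posted price p are assumed notation, not values taken from the paper.

```latex
% Schematic only; \mu, \sigma, r, p are assumed notation, not taken from the paper.
% Valuation dynamics of a buyer (geometric Brownian motion):
\[
  dv_t = \mu\, v_t\, dt + \sigma\, v_t\, dW_t .
\]
% Truth-telling about the arrival time as an optimal stopping problem:
% facing a posted price p, the buyer chooses a participation time \tau
% (no earlier than his arrival) to maximize expected discounted surplus,
\[
  \sup_{\tau}\; \mathbb{E}\!\left[ e^{-r\tau}\,\bigl(v_{\tau} - p\bigr)^{+} \right],
\]
% which, for geometric Brownian motion and suitable parameter restrictions,
% is solved by a threshold rule: participate the first time v_t reaches a
% cutoff v^{*} \ge p. This is the "immediately or at a future random time"
% participation pattern described in the abstract.
```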
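For the explanation-method abstract in item 2, the claim that the method "only requires the output value and gradient of the predictor with respect to its input" can be illustrated with a short, model-agnostic sketch. This is not the authors' procedure (which also keeps the generated variations plausible on the data manifold); it is a minimal gradient-driven traversal in which `predict_proba`, `grad_input`, and the toy logistic model are hypothetical stand-ins.

```python
import numpy as np

def progressive_counterfactuals(predict_proba, grad_input, x, target_prob=0.05,
                                step=0.1, max_steps=50):
    """Return a sequence of variations of x whose predicted probability for the
    original class decreases toward target_prob.

    Only the classifier's output (predict_proba) and the gradient of that output
    with respect to the input (grad_input) are used, so the routine is
    model-agnostic."""
    path = [np.array(x, dtype=float)]
    for _ in range(max_steps):
        p = predict_proba(path[-1])
        if p <= target_prob:
            break
        g = grad_input(path[-1])                       # direction that increases p
        path.append(path[-1] - step * g / (np.linalg.norm(g) + 1e-12))
    return path

# Toy usage: a hand-written logistic model stands in for the black-box classifier.
w, b = np.array([1.5, -2.0]), 0.3
predict = lambda x: 1.0 / (1.0 + np.exp(-(w @ x + b)))
grad = lambda x: predict(x) * (1.0 - predict(x)) * w   # dp/dx for the logistic model
variations = progressive_counterfactuals(predict, grad, [2.0, 0.5])
print([round(float(predict(x)), 3) for x in variations])  # probabilities falling toward 0.05
```

In the paper's setting each intermediate element of the returned path would correspond to a gradually exaggerated image; here it is just a point in a two-dimensional toy feature space.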