Search results for: Creators/Authors contains "Kwon, Heeyoung"
  1. Abstract

    Magnetic skyrmions are topologically nontrivial spin textures with envisioned applications in energy-efficient magnetic information storage. Toggling the presence of magnetic skyrmions via writing/deleting processes is essential for spintronics applications, a process that usually requires a magnetic field, a gate voltage, or an electric current. Here we demonstrate the reversible field-free writing/deleting of skyrmions at room temperature, via hydrogen chemisorption/desorption on the surface of Ni and Co films. Supported by Monte Carlo simulations, the skyrmion creation/annihilation is attributed to the hydrogen-induced magnetic anisotropy change on ferromagnetic surfaces. We also demonstrate how hydrogen and oxygen affect magnetic anisotropy and skyrmion deletion on other magnetic surfaces. Our results open up new possibilities for designing skyrmionic and magneto-ionic devices.
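    To make the simulation technique named above concrete, here is a minimal Metropolis Monte Carlo sketch for a 2D Heisenberg lattice with exchange, interfacial Dzyaloshinskii-Moriya interaction (DMI), and perpendicular anisotropy, in which hydrogen chemisorption is modeled as a reduction of the anisotropy constant K. This is not the authors' code: the lattice size, couplings, temperature, and both K values are illustrative assumptions, and the printed out-of-plane magnetization is only a crude proxy for texture changes.

        import numpy as np

        rng = np.random.default_rng(0)
        L = 16                    # lattice size (illustrative)
        J, D, kT = 1.0, 0.5, 0.2  # exchange, DMI, temperature (illustrative, units of J)

        def site_energy(S, i, j, K):
            """Energy of spin (i, j): exchange, interfacial DMI, anisotropy -K*Sz^2."""
            s = S[i, j]
            e = -K * s[2] ** 2
            for (di, dj), u in (((1, 0), (1, 0, 0)), ((0, 1), (0, 1, 0))):
                for sgn in (1, -1):              # both neighbors along each axis
                    n = S[(i + sgn * di) % L, (j + sgn * dj) % L]
                    e += -J * (s @ n)            # Heisenberg exchange
                    d = sgn * D * np.cross((0.0, 0.0, 1.0), u)
                    e += d @ np.cross(s, n)      # interfacial DMI: D_ij . (S_i x S_j)
            return e

        def sweep(S, K):
            """One Metropolis sweep: propose a random new direction per site."""
            for _ in range(L * L):
                i, j = rng.integers(L, size=2)
                old = S[i, j].copy()
                e0 = site_energy(S, i, j, K)
                new = rng.normal(size=3)
                S[i, j] = new / np.linalg.norm(new)
                dE = site_energy(S, i, j, K) - e0
                if rng.random() >= np.exp(min(0.0, -dE / kT)):
                    S[i, j] = old                # reject the move

        # Random start, then equilibrate at a "clean-surface" K and at a reduced
        # (hydrogenated) K, mimicking chemisorption switching the anisotropy.
        S = rng.normal(size=(L, L, 3))
        S /= np.linalg.norm(S, axis=2, keepdims=True)
        for K in (0.8, 0.1):                     # assumed clean vs. hydrogenated values
            for _ in range(100):
                sweep(S, K)
            print(f"K={K}: mean out-of-plane Sz = {S[..., 2].mean():+.3f}")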

     
  2. Question Answering (QA) naturally reduces to an entailment problem, namely, verifying whether some text entails the answer to a question. However, for multi-hop QA tasks, which require reasoning with multiple sentences, it remains unclear how best to utilize entailment models pre-trained on large-scale, sentence-pair datasets such as SNLI. We introduce Multee, a general architecture that can effectively use entailment models for multi-hop QA tasks. Multee uses (i) a local module that helps locate important sentences, thereby avoiding distracting information, and (ii) a global module that aggregates information by effectively incorporating importance weights. Importantly, we show that both modules can use entailment functions pre-trained on large-scale NLI datasets. We evaluate performance on MultiRC and OpenBookQA, two multi-hop QA datasets. When using an entailment function pre-trained on NLI datasets, Multee outperforms QA models trained only on the target QA datasets and the OpenAI transformer models.
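    To make the two-module design concrete, below is a toy numpy sketch of the idea: a "local" scorer assigns each context sentence an importance weight with respect to the hypothesis, and a "global" step aggregates the sentence vectors with those weights before producing a single entailment score. This is not the released Multee code; the encodings and both scoring functions are illustrative stand-ins for pre-trained NLI components.

        import numpy as np

        rng = np.random.default_rng(0)
        dim = 8
        sentences = rng.normal(size=(5, dim))  # toy encodings of 5 context sentences
        hypothesis = rng.normal(size=dim)      # toy encoding of question + candidate answer

        def local_importance(sents, hyp):
            """Local module: per-sentence relevance scores -> softmax weights."""
            scores = sents @ hyp               # stand-in for sentence-level entailment scores
            e = np.exp(scores - scores.max())
            return e / e.sum()

        def global_entailment(sents, hyp):
            """Global module: importance-weighted aggregation, then one score."""
            w = local_importance(sents, hyp)
            context = w @ sents                # weighted sum of sentence vectors
            return 1.0 / (1.0 + np.exp(-(context @ hyp)))  # stand-in entailment head

        print("importance weights:", np.round(local_importance(sentences, hypothesis), 3))
        print("entailment score:  ", round(global_entailment(sentences, hypothesis), 3))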