Title: Language-Model-Based Parsing and English Generation for Unscoped Episodic Logical Forms
Unscoped Episodic Logical Forms (ULF) is a semantic representation for English sentences which captures semantic type structure, allows for linguistic inferences, and provides a basis for further resolution into Episodic Logic (EL). We present an application of pre-trained autoregressive language models to the task of rendering ULFs into English, and show that ULF's properties reduce the required training data volume for this approach when compared to AMR. We also show that the same system, when applied in reverse, performs well as an English-to-ULF parser.
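As a rough illustration of the generation direction described in the abstract, the sketch below prompts an autoregressive language model (stock GPT-2 via Hugging Face transformers as a stand-in for the paper's fine-tuned model) with a ULF-to-English template. The prompt format, decoding settings, and the example ULF are assumptions for illustration, not the paper's actual setup.

```python
# Minimal sketch of ULF-to-English rendering with an autoregressive LM.
# The stock "gpt2" checkpoint is a placeholder; the paper's fine-tuned
# model, prompt template, and decoding settings may differ.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "gpt2"  # stand-in; a ULF->English fine-tuned checkpoint would go here
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)

def ulf_to_english(ulf: str, max_new_tokens: int = 40) -> str:
    """Render a ULF string into English by prompting the LM with a
    'ULF: ... English:' template and decoding the continuation."""
    prompt = f"ULF: {ulf}\nEnglish:"
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(
        **inputs,
        max_new_tokens=max_new_tokens,
        num_beams=4,
        pad_token_id=tokenizer.eos_token_id,
    )
    text = tokenizer.decode(output[0], skip_special_tokens=True)
    # Drop the prompt and keep only the first generated line.
    return text[len(prompt):].split("\n")[0].strip()

# Illustrative ULF for "Mary slept" (example formula invented for the sketch).
print(ulf_to_english("(|Mary| ((past sleep.v)))"))
```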
Award ID(s):
1940981
PAR ID:
10359412
Author(s) / Creator(s):
Date Published:
Journal Name:
The International FLAIRS Conference Proceedings
Volume:
35
ISSN:
2334-0762
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Unscoped Logical Form (ULF) of Episodic Logic is a meaning representation format that captures the overall semantic type structure of natural language while leaving certain finer details, such as word sense and quantifier scope, underspecified for ease of parsing and annotation. While a learned parser exists to convert English to ULF, its performance is severely limited by the lack of a large dataset to train the system. We present a ULF dataset augmentation method that samples type-coherent ULF expressions using the ULF semantic type system and filters out samples corresponding to implausible English sentences using a pretrained language model. Our data augmentation method is configurable with parameters that trade off sample plausibility against sample novelty and augmentation size. We find that the best configuration of this augmentation method substantially improves parser performance beyond training on the existing unaugmented dataset alone.
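A hypothetical sketch of the language-model plausibility filter described in the entry above: English renderings of sampled ULFs are scored by GPT-2 perplexity and kept only if they fall under a cutoff. The `render_to_english` helper and the threshold value are placeholders, not the paper's actual components or settings.

```python
# Hypothetical sketch of LM-based plausibility filtering for sampled ULFs:
# score an English rendering of each sample with GPT-2 perplexity and keep
# only samples below a threshold.
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def perplexity(sentence: str) -> float:
    """Per-token perplexity of a sentence under the LM."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean cross-entropy over tokens
    return math.exp(loss.item())

def filter_samples(samples, render_to_english, max_ppl=150.0):
    """Keep sampled ULFs whose English renderings the LM finds plausible.
    `render_to_english` and `max_ppl` are illustrative placeholders."""
    return [ulf for ulf in samples
            if perplexity(render_to_english(ulf)) <= max_ppl]
```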
  2.
    “Episodic Logic: Unscoped Logical Form” (EL-ULF) is a semantic representation capturing predicate-argument structure as well as more challenging aspects of language within the Episodic Logic formalism. We present the first learned approach for parsing sentences into ULFs, using a growing set of annotated examples. The results provide a strong baseline for future improvement. Our method learns a sequence-to-sequence model for predicting the transition action sequence within a modified cache transition system. We evaluate the efficacy of type grammar-based constraints, a word-to-symbol lexicon, and transition system state features in this task. Our system is available at https://github.com/genelkim/ulf-transition-parser. We also present the first official annotated ULF dataset at https://www.cs.rochester.edu/u/gkim21/ulf/resources/.
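The schematic below (not the authors' released code) shows the general shape of the transition-based parsing described in the entry above: a learned model scores candidate actions for the current parser state, illegal actions are pruned, and the best action is applied until parsing terminates. `initial_state`, `legal_actions`, and the model's `score_actions` method are hypothetical stand-ins for the modified cache transition system and type-grammar constraints.

```python
# Schematic sketch of transition-based parsing into ULF with greedy decoding.
def parse_to_ulf(sentence, model, initial_state, legal_actions):
    """Decode a ULF action by action under a (hypothetical) transition system."""
    state = initial_state(sentence)
    while not state.is_terminal():
        scores = model.score_actions(state)           # action -> score
        allowed = [a for a in legal_actions(state) if a in scores]
        best = max(allowed, key=lambda a: scores[a])   # highest-scoring legal action
        state = state.apply(best)
    return state.to_ulf()
```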
  3.
    We implement the formalization of natural logic-like monotonic inference using Unscoped Episodic Logical Forms (ULFs) by Kim et al. (2020). We demonstrate this system’s capacity to handle a variety of challenging semantic phenomena using the FraCaS dataset (Cooper et al., 1996). These results give empirical evidence for prior claims that ULF is an appropriate representation to mediate natural logic-like inferences.
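As a toy illustration of the natural-logic-style monotonicity behind the entry above (not the paper's ULF-based polarity-marking procedure), the snippet below licenses replacing a term with a more general one in an upward-entailing position and a more specific one in a downward-entailing position. The small lexical hierarchy and polarity labels are invented for the example.

```python
# Toy monotonic substitution check: '+' marks upward-entailing positions,
# '-' marks downward-entailing ones. The hierarchy below is illustrative.
MORE_GENERAL = {"poodle": "dog", "dog": "animal", "bark.v": "make_noise.v"}

def licensed_substitution(term: str, replacement: str, polarity: str) -> bool:
    """Is replacing `term` with `replacement` truth-preserving at `polarity`?"""
    def is_more_general(a, b):  # does walking up from b reach a?
        while b in MORE_GENERAL:
            b = MORE_GENERAL[b]
            if b == a:
                return True
        return False
    if polarity == "+":
        return is_more_general(replacement, term)   # may generalize
    return is_more_general(term, replacement)       # may specialize

# "Every dog barks": 'dog' is downward-entailing, 'bark.v' upward-entailing.
print(licensed_substitution("dog", "poodle", "-"))           # True
print(licensed_substitution("bark.v", "make_noise.v", "+"))  # True
```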
  4. Some people exhibit impressive memory for a wide array of semantic knowledge. What makes these trivia experts better able to learn and retain novel facts? We hypothesized that new semantic knowledge may be more strongly linked to its episodic context in trivia experts. We designed a novel online task in which 132 participants varying in trivia expertise encoded “exhibits” of naturalistic facts with related photos in one of two “museums.” Afterward, participants were tested on cued recall of facts and recognition of the associated photo and museum. Greater trivia expertise predicted higher cued recall for novel facts. Critically, trivia experts but not non-experts showed superior fact recall when they remembered both features (photo and museum) of the encoding context. These findings illustrate enhanced links between episodic memory and new semantic learning in trivia experts, and show the value of studying trivia experts as a special population that can shed light on the mechanisms of memory. 
  5. A growing body of research shows that both signed and spoken languages display regular patterns of iconicity in their vocabularies. We compared iconicity in the lexicons of American Sign Language (ASL) and English by combining previously collected ratings of ASL signs (Caselli, Sevcikova Sehyr, Cohen-Goldberg, & Emmorey, 2017) and English words (Winter, Perlman, Perry, & Lupyan, 2017) with the use of data-driven semantic vectors derived from English. Our analyses show that models of spoken language lexical semantics drawn from large text corpora can be useful for predicting the iconicity of signs as well as words. Compared to English, ASL has a greater number of regions of semantic space with concentrations of highly iconic vocabulary. There was an overall negative relationship between semantic density and the iconicity of both English words and ASL signs. This negative relationship disappeared for highly iconic signs, suggesting that iconic forms may be more easily discriminable in ASL than in English. Our findings contribute to an increasingly detailed picture of how iconicity is distributed across different languages.
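A rough sketch of the kind of analysis described in the entry above: semantic neighborhood density is estimated from distributional vectors as the mean cosine similarity to each item's nearest neighbors, then correlated with iconicity ratings. The vectors, ratings, and neighborhood size below are placeholders rather than the study's data or parameters.

```python
# Illustrative semantic-density vs. iconicity analysis with placeholder data.
import numpy as np
from scipy.stats import pearsonr

def semantic_density(vectors: np.ndarray, k: int = 10) -> np.ndarray:
    """Mean cosine similarity of each row vector to its k nearest neighbors."""
    normed = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    sims = normed @ normed.T
    np.fill_diagonal(sims, -np.inf)          # exclude self-similarity
    topk = np.sort(sims, axis=1)[:, -k:]     # k most similar neighbors per item
    return topk.mean(axis=1)

def density_iconicity_correlation(vectors, iconicity, k=10):
    """Pearson correlation between neighborhood density and iconicity ratings.
    `vectors` is (n_items, dim); `iconicity` is (n_items,)."""
    return pearsonr(semantic_density(vectors, k), iconicity)
```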