Title: Language-Model-Based Parsing and English Generation for Unscoped Episodic Logical Forms
Unscoped Episodic Logical Forms (ULF) is a semantic representation for English sentences which captures semantic type structure, allows for linguistic inferences, and provides a basis for further resolution into Episodic Logic (EL). We present an application of pre-trained autoregressive language models to the task of rendering ULFs into English, and show that ULF's properties reduce the required training data volume for this approach when compared to AMR. We also show that the same system, when applied in reverse, performs well as an English-to-ULF parser.
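The abstract describes one pre-trained autoregressive language model used in both directions: ULF-to-English generation and, run in reverse, English-to-ULF parsing. A minimal sketch of how such bidirectional pairs might be serialized into prompt/continuation training strings; the separator and end markers here are illustrative assumptions, not the paper's actual format:

```python
# Hedged sketch (assumed serialization, not the authors' exact scheme):
# an autoregressive LM is fine-tuned to continue the prompt (source side)
# with the target string, so the same machinery covers both directions.

SEP = " => "    # hypothetical source/target separator
EOS = " <END>"  # hypothetical end-of-sequence marker

def make_example(ulf: str, english: str, direction: str = "ulf2en") -> str:
    """Build one training string; `direction` selects generation vs. parsing."""
    if direction == "ulf2en":    # ULF -> English generation
        return ulf + SEP + english + EOS
    if direction == "en2ulf":    # English -> ULF parsing (same model, reversed)
        return english + SEP + ulf + EOS
    raise ValueError(f"unknown direction: {direction}")

pair = ("(|Mary| ((past sleep.v)))", "Mary slept.")
print(make_example(*pair))                        # generation direction
print(make_example(*pair, direction="en2ulf"))    # parsing direction
```

At inference time, the model is prompted with the source side plus the separator and decoded until the end marker, in whichever direction it was trained.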
Award ID(s):
1940981
NSF-PAR ID:
10359412
Author(s) / Creator(s):
Date Published:
Journal Name:
The International FLAIRS Conference Proceedings
Volume:
35
ISSN:
2334-0762
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Unscoped Logical Form (ULF) of Episodic Logic is a meaning representation format that captures the overall semantic type structure of natural language while leaving certain finer details, such as word sense and quantifier scope, underspecified for ease of parsing and annotation. While a learned parser exists to convert English to ULF, its performance is severely limited by the lack of a large training dataset. We present a ULF dataset augmentation method that samples type-coherent ULF expressions using the ULF semantic type system and filters out samples corresponding to implausible English sentences using a pretrained language model. The augmentation method is configurable with parameters that trade off sample plausibility against sample novelty and augmentation size. We find that the best configuration of this augmentation method substantially improves parser performance beyond using the existing unaugmented dataset.
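The sample-then-filter augmentation loop described above can be sketched as follows. The sampler and the plausibility scorer here are illustrative stand-ins for the paper's type-system sampler and pretrained-LM scorer, and the threshold stands in for the configurable plausibility/novelty trade-off:

```python
import random

# Hedged sketch of the augmentation pipeline (names and scoring are
# illustrative, not the paper's code):
#   1) sample type-coherent ULF expressions,
#   2) score the corresponding English for plausibility,
#   3) keep only samples above a configurable threshold.

def sample_type_coherent_ulf(rng: random.Random) -> str:
    """Stand-in for sampling from the ULF semantic type system."""
    subjects = ["|Mary|", "|John|"]
    verbs = ["(past sleep.v)", "(past run.v)"]
    return f"({rng.choice(subjects)} ({rng.choice(verbs)}))"

def lm_plausibility(ulf: str) -> float:
    """Stand-in for a pretrained-LM plausibility score in [0, 1]."""
    return 0.9 if "sleep" in ulf else 0.4

def augment(n_samples: int, threshold: float, seed: int = 0) -> list:
    """Sample candidate ULFs and keep those judged sufficiently plausible."""
    rng = random.Random(seed)
    kept = []
    for _ in range(n_samples):
        ulf = sample_type_coherent_ulf(rng)
        if lm_plausibility(ulf) >= threshold:
            kept.append(ulf)
    return kept

augmented = augment(n_samples=100, threshold=0.5)
print(len(augmented), "samples kept")
```

Raising the threshold favors plausibility over novelty and shrinks the augmented set; lowering it does the reverse, matching the trade-off the abstract describes.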
  2.
    “Episodic Logic: Unscoped Logical Form” (EL-ULF) is a semantic representation capturing predicate-argument structure as well as more challenging aspects of language within the Episodic Logic formalism. We present the first learned approach for parsing sentences into ULFs, using a growing set of annotated examples. The results provide a strong baseline for future improvement. Our method learns a sequence-to-sequence model for predicting the transition action sequence within a modified cache transition system. We evaluate the efficacy of type grammar-based constraints, a word-to-symbol lexicon, and transition system state features in this task. Our system is available at https://github.com/genelkim/ulf-transition-parser. We also present the first official annotated ULF dataset at https://www.cs.rochester.edu/u/gkim21/ulf/resources/.
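The abstract above mentions type grammar-based constraints on the predicted transition actions. One common way such constraints interact with a sequence-to-sequence model is action masking: the model's scores are filtered to legal actions before one is chosen. The action names, scores, and constraint below are hypothetical illustrations, not the authors' parser:

```python
# Illustrative sketch (not the authors' implementation) of constrained
# action selection in a transition parser: at each step, the model's
# scored actions are masked by a legality check before choosing.

def constrained_decode(action_scores: dict, is_allowed) -> str:
    """Pick the highest-scoring action permitted by the constraints."""
    legal = {a: s for a, s in action_scores.items() if is_allowed(a)}
    if not legal:
        raise ValueError("no legal action in this state")
    return max(legal, key=legal.get)

# Hypothetical per-step scores from a sequence-to-sequence model:
scores = {"SHIFT": 0.5, "REDUCE": 0.7, "PUSH_CACHE": 0.3}

# Hypothetical type-grammar constraint: REDUCE is illegal in this state,
# so the decoder falls back to the best remaining action.
allowed = lambda action: action != "REDUCE"
print(constrained_decode(scores, allowed))
```

Without the constraint, the top-scoring action (here REDUCE) would be taken even when it violates the type grammar; masking guarantees the emitted action sequence stays well-formed.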
  3.
    We implement the formalization of natural logic-like monotonic inference using Unscoped Episodic Logical Forms (ULFs) by Kim et al. (2020). We demonstrate this system’s capacity to handle a variety of challenging semantic phenomena using the FraCaS dataset (Cooper et al., 1996). These results give empirical evidence for prior claims that ULF is an appropriate representation to mediate natural logic-like inferences.
  4. Some people exhibit impressive memory for a wide array of semantic knowledge. What makes these trivia experts better able to learn and retain novel facts? We hypothesized that new semantic knowledge may be more strongly linked to its episodic context in trivia experts. We designed a novel online task in which 132 participants varying in trivia expertise encoded “exhibits” of naturalistic facts with related photos in one of two “museums.” Afterward, participants were tested on cued recall of facts and recognition of the associated photo and museum. Greater trivia expertise predicted higher cued recall for novel facts. Critically, trivia experts but not non-experts showed superior fact recall when they remembered both features (photo and museum) of the encoding context. These findings illustrate enhanced links between episodic memory and new semantic learning in trivia experts, and show the value of studying trivia experts as a special population that can shed light on the mechanisms of memory. 
  5.
    How related is skin to a quilt, or door to worry? Here, we show that linguistic experience strongly informs people’s judgments of such word pairs. We asked Chinese speakers, English speakers, and Chinese-English bilinguals to rate semantic and visual similarity between pairs of Chinese words and of their English translation equivalents. Some pairs were unrelated; others were also unrelated but shared a radical (e.g., “expert” and “dolphin” share the radical meaning “pig”); still others shared a radical that invokes a metaphorical relationship. For example, a quilt covers the body like skin; understand, with a sun radical, invokes understanding as illumination. Importantly, the shared radicals are not part of the pronounced word form. Chinese speakers rated word pairs with metaphorical connections as more similar than other pairs. English speakers did not, even though they were sensitive to shared radicals. Chinese-English bilinguals showed sensitivity to the metaphorical connections even when tested with English words.