Title: Overtly Headed XPs and Irish Syntax-Prosody Mapping
Analyses of Irish phonological phrasing (Elfner 2012 et seq.) have been influential in shaping Match Theory (Selkirk 2011), an Optimality-Theoretic (OT) approach to mapping syntactic structure to prosodic structure. We solve two constraint-ranking paradoxes concerning the relative ranking of Match and StrongStart. Irish data indicate that while XPs with silent heads can fail to map to phonological phrases in certain circumstances, overtly headed XPs cannot. They also indicate that rebracketing due to the constraint StrongStart occurs only sentence-initially, contrary to predictions. We account for these puzzles by invoking Van Handel's (2019) Match constraint, which is sensitive only to XPs with overt heads, and by positing a new version of StrongStart that applies only to material at the left edge of the intonational phrase. Our analysis is developed using the Syntax-Prosody in Optimality Theory application (SPOT) and OTWorkplace.
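The constraint interaction described in the abstract can be sketched as a toy OT evaluation: candidates are filtered constraint by constraint in ranking order, so a Match constraint restricted to overtly headed XPs, ranked above StrongStart, keeps the faithful phrasing. The constraint names, candidate labels, and violation counts below are invented for illustration; this is not SPOT's or OTWorkplace's actual interface.

```python
# Toy OT evaluation (illustrative; not the SPOT/OTWorkplace API).
# Constraints are ranked strictly: at each constraint, only candidates
# with the fewest violations survive to be compared on lower constraints.

def eval_ot(candidates, ranking):
    """candidates: {name: {constraint: violations}}; ranking: list of constraint names."""
    pool = list(candidates)
    for constraint in ranking:
        best = min(candidates[c].get(constraint, 0) for c in pool)
        pool = [c for c in pool if candidates[c].get(constraint, 0) == best]
        if len(pool) == 1:
            break
    return pool

# Hypothetical candidates for a phrase containing an overtly headed XP:
candidates = {
    "((V N) A)": {"MatchPhrase-OvertHead": 0, "StrongStart": 1},  # faithful phrasing
    "(V (N A))": {"MatchPhrase-OvertHead": 1, "StrongStart": 0},  # rebracketed
}

# Ranking Match (overt heads) above StrongStart blocks rebracketing:
print(eval_ot(candidates, ["MatchPhrase-OvertHead", "StrongStart"]))  # ['((V N) A)']
```

Reversing the ranking selects the rebracketed candidate instead, which is the kind of ranking argument the paper's paradoxes turn on.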
Award ID(s):
1749368
PAR ID:
10252191
Author(s) / Creator(s):
Date Published:
Journal Name:
Proceedings of the Annual Meetings on Phonology
Volume:
9
ISSN:
2377-3324
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. When faced with complex phrasal phonological patterns, linguists confront a dilemma: since complex phenomena rarely lend themselves to simple analyses, where is the analytical complexity best justified? This talk explores the question using the test case of argument-head tone sandhi in Seenku (Western Mande, Burkina Faso), arguing that a morphological approach with a hierarchical lexicon offers a fuller account of the data than a complex phonological one. In Seenku, internal arguments trigger sandhi on their following heads. As in Taiwanese, tone changes are largely paradigmatic, but unlike most Sinitic sandhi systems, each base tone has more than one sandhi tone, depending on the argument's tone and on whether it is pronominal or non-pronominal. A phonological account would necessitate mechanisms like anti-faithfulness or contrast preservation, but would allow a single underlying form to be maintained. A morphological account treats the alternations as allomorph selection, which requires a hierarchical lexicon with paradigms of subcategorization frames. Both approaches introduce complexity, but the phonological approach fails to account for several data patterns, including differences between pronominal and non-pronominal arguments, the immutability of multi-tonal heads, and lexical exceptions. Further, the single underlying form would necessarily be abstract, since certain heads appear obligatorily with an argument and hence always undergo sandhi. The allomorph-selection approach addresses each of these complications and more naturally characterizes the linguistic competence of Seenku speakers. This result suggests that the lexicon may play a more powerful role than is often assumed, especially in cases where sound change has obscured once-transparent phonological motivations.
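The allomorph-selection analysis can be sketched as lexical lookup: each head stores a paradigm of subcategorization frames keyed by properties of its internal argument, with a default for unlisted combinations. The head, tones, and sandhi forms below are invented placeholders, not actual Seenku data.

```python
# Sketch of allomorph selection via a hierarchical lexicon (hypothetical
# forms and tones; Seenku's actual paradigms are richer).
# Frames keyed by (argument tone, argument type) outrank a default frame.

HEAD_LEXICON = {
    "buy": {                              # hypothetical head with a sandhi paradigm
        ("L", "pronoun"): "buy-H",
        ("L", "noun"):    "buy-M",
        ("H", "noun"):    "buy-L",
        "default":        "buy-M",
    },
}

def select_allomorph(head, arg_tone, arg_type):
    """Pick the most specific listed allomorph; fall back to the default."""
    frames = HEAD_LEXICON[head]
    return frames.get((arg_tone, arg_type), frames["default"])

print(select_allomorph("buy", "L", "pronoun"))  # buy-H
print(select_allomorph("buy", "H", "pronoun"))  # no listed frame, default: buy-M
```

Lexical exceptions and immutable heads fall out naturally here: an exceptional head simply lists a different paradigm, and an immutable head lists only its default.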
  2.
    Abstract: Phonological alternations are often specific to morphosyntactic context. For example, stress shift in English occurs in the presence of some suffixes (-al) but not others (-ing): párent, parént-al, but párent-ing. In some cases a phonological process applies only in words of certain lexical categories. Previous theories have stipulated that such morphosyntactically conditioned phonology is word-bounded. In this paper we present a number of long-distance morphologically conditioned phonological effects: cases where phonological processes within one word are conditioned by another word, or by the presence of a morpheme in another word. We provide a model, Cophonologies by Phase, which extends Cophonology Theory, a theory of word-internal and lexically specified phonological alternations, to cyclically generated syntactic constituents. We show that Cophonologies by Phase makes better predictions about the long-distance morphologically conditioned phonological effects we find across languages than previous frameworks. Furthermore, Cophonologies by Phase derives such effects without requiring the phonological component to directly reference syntactic features or structure.
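The core mechanism, a phase head selecting the phonological grammar applied to its spell-out domain, can be sketched as cyclic application of phase-indexed grammars. The phase labels and the "grammars" below are invented stand-ins for real cophonologies.

```python
# Toy sketch of phase-indexed cophonologies (illustrative, not the paper's
# formalism): each phase head selects the grammar applied to its spell-out
# domain, so the phonology of one word can depend on which phase head
# (and hence which other word's morphemes) it is spelled out with.

def cophonology_a(word):   # stand-in for, e.g., a stress-shifting grammar
    return word.upper()

def cophonology_b(word):   # stand-in for a grammar that applies no change
    return word

COPHONOLOGY = {"v*": cophonology_a, "C": cophonology_b}  # phase head -> grammar

def spell_out(phases):
    """phases: list of (phase_head, [words]); apply each phase's cophonology."""
    output = []
    for head, words in phases:
        grammar = COPHONOLOGY[head]
        output.extend(grammar(w) for w in words)
    return output

# The same word surfaces differently depending on its phase:
print(spell_out([("v*", ["parent"]), ("C", ["parent"])]))  # ['PARENT', 'parent']
```

The point of the sketch is architectural: the phonological functions never inspect syntactic features directly; they are merely indexed by the cyclic domain that contains them.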
  3.
    Abstract: Two differences between signed and spoken languages that have been widely discussed in the literature are: the degree to which morphology is expressed simultaneously (rather than sequentially), and the degree to which iconicity is used, particularly in predicates of motion and location, often referred to as classifier predicates. In this paper we analyze a set of properties marking agency and number in four sign languages for their crosslinguistic similarities and differences regarding simultaneity and iconicity. Data from American Sign Language (ASL), Italian Sign Language (LIS), British Sign Language (BSL), and Hong Kong Sign Language (HKSL) are analyzed. We find that iconic, cognitive, phonological, and morphological factors contribute to the distribution of these properties. We conduct two analyses, one of verbs and one of verb phrases. The analysis of classifier verbs shows that, as expected, all four languages exhibit many common formal and iconic properties in the expression of agency and number. The analysis of classifier verb phrases (VPs), particularly multiple-verb predicates, reveals (a) that it is grammatical in all four languages to express agency and number within a single verb, but also (b) that there is crosslinguistic variation in expressing agency and number across the four languages. We argue that this variation is motivated by how each language prioritizes, or ranks, several constraints. The rankings can be captured in Optimality Theory. Some constraints in this account, such as a constraint favoring redundancy, are found in all information systems and might be considered non-linguistic; however, the variation in constraint ranking in verb phrases reveals the grammatical and arbitrary nature of linguistic systems.
  4. Knowledge distillation aims at reducing model size without compromising much performance. Recent work has applied it to large vision-language (VL) Transformers, and has shown that attention maps in the multi-head attention modules of vision-language Transformers contain extensive intra-modal and cross-modal co-reference relations to be distilled. The standard approach is to apply a one-to-one attention map distillation loss, i.e., the Teacher's first attention head instructs the Student's first head, the second teaches the second, and so forth, but this only works when the numbers of attention heads in the Teacher and Student are the same. To remove this constraint, we propose a new Attention Map Alignment Distillation (AMAD) method for Transformers with multi-head attention, which works for a Teacher and a Student with different numbers of attention heads. Specifically, we soft-align different heads in Teacher and Student attention maps using a cosine similarity weighting. The Teacher head contributes more to the Student heads for which it has a higher similarity weight. Each Teacher head contributes to all the Student heads by minimizing the divergence between the attention activation distributions for the soft-aligned heads. No head is left behind. This distillation approach operates like cross-attention. We experiment on distilling VL-T5 and BLIP, and apply AMAD loss on their T5, BERT, and ViT sub-modules. We show, in the vision-language setting, that AMAD outperforms conventional distillation methods on VQA-2.0, COCO captioning, and Multi30K translation datasets. We further show that even without VL pre-training, the distilled VL-T5 models outperform corresponding VL pre-trained VL-T5 models that are further fine-tuned by ground-truth signals, and that fine-tuning distillation can also compensate to some degree for the absence of VL pre-training for BLIP models.
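The soft-alignment idea can be sketched numerically: flatten each head's attention map, compute cosine similarities between all teacher/student head pairs, turn each teacher head's similarities into softmax weights over student heads, and sum similarity-weighted divergence terms. This is a simplified NumPy sketch under stated assumptions (generalized KL on flattened maps, softmax weighting), not the authors' implementation.

```python
import numpy as np

# Simplified sketch of attention-map alignment distillation in the spirit
# of AMAD: every teacher head teaches every student head, weighted by the
# cosine similarity of their flattened attention maps.

def kl(p, q, eps=1e-9):
    """Generalized KL divergence term between two nonnegative arrays."""
    return np.sum(p * (np.log(p + eps) - np.log(q + eps)))

def amad_loss(teacher, student):
    """teacher: (Ht, N, N) attention maps; student: (Hs, N, N). Rows sum to 1."""
    t = teacher.reshape(teacher.shape[0], -1)
    s = student.reshape(student.shape[0], -1)
    # cosine similarity between every teacher/student head pair: (Ht, Hs)
    sim = (t @ s.T) / (np.linalg.norm(t, axis=1, keepdims=True)
                       * np.linalg.norm(s, axis=1))
    # softmax over student heads: each teacher head's alignment weights
    weights = np.exp(sim) / np.exp(sim).sum(axis=1, keepdims=True)
    loss = 0.0
    for i in range(t.shape[0]):          # each teacher head...
        for j in range(s.shape[0]):      # ...contributes to every student head
            loss += weights[i, j] * kl(t[i], s[j])
    return loss / t.shape[0]

rng = np.random.default_rng(0)
def rand_attn(heads, n):
    a = rng.random((heads, n, n))
    return a / a.sum(axis=-1, keepdims=True)  # rows sum to 1, like attention

T, S = rand_attn(4, 5), rand_attn(2, 5)  # 4 teacher heads, 2 student heads
print(amad_loss(T, S))
```

Because teacher and student head counts never have to match in this formulation, the one-to-one pairing constraint of standard attention distillation disappears.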
  5. Soo, Rachel; Chow, Una Y.; Nederveen, Sander (Eds.)
    In Harmonic Grammar and Optimality Theory, well-formed representations are those that optimally satisfy a set of violable constraints, as determined by candidate comparison under a given weighting or ranking. This paper develops variants of HG/OT in which conflict among violable constraints plays out locally, within each part of a representation, rather than through optimization over alternatives. Static HG/OT has a simple formal definition that has important precedents in classic and modern neural networks and that restricts the logical expressivity of constraint-interaction grammars. The static approach to constraint conflict is illustrated for local segmental phonology (the distribution of vowel height in Cochabamba Quechua) and unbounded feature spreading (nasal harmony as in Johore Malay). A Python implementation of the theory and several demonstrations are available at https://github.com/colincwilson/statgram.
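The contrast with candidate-comparison OT can be sketched as local, weighted constraint evaluation: harmony is computed within each position of a single representation rather than by ranking whole-candidate alternatives. The constraints and weights below are invented for illustration and this is not the statgram API.

```python
# Minimal sketch of local weighted-constraint evaluation in the spirit of
# Harmonic Grammar (illustrative constraints and weights; not statgram).
# Each constraint assigns a violation count to every position, and harmony
# is the negative weighted sum of violations at that position alone.

def star_high_vowel(segments):
    """One violation per high vowel (toy markedness constraint)."""
    return [1 if s in "iu" else 0 for s in segments]

def star_mid_vowel(segments):
    """One violation per mid vowel (toy markedness constraint)."""
    return [1 if s in "eo" else 0 for s in segments]

WEIGHTS = {star_high_vowel: 1, star_mid_vowel: 2}  # illustrative weights

def local_harmony(segments):
    """Per-position harmony: negative weighted sum of local violations."""
    return [-sum(w * con(segments)[i] for con, w in WEIGHTS.items())
            for i in range(len(segments))]

print(local_harmony(list("tiko")))  # [0, -1, 0, -2]
```

No alternative candidates are generated or compared: each position's harmony depends only on the violations incurred there, which is what makes the evaluation "static" in the sense the abstract describes.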