Title: Cloud-based Platform for Indigenous Language Sound Education
Blackfoot is challenging for English-speaking instructors and learners to acquire because it exhibits unique pitch patterns. This study presents MeTILDA (Melodic Transcription in Language Documentation and Application) as a solution to teaching pitch patterns distinct from English. Specifically, we explore ways to improve data visualization through a visualized pronunciation teaching guide called Pitch Art. The working materials can be downloaded or stored in the cloud for further use and collaboration. These features are designed to help teachers develop a curriculum for learning pronunciation and to provide students with an interactive, integrative learning environment for better understanding Blackfoot language and pronunciation.
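As a rough illustration of the kind of pitch-contour visualization that Pitch Art is built around, the sketch below extracts a fundamental-frequency (F0) contour from a recording and plots it as a simple melodic outline. It uses the parselmouth (Praat) and matplotlib libraries; the file name, time step, and plot styling are illustrative assumptions and do not reflect the MeTILDA implementation.

```python
# Minimal sketch: extract an F0 contour and draw a simplified pitch-contour outline.
# Assumptions: parselmouth (Praat bindings) and matplotlib are installed;
# "blackfoot_word.wav" is a placeholder file name, not a MeTILDA asset.
import parselmouth
import numpy as np
import matplotlib.pyplot as plt

snd = parselmouth.Sound("blackfoot_word.wav")
pitch = snd.to_pitch(time_step=0.01)          # F0 estimate every 10 ms
times = pitch.xs()
f0 = pitch.selected_array["frequency"]
f0[f0 == 0] = np.nan                          # unvoiced frames -> gaps in the contour

plt.figure(figsize=(8, 3))
plt.plot(times, f0, linewidth=3)
plt.xlabel("Time (s)")
plt.ylabel("F0 (Hz)")
plt.title("Simplified pitch contour (illustration only)")
plt.tight_layout()
plt.savefig("pitch_contour.png")
```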
Award ID(s):
2109437
PAR ID:
10533058
Publisher / Repository:
Association for Computational Linguistics
ISBN:
979-8-89176-086-8
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Moeller, Sarah; Agyapong, Godfred; Arppe, Antti; Chaudhary, Aditi; Rijhwani, Shruti; Cox, Christopher; Henke, Ryan; Palmer, Alexis; Rosenblum, Daisy; Schwartz, Lane (Ed.)
    Blackfoot is challenging for English-speaking instructors and learners to acquire because it exhibits unique pitch patterns. This study presents MeTILDA (Melodic Transcription in Language Documentation and Application) as a solution to teaching pitch patterns distinct from English. Specifically, we explore ways to improve data visualization through a visualized pronunciation teaching guide called Pitch Art. The working materials can be downloaded or stored in the cloud for further use and collaboration. These features are designed to help teachers develop a curriculum for learning pronunciation and to provide students with an interactive, integrative learning environment for better understanding Blackfoot language and pronunciation.
  2. Issues of intelligibility may arise amongst English learners when acquiring new words and phrases in North American academic settings, perhaps in part due to limited linguistic data available to the learner for understanding language use patterns. To this end, this paper examines the effects of Data-Driven Learning for Pronunciation (DDLfP) on lexical stress and prominence in the US academic context. Sixty-five L2 English learners in North American universities completed a diagnostic and pretest with listening and speaking items before completing four online lessons and a posttest on academic words and formulas (i.e., multi-word sequences). Experimental group participants (n = 40) practiced using an audio corpus of highly proficient L2 speakers, while comparison group participants (n = 25) were given teacher-created pronunciation materials. Logistic regression results indicated that the group that used the corpus significantly increased their recognition of prominence in academic formulas. In the spoken tasks, both groups improved in their lexical stress pronunciation, but only the DDLfP learners improved their production of prominence in academic formulas. Learners reported that they valued DDLfP efforts for pronunciation learning across contexts and speakers. Findings have implications for teachers of L2 pronunciation and support the use of corpora for language teaching and learning.
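The abstract above reports logistic regression results for recognition of prominence. A minimal sketch of that kind of group-by-time model, assuming a long-format table of item-level responses, might look like the following; the column names, file name, and model form are illustrative and are not the study's actual analysis.

```python
# Minimal sketch of a group-by-time logistic regression on item-level correctness.
# Assumptions: a CSV with illustrative columns
#   correct (0/1), group ("DDLfP" or "comparison"), time ("pre" or "post").
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("responses.csv")  # placeholder file name

model = smf.logit("correct ~ C(group) * C(time)", data=df).fit()
print(model.summary())             # interaction term: differential pre-to-post gain by group
```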
  3. Patterns, types, and causes of errors in children’s pronunciation can be more variable than in adults’ speech. In school settings, different specialists work with children depending on their needs, including speech-language pathology (SLP) professionals and English as a second language (ESL) teachers. Because children’s speech is so variable, it is often difficult to identify which specialist is better suited to address a child’s needs. Computers excel at pattern recognition and can be quickly trained to identify a wide array of pronunciation issues, making them strong candidates to help with the difficult problem of identifying the appropriate specialist. As part of a larger project to create an automated pronunciation diagnostic tool to help identify which specialist a child may need, we created a pronunciation test for children between 5 and 7 years old. We recorded 26 children with a variety of language backgrounds and SLP needs and then compared automatic evaluations of their pronunciation to human evaluations. While the human evaluations showed high agreement with one another, the automatic mispronunciation detection (MPD) system agreed with the human evaluations on less than 50% of phonemes overall. However, the MPD system showed consistent, albeit low, agreement across four subgroups of participants, with no clear biases. Due to this performance, we recommend further research on children’s pronunciation and on specialized MPD systems that account for their unique speech characteristics and developmental patterns.
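Phoneme-level agreement of the kind reported above is typically summarized as percent agreement, optionally alongside a chance-corrected statistic such as Cohen's kappa. The sketch below computes both from made-up judgment arrays; the labels and data are illustrative only, not the study's evaluations.

```python
# Minimal sketch: per-phoneme agreement between human and automatic judgments.
# Labels are illustrative (1 = mispronounced, 0 = acceptable), not study data.
import numpy as np
from sklearn.metrics import cohen_kappa_score

human     = np.array([0, 1, 0, 0, 1, 1, 0, 0])   # human judgments per phoneme
automatic = np.array([0, 1, 1, 0, 0, 1, 1, 0])   # MPD system judgments per phoneme

percent_agreement = np.mean(human == automatic)
kappa = cohen_kappa_score(human, automatic)       # chance-corrected agreement

print(f"Percent agreement: {percent_agreement:.2%}")
print(f"Cohen's kappa: {kappa:.2f}")
```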
  4.
    Automatic pronunciation assessment (APA) plays an important role in providing feedback for self-directed language learners in computer-assisted pronunciation training (CAPT). Several mispronunciation detection and diagnosis (MDD) systems have achieved promising performance based on end-to-end phoneme recognition. However, assessing the intelligibility of second language (L2) speech remains a challenging problem. One issue is the lack of large-scale labeled speech data from non-native speakers. Additionally, relying on only one aspect (e.g., accuracy) at a phonetic level may not provide a sufficient assessment of pronunciation quality and L2 intelligibility. It is possible to leverage segmental/phonetic-level features such as goodness of pronunciation (GOP); however, feature granularity may cause a discrepancy in prosodic-level (suprasegmental) pronunciation assessment. In this study, a Wav2vec 2.0-based MDD system and a GOP feature-based Transformer are employed to characterize L2 intelligibility. Here, an L2 speech dataset with human-annotated prosodic (suprasegmental) labels is used for multi-granular and multi-aspect pronunciation assessment and for identifying the factors important for intelligibility in L2 English speech. The study provides a comparative assessment of automated pronunciation scores in relation to suprasegmental features and listener perceptions, which taken together can help support the development of instantaneous assessment tools and solutions for L2 learners.
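Goodness of pronunciation (GOP) features like those mentioned above are commonly derived from frame-level phone posteriors as an average log ratio between the canonical phone's posterior and the best competing phone. The sketch below shows that standard formulation with random placeholder posteriors; it is not the MDD or Transformer system described in the abstract.

```python
# Minimal sketch of a standard GOP computation from frame-level phone posteriors.
# The posterior matrix is random placeholder data, not output from the described system.
import numpy as np

rng = np.random.default_rng(0)
num_frames, num_phones = 12, 40
posteriors = rng.dirichlet(np.ones(num_phones), size=num_frames)  # (frames, phones), rows sum to 1

def gop(posteriors: np.ndarray, canonical_phone: int) -> float:
    """Average log ratio of the canonical phone's posterior to the best competing posterior."""
    canonical = posteriors[:, canonical_phone]
    best = posteriors.max(axis=1)
    return float(np.mean(np.log(canonical / best)))  # 0 when the canonical phone wins every frame

print(gop(posteriors, canonical_phone=5))
```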
  5. To efficiently recognize words, children learning an intonational language like English should avoid interpreting pitch‐contour variation as signaling lexical contrast, despite the relevance of pitch at other levels of structure. Thus far, the developmental time‐course with which English‐learning children rule out pitch as a contrastive feature has been incompletely characterized. Prior studies have tested diverse lexical contrasts and have not tested beyond 30 months. To specify the developmental trajectory over a broader age range, we extended a prior study (Quam & Swingley, 2010), in which 30‐month‐olds and adults disregarded pitch changes, but attended to vowel changes, in newly learned words. Using the same phonological contrasts, we tested 3‐ to 5‐year‐olds, 24‐month‐olds, and 18‐month‐olds. The older two groups were tested using the language‐guided‐looking method. The oldest group attended to vowels but not pitch. Surprisingly, 24‐month‐olds ignored not just pitch but sometimes vowels as well, conflicting with prior findings of phonological constraint at 24 months. The youngest group was tested using the Switch habituation method, half with additional phonetic variability in training. Eighteen‐month‐olds learned both pitch‐contrasted and vowel‐contrasted words, whether or not additional variability was present. Thus, native‐language phonological constraint was not evidenced prior to 30 months (Quam & Swingley, 2010). We contextualize our findings within other recent work in this area.