Issues of intelligibility may arise amongst English learners when acquiring new words and phrases in North American academic settings, perhaps in part due to the limited linguistic data available to the learner for understanding language use patterns. To this end, this paper examines the effects of Data-Driven Learning for Pronunciation (DDLfP) on lexical stress and prominence in the US academic context. Sixty-five L2 English learners in North American universities completed a diagnostic and a pretest with listening and speaking items before completing four online lessons and a posttest on academic words and formulas (i.e., multi-word sequences). Experimental group participants (n = 40) practiced using an audio corpus of highly proficient L2 speakers, while comparison group participants (n = 25) were given teacher-created pronunciation materials. Logistic regression results indicated that the group who used the corpus significantly increased their recognition of prominence in academic formulas. In the spoken tasks, both groups improved in their lexical stress pronunciation, but only the DDLfP learners improved their production of prominence in academic formulas. Learners reported that they valued DDLfP efforts for pronunciation learning across contexts and speakers. Findings have implications for teachers of L2 pronunciation and support the use of corpora for language teaching and learning.
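The abstract names a logistic regression on recognition outcomes but not its specification, so the sketch below is a minimal, hypothetical version: item-level correctness predicted by a group-by-time interaction. The data frame, column names, and values are invented for illustration only.

```python
# Hypothetical sketch of the group-by-time logistic regression described above.
# The data and column names (correct, group, time) are illustrative, not the
# study's actual dataset or model specification.
import pandas as pd
import statsmodels.formula.api as smf

# Toy item-level responses: 1 = prominence recognized correctly, 0 = not.
data = pd.DataFrame({
    "correct": [0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1],
    "group":   ["ddlfp"] * 6 + ["comparison"] * 6,
    "time":    ["pre", "pre", "pre", "post", "post", "post"] * 2,
})

# A positive group-by-time interaction for the DDLfP group at posttest would
# mirror the gain in prominence recognition reported above.
model = smf.logit("correct ~ C(group) * C(time)", data=data).fit(disp=False)
print(model.summary())
```

-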
This paper extends the use of Generalizability Theory to the measurement of extemporaneous L2 speech through the lens of speech perception. Using six datasets from previous studies, it reports on G studies, which partition measurement variance into its sources, and D studies, which predict the effect on reliability of modifying the number of raters, items, or other facets; together these help the field adopt measurement designs that include comprehensibility, accentedness, and intelligibility. When data from a single audio sample per learner were subjected to D studies, both semantic differential and rubric scales for comprehensibility were reliable at the .90 level with about 15 trained raters or 50 untrained crowdsourced raters. To support generalizable and dependable evaluations, empirically informed recommendations are given, including considerations for the number of speech samples rated and the granularity of the scales for various assessment and research purposes.
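To make the G/D-study machinery concrete, here is a minimal sketch for a fully crossed persons-by-raters design. The variance components are hypothetical numbers (chosen so that roughly 15 raters reach the .90 level, echoing the pattern reported above), not values estimated from the six datasets.

```python
# Minimal D-study sketch for a fully crossed persons x raters (p x r) design.
# The variance components are hypothetical, not estimates from the paper.

def g_coefficient(var_p: float, var_pr: float, n_raters: int) -> float:
    """Generalizability coefficient for relative decisions: person variance
    over person variance plus rater-averaged interaction/error variance."""
    return var_p / (var_p + var_pr / n_raters)

var_p = 1.2   # variance attributable to true speaker differences (hypothetical)
var_pr = 2.0  # person-by-rater interaction / residual error (hypothetical)

# The D study projects reliability as the number of raters is varied.
for n in (1, 5, 15, 50):
    print(f"{n:>2} raters -> E(rho^2) = {g_coefficient(var_p, var_pr, n):.3f}")
```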
-
Automatic pronunciation assessment (APA) plays an important role in providing feedback to self-directed language learners in computer-assisted pronunciation training (CAPT). Several mispronunciation detection and diagnosis (MDD) systems have achieved promising performance based on end-to-end phoneme recognition. However, assessing the intelligibility of second language (L2) speech remains a challenging problem. One issue is the lack of large-scale labeled speech data from non-native speakers. Additionally, relying on only one aspect (e.g., accuracy) at the phonetic level may not provide a sufficient assessment of pronunciation quality and L2 intelligibility. It is possible to leverage segmental/phonetic-level features such as goodness of pronunciation (GOP); however, feature granularity may cause a discrepancy in prosodic-level (suprasegmental) pronunciation assessment. In this study, a Wav2vec 2.0-based MDD model and a GOP feature-based Transformer are employed to characterize L2 intelligibility. An L2 speech dataset with human-annotated prosodic (suprasegmental) labels is used for multi-granular and multi-aspect pronunciation assessment and for identifying factors important for intelligibility in L2 English speech. The study provides a comparative assessment of automated pronunciation scores against the relationship between suprasegmental features and listener perceptions, which taken together can help support the development of instantaneous assessment tools and solutions for L2 learners.
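For readers unfamiliar with GOP, the sketch below computes the classic posterior-ratio formulation: the duration-normalized log ratio of the canonical phone's posterior to the best-scoring phone at each frame. The posterior matrix is random stand-in data, not output from the models used in the study.

```python
# Sketch of a classic Goodness of Pronunciation (GOP) score computed from
# frame-level phoneme posteriors. The posteriors below are random placeholders;
# in practice they would come from an acoustic model.
import numpy as np

def gop_score(posteriors: np.ndarray, canonical_phone: int) -> float:
    """Mean over frames of log P(canonical) - log P(best phone).
    Scores near 0 look canonical; large negative scores suggest
    mispronunciation. posteriors: (frames, phones), rows summing to 1."""
    frame_scores = (np.log(posteriors[:, canonical_phone] + 1e-10)
                    - np.log(posteriors.max(axis=1) + 1e-10))
    return float(frame_scores.mean())

rng = np.random.default_rng(0)
raw = rng.random((20, 40))                         # 20 frames, 40-phone inventory
posteriors = raw / raw.sum(axis=1, keepdims=True)  # normalize rows to posteriors
print(f"GOP for phone 7: {gop_score(posteriors, canonical_phone=7):.3f}")
```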
-
Various aspects of second language (L2) speakers' pronunciation can be considered in the oral assessment of speaker proficiency. Over time, both segmentals and suprasegmentals have been examined for their roles in judgments of accented speech. Descriptors in the rating criteria often include the speaker's intelligibility (i.e., the actual understanding of the utterance) or comprehensibility (i.e., ease of understanding) (Derwing & Munro, 2005). This paper discusses current issues and rating criteria in L2 pronunciation assessment and describes the prominent characteristics of L2 intelligibility. It also offers recommendations to inform assessment practices and curriculum development in L2 classrooms in the context of Global Englishes.
-
This study investigates the relationships between the background variables of adult English for Speakers of Other Languages (ESOL) learners and their performance with a mobile app designed to promote pronunciation skills, targeting features known to contribute to intelligibility. Recruited from free evening classes for English learners, 34 adult ESOL learners of mixed ESOL learning experience, age, length of residency, and first language (L1) completed six phoneme pair lessons on the mobile app along with a background questionnaire and a technology acceptance survey (Venkatesh et al., 2012). A series of Linear Mixed-Effects Model (LMEM) analyses were performed on learner background variables, technology acceptance, learner effort, and accuracy. The results showed a minimal relationship between age, technology acceptance, and effort (7.68%) but a moderate to large relationship between age, technology acceptance, and accuracy for consonants (39.70%) and vowels (64.26%). The implication is that learners' use of mobile devices for L2 pronunciation training is moderated by various learner-related factors, and the findings offer supporting evidence for designing mobile-based applications for a wide variety of learner backgrounds.
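As a rough illustration of the LMEM analyses described above (the abstract does not give the model formulas), the sketch below fits one such model with statsmodels: accuracy predicted by age and technology acceptance, with a random intercept per learner across repeated lessons. All column names and values are invented.

```python
# Hypothetical sketch of a linear mixed-effects model in the spirit of the
# analyses above: accuracy ~ age + technology acceptance, with a random
# intercept for each learner. Data and column names are invented.
import pandas as pd
import statsmodels.formula.api as smf

data = pd.DataFrame({
    "learner_id":  [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6],
    "accuracy":    [0.62, 0.70, 0.55, 0.61, 0.80, 0.84,
                    0.47, 0.52, 0.73, 0.77, 0.66, 0.71],
    "age":         [24, 24, 31, 31, 45, 45, 29, 29, 52, 52, 38, 38],
    "tech_accept": [4.1, 4.1, 3.2, 3.2, 4.8, 4.8, 2.9, 2.9, 3.9, 3.9, 3.5, 3.5],
})

# groups= assigns each learner a random intercept over repeated measurements.
model = smf.mixedlm("accuracy ~ age + tech_accept", data,
                    groups=data["learner_id"]).fit()
print(model.summary())
```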
-
Current leading mispronunciation detection and diagnosis (MDD) systems achieve promising performance via end-to-end phoneme recognition. One challenge for such end-to-end solutions is the scarcity of human-annotated phonemes for natural L2 speech. In this work, we leverage unlabeled L2 speech via a pseudo-labeling (PL) procedure and extend the fine-tuning approach based on pre-trained self-supervised learning (SSL) models. Specifically, we use Wav2vec 2.0 as our SSL model and fine-tune it using the original labeled L2 speech samples plus the created pseudo-labeled L2 speech samples. Our pseudo labels are dynamic and are produced on the fly by an ensemble of the online model, which ensures that our model is robust to pseudo-label noise. We show that fine-tuning with pseudo labels achieves a 5.35% phoneme error rate reduction and a 2.48% MDD F1 score improvement over a baseline fine-tuned on labeled samples only. The proposed PL method also outperforms conventional offline PL methods. Compared to state-of-the-art MDD systems, our MDD solution produces a more accurate and consistent phonetic error diagnosis. In addition, we conduct an open test on a separate UTD-4Accents dataset, where our system's recognition outputs show a strong correlation with human perception based on accentedness and intelligibility.
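The dynamic pseudo-labeling loop can be sketched generically. Below, a tiny GRU encoder stands in for Wav2vec 2.0, and an exponential-moving-average (EMA) teacher stands in for the paper's on-the-fly ensemble of the online model (EMA being one common realization of such an ensemble); the features and labels are random placeholders, not real L2 speech.

```python
# Schematic dynamic pseudo-labeling for CTC fine-tuning. A tiny GRU encoder
# stands in for Wav2vec 2.0; an EMA teacher stands in for the on-the-fly
# ensemble of the online model. All data below are random placeholders.
import copy
import torch
import torch.nn as nn

BLANK = 40  # CTC blank index; phone labels occupy 0..39

class TinyEncoder(nn.Module):
    def __init__(self, n_phones: int = 40, feat_dim: int = 16):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, 32, batch_first=True)
        self.head = nn.Linear(32, n_phones + 1)  # +1 for the CTC blank

    def forward(self, x):
        h, _ = self.rnn(x)
        return self.head(h).log_softmax(dim=-1)

student = TinyEncoder()
teacher = copy.deepcopy(student)            # slowly updated EMA "ensemble"
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
ctc = nn.CTCLoss(blank=BLANK, zero_infinity=True)

labeled_x = torch.randn(2, 50, 16)          # stand-in acoustic features
labeled_y = torch.randint(0, 40, (2, 12))   # stand-in phoneme transcripts
unlabeled_x = torch.randn(2, 50, 16)        # "unlabeled L2 speech"

for step in range(10):
    # 1) Teacher greedily decodes unlabeled speech into pseudo phoneme labels.
    with torch.no_grad():
        greedy = teacher(unlabeled_x).argmax(dim=-1)
    pseudo = []
    for seq in greedy:
        u = torch.unique_consecutive(seq)   # collapse CTC repeats
        pseudo.append(u[u != BLANK])        # drop blanks

    # 2) Student is fine-tuned on labeled plus pseudo-labeled samples.
    loss = ctc(student(labeled_x).transpose(0, 1), labeled_y,
               torch.full((2,), 50), torch.full((2,), 12))
    for x, y in zip(unlabeled_x, pseudo):
        if len(y) > 0:
            log_probs = student(x.unsqueeze(0)).transpose(0, 1)
            loss = loss + ctc(log_probs, y.unsqueeze(0),
                              torch.tensor([50]), torch.tensor([len(y)]))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # 3) EMA update: the teacher tracks an average of past student states,
    #    keeping the pseudo labels dynamic and robust to label noise.
    with torch.no_grad():
        for tp, sp in zip(teacher.parameters(), student.parameters()):
            tp.mul_(0.99).add_(sp, alpha=0.01)
```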