Abstract

Background: Electrocorticography (ECoG) language mapping is often performed extraoperatively and frequently involves offline processing, and its correspondence with direct cortical stimulation (DCS) remains variable. We sought to determine the feasibility and preliminary utility of an intraoperative language-mapping approach guided by real-time visualization of electrocorticograms.

Methods: A patient with astrocytoma underwent awake craniotomy with intraoperative language mapping, using a dual-iPad stimulus presentation system coupled to a real-time neural signal processing platform capable of both ECoG recording and delivery of DCS. Gamma-band modulations in response to four language tasks were visualized in real time at each electrode. Next, DCS was conducted for each neighboring electrode pair during language tasks.

Results: All language tasks produced the strongest heat-map activation at an electrode pair in the anterior-to-mid superior temporal gyrus. Consistent speech arrest during DCS was observed for the Object and Action naming tasks at these same electrodes, indicating good correspondence with the ECoG heat maps. This region also corresponded well with posterior language representation on preoperative functional MRI.

Conclusions: Intraoperative real-time visualization of language task-based ECoG gamma-band modulation is feasible and may help identify targets for DCS. If validated, this approach may improve the efficiency and accuracy of intraoperative language mapping.
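To make the kind of processing described above concrete, the sketch below computes a per-electrode task-versus-baseline gamma-band modulation score, the quantity a real-time heat map would display. This is a minimal illustration, not the authors' pipeline: the sampling rate, band edges, filter order, window placement, and array shapes are all assumptions.

```python
# Minimal sketch of task-based gamma-band heat mapping. NOT the authors'
# pipeline: band edges, windows, and shapes below are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

FS = 1000                      # assumed ECoG sampling rate (Hz)
GAMMA = (70.0, 110.0)          # assumed high-gamma band of interest (Hz)

def gamma_power(x, fs=FS, band=GAMMA):
    """Instantaneous gamma-band power of one electrode's trace."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    analytic = hilbert(filtfilt(b, a, x))     # band-pass, then analytic signal
    return np.abs(analytic) ** 2

def heat_map_values(ecog, task_idx, base_idx):
    """Per-electrode task-vs-baseline gamma modulation (ecog: channels x samples)."""
    scores = []
    for ch in ecog:
        p = gamma_power(ch)
        scores.append(p[task_idx].mean() / p[base_idx].mean())  # modulation ratio
    return np.array(scores)

# Toy usage: 8 electrodes, 10 s of synthetic data.
rng = np.random.default_rng(0)
ecog = rng.standard_normal((8, 10 * FS))
task = slice(5 * FS, 10 * FS)   # assumed task window
base = slice(0, 5 * FS)         # assumed baseline window
print(heat_map_values(ecog, task, base))
```

Electrodes whose ratio stands out across all four tasks would be the natural first candidates for confirmatory DCS.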
Exploring language: The journey from Logo to Snap!
The functions originally described in Paul Goldenberg's "Exploring Language with Logo" are compared with modern implementations of the same concepts in Snap!, a blocks-based programming language developed at the University of California, Berkeley.
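A classic exercise in that tradition is composing random sentences from smaller phrase procedures. Since Logo code and Snap! blocks do not render well here, the sketch below is a Python analogue of the idea; the grammar and word lists are placeholders, not the book's originals.

```python
# Python analogue of the Logo/Snap! sentence-generator exercise from the
# "Exploring Language" tradition; word lists are illustrative placeholders.
import random

NOUNS = ["cat", "turtle", "poem"]
VERBS = ["chases", "draws", "reads"]
DETERMINERS = ["the", "a"]

def noun_phrase():
    return f"{random.choice(DETERMINERS)} {random.choice(NOUNS)}"

def sentence():
    """Compose a sentence from smaller phrases, as Logo does with
    procedures and Snap! does with reporter blocks."""
    return f"{noun_phrase()} {random.choice(VERBS)} {noun_phrase()}"

print(sentence())  # e.g. "the turtle draws a poem"
```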
- Award ID(s): 1842342
- PAR ID: 10196418
- Date Published:
- Journal Name: Contemporary Issues in Technology and Teacher Education
- Volume: 20
- Issue: 3
- ISSN: 1528-5804
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- A large body of research shows connections between infants' and toddlers' home language input and a wide range of receptive and expressive early language skills. Some facets of caretaker input and early language skills are associated with socioeconomic status (SES), though not all. Given the complexity of language learning, language use, and its many pathways of connection to SES, testing causal links between these dimensions is difficult at best. Interventions aimed at changing parent language use have seen mixed success, in part because "language infusions" generally fail to target the underlying challenges facing under-resourced families, and perhaps because parent language is the wrong target. System-level interventions such as paid parental leave and the expansion and enrichment of childcare and early-education options hold greater promise for improving families' lives, with positive repercussions for a broad range of family and child outcomes, including linguistic ones.
- In this article, we share examples from our project, Participating in Literacies and Computer Science (PiLaCS), which focuses on how students' language practices shape their participation and engagement in language arts projects that integrate code. One section, "Integrating Code into Language Arts: Ashley's Multimodal Translanguaging Approach," describes how Ashley, tasked with fulfilling her school's commitment to CS for All within her sixth-grade bilingual language arts class, chose to teach a unit with Scratch (https://scratch.mit.edu/), a software and programming language created at the Massachusetts Institute of Technology to support creative approaches to code. The examples from Ashley's class demonstrate how a CS-integrated language arts curriculum gave her students space to engage, create, and communicate using language, text, and their bodies in dynamic expressions. Álvaro's dynamic expression of sliding across the room animated his understanding of the connection between the text and the code, showing how integrating code into language arts provides a forum for students' language practices to be integrated and validated.
- We propose TuringAdvice, a new challenge task and dataset for language understanding models. Given a written situation that a real person is currently facing, a model must generate helpful advice in natural language. Our evaluation framework tests a fundamental aspect of human language understanding: our ability to use language to resolve open-ended situations by communicating with each other. Empirical results show that today's models struggle at TuringAdvice, even multibillion-parameter models finetuned on 600k in-domain training examples. The best model, T5, writes advice that is at least as helpful as human-written advice in only 14% of cases; a much larger non-finetunable GPT-3 model does even worse, at 4%. This low performance reveals language understanding errors that are hard to spot outside of a generative setting, showing much room for progress. (A schematic of this pairwise helpfulness metric appears after this list.)
- Oh, Alice; Naumann, Tristan; Globerson, Amir; Saenko, Kate; Hardt, Moritz; Levine, Sergey (Eds.). Diffusion models have achieved great success in modeling continuous data modalities such as images, audio, and video, but have seen limited use in discrete domains such as language. Recent attempts to adapt diffusion to language have presented diffusion as an alternative to existing pretrained language models. We view diffusion and existing language models as complementary. We demonstrate that encoder-decoder language models can be utilized to efficiently learn high-quality language autoencoders. We then demonstrate that continuous diffusion models can be learned in the latent space of the language autoencoder, enabling us to sample continuous latent representations that can be decoded into natural language with the pretrained decoder. We validate the effectiveness of our approach for unconditional, class-conditional, and sequence-to-sequence language generation. We demonstrate across multiple diverse datasets that our latent language diffusion models are significantly more effective than previous diffusion language models. Our code is available at https://github.com/justinlovelace/latent-diffusion-for-language. (A schematic of this latent sampling-and-decoding loop appears after this list.)
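The TuringAdvice numbers above come down to one pairwise statistic: the share of situations where raters judge the model's advice at least as helpful as the human-written reply. The sketch below shows that aggregation; the data structures are assumptions, not the paper's actual format.

```python
# Sketch of the TuringAdvice-style pairwise metric: fraction of situations
# where model advice ties or beats the human reply. Structures are assumed.
from dataclasses import dataclass

@dataclass
class Judgment:
    situation_id: str
    model_at_least_as_helpful: bool  # rater's pairwise verdict

def helpfulness_rate(judgments):
    """Fraction of situations where the model ties or beats the human."""
    wins = sum(j.model_at_least_as_helpful for j in judgments)
    return wins / len(judgments)

# Toy usage mirroring the reported numbers (T5 ~14%, GPT-3 ~4%).
toy = [Judgment(str(i), i < 14) for i in range(100)]
print(f"{helpfulness_rate(toy):.0%}")  # -> 14%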
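The latent-diffusion abstract describes a two-stage sampling procedure: run reverse diffusion in the autoencoder's continuous latent space, then decode the resulting latent with the pretrained decoder. The sketch below is schematic only; the denoiser and decoder are stand-ins, and the real implementation lives in the authors' repository linked above.

```python
# Schematic of the latent-diffusion-for-language sampling loop: denoise a
# continuous latent, then decode it with a pretrained LM decoder. All
# components here are stand-ins, not the authors' code.
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM = 64
STEPS = 50

def denoise(z_t, t):
    """Stand-in for the learned diffusion denoiser eps_theta(z_t, t)."""
    return 0.1 * z_t  # placeholder prediction of the noise component

def decode(z_0):
    """Stand-in for the pretrained autoencoder's decoder."""
    return f"<decoded text from latent with norm {np.linalg.norm(z_0):.2f}>"

# Reverse diffusion: start from pure noise, iteratively remove predicted noise.
z = rng.standard_normal(LATENT_DIM)
for t in reversed(range(STEPS)):
    z = z - denoise(z, t)          # simplified DDPM-style update
print(decode(z))
```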