Many people have claimed that sleep has helped them solve a difficult problem, but empirical support for this assertion remains tentative. The current experiment tested whether manipulating information processing during sleep impacts problem incubation and solving. In memory studies, delivering learning-associated sound cues during sleep can reactivate memories. We therefore predicted that reactivating previously unsolved problems could help people solve them. In the evening, we presented 57 participants with puzzles, each arbitrarily associated with a different sound. While participants slept overnight, half of the sounds associated with the puzzles they had not solved were surreptitiously presented. The next morning, participants solved 31.7% of cued puzzles, compared with 20.5% of uncued puzzles (a 55% improvement). Moreover, cued-puzzle solving correlated with cued-puzzle memory. Overall, these results demonstrate that cuing puzzle information during sleep can facilitate solving, thus supporting sleep’s role in problem incubation and establishing a new technique to advance understanding of problem solving and sleep cognition. 
                            Overlearning of non-native speech sounds does not result in superior consolidation after a period of sleep
                        
                    
    
Recent studies suggest that sleep-mediated consolidation processes help adults learn non-native speech sounds. However, overnight improvement was not seen when participants learned in the morning, perhaps because of native-language interference. The current study trained participants to perceive the Hindi dental/retroflex contrast in the morning and tested whether increased training leads to overnight improvement. Results showed overnight improvement regardless of the amount of training. In contrast to previous studies, participants in this study heard the sounds in limited contexts (i.e., one talker and one vowel context), corroborating other findings that suggest overnight improvement in non-native phonetic learning occurs when stimulus variability is limited.
- Award ID(s): 1735225
- PAR ID: 10593610
- Publisher / Repository: Acoustical Society of America (ASA)
- Date Published:
- Journal Name: The Journal of the Acoustical Society of America
- Volume: 147
- Issue: 3
- ISSN: 0001-4966
- Format(s): Medium: X
- Size(s): p. EL289-EL294
- Sponsoring Org: National Science Foundation
More Like this
- The extent to which articulatory information embedded in incoming speech contributes to the formation of new perceptual categories for speech sounds has been a matter of discourse for decades. It has been theorized that the acquisition of new speech sound categories requires a network of sensory and speech motor cortical areas (the “dorsal stream”) to successfully integrate auditory and articulatory information. However, it is possible that these brain regions are not sensitive specifically to articulatory information, but instead are sensitive to the abstract phonological categories being learned. We tested this hypothesis by training participants over the course of several days on an articulable non-native speech contrast and acoustically matched inarticulable nonspeech analogues. After participants reached comparable levels of proficiency with the two sets of stimuli, activation was measured with fMRI as they passively listened to both sound types. Decoding of category membership for the articulable speech contrast alone revealed a series of left and right hemisphere regions outside of the dorsal stream that have previously been implicated in the emergence of non-native speech sound categories, while no regions could successfully decode the inarticulable nonspeech contrast. Although activation patterns in the left inferior frontal gyrus (IFG), the middle temporal gyrus (MTG), and the supplementary motor area (SMA) provided better information for decoding articulable (speech) sounds compared to the inarticulable (sine wave) sounds, the finding that dorsal stream regions do not emerge as good decoders of the articulable contrast alone suggests that other factors, including the strength and structure of the emerging speech categories, are more likely drivers of dorsal stream activation for novel sound learning.
- Adults struggle to learn non-native speech categories in many experimental settings (Goto, 1971), but learn efficiently in a video game paradigm where non-native speech sounds have functional significance (Lim and Holt, 2011). Behavioral and neural evidence from this and other paradigms points toward the involvement of reinforcement learning mechanisms in speech category learning. We formalize this hypothesis computationally and present two simulations. The first simulates the findings of Lim et al. (2019), providing proof in principle that a reinforcement learning algorithm can successfully capture human results in a video game where people are learning novel categories of noise tokens. Our second simulation extends this to speech sounds and demonstrates that our algorithm mimics second language learners' improvement on discrimination of a non-native speech contrast. Together, these two simulations show that reinforcement learning provides an accurate model of human learning in this paradigm and provide evidence supporting the hypothesis that this mechanism could play a key role in effective speech category learning in adults. Being able to identify the algorithms employed in this paradigm could provide many avenues for pedagogical changes in second language learning and let teachers harness the processes that allow for efficient learning and improvement of non-native perceptual ability. (A minimal illustrative sketch of this kind of reinforcement-learning simulation appears after this list.)
- Daily foraging activity of small wintering birds is classically thought to be driven by the need to gather enough energy reserves to survive each night. A separate line of research has shown that sociality is a major driver of winter foraging activities in many species. Here, we used wintering birds as a study system to move toward an integrative understanding of the influence of energy requirements and sociality on foraging ecology. We used RFID-enabled feeders in Lincoln, Nebraska, USA in January–March 2019 to measure foraging activity in two species (downy woodpeckers, Dryobates pubescens, and white-breasted nuthatches, Sitta carolinensis). We analyzed the relationship between overnight temperature and morning foraging activity and found that the lowest overnight temperature was only weakly correlated with morning visitation at feeders. We then used a network approach to ask whether flock associations explain similarity in birds' foraging activity. In both species, individuals with stronger associations in a social network were more likely to share similar feeder activity, and an index of social partners' activity explained foraging activity better than overnight temperature did (see the sketch of such an association index after this list). This brings forth new questions about the interplay between individual response to temperature and social factors in shaping how small animals cope with harsh winter conditions.
- Lovable robots in movies regularly beep, chirp, and whirr, yet robots in the real world rarely deploy such sounds. Despite preliminary work supporting the perceptual and objective benefits of intentionally produced robot sound, relatively little research is ongoing in this area. In this paper, we systematically evaluate transformative robot sound across multiple robot archetypes and behaviors. We conducted a series of five online video-based surveys, each with N ≈ 100 participants, to better understand the effects of musician-designed transformative sounds on perceptions of personal, service, and industrial robots. Participants rated robot videos with transformative sound as significantly happier, warmer, and more competent in all five studies, as more energetic in four studies, and as less discomforting in one study. Overall, results confirmed that transformative sounds consistently improve subjective ratings but may convey affect contrary to the intent of affective robot behaviors. In future work, we will investigate the repeatability of these results through in-person studies and develop methods to automatically generate transformative robot sound. This work may benefit researchers and designers who aim to make robots more favorable to human users.
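The reinforcement-learning abstract above does not specify its model, so the following is only a minimal sketch of how reward-driven category learning of that kind can be simulated. The stimulus representation, learning rule, and all parameters below are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of reinforcement-driven auditory category learning.
# NOT the published model: feature vectors, softmax choice, and the
# reward-prediction-error update are assumed for illustration only.
import numpy as np

rng = np.random.default_rng(0)

N_FEATURES = 8      # assumed dimensionality of an acoustic feature vector
N_CATEGORIES = 2    # e.g., a two-way non-native contrast
ALPHA = 0.1         # assumed learning rate

# Prototype feature vectors for the two categories (stand-ins for real acoustics).
prototypes = rng.normal(size=(N_CATEGORIES, N_FEATURES))

# Linear value weights: estimated reward for choosing each category given a stimulus.
weights = np.zeros((N_CATEGORIES, N_FEATURES))

def sample_stimulus():
    """Draw a noisy exemplar from a randomly chosen category."""
    cat = rng.integers(N_CATEGORIES)
    return cat, prototypes[cat] + rng.normal(scale=0.5, size=N_FEATURES)

accuracy = []
for trial in range(2000):
    true_cat, x = sample_stimulus()
    values = weights @ x                             # estimated reward per action
    shifted = values - values.max()                  # numerical stability for softmax
    probs = np.exp(shifted) / np.exp(shifted).sum()  # softmax action selection
    choice = rng.choice(N_CATEGORIES, p=probs)
    reward = 1.0 if choice == true_cat else 0.0      # "functional" feedback, as in the game
    # Reward-prediction-error update for the chosen action only.
    weights[choice] += ALPHA * (reward - values[choice]) * x
    accuracy.append(reward)

print("accuracy, first 200 trials:", np.mean(accuracy[:200]))
print("accuracy, last 200 trials: ", np.mean(accuracy[-200:]))
```

In this toy setup, choice accuracy climbs from chance toward ceiling as the prediction-error updates sharpen the category boundary, which is the qualitative pattern the simulations described above aim to capture with real listeners' data.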
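The wintering-bird abstract above mentions an association network and an index of social partners' activity without giving formulas. Below is a hedged reconstruction of one common way such quantities can be computed from feeder-visit records; the data format, the 60-second co-occurrence window, and the weighting scheme are assumptions for illustration, not the authors' analysis code.

```python
# Sketch: association strength from co-occurrence at feeders, then an
# association-weighted index of each bird's social partners' activity.
from collections import defaultdict
from itertools import combinations

# (bird_id, feeder_id, timestamp_in_seconds) visit records -- toy data.
visits = [
    ("A", "f1", 0), ("B", "f1", 20), ("A", "f1", 400),
    ("C", "f2", 10), ("B", "f2", 30), ("C", "f2", 45),
]
WINDOW = 60  # assumed: visits at the same feeder within 60 s count as co-occurrence

association = defaultdict(float)
for (b1, feeder1, t1), (b2, feeder2, t2) in combinations(visits, 2):
    if b1 != b2 and feeder1 == feeder2 and abs(t1 - t2) <= WINDOW:
        association[frozenset((b1, b2))] += 1.0

# Morning foraging activity: total visits per bird (the quantity to be explained).
activity = defaultdict(int)
for bird, _, _ in visits:
    activity[bird] += 1

def partner_activity_index(bird):
    """Association-weighted mean activity of a bird's social partners."""
    weighted_sum, weight_total = 0.0, 0.0
    for pair, w in association.items():
        if bird in pair:
            (partner,) = pair - {bird}
            weighted_sum += w * activity[partner]
            weight_total += w
    return weighted_sum / weight_total if weight_total else float("nan")

for bird in sorted(activity):
    print(bird, "visits:", activity[bird],
          "partner index:", round(partner_activity_index(bird), 2))
```

The partner index computed this way can then be compared against overnight temperature as a predictor of each bird's morning visitation, which is the kind of comparison the abstract reports.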