This corpus was collected in the Language Sciences Research Lab, a working lab embedded in a science museum: the Center of Science and Industry in Columbus, Ohio, USA. Participants were recruited from the museum floor and run in a semi-public space. Three distinctive features of the corpus are: (1) an interactive social robot (specifically, a Jibo robot) was present and participated in the sessions for roughly half the children; (2) all children were recorded with a lapel microphone generating high-quality audio (available through CHILDES) as well as a distal table microphone generating low-quality audio (available on request), to facilitate strong tests of automated speech processing on the data; and (3) the data were collected in the peri-pandemic period, beginning in the summer of 2021, just as COVID-19 restrictions were being eased, and ending in the summer of 2022, thus providing a snapshot of language development during a distinctive period. A YouTube video on the Jibo robot is available here.
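Since the lapel-mic transcripts are distributed through CHILDES, they can be read with standard CHAT-parsing tools. Below is a minimal sketch using the pylangacq library; the abstract does not give the corpus's exact repository path, so the well-known Brown corpus URL is used as a stand-in to make the sketch runnable.

```python
# Minimal sketch: reading a CHILDES corpus with pylangacq.
# NOTE: the URL below is a stand-in (Brown corpus) -- substitute the
# actual CHILDES download link for this corpus once published.
import pylangacq

CORPUS_URL = "https://childes.talkbank.org/data/Eng-NA/Brown.zip"

reader = pylangacq.read_chat(CORPUS_URL)
print(reader.n_files(), "transcripts")

# Child utterances only, e.g. as input to automated speech-processing tests.
for words in reader.words(participants="CHI", by_files=True)[:1]:
    print(words[:20])

# Per-file mean length of utterance for the child speaker.
print("MLU (first 3 files):", reader.mlu()[:3])
```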
Mechatronic Generation of Datasets for Acoustics Research
We address the challenge of creating spatial audio datasets by proposing a shared mechanized recording space that can run custom acoustic experiments: a Mechatronic Acoustic Research System (MARS). To accommodate a wide variety of experiments, we implement an extensible architecture for wireless multi-robot coordination that enables synchronized robot motion for dynamic scenes with moving speakers and microphones. Using a virtual control interface, we can remotely design automated experiments to collect large-scale audio data. The collected data are shown to be consistent across repeated runs, demonstrating the reliability of MARS. We discuss the potential for MARS to make audio data collection accessible for researchers without dedicated acoustic research spaces.
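One way to quantify the run-to-run consistency claimed above is to cross-correlate recordings from repeated runs of the same scripted experiment. The sketch below is illustrative, not the paper's actual metric; it assumes mono WAV recordings and uses peak normalized cross-correlation to absorb small trigger-time offsets between runs.

```python
# Sketch: compare two recordings of the same automated MARS experiment.
# Illustrative similarity check; assumes mono WAV files at equal sample rates.
import numpy as np
from scipy.io import wavfile
from scipy.signal import correlate

def aligned_correlation(path_a: str, path_b: str) -> float:
    """Peak normalized cross-correlation between two mono recordings."""
    sr_a, a = wavfile.read(path_a)
    sr_b, b = wavfile.read(path_b)
    assert sr_a == sr_b, "sample rates must match"
    a = a.astype(np.float64) - a.mean()
    b = b.astype(np.float64) - b.mean()
    # Full cross-correlation absorbs small start-time offsets between runs.
    xc = correlate(a, b, mode="full")
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return float(xc.max() / denom)

# Hypothetical file names for two repeated runs of one experiment.
print(aligned_correlation("run1.wav", "run2.wav"))  # ~1.0 => highly similar
```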
- Award ID(s): 1919257
- PAR ID: 10475916
- Publisher / Repository: IEEE
- Date Published:
- Journal Name: 2022 International Workshop on Acoustic Signal Enhancement (IWAENC)
- ISBN: 978-1-6654-6867-1
- Page Range / eLocation ID: 1 to 5
- Format(s): Medium: X
- Location: Bamberg, Germany
- Sponsoring Org: National Science Foundation
More Like this
Annotated IMU sensor data from smart devices and wearables are essential for developing supervised models for fine-grained human activity recognition, although generating sufficient annotated data for diverse human activities under different environments is challenging. Existing approaches primarily use human-in-the-loop techniques, including active learning; however, they are tedious, costly, and time-consuming. Leveraging the availability of acoustic data from microphones embedded in the data-collection devices, in this paper we propose LASO, a multimodal approach for automated data annotation from acoustic and locomotive information. LASO runs on the edge device itself, ensuring that only the annotated IMU data is collected and the acoustic data is discarded on the device, thus preserving the user's audio privacy. In the absence of any pre-existing labeling information, such auto-annotation is challenging, as the IMU data needs to be sessionized for activities of different time scales in a completely unsupervised manner. We use a change-point detection technique while synchronizing the locomotive information from the IMU data with the acoustic data, and then use pre-trained audio-based activity recognition models to label the IMU data while handling acoustic noise. LASO efficiently annotates IMU data, without any explicit human intervention, with a mean accuracy of 0.93 (±0.04) and 0.78 (±0.05) for two different real-life datasets from workshop and kitchen environments, respectively.
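The unsupervised sessionization step described above can be illustrated with an off-the-shelf change-point detector. Here is a minimal sketch using the ruptures library on a synthetic IMU magnitude signal; the window size and penalty are illustrative assumptions, not values from the paper.

```python
# Sketch: unsupervised sessionization of IMU data via change-point detection.
# Library: ruptures (pip install ruptures). All parameters are illustrative.
import numpy as np
import ruptures as rpt

rng = np.random.default_rng(0)
# Synthetic accelerometer magnitude: a calm activity, a vigorous one, calm again.
signal = np.concatenate([rng.normal(1.0, 0.1, 500),
                         rng.normal(1.0, 0.6, 500),
                         rng.normal(1.0, 0.1, 500)])

# PELT with an RBF cost detects changes in distribution, not just in mean.
algo = rpt.Pelt(model="rbf", min_size=50).fit(signal)
breakpoints = algo.predict(pen=10)  # penalty controls segmentation granularity
print("Detected session boundaries (sample indices):", breakpoints)
```

Each detected segment would then be synchronized with the corresponding audio window and labeled by a pre-trained audio-based activity recognizer.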
Research in creative robotics continues to expand across all creative domains, including art, music, and language. Creative robots are primarily designed to be task specific, with limited research into the implications of their design outside their core task. In the case of a musical robot, this includes when a human sees and interacts with the robot before and after the performance, as well as in between pieces. These non-musical interaction tasks, such as the robot's presence during musical equipment setup, play a key role in the human perception of the robot but have received only limited attention. In this paper, we describe a new audio system using emotional musical prosody, designed to match the creative process of a musical robot for use before, between, and after musical performances. Our generation system relies on the creation of a custom dataset for musical prosody. The system is designed foremost to operate in real time and to allow rapid generation and dialogue exchange between human and robot. For this reason, it combines symbolic deep learning, through a Conditional Convolutional Variational Autoencoder, with an emotion-tagged audio sampler. We then compare this to a state-of-the-art text-to-speech system on our robotic platform, Shimon the marimba player. We conducted a between-groups study with 100 participants watching a musician interact with Shimon for 30 s. We were able to increase user ratings for the key creativity metrics, novelty and coherence, while maintaining ratings for expressivity across each implementation. Our results also indicated that by communicating in a form that relates to the robot's core functionality, we can raise likeability and perceived intelligence while not altering animacy or anthropomorphism. These findings indicate the variation that can occur in the perception of a robot based on interactions surrounding a performance, such as initial meetings and spaces between pieces, in addition to the core creative algorithms.
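The symbolic half of such a pipeline can be sketched as a variational autoencoder whose encoder and decoder are both conditioned on an emotion label, so that sampling the latent space yields phrases with a chosen affect. The PyTorch sketch below is a schematic stand-in, not the paper's architecture: it uses fully connected layers instead of convolutions, and the layer sizes and conditioning scheme are assumptions.

```python
# Schematic conditional VAE for emotion-conditioned symbolic generation.
# Fully connected layers stand in for the paper's convolutional architecture.
import torch
import torch.nn as nn

class ConditionalVAE(nn.Module):
    def __init__(self, seq_dim=64, n_emotions=4, latent_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(seq_dim + n_emotions, 128), nn.ReLU())
        self.to_mu = nn.Linear(128, latent_dim)
        self.to_logvar = nn.Linear(128, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim + n_emotions, 128), nn.ReLU(),
            nn.Linear(128, seq_dim))

    def forward(self, x, emotion_onehot):
        h = self.encoder(torch.cat([x, emotion_onehot], dim=-1))
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        # Reparameterization trick keeps sampling differentiable.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.decoder(torch.cat([z, emotion_onehot], dim=-1)), mu, logvar

# Generation: sample a latent vector and decode under a target emotion tag.
model = ConditionalVAE()
emotion = torch.eye(4)[2].unsqueeze(0)  # hypothetical emotion index 2
z = torch.randn(1, 16)
phrase = model.decoder(torch.cat([z, emotion], dim=-1))
print(phrase.shape)  # -> torch.Size([1, 64])
```

In a full system, the decoded symbolic phrase would then drive an emotion-tagged audio sampler to produce the robot's vocal response in real time.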
Children’s early numerical knowledge establishes a foundation for the later development of mathematics achievement, and playing linear number board games is effective in improving basic numerical abilities. Beyond the visuo-spatial cues provided by traditional number board games, learning companion robots can integrate multi-sensory information and offer social cues that can support children’s learning experiences. We explored how young children experience sensory feedback (audio and visual) and social expressions from a robot when playing a linear number board game, “RoboMath.” We present the interaction design of the game and our investigation of children’s (n = 19, aged 4) and parents’ experiences under three conditions: (1) visual-only, (2) audio-visual, and (3) audio-visual-social robot interaction. We report our qualitative analysis, including the themes observed from interviews with families on their perceptions of the game and the interaction with the robot, their child’s experiences, and their design recommendations.
Speech neuroprosthetics aim to provide a natural communication channel to individuals who are unable to speak due to physical or neurological impairments. Real-time synthesis of acoustic speech directly from measured neural activity could enable natural conversations and notably improve quality of life, particularly for individuals who have severely limited means of communication. Recent advances in decoding approaches have led to high-quality reconstructions of acoustic speech from invasively measured neural activity. However, most prior research utilizes data collected during open-loop experiments of articulated speech, which might not directly translate to imagined speech processes. Here, we present an approach that synthesizes audible speech in real time for both imagined and whispered speech conditions. With a participant implanted with stereotactic depth electrodes, we were able to reliably generate audible speech in real time. The decoding models rely predominantly on frontal activity, suggesting that speech processes have similar representations when vocalized, whispered, or imagined. While the reconstructed audio is not yet intelligible, our real-time synthesis approach represents an essential step towards investigating how patients will learn to operate a closed-loop speech neuroprosthesis based on imagined speech.
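The closed-loop setup described above amounts to a streaming pipeline: buffer a frame of neural features, decode it to an acoustic representation, and play the vocoded audio back with minimal latency. The sketch below is purely schematic; the frame size, channel count, decoder, and vocoder are placeholders, not the authors' implementation.

```python
# Schematic real-time synthesis loop for a speech neuroprosthesis.
# Every component here is a placeholder standing in for trained models/hardware.
import numpy as np

FRAME_MS = 10     # assumed decoding frame length
N_CHANNELS = 64   # assumed number of depth-electrode channels

def read_neural_frame() -> np.ndarray:
    """Placeholder for the amplifier interface: one frame of neural features."""
    return np.random.randn(N_CHANNELS)

def decode_to_spectral_frame(features: np.ndarray) -> np.ndarray:
    """Placeholder decoder mapping neural features to an acoustic frame."""
    return np.tanh(features[:20])  # e.g., 20 mel-spectral coefficients

def vocode_and_play(spectral_frame: np.ndarray) -> None:
    """Placeholder vocoder + audio output; a real system streams to a sound card."""
    pass

# Core loop: each iteration must complete well within FRAME_MS for real time,
# so the participant hears the synthesized speech as they imagine or whisper it.
for _ in range(1000):
    frame = read_neural_frame()
    acoustic = decode_to_spectral_frame(frame)
    vocode_and_play(acoustic)
```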