A common challenge in practical supervised learning, such as medical image processing and robotic interactions, is that there are plenty of tasks but no single task can afford to collect enough labeled examples to be learned in isolation. However, by exploiting the similarities across those tasks, one can hope to overcome such data scarcity. Under a canonical scenario where each task is drawn from a mixture of k linear regressions, we study a fundamental question: can abundant small-data tasks compensate for the lack of big-data tasks? Existing second-moment-based approaches show that such a trade-off is efficiently achievable with the help of medium-sized tasks with k^1/2 examples each. However, this algorithm is brittle in two important scenarios: the predictions can be arbitrarily bad with even a few outliers in the dataset, and the guarantees fail if the medium-sized tasks have even slightly fewer than k^1/2 examples each. We introduce a spectral approach that is simultaneously robust under both scenarios. To this end, we first design a novel outlier-robust principal component analysis algorithm that achieves optimal accuracy. This is followed by a sum-of-squares algorithm to exploit the information in higher-order moments. Together, this approach is robust against outliers and achieves a graceful statistical trade-off.
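The second-moment idea behind this line of work can be illustrated on synthetic data. The sketch below is a minimal illustration, not the paper's algorithm; all variable names and parameter values are our own. It pools an unbiased per-task estimate of E[beta beta^T], built from pairs of examples within each small task, and recovers the subspace spanned by the k regression vectors from the top eigenvectors of the pooled matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, n_tasks, t, noise = 8, 2, 3000, 10, 0.1

# Two ground-truth regression vectors; each task uses one of them.
betas = np.linalg.qr(rng.standard_normal((d, k)))[0].T  # k orthonormal rows

M = np.zeros((d, d))
for _ in range(n_tasks):
    beta = betas[rng.integers(k)]
    X = rng.standard_normal((t, d))          # t examples per small task
    y = X @ beta + noise * rng.standard_normal(t)
    s = X.T @ y                              # sum_i y_i x_i
    # Unbiased estimate of beta beta^T from pairs i != j:
    M += (np.outer(s, s) - (X * y[:, None] ** 2).T @ X) / (t * (t - 1))
M = (M + M.T) / (2 * n_tasks)                # symmetrized average

# Top-k eigenvectors approximately span the regression vectors.
eigvals, eigvecs = np.linalg.eigh(M)
U = eigvecs[:, -k:]
for beta in betas:
    print(np.linalg.norm(U @ U.T @ beta))    # near 1: beta lies in the span
```

With many tasks the averaged moment matrix concentrates around the mixture of beta beta^T terms, so the projection of each true regression vector onto the recovered subspace retains almost all of its norm; the robust and sum-of-squares machinery in the abstract addresses what happens when outliers corrupt this average or t shrinks.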
BraidFlow: A Flow-annotated Dataset of Kumihimo Braidmaking Activity
Entering a cognitive state of flow is a natural response of the mind that allows people to fully concentrate and cope with tedious, and often repetitive tasks. Understanding how to trigger or sustain flow remains limited by retrospective surveys, presenting a need to better document flow. Through a validation study, we first establish braidmaking as a flow-inducing task. We then study how braidmaking can be used to unpack the experience of flow on a moment-by-moment basis. Using an instrumented Kumihimo braidmaking tool and off-the-shelf biosignal wristbands, we record the experiences of 24 users engaged in 3 different braidmaking tasks. Feature vectors motivated from flow literature were extracted from activity data (IMU, EMG, EDA, heart rate, skin temperature, braiding telemetry) and annotated with Flow Short Scale (FSS) scores. Together, this dataset and data-capture system form the first open-access and holistic platform for mining flow data and synthesizing flow-aware design principles.
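The feature-extraction step described above can be sketched in a few lines: a biosignal stream is cut into fixed windows and summarized per window, and each window's feature vector would then be paired with an FSS annotation. This is a hypothetical illustration only; the function name `window_features`, the 30-second window, and the simulated EDA trace are our own assumptions, not the dataset's actual pipeline.

```python
import numpy as np

def window_features(signal, fs, win_s=30.0):
    """Split a 1-D biosignal (e.g., EDA or heart rate) into fixed
    windows and compute simple summary features per window."""
    n = int(win_s * fs)                          # samples per window
    windows = signal[: len(signal) // n * n].reshape(-1, n)
    return np.column_stack([
        windows.mean(axis=1),                    # level
        windows.std(axis=1),                     # variability
        windows[:, -1] - windows[:, 0],          # drift within window
    ])

# Hypothetical 10-minute EDA trace sampled at 4 Hz:
rng = np.random.default_rng(1)
eda = np.cumsum(rng.standard_normal(4 * 600)) * 0.01 + 2.0
feats = window_features(eda, fs=4)
print(feats.shape)  # (20, 3): 20 thirty-second windows, 3 features each
```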
- Award ID(s):
- 2105054
- PAR ID:
- 10514417
- Publisher / Repository:
- ACM
- Date Published:
- Journal Name:
- Designing Interactive Systems
- ISBN:
- 9781450398930
- Page Range / eLocation ID:
- 839 to 855
- Format(s):
- Medium: X
- Location:
- Pittsburgh PA USA
- Sponsoring Org:
- National Science Foundation
More Like this
Drifters deployed in close proximity collectively provide a unique observational data set with which to separate mesoscale and submesoscale flows. In this paper we provide a principled approach for doing so by fitting observed velocities to a local Taylor expansion of the velocity field. We demonstrate how to estimate mesoscale and submesoscale quantities that evolve slowly over time, as well as their associated statistical uncertainty. We show that in practice the mesoscale component of our model can explain much of the first- and second-moment variability in drifter velocities, especially at low frequencies. This results in much lower and more meaningful measures of submesoscale diffusivity, which would otherwise be contaminated by unresolved mesoscale flow. We quantify these effects theoretically by computing Lagrangian frequency spectra, and demonstrate the usefulness of our methodology through simulations as well as with real observations from the LatMix deployment of drifters. The outcome of this method is a full Lagrangian decomposition of each drifter trajectory into three components representing the background, mesoscale, and submesoscale flow.
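The local Taylor-expansion fit reduces to an ordinary least-squares problem: at a given time, each drifter's velocity is modeled as a background value plus velocity gradients times its offset from the cluster center, and the residuals are attributed to submesoscale motion. A minimal sketch on synthetic data (variable names and parameter values are our own, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic cluster of 9 drifters in a linearly sheared mesoscale flow.
x = rng.uniform(-5e3, 5e3, 9)           # eastward offsets (m)
y = rng.uniform(-5e3, 5e3, 9)           # northward offsets (m)
u_x, u_y = 1e-4, -5e-5                  # true velocity gradients (1/s)
u = 0.2 + u_x * x + u_y * y + 0.01 * rng.standard_normal(9)

# First-order Taylor expansion: u ~ u0 + u_x * dx + u_y * dy.
dx, dy = x - x.mean(), y - y.mean()
A = np.column_stack([np.ones_like(dx), dx, dy])
coef, *_ = np.linalg.lstsq(A, u, rcond=None)
residual = u - A @ coef                 # attributed to submesoscale flow
print(coef[1], u_x)                     # estimated vs true eastward shear
```

The same fit applied at each time step, with the coefficients constrained to evolve slowly, yields the mesoscale component; what the linear model cannot explain is the submesoscale residual.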
Abstract A listener's interpretation of a given speech sound can vary probabilistically from moment to moment. Previous experience (i.e., the contexts in which one has encountered an ambiguous sound) can further influence the interpretation of speech, a phenomenon known as perceptual learning for speech. This study used multivoxel pattern analysis to query how neural patterns reflect perceptual learning, leveraging archival fMRI data from a lexically guided perceptual learning study conducted by Myers and Mesite [Myers, E. B., & Mesite, L. M. Neural systems underlying perceptual adjustment to non-standard speech tokens. Journal of Memory and Language, 76, 80–93, 2014]. In that study, participants first heard ambiguous /s/–/∫/ blends in either /s/-biased lexical contexts (epi_ode) or /∫/-biased contexts (refre_ing); subsequently, they performed a phonetic categorization task on tokens from an /asi/–/a∫i/ continuum. In the current work, a classifier was trained to distinguish between phonetic categorization trials in which participants heard unambiguous productions of /s/ and those in which they heard unambiguous productions of /∫/. The classifier was able to generalize this training to ambiguous tokens from the middle of the continuum on the basis of individual participants' trial-by-trial perception. We take these findings as evidence that perceptual learning for speech involves neural recalibration, such that the pattern of activation approximates the perceived category. Exploratory analyses showed that left parietal regions (supramarginal and angular gyri) and right temporal regions (superior, middle, and transverse temporal gyri) were most informative for categorization. Overall, our results inform an understanding of how moment-to-moment variability in speech perception is encoded in the brain.
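The train-on-unambiguous, test-on-ambiguous logic can be sketched with synthetic "voxel" patterns: a classifier is fit only on clear /s/ and /∫/ trials, then applied to ambiguous tokens, which are labeled according to whichever trained pattern they resemble. This toy nearest-centroid example is not the study's actual pipeline, and all data are simulated:

```python
import numpy as np

rng = np.random.default_rng(3)
n_vox = 50
pattern_s = rng.standard_normal(n_vox)      # idealized /s/ activation
pattern_sh = rng.standard_normal(n_vox)     # idealized /sh/ activation

def trials(pattern, n=40, noise=1.0):
    """Simulate n noisy fMRI trials around a voxel pattern."""
    return pattern + noise * rng.standard_normal((n, n_vox))

train_s, train_sh = trials(pattern_s), trials(pattern_sh)
c_s, c_sh = train_s.mean(0), train_sh.mean(0)   # class centroids

def classify(X):
    """Nearest-centroid: label 's' if a trial is closer to the /s/ centroid."""
    d_s = np.linalg.norm(X - c_s, axis=1)
    d_sh = np.linalg.norm(X - c_sh, axis=1)
    return np.where(d_s < d_sh, "s", "sh")

# Ambiguous tokens: a blend leaning toward /s/, loosely analogous to a
# recalibrated percept after /s/-biased lexical contexts.
ambiguous = trials(0.7 * pattern_s + 0.3 * pattern_sh, n=40)
labels = classify(ambiguous)
print((labels == "s").mean())   # majority follow the /s/-leaning blend
```

The point of the sketch is the generalization step: nothing ambiguous appears at training time, yet the classifier's output on ambiguous tokens tracks which trained pattern the neural response approximates.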
Small group interactions and interactions with near‐peer instructors such as learning assistants serve as fertile opportunities for student learning in undergraduate active learning classrooms. To understand what students take away from these interactions, we need to understand how and what they learn during the moment of their interaction. This study builds on practical epistemology analysis to develop a framework to study this in‐the‐moment learning during interactions by operationalizing it through the lens of discourse change and continuity toward three ends. Using video recordings of students and learning assistants interacting in a variety of contexts including remote, in‐person, and hybrid classrooms in introductory chemistry and physics at two universities, we developed an analytical framework that can characterize learning in the moment of interaction, is sensitive to different kinds of learning, and can be used to compare interactions. The framework and its theoretical underpinnings are described in detail. In‐depth examples demonstrate how the framework can be applied to classroom data to identify and differentiate different ways in which in‐the‐moment learning occurs.
Objective: Real-time measurement of biological joint moment could enhance clinical assessments and generalize exoskeleton control. Accessing joint moments outside clinical and laboratory settings requires harnessing non-invasive wearable sensor data for indirect estimation. Previous approaches have been primarily validated during cyclic tasks, such as walking, but these methods are likely limited when translating to non-cyclic tasks where the mapping from kinematics to moments is not unique. Methods: We trained deep learning models to estimate hip and knee joint moments from kinematic sensors, electromyography (EMG), and simulated pressure insoles from a dataset including 10 cyclic and 18 non-cyclic activities. We assessed estimation error on combinations of sensor modalities during both activity types. Results: Compared to the kinematics-only baseline, adding EMG reduced RMSE by 16.9% at the hip and 30.4% at the knee (p<0.05), and adding insoles reduced RMSE by 21.7% at the hip and 33.9% at the knee (p<0.05). Adding both modalities reduced RMSE by 32.5% at the hip and 41.2% at the knee (p<0.05), which was significantly better than either modality individually (p<0.05). All sensor additions improved model performance on non-cyclic tasks more than cyclic tasks (p<0.05). Conclusion: These results demonstrate that adding kinetic sensor information through EMG or insoles improves joint moment estimation both individually and jointly. These additional modalities are most important during non-cyclic tasks, which reflect the variable and sporadic nature of the real world. Significance: Improved joint moment estimation and task generalization is pivotal to developing wearable robotic systems capable of enhancing mobility in everyday life.
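Why an added kinetic modality helps can be illustrated with a toy regression: when the target moment depends on muscle activity that kinematics alone cannot capture, appending an EMG-like feature lowers the test error. This minimal sketch uses ridge regression on simulated signals, not the paper's deep-learning models; every feature and coefficient here is synthetic:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 2000
kin = rng.standard_normal((n, 4))        # kinematic features (angles, rates)
emg = rng.standard_normal((n, 2))        # EMG envelope features
# Simulated joint moment depends on both kinematics and muscle activity.
moment = (kin @ np.array([1.0, -0.5, 0.3, 0.2])
          + emg @ np.array([0.8, 0.6])
          + 0.1 * rng.standard_normal(n))

def ridge_rmse(X, y, lam=1e-2):
    """Fit ridge regression on the first half, report RMSE on the second."""
    h = len(y) // 2
    Xtr, Xte, ytr, yte = X[:h], X[h:], y[:h], y[h:]
    w = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(X.shape[1]), Xtr.T @ ytr)
    return np.sqrt(np.mean((Xte @ w - yte) ** 2))

rmse_kin = ridge_rmse(kin, moment)
rmse_both = ridge_rmse(np.hstack([kin, emg]), moment)
print(rmse_kin, rmse_both)   # adding EMG features reduces the error
```

Here the kinematics-only model is left with irreducible error from the unobserved muscle contribution, mirroring the abstract's finding that kinetic modalities matter most when kinematics alone do not determine the moment.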