Evaluating unsupervised word segmentation in adults: a meta-analysis

Humans, even from infancy, are capable of unsupervised (“statistical”) learning of linguistic information. However, it remains unclear which of the myriad algorithms for unsupervised learning captures human abilities. This matters because unsupervised learning algorithms vary greatly in how much can be learned, and how quickly. Thus, which algorithm(s) humans use may place a strong bound on how much of language can actually be learned in an unsupervised fashion. As a step towards more precisely characterizing human unsupervised learning capabilities, we quantitatively synthesize the literature on adult unsupervised (“statistical”) word segmentation. Unfortunately, most confidence intervals were very large, and few moderators were found to be significant. These findings are consistent with prior work suggesting low power and precision in the literature. Constraining theory will require more, higher-powered studies.
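A quantitative synthesis of this kind typically pools per-study effect sizes under a random-effects model, which also yields the pooled confidence interval the abstract refers to. The sketch below uses made-up effect sizes and sampling variances (illustrative placeholders, not the paper's data) with the DerSimonian-Laird heterogeneity estimate — a minimal sketch, not the authors' actual analysis pipeline:

```python
import numpy as np

# Illustrative per-study effect sizes (e.g., Cohen's d) and sampling variances.
# These numbers are invented for demonstration; they are not the paper's data.
effects = np.array([0.45, 0.30, 0.62, 0.10, 0.55])
variances = np.array([0.04, 0.09, 0.05, 0.12, 0.06])

# DerSimonian-Laird estimate of between-study heterogeneity (tau^2).
w_fixed = 1.0 / variances
fixed_mean = np.sum(w_fixed * effects) / np.sum(w_fixed)
q = np.sum(w_fixed * (effects - fixed_mean) ** 2)
df = len(effects) - 1
c = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
tau2 = max(0.0, (q - df) / c)

# Random-effects pooled estimate and 95% confidence interval.
w = 1.0 / (variances + tau2)
pooled = np.sum(w * effects) / np.sum(w)
se = np.sqrt(1.0 / np.sum(w))
ci = (pooled - 1.96 * se, pooled + 1.96 * se)
print(pooled, se, ci)
```

A wide `ci` here plays the role the abstract describes: with few, noisy studies, the pooled interval is too wide to discriminate between candidate learning algorithms.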
- Award ID(s): 1918813
- PAR ID: 10354100
- Date Published:
- Journal Name: Proceedings of the Annual Meeting of the Cognitive Science Society
- Volume: 44
- Issue: 44
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- When interacting with a robot, humans form conceptual models (of varying quality) which capture how the robot behaves. These conceptual models form just from watching or interacting with the robot, with or without conscious thought. Some methods select and present robot behaviors to improve human conceptual model formation; nonetheless, these methods and HRI more broadly have not yet consulted cognitive theories of human concept learning. These validated theories offer concrete design guidance to support humans in developing conceptual models more quickly, accurately, and flexibly. Specifically, Analogical Transfer Theory and the Variation Theory of Learning have been successfully deployed in other fields, and offer new insights for the HRI community about the selection and presentation of robot behaviors. Using these theories, we review and contextualize 35 prior works in human-robot teaching and learning, and we assess how these works incorporate or omit the design implications of these theories. From this review, we identify new opportunities for algorithms and interfaces to help humans more easily learn conceptual models of robot behaviors, which in turn can help humans become more effective robot teachers and collaborators.
- Assistive robot arms can help humans by partially automating their desired tasks. Consider an adult with motor impairments controlling an assistive robot arm to eat dinner. The robot can reduce the number of human inputs — and how precise those inputs need to be — by recognizing what the human wants (e.g., a fork) and assisting for that task (e.g., moving towards the fork). Prior research has largely focused on learning the human’s task and providing meaningful assistance. But as the robot learns and assists, we also need to ensure that the human understands the robot’s intent (e.g., does the human know the robot is reaching for a fork?). In this paper, we study the effects of communicating learned assistance from the robot back to the human operator. We do not focus on the specific interfaces used for communication. Instead, we develop experimental and theoretical models of a) how communication changes the way humans interact with assistive robot arms, and b) how robots can harness these changes to better align with the human’s intent. We first conduct online and in-person user studies where participants operate robots that provide partial assistance, and we measure how the human’s inputs change with and without communication. With communication, we find that humans are more likely to intervene when the robot incorrectly predicts their intent, and more likely to release control when the robot correctly understands their task. We then use these findings to modify an established robot learning algorithm so that the robot can correctly interpret the human’s inputs when communication is present. Our results from a second in-person user study suggest that this combination of communication and learning outperforms assistive systems that isolate either learning or communication. See videos here: https://youtu.be/BET9yuVTVU4
- Algorithms provide powerful tools for detecting and dissecting human bias and error. Here, we develop machine learning methods to analyze how humans err in a particular high-stakes task: image interpretation. We leverage a unique dataset of 16,135,392 human predictions of whether a neighborhood voted for Donald Trump or Joe Biden in the 2020 US election, based on a Google Street View image. We show that by training a machine learning estimator of the Bayes optimal decision for each image, we can provide an actionable decomposition of human error into bias, variance, and noise terms, and further identify specific features (like pickup trucks) which lead humans astray. Our methods can be applied to ensure that human-in-the-loop decision-making is accurate and fair and are also applicable to black-box algorithmic systems.
- For robots to seamlessly interact with humans, we first need to make sure that humans and robots understand one another. Diverse algorithms have been developed to enable robots to learn from humans (i.e., transferring information from humans to robots). In parallel, visual, haptic, and auditory communication interfaces have been designed to convey the robot’s internal state to the human (i.e., transferring information from robots to humans). Prior research often separates these two directions of information transfer, and focuses primarily on either learning algorithms or communication interfaces. By contrast, in this survey we take an interdisciplinary approach to identify common themes and emerging trends that close the loop between learning and communication. Specifically, we survey state-of-the-art methods and outcomes for communicating a robot’s learning back to the human teacher during human-robot interaction. This discussion connects human-in-the-loop learning methods and explainable robot learning with multimodal feedback systems and measures of human-robot interaction. We find that—when learning and communication are developed together—the resulting closed-loop system can lead to improved human teaching, increased human trust, and human-robot co-adaptation. The paper includes a perspective on several of the interdisciplinary research themes and open questions that could advance how future robots communicate their learning to everyday operators. Finally, we implement a selection of the reviewed methods in a case study where participants kinesthetically teach a robot arm. This case study documents and tests an integrated approach for learning in ways that can be communicated, conveying this learning across multimodal interfaces, and measuring the resulting changes in human and robot behavior.
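One of the related records above describes decomposing human error into bias, variance, and noise terms relative to a Bayes-optimal estimator. The toy sketch below illustrates that kind of decomposition on synthetic binary predictions; the setup, variable names, and simple squared-error decomposition are illustrative assumptions, not the authors' exact method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (synthetic, not the paper's data): for each of n items, a
# Bayes-optimal probability that the true label is 1, a label drawn from
# that probability, and many independent human guesses.
n_items, n_humans = 200, 50
p_bayes = rng.uniform(0.1, 0.9, size=n_items)   # stand-in for the learned Bayes estimator
labels = rng.binomial(1, p_bayes)               # ground-truth labels
# Humans guess with a systematic tilt (+0.1) plus individual randomness.
p_human = np.clip(p_bayes + 0.1, 0.0, 1.0)
guesses = rng.binomial(1, p_human, size=(n_humans, n_items))

mean_guess = guesses.mean(axis=0)

# Squared-error decomposition of expected human error, averaged over items:
#   noise    = p(1-p): irreducible error even for the Bayes-optimal predictor
#   bias^2   = squared gap between the average human guess and the Bayes probability
#   variance = spread of individual human guesses around their own mean
noise = (p_bayes * (1 - p_bayes)).mean()
bias2 = ((mean_guess - p_bayes) ** 2).mean()
variance = guesses.var(axis=0).mean()

total_error = ((guesses - labels) ** 2).mean()
print(noise, bias2, variance, total_error)
```

With guesses independent of the label given the item, the three terms sum (in expectation) to the total squared error, which is what makes the decomposition actionable: it separates systematic human bias from disagreement among humans and from irreducible task difficulty.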