Title: Simulated Language Learning from Communicative Goals and Linguistic Input
Children do not learn language by passively analyzing correlations between language and observations, but through interaction with caregivers or peers. The non-nativist view holds that the main driver of language learning is the need to achieve communicative goals. Imitation of linguistic input, meanwhile, is another natural tendency that many argue shapes language learning. However, because comprehensive experiments with human learners are difficult to perform, gaps remain in our understanding of the roles that communicative goals and imitation play in language acquisition. In this paper, we propose a computational framework using simulated experiments that allows us to compare the roles of the two drivers. Specifically, we simulate a two-way communication game between a speaker, corresponding to a language learner, and a listener, corresponding to a caregiver or teacher. The speaker's communicative goals are modeled as rewards for successfully completing a referential game, and imitation is performed by mimicking feedback from the listener. The listener adaptively decides when to give feedback and makes referential choices based on the speaker's utterances. With empirical results on naturalistic visual and language data, we find that communicative goals play an important role in driving language learning, whereas imitation accelerates the learning process. We also find that (1) models trained with communicative goals tend to use a minimal vocabulary and minimal utterances, overextending them to concepts outside the original word meanings, and (2) the strategy the listener uses to provide feedback influences both learning outcomes and learning speed. Code and data for replicating the experiments are available (https://bit.ly/interactgym) to spur future research on computational models of language learning.
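To make the training setup concrete, the following is a minimal sketch of the two drivers described in the abstract: a reward for referential success and an imitation term on the listener's feedback. The toy vocabulary, the single-word speaker, the scripted listener, and the weight `imitation_weight` are illustrative assumptions, not the paper's actual models or data.

```python
# Minimal sketch of the two-driver update (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

VOCAB = ["red", "blue", "ball", "cube"]       # toy vocabulary
REFERENTS = ["red ball", "blue cube"]         # objects in the referential game

# Speaker: a table of logits over single-word utterances, one row per referent.
speaker_logits = np.zeros((len(REFERENTS), len(VOCAB)))

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def listener_guess(word):
    """Caregiver-like listener: picks a referent whose description contains the word."""
    matches = [i for i, r in enumerate(REFERENTS) if word in r]
    return rng.choice(matches) if matches else rng.integers(len(REFERENTS))

def listener_feedback(target):
    """Corrective feedback: a word the listener itself would use for the target."""
    return rng.choice(REFERENTS[target].split())

lr, imitation_weight = 0.5, 0.5
for step in range(200):
    target = rng.integers(len(REFERENTS))
    probs = softmax(speaker_logits[target])
    word_idx = rng.choice(len(VOCAB), p=probs)

    # Driver 1: communicative goal, reward when the listener identifies the target.
    reward = 1.0 if listener_guess(VOCAB[word_idx]) == target else 0.0
    grad = -probs
    grad[word_idx] += 1.0
    speaker_logits[target] += lr * reward * grad          # REINFORCE-style update

    # Driver 2: imitation, push probability mass toward the listener's feedback word.
    fb_idx = VOCAB.index(listener_feedback(target))
    imit_grad = -probs
    imit_grad[fb_idx] += 1.0
    speaker_logits[target] += lr * imitation_weight * imit_grad

print({r: VOCAB[int(np.argmax(speaker_logits[i]))] for i, r in enumerate(REFERENTS)})
```

The two update terms correspond to the two drivers being compared; setting `imitation_weight` to zero recovers learning from the communicative reward alone.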
Award ID(s): 2141751
PAR ID: 10356961
Journal Name: Proceedings of the Annual Meeting of the Cognitive Science Society
Volume: 44
Sponsoring Org: National Science Foundation
More Like this
  1. A prerequisite for social coordination is bidirectional communication between teammates, each playing two roles simultaneously: as receptive listeners and expressive speakers. For robots working with humans in complex situations with multiple goals that differ in importance, failure to fulfill the expectation of either role could undermine group performance due to misalignment of values between humans and robots. Specifically, a robot needs to serve as an effective listener to infer human users’ intents from instructions and feedback and as an expressive speaker to explain its decision processes to users. Here, we investigate how to foster effective bidirectional human-robot communications in the context of value alignment—collaborative robots and users form an aligned understanding of the importance of possible task goals. We propose an explainable artificial intelligence (XAI) system in which a group of robots predicts users’ values by taking in situ feedback into consideration while communicating their decision processes to users through explanations. To learn from human feedback, our XAI system integrates a cooperative communication model for inferring human values associated with multiple desirable goals. To be interpretable to humans, the system simulates human mental dynamics and predicts optimal explanations using graphical models. We conducted psychological experiments to examine the core components of the proposed computational framework. Our results show that real-time human-robot mutual understanding in complex cooperative tasks is achievable with a learning model based on bidirectional communication. We believe that this interaction framework can shed light on bidirectional value alignment in communicative XAI systems and, more broadly, in future human-machine teaming systems. 
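As a rough illustration of inferring a user's values from in situ feedback, here is a hedged sketch of a simple Bayesian filter over candidate value profiles; the goal names, candidate weightings, and feedback likelihood are assumptions, not the paper's cooperative communication model.

```python
# Illustrative Bayesian update over candidate value profiles from approve/disapprove feedback.
import numpy as np

GOALS = ["rescue", "explore", "conserve_fuel"]
# Candidate value profiles: how much the user might care about each goal.
candidates = np.array([
    [0.7, 0.2, 0.1],
    [0.2, 0.7, 0.1],
    [0.1, 0.2, 0.7],
])
belief = np.full(len(candidates), 1.0 / len(candidates))   # uniform prior

def feedback_likelihood(profile, action_gains, approved, beta=3.0):
    """P(feedback | profile): users tend to approve actions with high value under their profile."""
    utility = float(profile @ action_gains)
    p_approve = 1.0 / (1.0 + np.exp(-beta * (utility - 0.5)))
    return p_approve if approved else 1.0 - p_approve

# A stream of (per-goal gains of the robot's action, user approved?) observations.
observations = [
    (np.array([0.9, 0.1, 0.2]), True),
    (np.array([0.1, 0.8, 0.3]), False),
    (np.array([0.8, 0.2, 0.1]), True),
]

for gains, approved in observations:
    likelihoods = np.array([feedback_likelihood(c, gains, approved) for c in candidates])
    belief = belief * likelihoods
    belief /= belief.sum()          # posterior over value profiles

for c, b in zip(candidates, belief):
    print(dict(zip(GOALS, c)), round(float(b), 3))
```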
  2. We present a game-theoretic model of pragmatics that we call ReCo (for Regularized Conventions). This model formulates pragmatic communication as a game in which players are rewarded for communicating successfully and penalized for deviating from a shared, “default” semantics. As a result, players assign utterances context-dependent meanings that jointly optimize communicative success and naturalness with respect to speakers’ and listeners’ background knowledge of language. By using established game-theoretic tools to compute equilibrium strategies for this game, we obtain principled pragmatic language generation procedures with formal guarantees of communicative success. Across several datasets capturing real and idealized human judgments about pragmatic implicature, ReCo matches, or slightly improves upon, predictions made by Iterated Best Response and Rational Speech Acts models of language understanding. 
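The following is an illustrative sketch, not the ReCo procedure itself, of the regularized-conventions idea: alternating best responses in a signaling game where the speaker is rewarded for communicative success and penalized (in log space) for deviating from a default literal semantics. The lexicon and the weights `alpha` and `lam` are assumptions.

```python
# Alternating regularized best responses in a toy signaling game.
import numpy as np

# Truth table: rows are utterances, columns are referents (1 = utterance is literally true of referent).
lexicon = np.array([
    [1.0, 0.0, 1.0],   # "hat"
    [0.0, 1.0, 1.0],   # "glasses"
    [0.0, 0.0, 1.0],   # "hat and glasses"
])
eps = 1e-9
default_speaker = (lexicon + eps) / (lexicon + eps).sum(axis=0, keepdims=True)  # S0(u | r)

alpha, lam = 4.0, 1.0   # weight on communicative success vs. staying near the default semantics
speaker = default_speaker.copy()

for _ in range(20):
    # Listener best-responds to the current speaker (Bayes rule with a uniform referent prior).
    listener = speaker / speaker.sum(axis=1, keepdims=True)             # L(r | u)
    # Speaker best-responds, regularized toward the default semantics.
    score = alpha * np.log(listener + eps) + lam * np.log(default_speaker)
    speaker = np.exp(score - score.max(axis=0, keepdims=True))
    speaker /= speaker.sum(axis=0, keepdims=True)                        # S(u | r)

print(np.round(speaker, 2))   # each column: the speaker's utterance distribution for one referent
```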
  3. Human communication involves far more than words; speakers' utterances are often accompanied by various kinds of emotional expressions. How do listeners represent and integrate these distinct sources of information to make communicative inferences? We first show that people, as listeners, integrate both verbal and emotional information when inferring true states of the world and others' communicative goals, and then present computational models that formalize these inferences by considering different ways in which these signals might be generated. Results suggest that while listeners understand that utterances and emotional expressions are generated by a balance of speakers' informational and social goals, they additionally consider the possibility that emotional expressions are noncommunicative signals that directly reflect the speaker's internal states. These results are consistent with the predictions of a probabilistic model that integrates goal inferences with linguistic and emotional signals, moving us towards a more complete formal theory of human communicative reasoning.
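A hedged sketch of the kind of integration described above: a listener forms a posterior over world states by combining a verbal channel with an emotional channel, while entertaining the possibility that the expression is a noncommunicative leak of internal state. The states, likelihood values, and mixture weight are invented for illustration, not the paper's fitted model.

```python
# Combining verbal and emotional likelihoods into a posterior over world states.
import numpy as np

STATES = ["gift_is_great", "gift_is_okay", "gift_is_bad"]
prior = np.array([1 / 3, 1 / 3, 1 / 3])

# P(utterance = "it's nice" | state): polite speakers may say this even for bad gifts.
p_utt = np.array([0.9, 0.7, 0.4])
# P(flat facial expression | state) if the expression directly reflects internal state.
p_emo_direct = np.array([0.1, 0.5, 0.8])
# If the expression is instead a managed, communicative signal, it is less diagnostic.
p_emo_comm = np.array([0.3, 0.4, 0.5])

p_noncomm = 0.6   # listener's belief that the expression leaks internal state rather than signals
p_emo = p_noncomm * p_emo_direct + (1 - p_noncomm) * p_emo_comm

posterior = prior * p_utt * p_emo
posterior /= posterior.sum()
print(dict(zip(STATES, np.round(posterior, 3))))
```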
  4. We propose a novel task, G4C (Goal-driven Guidance Generation in Grounded Communication), for studying goal-driven and grounded natural language interactions. Specifically, we choose Dungeons and Dragons (D&D) -- a role-playing game consisting of multiple player characters and a Dungeon Master (DM) who collaborate to achieve a set of goals that are beneficial to the players -- as a testbed for this task. Here, each of the player characters is a student, with their own personas and abilities, and the DM is the teacher, an arbitrator of the rules of the world and responsible for assisting and guiding the students towards a global goal. We propose a theory-of-mind-inspired methodology for training such a DM with reinforcement learning (RL), where a DM: (1) learns to predict how the players will react to its utterances using a dataset of D&D dialogue transcripts; and (2) uses this prediction as a reward function providing feedback on how effective these utterances are at guiding the players towards a goal. Human and automated evaluations show that a DM trained with RL to generate guidance by incorporating a theory-of-mind of the players significantly improves the players' ability to achieve goals grounded in their shared world. 
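To illustrate the reward scheme, here is a toy sketch in which a learned (here, stubbed) theory-of-mind predictor scores candidate DM utterances and a REINFORCE-style update trains the DM policy; the candidate guidance lines, the predictor, and the reward definition are stand-ins rather than the G4C models or data.

```python
# Toy sketch: reward a DM policy by a predictor of how players will react to its guidance.
import numpy as np

rng = np.random.default_rng(0)
CANDIDATE_GUIDANCE = [
    "You notice claw marks leading to the cellar door.",
    "The innkeeper polishes a glass and says nothing.",
    "A merchant offers to sell you a lantern.",
]

def predicted_player_reaction(guidance_idx):
    """Stand-in theory-of-mind model: P(players take the goal-directed action | guidance)."""
    return [0.8, 0.2, 0.4][guidance_idx]

dm_logits = np.zeros(len(CANDIDATE_GUIDANCE))   # DM "policy" over candidate utterances

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

lr = 0.3
for step in range(300):
    probs = softmax(dm_logits)
    choice = rng.choice(len(CANDIDATE_GUIDANCE), p=probs)
    # Reward = predicted probability that the guidance steers players toward the goal.
    reward = predicted_player_reaction(choice)
    grad = -probs
    grad[choice] += 1.0
    dm_logits += lr * reward * grad              # REINFORCE-style update

print(CANDIDATE_GUIDANCE[int(np.argmax(dm_logits))])
```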
  5. This work studies the alignment of large language models with preference data from an imitation learning perspective. We establish a close theoretical connection between reinforcement learning from human feedback (RLHF) and imitation learning (IL), revealing that RLHF implicitly performs imitation learning on the preference data distribution. Building on this connection, we propose DIL, a principled framework that directly optimizes the imitation learning objective. DIL provides a unified imitation learning perspective on alignment, encompassing existing alignment algorithms as special cases while naturally introducing new variants. By bridging IL and RLHF, DIL offers new insights into alignment with RLHF. Extensive experiments demonstrate that DIL outperforms existing methods on various challenging benchmarks. The code for DIL is available at https://github.com/tengxiao1/DIL. 
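To make the imitation-learning framing concrete, the sketch below treats the preferred response in each preference pair as a demonstration and performs plain behavioral cloning on a toy categorical policy; this illustrates the RLHF-as-imitation connection only and is not the DIL objective from the paper.

```python
# Behavioral cloning on the chosen responses of toy preference data.
import numpy as np

RESPONSES = ["helpful answer", "evasive answer", "rude answer"]
# Preference data: (prompt_id, index of chosen response, index of rejected response).
preferences = [(0, 0, 2), (0, 0, 1), (1, 0, 2)]

logits = np.zeros((2, len(RESPONSES)))   # toy "policy": one logit row per prompt

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

lr = 0.5
for _ in range(100):
    for prompt, chosen, _rejected in preferences:
        probs = softmax(logits[prompt])
        grad = -probs
        grad[chosen] += 1.0                  # gradient of log p(chosen | prompt) w.r.t. logits
        logits[prompt] += lr * grad

for prompt in range(2):
    print(prompt, dict(zip(RESPONSES, np.round(softmax(logits[prompt]), 2))))
```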