Title: The Goal Bias in Language and Memory: Explaining the Asymmetry
Abstract: In language, speakers are more likely to mention the goals, or endpoints, of motion events than they are to mention sources, or starting points (e.g., Lakusta & Landau, 2005). This phenomenon has been explained in cognitive terms, but may also be affected by discourse-communicative factors: For participants in prior work, sources can be characterized as given, already-known information, while goals are new, relevant information to communicate. We investigate to what extent the goal bias in language (and memory) is affected when the source is or is not in common ground between speaker and hearer, and thus whether it is discourse-given or discourse-new. We find that the goal bias in language is severely diminished when source and goal are both discourse-new. We suggest that the goal bias in language can be attributed to discourse-communicative factors in addition to any cognitive goal bias. Discourse factors cannot fully account for the bias in memory.
Award ID(s):
1632849
PAR ID:
10147304
Journal Name:
Proceedings of the Annual Conference of the Cognitive Science Society
ISSN:
1069-7977
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like This
  1. Human communication involves far more than words; speakers' utterances are often accompanied by various kinds of emotional expressions. How do listeners represent and integrate these distinct sources of information to make communicative inferences? We first show that people, as listeners, integrate both verbal and emotional information when inferring true states of the world and others' communicative goals, and then present computational models that formalize these inferences by considering different ways in which these signals might be generated. Results suggest that while listeners understand that utterances and emotional expressions are generated by a balance of speakers' informational and social goals, they additionally consider the possibility that emotional expressions are noncommunicative signals that directly reflect the speaker's internal states. These results are consistent with the predictions of a probabilistic model that integrates goal inferences with linguistic and emotional signals, moving us towards a more complete formal theory of human communicative reasoning.
  2. Language is a remarkably efficient tool for transmitting information. Yet human speakers make statements that are inefficient, imprecise, or even contrary to their own beliefs, all in the service of being polite. What rational machinery underlies polite language use? Here, we show that polite speech emerges from the competition of three communicative goals: to convey information, to be kind, and to present oneself in a good light. We formalize this goal tradeoff using a probabilistic model of utterance production, which predicts human utterance choices in socially sensitive situations with high quantitative accuracy, and we show that our full model is superior to its variants with subsets of the three goals. This utility-theoretic approach to speech acts takes a step toward explaining the richness and subtlety of social language use.
  3. Episodic memories are records of personally experienced events, coded neurally via the hippocampus and surrounding medial temporal lobe cortex. Information about the neural signal corresponding to a memory representation can be measured in fMRI data when the pattern across voxels is examined. Prior studies have found that similarity in the voxel patterns across repetition of a to-be-remembered stimulus predicts later memory retrieval, but the results are inconsistent across studies. The current study investigates the possibility that cognitive goals (defined here via the task instructions given to participants) during encoding affect the voxel pattern that will later support memory retrieval, and therefore that neural representations cannot be interpreted based on the stimulus alone. The behavioral results showed that exposure to variable cognitive tasks across repetition of events benefited subsequent memory retrieval. Voxel patterns in the hippocampus indicated a significant interaction between cognitive tasks (variable vs. consistent) and memory (remembered vs. forgotten) such that reduced voxel pattern similarity for repeated events with variable cognitive tasks, but not consistent cognitive tasks, supported later memory success. There was no significant interaction in neural pattern similarity between cognitive tasks and memory success in medial temporal cortices or lateral occipital cortex. Instead, higher similarity in voxel patterns in right medial temporal cortices was associated with later memory retrieval, regardless of cognitive task. In conclusion, we found that the relationship between pattern similarity across repeated encoding and memory success in the hippocampus (but not medial temporal lobe cortex) changes when the cognitive task during encoding does or does not vary across repetitions of the event.
  4. It is now well established that memory representations of words are acoustically rich. Alongside this development, a related line of work has shown that the robustness of memory encoding varies widely depending on who is speaking. In this dissertation, I explore the cognitive basis of memory asymmetries at a larger linguistic level (spoken sentences), using the mechanism of socially guided attention allocation to explain how listeners dynamically shift cognitive resources based on the social characteristics of speech. This dissertation consists of three empirical studies designed to investigate the factors that pattern asymmetric memory for spoken language. In the first study, I explored specificity effects at the level of the sentence. While previous research on specificity has centralized the lexical item as the unit of study, I showed that talker-specific memory patterns are also robust at a larger linguistic level, making it likely that acoustic detail is fundamental to human speech perception more broadly. In the second study, I introduced a set of diverse talkers and showed that memory patterns vary widely within this group, and that the memorability of individual talkers is somewhat consistent across listeners. In the third study, I showed that memory behaviors do not depend merely on the speech characteristics of the talker or on the content of the sentence, but on the unique relationship between these two. Memory dramatically improved when semantic content of sentences was congruent with widely held social associations with talkers based on their speech, and this effect was particularly pronounced when listeners had a high cognitive load during encoding. These data collectively provide evidence that listeners allocate attentional resources on an ad hoc, socially guided basis. 
Listeners subconsciously draw on fine-grained phonetic information and social associations to dynamically adapt low-level cognitive processes while understanding spoken language and encoding it to memory. This approach positions variation in speech not as an obstacle to perception, but as an information source that humans readily recruit to aid in the seamless understanding of spoken language. 
  5. Significant research has provided robust task and evaluation languages for the analysis of exploratory visualizations. Unfortunately, these taxonomies fail when applied to communicative visualizations. Instead, designers often resort to evaluating communicative visualizations from the cognitive efficiency perspective: "can the recipient accurately decode my message/insight?" However, designers are unlikely to be satisfied if the message went 'in one ear and out the other.' The consequence of this inconsistency is that it is difficult to design or select between competing options in a principled way. The problem we address is the fundamental mismatch between how designers want to describe their intent, and the language they have. We argue that visualization designers can address this limitation through a learning lens: that the recipient is a student and the designer a teacher. By using learning objectives, designers can better define, assess, and compare communicative visualizations. We illustrate how the learning-based approach provides a framework for understanding a wide array of communicative goals. To understand how the framework can be applied (and its limitations), we surveyed and interviewed members of the Data Visualization Society using their own visualizations as a probe. Through this study we identified the broad range of objectives in communicative visualizations and the prevalence of certain objective types.