Title: Social Meme-ing: Measuring Linguistic Variation in Memes
Much work in the space of NLP has used computational methods to explore sociolinguistic variation in text. In this paper, we argue that memes, as multimodal forms of language comprised of visual templates and text, also exhibit meaningful social variation. We construct a computational pipeline to cluster individual instances of memes into templates and semantic variables, taking advantage of their multimodal structure in doing so. We apply this method to a large collection of meme images from Reddit and make available the resulting SEMANTICMEMES dataset of 3.8M images clustered by their semantic function. We use these clusters to analyze linguistic variation in memes, discovering not only that socially meaningful variation in meme usage exists between subreddits, but that patterns of meme innovation and acculturation within these communities align with previous findings on written language.
Award ID(s):
1942591
PAR ID:
10513722
Publisher / Repository:
ACL
Journal Name:
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
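The abstract above describes a pipeline that clusters meme instances into shared templates. The paper's own code is not reproduced here; as a rough, hypothetical sketch of the general idea, one common approach is to embed each image and greedily group embeddings whose cosine similarity to an existing cluster centroid exceeds a threshold. The embedding vectors, the threshold, and the greedy strategy below are illustrative assumptions, not the authors' method:

```python
import numpy as np

def cluster_by_similarity(embeddings, threshold=0.9):
    """Greedy template clustering: assign each embedding to the most
    similar existing cluster centroid (cosine similarity >= threshold),
    otherwise start a new cluster. Returns one cluster id per input."""
    centroids = []  # running (unnormalized) centroid sums
    labels = []
    for v in embeddings:
        v = v / np.linalg.norm(v)          # normalize for cosine similarity
        best, best_sim = None, threshold
        for i, c in enumerate(centroids):
            sim = float(v @ (c / np.linalg.norm(c)))
            if sim >= best_sim:
                best, best_sim = i, sim
        if best is None:
            centroids.append(v.copy())     # open a new template cluster
            labels.append(len(centroids) - 1)
        else:
            centroids[best] += v           # fold the image into the cluster
            labels.append(best)
    return labels

# Toy "image embeddings": two near-duplicate templates and one outlier.
vecs = np.array([[1.0, 0.0], [0.99, 0.05], [0.0, 1.0]])
print(cluster_by_similarity(vecs))  # → [0, 0, 1]
```

A greedy threshold scheme like this avoids fixing the number of clusters in advance, which matters when the inventory of templates in a large Reddit collection is unknown; real pipelines would typically use learned visual embeddings rather than raw vectors.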
More Like this
  1. This study tested the effect of visual attention on decision-making in digital environments. Fifty-nine individuals were asked how likely they would be to share 40 memes (photos with superimposed captions) on social media while their eye movements were tracked. The likelihood of sharing memes increased as attention to the text of the meme increased; conversely, the likelihood of sharing decreased as attention to the image of the meme increased. In addition, increased trait levels of agreeableness predicted a greater likelihood of sharing memes. These results indicate that individual differences in personality and eye movements predict the likelihood of sharing photo-memes on social media platforms.
  2. In recent years, large language models (LLMs) and vision language models (VLMs) have excelled at tasks requiring human-like reasoning, inspiring researchers in engineering design to use language models (LMs) as surrogate evaluators of design concepts. But do these models actually evaluate designs like humans? While recent work has shown that LM evaluations sometimes fall within human variance on Likert-scale grading tasks, those tasks often obscure the reasoning and biases behind the scores. To address this limitation, we compare LM word embeddings (trained to capture semantic similarity) with human-rated similarity embeddings derived from triplet comparisons ("is A closer to B than C?") on a dataset of design sketches and descriptions. We assess alignment via local tripletwise similarity and embedding distances, allowing for deeper insights than raw Likert-scale scores provide. We also explore whether describing the designs to LMs through text or images improves alignment with human judgments. Our findings suggest that text alone may not fully capture the nuances humans attend to, yet text-based embeddings outperform their multimodal counterparts on satisfying local triplets. On the basis of these insights, we offer recommendations for effectively integrating LMs into design evaluation tasks.
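The tripletwise alignment check described in the abstract above can be sketched as follows. This is a minimal illustration with made-up embeddings and judgments, not the study's actual data or code: for each human triplet "A is closer to B than to C," we test whether a candidate embedding space reproduces that ordering under Euclidean distance.

```python
import numpy as np

def triplet_agreement(emb, triplets):
    """Fraction of human triplets (a, b, c) -- meaning "a is closer
    to b than to c" -- that the embedding `emb` reproduces under
    Euclidean distance."""
    agree = 0
    for a, b, c in triplets:
        d_ab = np.linalg.norm(emb[a] - emb[b])
        d_ac = np.linalg.norm(emb[a] - emb[c])
        agree += d_ab < d_ac  # embedding agrees with the human judgment
    return agree / len(triplets)

# Toy embeddings of three "designs" (names are hypothetical) and two
# human triplet judgments.
emb = {"sketch1": np.array([0.0, 0.0]),
       "sketch2": np.array([0.1, 0.0]),
       "sketch3": np.array([1.0, 1.0])}
triplets = [("sketch1", "sketch2", "sketch3"),   # embedding agrees
            ("sketch3", "sketch1", "sketch2")]   # embedding disagrees
print(triplet_agreement(emb, triplets))  # → 0.5
```

Because the score is local (one comparison per triplet), it exposes where two similarity spaces diverge in a way a single global Likert correlation cannot.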
  3. 'Interdependent' privacy violations occur when users share private photos and information about other people in social media without permission. This research investigated user characteristics associated with interdependent privacy perceptions, by asking social media users to rate photo-based memes depicting strangers on the degree to which they were too private to share. Users also completed questionnaires measuring social media usage and personality. Separate groups rated the memes on shareability, valence, and entertainment value. Users were less likely to share memes that were rated as private, except when the meme was entertaining or when users exhibited dark triad characteristics. Users with dark triad characteristics demonstrated a heightened awareness of interdependent privacy and increased sharing of others' photos. A model is introduced that highlights user types and characteristics that correspond to different privacy preferences: privacy preservers, ignorers, and violators. We discuss how interventions to support interdependent privacy must effectively influence diverse users.
  4. While developing a story, novices and published writers alike have had to look outside themselves for inspiration. Language models have recently been able to generate text fluently, producing new stochastic narratives upon request. However, effectively integrating such capabilities with human cognitive faculties and creative processes remains challenging. We propose to investigate this integration with a multimodal writing support interface that offers writing suggestions textually, visually, and aurally. We conduct an extensive study that combines elicitation of prior expectations before writing, observation and semi-structured interviews during writing, and outcome evaluations after writing. Our results illustrate individual and situational variation in machine-in-the-loop writing approaches, suggestion acceptance, and ways the system is helpful. Centrally, we report how participants perform integrative leaps, by which they do cognitive work to integrate suggestions of varying semantic relevance into their developing stories. We interpret these findings, offering modeling and design recommendations for future creative writing support technologies.
  5. Work in computer vision and natural language processing involving images and text has been experiencing explosive growth over the past decade, with a particular boost coming from the neural network revolution. The present volume brings together five research articles from several different corners of the area: multilingual multimodal image description (Frank et al.), multimodal machine translation (Madhyastha et al., Frank et al.), image caption generation (Madhyastha et al., Tanti et al.), visual scene understanding (Silberer et al.), and multimodal learning of high-level attributes (Sorodoc et al.). In this article, we touch upon all of these topics as we review work involving images and text under the three main headings of image description (Section 2), visually grounded referring expression generation (REG) and comprehension (Section 3), and visual question answering (VQA) (Section 4).