

This content will become publicly available on June 17, 2025

Title: Social Meme-ing: Measuring Linguistic Variation in Memes
Much work in NLP has used computational methods to explore sociolinguistic variation in text. In this paper, we argue that memes, as multimodal forms of language composed of visual templates and text, also exhibit meaningful social variation. We construct a computational pipeline to cluster individual instances of memes into templates and semantic variables, taking advantage of their multimodal structure in doing so. We apply this method to a large collection of meme images from Reddit and make available the resulting SEMANTICMEMES dataset of 3.8M images clustered by their semantic function. We use these clusters to analyze linguistic variation in memes, discovering not only that socially meaningful variation in meme usage exists between subreddits, but that patterns of meme innovation and acculturation within these communities align with previous findings on written language.
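The abstract describes clustering meme instances into shared visual templates. As a minimal illustrative sketch (not the paper's actual pipeline), the core idea can be shown as greedy clustering over image embeddings: each new image joins an existing template cluster if it is similar enough to that cluster's seed, otherwise it starts a new one. The 3-d toy vectors below stand in for embeddings that a real visual encoder would produce; the threshold and the greedy scheme are assumptions for illustration only.

```python
# Illustrative sketch only: greedy template clustering over toy
# "image embedding" vectors. A real pipeline would use embeddings
# from a visual encoder; these hand-made 3-d vectors keep the
# example self-contained.
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return num / den

def cluster_templates(embeddings, threshold=0.9):
    """Assign each embedding to the first cluster whose seed it
    matches above `threshold`; otherwise start a new cluster.
    Returns clusters as lists of input indices."""
    clusters = []  # list of (seed embedding, member indices)
    for i, emb in enumerate(embeddings):
        for seed, members in clusters:
            if cosine(emb, seed) >= threshold:
                members.append(i)
                break
        else:
            clusters.append((emb, [i]))
    return [members for _, members in clusters]

# Two near-duplicate instances of one template, plus one distinct template:
embs = [(1.0, 0.0, 0.0), (0.99, 0.05, 0.0), (0.0, 1.0, 0.0)]
print(cluster_templates(embs))  # → [[0, 1], [2]]
```

In practice one would replace the greedy pass with a proper clustering algorithm and tune the similarity threshold on held-out template labels; the sketch only conveys the grouping-by-visual-similarity idea.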
Award ID(s):
1942591
NSF-PAR ID:
10513722
Publisher / Repository:
ACL
Date Published:
Journal Name:
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.
    This study tested the effect of visual attention on decision-making in digital environments. Fifty-nine individuals were asked how likely they would be to share 40 memes (photos with superimposed captions) on social media while their eye movements were tracked. The likelihood of sharing memes increased as attention to the text of the meme increased; conversely, the likelihood of sharing decreased as attention to the image of the meme increased. In addition, increased trait levels of agreeableness predicted a greater likelihood of sharing memes. These results indicate that individual differences in personality and eye movements predict the likelihood of sharing photo-memes on social media platforms. 
While developing a story, novices and published writers alike have had to look outside themselves for inspiration. Language models have recently been able to generate text fluently, producing new stochastic narratives upon request. However, effectively integrating such capabilities with human cognitive faculties and creative processes remains challenging. We propose to investigate this integration with a multimodal writing support interface that offers writing suggestions textually, visually, and aurally. We conduct an extensive study that combines elicitation of prior expectations before writing, observation and semi-structured interviews during writing, and outcome evaluations after writing. Our results illustrate individual and situational variation in machine-in-the-loop writing approaches, suggestion acceptance, and ways the system is helpful. Centrally, we report how participants perform integrative leaps, by which they do cognitive work to integrate suggestions of varying semantic relevance into their developing stories. We interpret these findings, offering modeling and design recommendations for future creative writing support technologies.
Work in computer vision and natural language processing involving images and text has been experiencing explosive growth over the past decade, with a particular boost coming from the neural network revolution. The present volume brings together five research articles from several different corners of the area: multilingual multimodal image description (Frank et al.), multimodal machine translation (Madhyastha et al., Frank et al.), image caption generation (Madhyastha et al., Tanti et al.), visual scene understanding (Silberer et al.), and multimodal learning of high-level attributes (Sorodoc et al.). In this article, we touch upon all of these topics as we review work involving images and text under the three main headings of image description (Section 2), visually grounded referring expression generation (REG) and comprehension (Section 3), and visual question answering (VQA) (Section 4).
  4. ‘Interdependent’ privacy violations occur when users share private photos and information about other people in social media without permission. This research investigated user characteristics associated with interdependent privacy perceptions, by asking social media users to rate photo-based memes depicting strangers on the degree to which they were too private to share. Users also completed questionnaires measuring social media usage and personality. Separate groups rated the memes on shareability, valence, and entertainment value. Users were less likely to share memes that were rated as private, except when the meme was entertaining or when users exhibited dark triad characteristics. Users with dark triad characteristics demonstrated a heightened awareness of interdependent privacy and increased sharing of others’ photos. A model is introduced that highlights user types and characteristics that correspond to different privacy preferences: privacy preservers, ignorers, and violators. We discuss how interventions to support interdependent privacy must effectively influence diverse users. 
Traditionally, many text-mining tasks treat individual word-tokens as the finest meaningful semantic granularity. However, in many languages and specialized corpora, words are composed by concatenating semantically meaningful subword structures. Word-level analysis cannot leverage the semantic information present in such subword structures. With regard to word embedding techniques, this leads to not only poor embeddings for infrequent words in long-tailed text corpora but also weak capabilities for handling out-of-vocabulary words. In this paper we propose MorphMine for unsupervised morpheme segmentation. MorphMine applies a parsimony criterion to hierarchically segment words into the fewest number of morphemes at each level of the hierarchy. This leads to longer shared morphemes at each level of segmentation. Experiments show that MorphMine segments words in a variety of languages into human-verified morphemes. Additionally, we experimentally demonstrate that utilizing MorphMine morphemes to enrich word embeddings consistently improves embedding quality on a variety of embedding evaluations and a downstream language modeling task.
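The parsimony criterion in the abstract above — segmenting a word into the fewest morphemes — can be sketched with a small dynamic program. This is not MorphMine itself (which induces its morpheme inventory unsupervised and segments hierarchically); the morpheme inventory below is a hypothetical hand-made set, used only to make the fewest-pieces objective concrete.

```python
# Illustrative sketch only: segment a word into the FEWEST pieces
# drawn from a hypothetical, hand-made morpheme inventory, in the
# spirit of the parsimony criterion described above.
def segment(word, morphemes):
    """Dynamic program: best[i] = fewest morphemes covering word[:i]."""
    INF = float("inf")
    best = [0] + [INF] * len(word)
    back = [None] * (len(word) + 1)  # back[i] = start of last morpheme
    for i in range(1, len(word) + 1):
        for j in range(i):
            if word[j:i] in morphemes and best[j] + 1 < best[i]:
                best[i] = best[j] + 1
                back[i] = j
    if best[len(word)] == INF:
        return None  # not segmentable with this inventory
    parts, i = [], len(word)
    while i > 0:
        j = back[i]
        parts.append(word[j:i])
        i = j
    return parts[::-1]

# "abl" + "e" would also cover the suffix, but "able" is more parsimonious:
inventory = {"un", "believ", "able", "abl", "e"}
print(segment("unbelievable", inventory))  # → ['un', 'believ', 'able']
```

A real system would additionally learn the inventory from the corpus and apply the criterion recursively at each level of a segmentation hierarchy, which is where the "longer shared morphemes at each level" behavior comes from.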