AI language technologies increasingly assist and expand human communication. While AI-mediated communication reduces human effort, its societal consequences are poorly understood. In this study, we investigate whether using an AI writing assistant for personal self-presentation changes how people talk about themselves. In an online experiment, we asked participants (N=200) to introduce themselves to others. An AI language assistant supported their writing by suggesting sentence completions. The language model generating the suggestions was fine-tuned to preferentially suggest interest, work, or hospitality topics. We evaluated how the model's topic preference affected users' topic choices by analyzing the topics participants discussed in their self-presentations. Our results suggest that AI language technologies may change the topics their users talk about. We discuss the need for a careful debate and evaluation of the topic priors built into AI language technologies.
Co-Writing with Opinionated Language Models Affects Users’ Views
If large language models like GPT-3 preferentially produce a particular point of view, they may influence people's opinions on an unknown scale. This study investigates whether a language-model-powered writing assistant that generates some opinions more often than others impacts what users write and what they think. In an online experiment, we asked participants (N=1,506) to write a post discussing whether social media is good for society. Treatment-group participants used a language-model-powered writing assistant configured to argue that social media is good or bad for society. Participants then completed a social media attitude survey, and independent judges (N=500) evaluated the opinions expressed in their writing. Using the opinionated language model affected the opinions expressed in participants' writing and shifted their opinions in the subsequent attitude survey. We discuss the wider implications of our results and argue that the opinions built into AI language technologies need to be monitored and engineered more carefully.
- Award ID(s): 1901151
- PAR ID: 10562907
- Publisher / Repository: ACM
- Date Published:
- ISBN: 9781450394215
- Page Range / eLocation ID: 1 to 15
- Format(s): Medium: X
- Location: Hamburg, Germany
- Sponsoring Org: National Science Foundation
More Like this
Mounting evidence indicates that the artificial intelligence (AI) systems that rank our social media feeds bear nontrivial responsibility for amplifying partisan animosity: negative thoughts, feelings, and behaviors toward political out-groups. Can we design these AIs to consider democratic values, such as mitigating partisan animosity, as part of their objective functions? We introduce a method for translating established, vetted social scientific constructs into AI objective functions, which we term societal objective functions, and demonstrate the method with an application to the political science construct of anti-democratic attitudes. Traditionally, we have lacked observable outcomes with which to train such models; however, the social sciences have developed survey instruments and qualitative codebooks for these constructs, and their precision facilitates translation into detailed prompts for large language models. We apply this method to create a democratic attitude model that estimates the extent to which a social media post promotes anti-democratic attitudes, and we test this model across three studies. In Study 1, we first test the attitudinal and behavioral effectiveness of the intervention among US partisans (N=1,380) by manually annotating (alpha=.895) social media posts with anti-democratic attitude scores and testing several feed-ranking conditions based on these scores. Removing (d=.20) and downranking (d=.25) high-scoring posts reduced participants' partisan animosity without compromising their experience and engagement. In Study 2, we scale up the manual labels by creating the democratic attitude model, finding strong agreement with the manual labels (rho=.75). Finally, in Study 3, we replicate Study 1 using the democratic attitude model instead of manual labels to test its attitudinal and behavioral impact (N=558), and we again find that feed downranking using the societal objective function reduced partisan animosity (d=.25). This method presents a novel strategy for drawing on social science theory and methods to mitigate societal harms in social media AIs.
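The re-ranking step described in this abstract has a simple shape: score each post against the construct model, then penalize that score in the feed ordering. The sketch below is a minimal illustration of that shape only; the `Post` fields, the keyword heuristic standing in for the LLM-based democratic attitude model, and the penalty weight are all assumptions, not the paper's implementation.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Post:
    text: str
    engagement: float  # the platform's original ranking signal

def placeholder_attitude_score(post: Post) -> float:
    """Stand-in for the LLM-based democratic attitude model.

    Returns a score in [0, 1]; higher means the post more strongly
    promotes anti-democratic attitudes. The keyword check is a
    placeholder for illustration, not the paper's prompt-based scorer.
    """
    hostile_terms = ("traitor", "enemy of the people", "destroy them")
    return 1.0 if any(t in post.text.lower() for t in hostile_terms) else 0.0

def downrank(feed: list[Post],
             score: Callable[[Post], float],
             penalty: float = 1.0) -> list[Post]:
    """Order posts by engagement minus a penalty on the societal objective."""
    return sorted(feed,
                  key=lambda p: p.engagement - penalty * score(p),
                  reverse=True)

feed = [
    Post("Our opponents are traitors. Destroy them.", engagement=0.9),
    Post("Neighborhood park cleanup this Saturday!", engagement=0.4),
]
for post in downrank(feed, placeholder_attitude_score):
    print(post.text)  # the hostile post drops below the benign one
```

Setting `penalty` to a large value approximates the removal condition, since high-scoring posts sink to the bottom of the feed rather than merely losing a few positions.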
AI technologies such as Large Language Models (LLMs) are increasingly used to suggest text completions as people write. Can these suggestions impact people's writing and attitudes? In two large-scale preregistered experiments (N=2,582), we exposed participants who were writing about important societal issues to biased AI-generated suggestions. The attitudes participants expressed in their writing and in a post-task survey converged toward the AI's position. Yet a majority of participants were unaware of the AI suggestions' bias and of its influence on them. Further, awareness of the task or of the AI's bias, e.g., warning participants about potential bias before or after exposure to the treatment, did not mitigate the influence effect. Moreover, the AI's influence is not fully explained by the additional information provided by the suggestions.
Prior research has demonstrated relationships between the personality traits of social media users and the language used in their posts. Few studies have examined whether there are relationships between users' personality traits and how they use emojis in their social media posts. Emojis are digital pictographs used to express ideas and emotions; thousands exist, depicting faces with expressions, objects, animals, and activities. We conducted a study with two samples (n = 76 and n = 245) in which we examined how emoji use on X (formerly Twitter) related to users' personality traits and language use in posts. Personality traits were assessed from participants in an online survey. With participants' consent, we analyzed word usage in their posts. Word frequencies were calculated using Linguistic Inquiry and Word Count (LIWC). In both samples, the results showed that those who used the most emojis had the lowest levels of openness to experience. Emoji use was unrelated to the other personality traits. In sample 1, emoji use was also related to greater use of words related to family, positive emotion, and sadness, and less frequent use of articles and words related to insight. In sample 2, more frequent use of emojis in posts was related to more frequent use of you pronouns, I pronouns, negative function words, and words related to time. The results support the view that social media users' characteristics may be gleaned from the content of their social media posts.
Social media feed ranking algorithms fail when they focus too narrowly on engagement as their objective. The literature has asserted a wide variety of values that these algorithms should account for as well, ranging from well-being to productive discourse, far more than can be encapsulated by a single topic or theory. In response, we present a library of values for social media algorithms: a pluralistic set of 78 values as articulated across the literature, implemented as LLM-powered content classifiers that can be installed individually or in combination for real-time re-ranking of social media feeds. We investigate this approach by developing a browser extension, Alexandria, that re-ranks the X/Twitter feed in real time based on the user's desired values. Through two user studies, one qualitative (N=12) and one quantitative (N=257), we found that diverse user needs require a large library of values, enabling more nuanced preferences and greater user control. With this work, we argue that the values criticized as missing from social media ranking algorithms can be operationalized and deployed today through end-user tools.
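The "installed individually or in combination" design implies a composition step: each selected value contributes a score for a post, and the feed is re-ordered by the combined signal. The sketch below illustrates that step only, under assumed value names, keyword-stub scorers, and an equal-weight average; Alexandria's actual classifiers are LLM-powered, and its value set and weighting may differ.

```python
from statistics import mean

# Hypothetical per-value scorers mapping post text to a score in [0, 1],
# where higher means the post better serves that value. These keyword
# stubs stand in for LLM-powered classifiers purely for illustration.
VALUE_SCORERS = {
    "well-being": lambda text: 0.1 if "outrage" in text.lower() else 0.8,
    "productive discourse": lambda text: 0.9 if "?" in text else 0.3,
}

def composite_score(text: str, selected_values: list[str]) -> float:
    """Average the user's selected value scores into one ranking signal."""
    return mean(VALUE_SCORERS[value](text) for value in selected_values)

def rerank(posts: list[str], selected_values: list[str]) -> list[str]:
    """Re-order a feed so posts serving the chosen values rank first."""
    return sorted(posts,
                  key=lambda t: composite_score(t, selected_values),
                  reverse=True)

feed = ["Daily outrage thread", "What changed your mind on this?"]
print(rerank(feed, ["well-being", "productive discourse"]))
```

An equal-weight average is the simplest composition rule; per-value user weights would be a natural extension when some selected values matter more than others.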