Title: Profile update: the effects of identity disclosure on network connections and language
Abstract: Our social identities determine how we interact and engage with the world around us. In online settings, individuals can make these identities explicit by including them in their public biography, possibly signaling a change in what is important to them and how they should be viewed. While there is evidence suggesting the impact of intentional identity disclosure on online social platforms, its actual effect on engagement activities at the user level has yet to be explored. Here, we perform the first large-scale study on Twitter that examines behavioral changes following identity disclosure in user profiles. Combining social networks with methods from natural language processing and quasi-experimental analyses, we discover that after disclosing an identity on their profiles, users (1) tweet and retweet more in a way that aligns with their respective identities, and (2) connect more with users who disclose similar identities. We also examine whether disclosing an identity increases the chance of being targeted with offensive comments and find that, in fact, (3) the combined effect of disclosing identity via both tweets and profiles is associated with a reduced number of offensive replies from others. Our findings highlight that the decision to disclose one’s identity in online spaces can lead to substantial changes in how users express themselves or forge connections, with fewer negative consequences than anticipated.
Award ID(s): 2007251
PAR ID: 10518621
Author(s) / Creator(s): ; ;
Publisher / Repository: Springer Science + Business Media
Date Published:
Journal Name: EPJ Data Science
Volume: 13
Issue: 1
ISSN: 2193-1127
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Occupational identity concerns the self-image of an individual’s affinities and socioeconomic class, and directs how a person should behave in certain ways. Understanding the establishment of occupational identity is important for studying work-related behaviors. However, large-scale quantitative studies of occupational identity are difficult to perform due to its indirectly observable nature. Profile biographies on social media, by contrast, contain concise yet rich descriptions of self-identity. Analysis of these self-descriptions provides powerful insights into how people see themselves and how they change over time. In this paper, we present and analyze a longitudinal corpus recording the self-authored public biographies of 51.18 million Twitter users as they evolve over a six-year period from 2015 to 2021. In particular, we investigate social approval (e.g., job prestige and salary) effects in how people self-disclose occupational identities, quantifying over-represented occupations as well as occupational transitions with respect to job prestige over time. We show that self-reported jobs and job transitions are biased toward more prestigious occupations. We also present an intriguing case study of how self-reported jobs changed amid COVID-19 and the subsequent Great Resignation, using the latest full-year data from 2022. These results demonstrate that social media biographies are a rich source of data for quantitative social science studies, allowing unobtrusive observation of the intersections and transitions observed in online self-presentation.
  2. Online harassment is pervasive. While substantial research has examined the nature of online harassment and how to moderate it, little work has explored how social media users evaluate the profiles of online harassers. This is important for helping people who may be experiencing or observing harassment to quickly and efficiently evaluate the user doing the harassing. We conducted a lab experiment (N=45) that eye-tracked participants while they viewed mock Facebook, Twitter, and Instagram profiles of users who engaged in online harassment. We evaluated which profile elements they looked at and for how long relative to a control group, as well as their qualitative attitudes about harasser profiles. Results showed that participants looked at harassing users' post history more quickly than that of non-harassing users. They were also somewhat more likely to recall harassing profiles than non-harassing profiles. However, they did not spend more time on harassing profiles. Understanding what users pay attention to and recall may offer new design opportunities for supporting people who experience or observe harassment online.
  3. On Twitter, so-called verified accounts represent celebrities and organizations of public interest, selected by Twitter based on criteria for both activity and notability. Our work seeks to understand the involvement and influence of these accounts in patterns of self-disclosure, namely, the voluntary sharing of personal information. In a study of 3 million COVID-19-related tweets, we present a comparison of self-disclosure in verified vs. ordinary users. We discuss evidence of peer effects on self-disclosing behaviors and analyze topics of conversation associated with these practices.
  4. Galak, Jeff (Ed.)
    We investigate perceptions of tweets marked with the #BlackLivesMatter and #AllLivesMatter hashtags, as well as how the presence or absence of those hashtags changed the meaning and subsequent interpretation of tweets among U.S. participants. We found a strong effect of partisanship on perceptions of the tweets: participants on the political left were more likely to view #AllLivesMatter tweets as racist and offensive, while participants on the political right were more likely to view #BlackLivesMatter tweets as racist and offensive. Moreover, we found that political identity explained evaluation results far better than other measured demographics. Additionally, to assess the influence of the hashtags themselves, we removed them from tweets in which they originally appeared and added them to selected neutral tweets. Our results have implications for our understanding of how social identity, and particularly political identity, shapes how individuals perceive and engage with the world.
  5. As generative artificial intelligence (AI) has found its way into various work tasks, questions about whether its usage should be disclosed and the consequences of such disclosure have taken center stage in public and academic discourse on digital transparency. This article addresses this debate by asking: Does disclosing the usage of AI compromise trust in the user? We examine the impact of AI disclosure on trust across diverse tasks—from communications via analytics to artistry—and across individual actors such as supervisors, subordinates, professors, analysts, and creatives, as well as across organizational actors such as investment funds. Thirteen experiments consistently demonstrate that actors who disclose their AI usage are trusted less than those who do not. Drawing on micro-institutional theory, we argue that this reduction in trust can be explained by reduced perceptions of legitimacy, as shown across various experimental designs (Studies 6–8). Moreover, we demonstrate that this negative effect holds across different disclosure framings, above and beyond algorithm aversion, regardless of whether AI involvement is known, and regardless of whether disclosure is voluntary or mandatory, though it is comparatively weaker than the effect of third-party exposure (Studies 9–13). A within-paper meta-analysis suggests this trust penalty is attenuated but not eliminated among evaluators with favorable technology attitudes and perceptions of high AI accuracy. This article contributes to research on trust, AI, transparency, and legitimacy by showing that AI disclosure can harm social perceptions, emphasizing that transparency is not straightforwardly beneficial, and highlighting legitimacy’s central role in trust formation. 