Title: The Case for Establishing a Collective Perspective to Address the Harms of Platform Personalization
Personalization on digital platforms drives a broad range of harms, including misinformation, manipulation, social polarization, subversion of autonomy, and discrimination. In recent years, policy makers, civil society advocates, and researchers have proposed a wide range of interventions to address these challenges. This Article argues that the emerging toolkit reflects an individualistic view of both personal data and data-driven harms that will likely be inadequate to address growing harms in the global data ecosystem. It maintains that interventions must be grounded in an understanding of the fundamentally collective nature of data, wherein platforms leverage complex patterns of behaviors and characteristics observed across a large population to draw inferences and make predictions about individuals. Using the lens of the collective nature of data, this Article evaluates various approaches to addressing personalization-driven harms under current consideration. It also frames concrete guidance for future legislation in this space and for meaningful transparency that goes far beyond current transparency proposals. It offers a roadmap for what meaningful transparency must constitute: a collective perspective providing a third party with ongoing insight into the information gathered and observed about individuals and how it correlates with any personalized content they receive across a large, representative population. These insights would enable the third party to understand, identify, quantify, and address cases of personalization-driven harms. This Article discusses how such transparency can be achieved without sacrificing privacy and provides guidelines for legislation to support the development of such transparency.
Award ID(s): 2217680
PAR ID: 10466982
Publisher / Repository: Vanderbilt Journal of Entertainment and Technology Law, Social Science Research Network
Journal Name: Vanderbilt Journal of Entertainment and Technology Law
ISSN: 1942-6771
Format(s): Medium: X
Sponsoring Org: National Science Foundation
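The transparency roadmap in the abstract above — correlating what is observed about individuals with the personalized content they receive, across a large, representative population — can be made concrete with a small sketch. The Python below is purely illustrative: the record format, group labels, and disparity ratio are assumptions for exposition, not the Article's specification of what a third-party auditor would compute.

```python
# Hypothetical sketch: a third-party auditor measures how a sensitive
# attribute correlates with exposure to a content category across a
# population. All names and data here are illustrative stand-ins.
from collections import defaultdict

# Each record pairs an observed attribute with whether the user was
# shown a given category of personalized content.
records = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", False), ("group_b", False), ("group_b", True),
]

def exposure_rates(records):
    """Fraction of each group shown the content category."""
    shown, total = defaultdict(int), defaultdict(int)
    for group, was_shown in records:
        total[group] += 1
        shown[group] += was_shown
    return {g: shown[g] / total[g] for g in total}

rates = exposure_rates(records)
# A disparity ratio far from 1.0 flags a correlation between the
# attribute and the content served -- a case worth investigating.
disparity = max(rates.values()) / max(min(rates.values()), 1e-9)
print(rates, f"disparity ratio: {disparity:.2f}")
```

At population scale, this kind of aggregate computation is what would let the third party quantify personalization-driven harms without needing to publish any individual's data.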
More Like this
  1. Bots have become critical for managing online communities on platforms, especially to match the increasing technical sophistication of online harms. However, community leaders often adopt third-party bots, creating room for misalignment in their assumptions, expectations, and understandings (i.e., their technological frames) about them. On platforms where sharing bots can be extremely valuable, it is unclear how community leaders can revise their frames about bots to adopt them more effectively. In this work, we conducted a qualitative interview study with 16 community leaders on Discord examining how they adopt third-party bots. We found that participants addressed challenges stemming from uncertainties about a bot's security, reliability, and fit through emergent social ecosystems. Formal and informal opportunities to discuss bots with others across communities enabled participants to revise their technological frames over time, closing gaps in bot-specific skills and knowledge. This social process of learning shifted participants' perspectives of the labor of bot adoption into something that was satisfying and fun, underscoring the value of collaborative and communal approaches to adopting bots. Finally, by shaping participants' mental models of the nature, value, and use of bots, social ecosystems also raise some practical tensions in how they support user creativity and customization in third-party bot use. Together, the social nature of adopting third-party bots in our interviews offers insight into how we can better support the sharing of valuable user-facing tools across online communities.
  2. Social media platforms curate access to information and opportunities, and so play a critical role in shaping public discourse today. The opaque nature of the algorithms these platforms use to curate content raises societal questions. Prior studies have used black-box methods led by experts or collaborative audits driven by everyday users to show that these algorithms can lead to biased or discriminatory outcomes. However, existing auditing methods face fundamental limitations because they function independently of the platforms. Concerns about potentially harmful outcomes have prompted proposed legislation in both the U.S. and the E.U. to mandate a new form of auditing where vetted external researchers get privileged access to social media platforms. Unfortunately, to date there have been no concrete technical proposals to provide such auditing, because auditing at scale risks disclosure of users' private data and platforms' proprietary algorithms. We propose a new method for platform-supported auditing that can meet the goals of the proposed legislation. The first contribution of our work is to enumerate the challenges and the limitations of existing auditing methods to implement these policies at scale. Second, we suggest that limited, privileged access to relevance estimators is the key to enabling generalizable platform-supported auditing of social media platforms by external researchers. Third, we show that platform-supported auditing need not risk user privacy or disclosure of platforms' business interests by proposing an auditing framework that protects against these risks. For a particular fairness metric, we show that ensuring privacy imposes only a small constant-factor increase (6.34× as an upper bound, and 4× for typical parameters) in the number of samples required for accurate auditing. Our technical contributions, combined with ongoing legal and policy efforts, can enable public oversight into how social media platforms affect individuals and society by moving past the privacy-vs-transparency hurdle.
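The shape of the audit described in item 2 can be illustrated with a short sketch: an external researcher estimates a fairness metric through queries to a relevance estimator, with noise protecting user privacy, at the cost of extra samples. The estimator, noise mechanism, and parameters below are stand-ins for exposition, not the paper's actual protocol; only the constant-factor sample inflation mirrors the stated bound.

```python
# Illustrative sketch of platform-supported auditing under privacy
# noise. Everything here is a simplified stand-in for the real design.
import math
import random

def relevance_estimator(group):
    """Stand-in for the platform's proprietary relevance estimator."""
    return random.gauss(0.6 if group == "a" else 0.5, 0.1)

def laplace(scale):
    """Sample Laplace noise, as the platform might add per query."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def audit_fairness(n_baseline, privacy_factor=6.34, noise_scale=0.05):
    """Estimate the gap in mean relevance between two groups.

    Privacy noise inflates the sample count by a constant factor,
    mirroring the abstract's bound (6.34x worst case, ~4x typical).
    """
    n = int(n_baseline * privacy_factor)
    means = {}
    for g in ("a", "b"):
        scores = [relevance_estimator(g) + laplace(noise_scale) for _ in range(n)]
        means[g] = sum(scores) / n
    return means["a"] - means["b"]

print(f"estimated relevance gap: {audit_fairness(1000):.3f}")
```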
  3. An essential component of initiatives that aim to address pervasive inequalities of any kind is the ability to collect empirical evidence of both the status quo baseline and of any improvement that can be attributed to prescribed and deployed interventions. Unfortunately, two substantial barriers can arise preventing the collection and analysis of such empirical evidence: (1) the sensitive nature of the data itself and (2) a lack of technical sophistication and infrastructure available to both an initiative's beneficiaries and to those spearheading it. In the last few years, it has been shown that a cryptographic primitive called secure multi-party computation (MPC) can provide a natural technological resolution to this conundrum. MPC allows an otherwise disinterested third party to contribute its technical expertise and resources, to avoid incurring any additional liabilities itself, and (counterintuitively) to reduce the level of data exposure that existing parties must accept to achieve their data analysis goals. However, achieving these benefits requires deliberately designing MPC tools and frameworks that are accessible to non-technical users with limited infrastructure and expertise. We describe our own experiences designing, implementing, and deploying such usable web applications for secure data analysis within the context of two real-world initiatives that focus on promoting economic equality.
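A minimal sketch of additive secret sharing, the primitive at the core of many MPC deployments like the one item 3 describes, shows why a third party never sees the sensitive inputs. This is a generic textbook construction, not the authors' specific framework, and the real web applications involve far more (key management, authentication, usable interfaces).

```python
# Minimal additive secret sharing: each contributor splits its value
# into random shares; only the aggregate total is ever reconstructed.
import random

PRIME = 2**61 - 1  # all arithmetic is done modulo a large prime

def share(secret, n_parties):
    """Split a secret into n random shares that sum to it mod PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

def aggregate(all_shares):
    """Each computation party sums the shares it holds; combining the
    partial sums reveals only the total, never any single input."""
    partial_sums = [sum(column) % PRIME for column in zip(*all_shares)]
    return sum(partial_sums) % PRIME

# Three contributors (e.g., organizations reporting wage data for an
# economic-equality study) each split their value across three parties.
salaries = [62000, 71000, 58000]
all_shares = [share(s, 3) for s in salaries]
print(aggregate(all_shares) == sum(salaries))  # True: only the sum leaks
```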
  4. In an era marked by ubiquitous reliance on mobile applications for nearly every need, the opacity of apps’ behavior poses significant threats to their users’ privacy. Although major data protection regulations require apps to disclose their data practices transparently, previous studies have pointed out difficulties in doing so. To further delve into this issue, this article describes an automated method to capture data-sharing practices in Android apps and assess their proper disclosure according to the EU General Data Protection Regulation. We applied the method to 9,000 random Android apps, unveiling an uncomfortable reality: over 80% of Android applications that transfer personal data off device potentially fail to meet GDPR transparency requirements. We further investigate the role of third-party libraries, shedding light on the source of this problem and pointing towards measures to address it. 
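The core of the automated check in item 4 reduces to comparing observed data flows against disclosed practices. The sketch below is a simplification for exposition: the flow tuples and the disclosure set are hypothetical stand-ins for the paper's actual pipeline of traffic capture, policy parsing, and third-party library attribution.

```python
# Illustrative GDPR transparency check: flag personal-data flows that
# leave the device but are not disclosed in the app's privacy policy.

# (data_type, recipient) pairs observed in the app's network traffic.
observed_flows = {
    ("advertising_id", "ads.example-network.com"),
    ("location", "analytics.example-sdk.io"),
}

# Practices a policy parser extracted from the app's privacy policy.
disclosed = {("advertising_id", "ads.example-network.com")}

undisclosed = observed_flows - disclosed
for data_type, recipient in sorted(undisclosed):
    print(f"potential GDPR transparency gap: {data_type} -> {recipient}")
```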
  5. The growth of technologies promising to infer emotions raises political and ethical concerns, including concerns regarding their accuracy and transparency. A marginalized perspective in these conversations is that of data subjects potentially affected by emotion recognition. Taking social media as one emotion recognition deployment context, we conducted interviews with data subjects (i.e., social media users) to investigate their notions about accuracy and transparency in emotion recognition and interrogate stated attitudes towards these notions and related folk theories. We find that data subjects see accurate inferences as uncomfortable and as threatening their agency, pointing to privacy and ambiguity as desired design principles for social media platforms. While some participants argued that contemporary emotion recognition must be accurate, others raised concerns about possibilities for contesting the technology and called for better transparency. Furthermore, some challenged the technology altogether, highlighting that emotions are complex, relational, performative, and situated. In interpreting our findings, we identify new folk theories about accuracy and meaningful transparency in emotion recognition. Overall, our analysis shows an unsatisfactory status quo for data subjects that is shaped by power imbalances and a lack of reflexivity and democratic deliberation within platform governance. 