To enable targeted ads, companies profile Internet users, automatically inferring potential interests and demographics. While current profiling centers on users' web browsing data, smartphones and other devices with rich sensing capabilities portend profiling techniques that draw on methods from ubiquitous computing. Unfortunately, even existing profiling and ad-targeting practices remain opaque to users, engendering distrust, resignation, and privacy concerns. We hypothesized that making profiling visible at the time and place it occurs might help users better understand and engage with automatically constructed profiles. To this end, we built a technology probe that surfaces the incremental construction of user profiles from both web browsing and activities in the physical world. The probe explores transparency and control of profile construction in real time. We conducted a two-week field deployment of this probe with 25 participants. We found that increasing the visibility of profiling helped participants anticipate how certain actions can trigger specific ads. Participants' desired engagement with their profile differed in part based on their overall attitudes toward ads. Furthermore, participants expected algorithms would automatically determine when an inference was inaccurate, no longer relevant, or off-limits. Current techniques typically do not do this. Overall, our findings suggest that leveraging opportunistic moments within pervasive computing to engage users with their own inferred profiles can create more trustworthy and positive experiences with targeted ads. 
What Does It Mean to Be Creepy? Responses to Visualizations of Personal Browsing Activity, Online Tracking, and Targeted Ads
Internet companies routinely follow users around the web, building profiles for ad targeting based on inferred attributes. Prior work has shown that these practices, generally, are creepy—but what does that mean? To help answer this question, we substantially revised an open-source browser extension built to observe a user's browsing behavior and present them with a tracker's perspective of that behavior. Our updated extension models possible interest inferences far more accurately, integrates data scraped from the user's Google ad dashboard, and summarizes ads the user was shown. Most critically, it introduces ten novel visualizations that show implications of the collected data, both the mundane (e.g., the total number of ads you've been served) and the provocative (e.g., your interest in reproductive health, a potentially sensitive topic). We use our extension as a design probe in a week-long field study with 200 participants. We find that users do perceive online tracking as creepy—but that the meaning of creepiness is far from universal. Participants felt differently about creepiness even when similar visualizations were generated from their data, and even when responding to the most potentially provocative visualizations—in no case did more than 66% of participants agree that any one visualization was creepy.
- PAR ID: 10578101
- Publisher / Repository: Proceedings on Privacy Enhancing Technologies
- Date Published:
- Journal Name: Proceedings on Privacy Enhancing Technologies
- Volume: 2024
- Issue: 3
- ISSN: 2299-0984
- Page Range / eLocation ID: 715 to 743
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- 
Recently, Meta has shifted towards AI-mediated ad targeting mechanisms that do not require advertisers to provide detailed targeting criteria. The shift is likely driven by excitement over AI capabilities as well as the need to address new data privacy policies and targeting changes agreed upon in civil rights settlements. At the same time, in response to growing public concern about the harms of targeted advertising, Meta has touted their ad preference controls as an effective mechanism for users to exert control over the advertising they see. Furthermore, Meta markets their "Why this ad" targeting explanation as a transparency tool that allows users to understand the reasons for seeing particular ads and to inform their actions to control what ads they see in the future. Our study evaluates the effectiveness of Meta's "See less" ad control, as well as the actionability of ad targeting explanations, following the shift to AI-mediated targeting. We conduct a large-scale study, randomly assigning participants the intervention of marking "See less" on either the Body Weight Control or Parenting topic, and collecting the ads Meta shows to participants and their targeting explanations before and after the intervention. We find that utilizing the "See less" ad control for the topics we study does not significantly reduce the number of ads shown by Meta on these topics, and that the control is less effective for some users whose demographics are correlated with the topic. Furthermore, we find that the majority of ad targeting explanations for local ads made no reference to location-specific targeting criteria and did not inform users why ads related to the topics they requested to see less of continued to be delivered. We hypothesize that the poor effectiveness of the controls and the lack of actionability and comprehensiveness in explanations are the result of the shift to AI-mediated targeting, for which Meta has not yet developed explainability and transparency tools. Our work thus provides evidence of the need for new methods for transparency and user control, suitable for and reflective of how increasingly complex, AI-mediated ad delivery systems operate.
- 
Political campaigns are increasingly turning to targeted advertising platforms to inform and mobilize potential voters. The appeal of these platforms stems from their promise to empower advertisers to select (or "target") the users who see their messages with great precision, including through inferences about those users' interests and political affiliations. However, prior work has shown that the targeting may not work as intended, as platforms' ad delivery algorithms play a crucial role in selecting which subgroups of the targeted users see the ads. In particular, the platforms can selectively deliver ads to subgroups within the target audiences selected by advertisers in ways that can lead to demographic skews along race and gender lines, and do so without the advertiser's knowledge. In this work we demonstrate that the ad delivery algorithms used by Facebook, the most advanced targeted advertising platform, shape political ad delivery in ways that may not be beneficial to political campaigns or to societal discourse. In particular, the ad delivery algorithms lead to political messages on Facebook being shown predominantly to people who Facebook thinks already agree with the ad campaign's message, even if the political advertiser targets an ideologically diverse audience. Furthermore, an advertiser determined to reach ideologically non-aligned users is non-transparently charged a high premium compared to their more aligned competitor, a difference from traditional broadcast media. Our results demonstrate that Facebook exercises control over who sees which political messages beyond the control of those who pay for them or those who are exposed to them. Taken together, our findings suggest that political discourse's increased reliance on profit-optimized, non-transparent algorithmic systems comes at a cost to the diversity of political views that voters are exposed to. Thus, the work raises important questions of fairness and accountability desiderata for ad delivery algorithms applied to political ads.
- 
Targeted advertising remains an important part of the free web browsing experience, where advertisers' targeting and personalization algorithms together find the most relevant audience for millions of ads every day. However, the wide use of advertising also enables using ads as a vehicle for problematic content, such as scams or clickbait. Recent work exploring people's sentiments toward online ads, and the impacts of these ads on people's online experiences, has found evidence that online ads can indeed be problematic. Further, there is the potential for personalization to aid the delivery of such ads, even when the advertiser targets with low specificity. In this paper, we study Facebook—one of the internet's largest ad platforms—and investigate key gaps in our understanding of problematic online advertising: (a) What categories of ads do people find problematic? (b) Are there disparities in the distribution of problematic ads to viewers? and if so, (c) Who is responsible—advertisers or advertising platforms? To answer these questions, we empirically measure a diverse sample of user experiences with Facebook ads via a 3-month longitudinal panel. We categorize over 32,000 ads collected from this panel (n = 132), and survey participants' sentiments toward their own ads to identify four categories of problematic ads. Statistically modeling the distribution of problematic ads across demographics, we find that older people and minority groups are especially likely to be shown such ads. Further, given that 22% of problematic ads had no specific targeting from advertisers, we infer that ad delivery algorithms (advertising platforms themselves) played a significant role in the biased distribution of these ads.
- 
Advertising companies and data brokers often provide consumers access to a dashboard summarizing attributes they have collected or inferred about that user. These attributes can be used for targeted advertising. Several studies have examined the accuracy of these collected attributes or users' reactions to them. However, little is known about how these dashboards, and the associated attributes, change over time. Here, we report data from a week-long, longitudinal study (n = 158) in which participants used a browser extension automatically capturing data from one dashboard, Google Ads Settings, after every fifth website the participant visited. The results show that Ads Settings is frequently updated, includes many attributes unique to only a single participant in our sample, and is approximately 90% accurate when assigning age and gender. We also find evidence that Ads Settings attributes may dynamically impact browsing behavior and may be filtered to remove sensitive interests.
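The every-fifth-website sampling cadence described above can be captured by a small scheduler that the extension's page-visit hook would consult. This is a hypothetical sketch for illustration only: the class and method names are our own and do not reflect the study's actual extension code.

```python
class DashboardCaptureScheduler:
    """Decides when to snapshot an ad-settings dashboard.

    Hypothetical sketch of the sampling rule described in the study:
    capture the dashboard after every fifth website the participant visits.
    """

    def __init__(self, interval: int = 5):
        self.interval = interval  # capture every `interval` visits
        self.visits = 0           # pages visited since the study started

    def on_page_visit(self) -> bool:
        """Record one page visit; return True when a capture is due."""
        self.visits += 1
        return self.visits % self.interval == 0


# Example: over ten page visits, captures fire on visits 5 and 10.
scheduler = DashboardCaptureScheduler()
capture_flags = [scheduler.on_page_visit() for _ in range(10)]
```

In a real WebExtension, `on_page_visit` would be driven by a navigation event listener, with the capture itself scraping the dashboard page in a background context.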